Wednesday, December 29, 2010

Slipping the 'Cognitive Straitjacket' of Psychiatric Diagnosis

Psychiatry's diagnostic bible meets the awkward facts of genetics
By Steven E. Hyman

It can fairly be said that modern psychiatric diagnosis was “born” in a 1970 paper on schizophrenia.

The authors, Washington University psychiatry professors Eli Robins and Samuel B. Guze, rejected the murky psychoanalytic diagnostic formulations of their time. Instead, they embraced a medical model inspired by the careful 19th-century observational work of Emil Kraepelin, long overlooked during the mid-20th-century dominance of Freudian theory. Mental disorders were now to be seen as distinct categories, much as different bacterial and viral infections produce characteristic diseases that can be seen as distinct “natural kinds.”

Disorders, Robins and Guze argued, should be defined based on phenomenology: clinical descriptions validated by long-term follow-up to demonstrate the stability of the diagnosis over time. With scientific progress, they expected fuller validation of mental disorders to derive from laboratory findings and studies of familial transmission.

This descriptive approach to psychiatric diagnosis -- based on lists of symptoms, their timing of onset, and the duration of illness -- undergirded the American Psychiatric Association’s widely disseminated and highly influential Diagnostic and Statistical Manual of Mental Disorders, first published in 1980. Since then, the original “DSM-III” has yielded two relatively conservative revisions, and right now, the DSM-5 is under construction. Sadly, it is clear that the optimistic predictions of Robins and Guze have not been realized.

Four decades after their seminal paper, there are still no widely validated laboratory tests for any common mental illness. Worse, an enormous number of family and genetic studies have not only failed to validate the major DSM disorders as natural kinds, but instead have suggested that they are more akin to chimaeras. Unfortunately for the multitudes stricken with mental illness, the brain has not given up its secrets easily.

That is not to say that we have made no progress. DNA research has begun to illuminate the complex genetics of mental illness. But what it tells us, I would argue, is that, at least for the purposes of research, the current DSM diagnoses do not work. They are too narrow, too rigid, altogether too limited. Reorganization of the DSM is hardly a panacea, but science cannot thrive if investigators are forced into a cognitive straitjacket.

Before turning to the scientific evidence of fundamental problems with the DSM, let’s first take note of an important problem that the classification has produced for clinicians and patients alike: An individual who receives a single DSM diagnosis very often meets criteria for multiple additional diagnoses (so-called co-occurrence or “comorbidity”), and the pattern of diagnoses often changes over the lifespan. Thus, for example, children and adolescents with a diagnosis of an anxiety disorder often manifest major depression in their later teens or twenties. Individuals with autism spectrum disorders often receive additional diagnoses of attention deficit hyperactivity disorder, obsessive-compulsive disorder, and tic disorders.

Of course, there are perfectly reasonable explanations for comorbidity. One disorder could be a risk factor for another just as tobacco smoking is a risk factor for lung cancer. Alternatively, common diseases in a population could co-occur at random. The problem with the DSM is that many diagnoses co-occur at frequencies far higher than predicted by their population prevalence, and the timing of co-occurrence suggests that one disorder is not likely to be causing the second. For patients, it can be confusing and demoralizing to receive multiple and shifting diagnoses; this phenomenon certainly does not increase confidence in their caregivers.
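The base-rate logic here is simple arithmetic. As a rough sketch (the prevalence and comorbidity figures below are hypothetical, not taken from any study): if two disorders occur independently, the chance of carrying both diagnoses is just the product of their prevalences, and observed comorbidity can be compared against that baseline.

```python
# Hypothetical illustration: expected co-occurrence of two independent
# disorders versus an observed comorbidity rate.

prev_a = 0.05   # assumed population prevalence of disorder A (5%)
prev_b = 0.03   # assumed population prevalence of disorder B (3%)

# Under independence, the chance of carrying both diagnoses is the product.
expected_both = prev_a * prev_b            # 0.15% of the population

# Suppose a study observes 1.2% of people meeting criteria for both.
observed_both = 0.012

# The ratio quantifies how far co-occurrence exceeds chance.
excess = observed_both / expected_both

print(f"expected under independence: {expected_both:.4f}")
print(f"observed: {observed_both:.4f}")
print(f"excess co-occurrence: {excess:.1f}x")
```

With these made-up numbers, the disorders co-occur eight times more often than independence predicts, which is the pattern the DSM critique turns on.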

Family studies and genetics shed light on the apparently high rate of co-occurrence of mental disorders and suggest that it is an artifact of the DSM itself. Genetic studies focused on finding variations in DNA sequences associated with mental disorders have repeatedly found shared genetic risks for both schizophrenia and bipolar disorder. Other studies have found different sequence variations within the same genes to be associated with schizophrenia and autism spectrum disorders.

An older methodology, the study of twins, continues to provide important insight into this muddy genetic picture. Twin studies generally compare the concordance for a disease or other trait within monozygotic twin pairs, who share 100% of their DNA, versus concordance within dizygotic twin pairs, who share on average 50% of their DNA. In a recent article in the American Journal of Psychiatry, a Swedish team of researchers led by Paul Lichtenstein studied 7,982 twin pairs. They found a heritability of 80% for autism spectrum disorders, but also found substantial sharing of genetic risk factors among autism, attention deficit hyperactivity disorder, developmental coordination disorder, tic disorders, and learning disorders.
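The logic of such twin estimates can be sketched with Falconer's classic formula: because monozygotic pairs share roughly twice the DNA of dizygotic pairs, twice the excess similarity of MZ over DZ twins is attributed to genes. (The correlation values below are invented for illustration; they are not the Lichtenstein study's figures.)

```python
# Falconer's formula: a rough twin-study heritability estimate.
# H^2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait
# correlations within monozygotic and dizygotic twin pairs.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Return the broad-sense heritability estimate, clamped to [0, 1]."""
    h2 = 2.0 * (r_mz - r_dz)
    return max(0.0, min(1.0, h2))

# Hypothetical correlations chosen to reproduce an 80% heritability:
print(falconer_heritability(r_mz=0.90, r_dz=0.50))  # → 0.8
```

Real analyses like Lichtenstein's fit more elaborate models, but the comparison of MZ against DZ similarity is the engine underneath all of them.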

In another recent article in the American Journal of Psychiatry, Marina Bornovalova and her University of Minnesota colleagues studied 1,069 pairs of 11-year-old twins and their biological parents. They found that parent-child resemblance was accounted for by shared genetic risk factors: in parents, they gave rise to conduct disorder, adult antisocial behavior, alcohol dependence, and drug dependence; in the 11-year-olds these shared factors were manifest as attention deficit hyperactivity disorder, conduct disorder, and oppositional-defiant disorder. (Strikingly, attention deficit disorder co-occurs in both the autism spectrum cluster and disruptive disorder cluster.)

These and many other studies call into question two of the key validators of descriptive psychiatry championed by Robins and Guze. First, DSM disorders do not breed true. What is transmitted across generations is not discrete DSM categories but, perhaps, complex patterns of risk that may manifest as one or more DSM disorders within a related cluster. Second, instead of long-term stability, symptom patterns often change over the life course, producing not only multiple co-occurring diagnoses but also different diagnoses at different times of life.

How can these findings be explained? In fairness to Robins and Guze, they could not have imagined the extraordinary genetic complexity that produces the risk of many common human ills, including mental disorders. What this means is that common mental disorders appear to be due to different combinations of genes in different families, acting together with epigenetic factors -- changes in gene expression that occur even when the underlying DNA sequence is unchanged -- and non-genetic factors.

In some families, genetic risk for mental disorders seems to be due to many, perhaps hundreds, of small variations in DNA sequence -- often single “letters” in the DNA code. Each may cause a very small increment in risk, but, in infelicitous combinations, can lead to illness. In other families, there may be background genetic risk, but the coup de grace arrives in the form of a relatively large DNA deletion, duplication, or rearrangement. Such “copy number variants” may occur de novo in apparently sporadic cases of schizophrenia or autism.
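One standard way to picture "many small variants, each a tiny increment in risk" is the polygenic liability-threshold model from quantitative genetics. The sketch below is illustrative only: the number of variants, effect size, carrier probability, and threshold are all arbitrary choices, not empirical estimates.

```python
import random

# Toy polygenic liability-threshold model: each of many variants
# contributes a tiny risk increment, and illness occurs only when the
# summed "liability" crosses a threshold.

random.seed(42)

N_VARIANTS = 500        # hypothetical number of small-effect risk variants
CARRY_PROB = 0.10       # chance an individual carries any given variant
EFFECT = 0.01           # tiny liability increment per variant carried
THRESHOLD = 0.60        # illness when total liability exceeds this

def liability() -> float:
    """Sum the small contributions of whichever variants one person carries."""
    return sum(EFFECT for _ in range(N_VARIANTS) if random.random() < CARRY_PROB)

population = [liability() for _ in range(10_000)]
affected = sum(1 for x in population if x > THRESHOLD)

# No single variant is necessary or sufficient; only infelicitous
# combinations push liability over the threshold.
print(f"affected: {affected / len(population):.1%}")
```

In this toy world, as in the genetic picture the article describes, two affected individuals can reach the threshold through entirely different combinations of variants.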

In sum, it appears that no gene is either necessary or sufficient for risk of a common mental disorder. Finally, a given set of genetic risks may produce different symptoms depending on broad genetic background, early developmental influences, life stage, or diverse environmental factors.

The complex nature of genetic risk offers a possible explanation for comorbidity: what the DSM treats as discrete disorders, categorically separate from health and from each other, are not, in fact, discrete. Instead, schizophrenia, autism-spectrum disorders, certain anxiety disorders, obsessive-compulsive disorder, attention deficit hyperactivity disorder, mood disorders, and others represent families of related disorders with heterogeneous genetic risk factors underlying them. I would hypothesize that what is shared within disorder families, such as the autism spectrum or the obsessive-compulsive disorder spectrum, are abnormalities in neural circuits that underlie different aspects of brain function, from cognition to emotion to behavioral control, and that these circuit abnormalities do not respect the narrow symptom checklists within the DSM.

The first DSM had many important strengths, but I would argue that part of what went wrong with it was a fairly arbitrary decision: the promulgation of a large number of disorders, despite the early state of the science, and the conceptualization of each disorder as a distinct category. That decision eschewed the possibility that some diagnoses are better represented in terms of quantifiable dimensions, much like the diagnoses of hypertension and diabetes, which are based on measurements on numerical scales.

These fundamental missteps would not have proven so problematic but for the human tendency to treat anything with a name as if it is real. Thus, a scientifically pioneering diagnostic system that should have been treated as a set of testable hypotheses was instead virtually set in stone. DSM categories play a controlling role in clinical communication, insurance reimbursement, regulatory approval of new treatments, grant reviews, and editorial policies of journals. As I have argued elsewhere, the excessive reliance on DSM categories, which are poor mirrors of nature, has limited the scope and thus the utility of scientific questions that could be asked. We now face a knotty problem: how to facilitate science so that DSM-6 does not emerge a decade or two from now a trivially revised descendant of DSM-III, but without disrupting the substantial clinical and administrative uses to which the DSM system is put.

I believe that the most plausible mechanism for repairing this plane while it is still flying is to give new attention to overarching families of disorders, sometimes called meta-structure. In previous editions of the DSM, the chapters were almost an afterthought compared with the individual disorders. It should be possible, without changing the criteria for specific diagnoses, to create chapters of disorders that co-occur at very high rates and that appear to share genetic risk factors based on family, twin, and molecular genetic studies.

This will not be possible for the entire DSM-5, but it would be possible for certain neurodevelopmental disorders, anxiety disorders, the obsessive-compulsive disorder spectrum, so-called externalizing or disruptive disorders (such as antisocial personality disorder and substance use disorders), and others. Scientists could then be invited by funding agencies and journals to be agnostic to the internal divisions within each large cluster, to ignore the over-narrow diagnostic categories. The resulting data could then yield a very different classification by the time the DSM-6 arrives.

Psychiatry has been overly optimistic about progress before, but I would predict that neurobiologically based biomarkers and other objective tests will emerge from current research, along with a greater appreciation of the role of neural circuits in the origins of mental disorders. I would also predict that discrete categories will give way, where appropriate, to quantifiable dimensions. At the very least, the science of mental disorders should be freed from the unintended cognitive shackles bequeathed by the DSM-III experiment.

Thursday, December 9, 2010

Can Psychological Trauma Be Inherited?

By Rick Nauert PhD
Senior News Editor, PsychCentral

An emerging topic of investigation looks to determine whether post-traumatic stress disorder (PTSD) can be passed to subsequent generations.

Scientists are studying groups with high rates of PTSD, such as the survivors of the Nazi death camps. The adjustment problems of survivors’ children — the so-called “second generation” — are a topic of study for researchers.

Studies have suggested that some symptoms or personality traits associated with PTSD may be more common in the second generation than in the general population.

It has been assumed that these transgenerational effects reflected the impact of PTSD upon the parent-child relationship rather than a trait passed biologically from parent to child.

However, Dr. Isabelle Mansuy and colleagues provide new evidence in the current issue of Biological Psychiatry that some aspects of the impact of trauma cross generations and are associated with epigenetic changes, i.e., the regulation of the pattern of gene expression, without changing the DNA sequence.

They found that early-life stress induced depressive-like behaviors and altered behavioral responses to aversive environments in mice.

Importantly, these behavioral alterations were also found in the offspring of males subjected to early stress even though the offspring were raised normally without any stress. In parallel, the profile of DNA methylation was altered in several genes in the germline (sperm) of the fathers, and in the brain and germline of their offspring.

“It is fascinating that clinical observations in humans have suggested the possibility that specific traits acquired during life and influenced by environmental factors may be transmitted across generations. It is even more challenging to think that when related to behavioral alterations, these traits could explain some psychiatric conditions in families,” said Dr. Mansuy.

“Our findings in mice provide a first step in this direction and suggest the intervention of epigenetic processes in such phenomenon.”

“The idea that traumatic stress responses may alter the regulation of genes in the germline cells in males means that these stress effects may be passed across generations. It is distressing to think that the negative consequences of exposure to horrible life events could cross generations,” commented Dr. John Krystal, editor of Biological Psychiatry.

“However, one could imagine that these types of responses might prepare the offspring to cope with hostile environments. Further, if environmental events can produce negative effects, one wonders whether the opposite pattern of DNA methylation emerges when offspring are reared in supportive environments.”

To erase a bad memory, first become a child

Editorial: Troops need to remember

IT ADDS new meaning to getting in touch with your inner child. Temporarily returning the brain to a child-like state could help permanently erase a specific traumatic memory. This could help people with post-traumatic stress disorder and phobias.

At the Society of Neuroscience conference in San Diego last month researchers outlined the ways in which they have managed to extinguish basic fear memories.

Most methods rely on a behavioral therapy called extinction, in which physicians repeatedly deliver threatening cues in safe environments in the hope of removing fearful associations. While this can alleviate symptoms, in adults the original fear memory still remains. This means it can potentially be revived in the future.

A clue to permanent erasure comes from research in infant mice. With them, extinction therapy completely erases the fear memory, which cannot be retrieved. Identifying the relevant brain changes in rodents between early infancy and the juvenile stage may help researchers recreate aspects of the child-like system and induce relapse-free erasure in people.

One of the most promising techniques takes advantage of a brief period in which the adult brain resembles that of an infant, in that it is malleable. The process of jogging a memory, called "reconsolidation", seems to make it malleable for a few hours. During this time, the memory can be adapted and even potentially deleted.

Daniela Schiller at New York University and her colleagues tested this theory by presenting volunteers with a blue square at the same time as administering a small electric shock. When the volunteers were subsequently shown the blue square alone, the team measured tiny changes in sweat production, a well-documented fear response.

A day later, Schiller reminded some of the volunteers of the fear memory just once by presenting them with both square and shock, making the memory active. During this window of re-consolidation, the researchers tried to manipulate the memory by repeatedly exposing the volunteers to the blue square alone.

These volunteers produced the sweat response significantly less a day later compared with those who were given extinction therapy without any reconsolidation (Nature, DOI: 10.1038/nature08637).

What's more, their memory loss really was permanent. Schiller later recalled a third of the volunteers from her original experiment. "A year after fear conditioning, those that had [only] extinction showed an elevated response to the square, but those with extinction during reconsolidation showed no fear response," she says.

The loss in infant mice of the ability to erase a fearful memory coincides with the appearance in the brain of the perineuronal net (PNN). This is a highly organised glycoprotein structure that surrounds small, connecting neurons in areas of the brain such as the amygdala, the area responsible for processing fear.

This points to a possible role for the PNN in protecting fear memories from erasure in the adult brain. Cyril Herry at the Magendie Neurocentre in Bordeaux, France, and colleagues reasoned that by destroying the PNN you might be able to return the system to an infant-like state. They gave both infant and juvenile rats fear conditioning followed by extinction therapy, then tested whether the fear could be retrieved at a later date. Like infant rats, juvenile rats with a destroyed PNN were not able to retrieve the memory.

Since the PNN can grow back, Herry suggests that in theory you could temporarily degrade the PNN in humans to permanently erase a specific traumatic memory without causing any long-term damage to memory.

"You would have to identify a potential source of trauma, like in the case of soldiers going to war," he says. "These results suggest that if you inject an enzyme to degrade the PNN before a traumatic event you would facilitate the erasure of the memory of that event afterwards using extinction therapy."

For those who already suffer from fear memories, Roger Clem at Johns Hopkins University School of Medicine in Maryland suggests focusing instead on the removal of calcium-permeable AMPA receptors from neurons in the amygdala - a key component of infant memory erasure. Encouraging their removal in adults may increase our ability to erase memories, he says.

"There is a group who do not respond [to traditional trauma therapy]," says Piers Bishop at the charity PTSD Resolution. "A drug approach to memory modification could be considered the humane thing to do sometimes."

Wednesday, October 27, 2010

Can Meditation Change Your Brain?

Contemplative neuroscientists believe it can

Posted on October 27, 2010
From CNN’s Dan Gilgoff

Can people strengthen the brain circuits associated with happiness and positive behavior, just as we’re able to strengthen muscles with exercise?

Richard Davidson, who for decades has practiced Buddhist-style meditation – a form of mental exercise, he says – insists that we can.

And Davidson, who has been meditating since visiting India as a Harvard grad student in the 1970s, has credibility on the subject beyond his own experience.

A trained psychologist based at the University of Wisconsin, Madison, he has become the leader of a relatively new field called contemplative neuroscience – the brain science of meditation.

Over the last decade, Davidson and his colleagues have produced scientific evidence for the theory that meditation – the ancient eastern practice of sitting, usually accompanied by focusing on certain objects - permanently changes the brain for the better.

“We all know that if you engage in certain kinds of exercise on a regular basis you can strengthen certain muscle groups in predictable ways,” Davidson says in his office at the University of Wisconsin, where his research team has hosted scores of Buddhist monks and other meditators for brain scans.

“Strengthening neural systems is not fundamentally different,” he says. “It’s basically replacing certain habits of mind with other habits.”

Contemplative neuroscientists say that making a habit of meditation can strengthen brain circuits responsible for maintaining concentration and generating empathy.

One recent study by Davidson’s team found that novice meditators stimulated their limbic systems – the brain’s emotional network – during the practice of compassion meditation, an ancient Tibetan Buddhist practice.

That’s no great surprise, given that compassion meditation aims to produce a specific emotional state of intense empathy, sometimes called “loving-kindness.”

But the study also found that expert meditators – monks with more than 10,000 hours of practice – showed significantly greater activation of their limbic systems. The monks appeared to have permanently changed their brains to be more empathetic.

An earlier study by some of the same researchers found that committed meditators experienced sustained changes in baseline brain function, meaning that they had changed the way their brains operated even outside of meditation.

These changes included ramped-up activation of a brain region thought to be responsible for generating positive emotions, called the left-sided anterior region. The researchers found this change in novice meditators who’d enrolled in a course in mindfulness meditation – a technique that borrows heavily from Buddhism – that lasted just eight weeks.

But most brain research around meditation is still preliminary, waiting to be corroborated by other scientists. Meditation’s psychological benefits and its use in treatments for conditions as diverse as depression and chronic pain are more widely acknowledged.

Serious brain science around meditation has emerged only in about the last decade, since the birth of functional MRI allowed scientists to begin watching the brain and monitoring its changes in relatively real time.

Beginning in the late 1990s, a University of Pennsylvania-based researcher named Andrew Newberg said that his brain scans of experienced meditators showed the prefrontal cortex – the area of the brain that houses attention – surging into overdrive during meditation while the brain region governing our orientation in time and space, called the superior parietal lobe, went dark.

Newberg said his findings explained why meditators are able to cultivate intense concentration while also describing feelings of transcendence during meditation.

But some scientists said Newberg was over-interpreting his brain scans. Others said he failed to specify the kind of meditation he was studying, making his studies impossible to reproduce. His popular books, like Why God Won’t Go Away, caused more eye-rolling among neuroscientists, who said he hyped his findings to goose sales.

“It caused mainstream scientists to say that the only work that has been done in the field is of terrible quality,” says Alasdair Coles, a lecturer in neurology at England’s University of Cambridge.

Newberg, now at Thomas Jefferson University and Hospital in Philadelphia, stands by his research.

And contemplative neuroscience has gained more credibility in the scientific community since his early scans.

One sign of that is increased funding from the National Institutes of Health, which has helped establish new contemplative science research centers at Stanford University, Emory University, and the University of Wisconsin, where the world’s first brain imaging lab with a meditation room next door is now under construction.

The NIH could not provide numbers on how much it gives specifically to meditation brain research but its grants in complementary and alternative medicine – which encompass many meditation studies – have risen from around $300 million in 2007 to an estimated $541 million in 2011.

“The original investigations by people like Davidson in the 1990s were seen as intriguing, but it took some time to be convinced that brain processes were really changing during meditation,” says Josephine Briggs, Director of the NIH’s National Center for Complementary and Alternative Medicine.

Most studies so far have examined so-called focused-attention meditation, in which the practitioner concentrates on a particular subject, such as the breath. The meditator monitors the quality of attention and, when it drifts, returns attention to the object.

Over time, practitioners are supposed to find it easier to sustain attention during and outside of meditation.

In a 2007 study, Davidson compared the attentional abilities of novice meditators to experts in the Tibetan Buddhist tradition. Participants in both groups were asked to practice focused-attention meditation on a fixed dot on a screen while researchers ran fMRI scans of their brains.

To challenge the participants’ attentional abilities, the scientists interrupted the meditations with distracting sounds.

The brain scans found that both experienced and novice meditators activated a network of attention-related regions of the brain during meditation. But the experienced meditators showed more activation in some of those regions.

The inexperienced meditators, meanwhile, showed increased activation in brain regions that have been shown to negatively correlate with sustaining attention. Experienced meditators were better able to activate their attentional networks to maintain concentration on the dot. They had, the study suggested, changed their brains.

The fMRI scans also showed that experienced meditators had less neural response to the distracting noises that interrupted the meditation.

In fact, the scans found that the more hours of experience a meditator had, the less active his or her emotional networks were during the distracting sounds, making it easier to focus.

More recently, contemplative neuroscience has turned toward compassion meditation, which involves generating empathy through objectless awareness; practitioners call it non-referential compassion meditation.

New neuroscientific interest in the practice comes largely at the urging of the Dalai Lama, the spiritual and political leader of Tibetan Buddhists, for whom compassion meditation is a time-worn tradition.

The Dalai Lama has arranged for Tibetan monks to travel to American universities for brain scans and has spoken at the annual meeting of the Society for Neuroscience, the world’s largest gathering of brain scientists.

A religious leader, the Dalai Lama has said he supports contemplative neuroscience even though scientists are stripping meditation of its Buddhist roots, treating it purely as a mental exercise that more or less anyone can do.

“This is not a project about religion,” says Davidson. “Meditation is mental activity that could be understood in secular terms.”

Still, the nascent field faces challenges. Scientists have scanned just a few hundred brains on meditation to date, which makes for a pretty small research sample. And some scientists say researchers are overeager to use brain science to prove that meditation “works.”

“This is a field that has been populated by true believers,” says Emory University scientist Charles Raison, who has studied meditation’s effect on the immune system. “Many of the people doing this research are trying to prove scientifically what they already know from experience, which is a major flaw.”

But Davidson says that other types of scientists also have deep personal interest in what they’re studying. And he argues that that’s a good thing.

“There’s a cadre of grad students and post docs who’ve found personal value in meditation and have been inspired to study it scientifically,” Davidson says. “These are people at the very best universities and they want to do this for a career.

“In ten years,” he says, “we’ll find that meditation research has become mainstream.”

Monday, October 25, 2010

Morality: My brain made me do it

Understanding how morality is linked to brain function will require us to rethink our justice system, says Martha J. Farah

By Martha J. Farah, 22 October 2010

AS SCIENCE exposes the gears and sprockets of moral cognition, how will it affect our laws and ethical norms?

We have long known that moral character is related to brain function. One remarkable demonstration of this was provided by Phineas Gage, a 19th-century construction foreman injured in an explosion. After a large iron rod was blown through his head, destroying bits of his prefrontal cortex, Gage was transformed from a conscientious, dependable worker to a selfish and erratic character, described by some as antisocial.

Recent research has shown that psychopaths, who behave antisocially and without remorse, differ from the rest of us in several brain regions associated with self-control and moral cognition (Behavioral Sciences and the Law, vol 26, p 7). Even psychologically normal people who merely score higher in psychopathic traits show distinctive differences in their patterns of brain activation when contemplating moral decisions (Molecular Psychiatry, vol 14, p 5).

The idea that moral behaviour is dependent on brain function presents a challenge to our usual ways of thinking about moral responsibility. A remorseless murderer is unlikely to win much sympathy, but show us that his cold-blooded cruelty is a neuropsychological impairment and we are apt to hold him less responsible for his actions. Presumably for this reason, fMRI evidence was introduced by the defence in a recent murder trial to show that the perpetrator had differences in various brain regions which they argued reduced his culpability. Indeed, neuroscientific evidence has been found to exert a powerful influence over decisions by judges and juries to find defendants "not guilty by reason of insanity" (Behavioral Sciences and the Law, vol 26, p 85).

Outside the courtroom, people tend to judge the behaviour of others less harshly when it is explained in light of physiological, rather than psychological processes (Ethics and Behavior, vol 15, p 139). This is as true for serious moral transgressions, like killing, as for behaviours that are merely socially undesirable, like overeating. The decreased moral stigma surrounding drug addiction is undoubtedly due in part to our emerging view of addiction as a brain disease.

What about our own actions? Might an awareness of the neural causes of behaviour influence our own behaviour? Perhaps so. In a 2008 study, researchers asked subjects to read a passage on the incompatibility of free will and neuroscience from Francis Crick's book The Astonishing Hypothesis (Simon and Schuster, 1995). This included the statement, " 'You', your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules." The researchers found that these people were then more likely to cheat on a computerised test than those who had read an unrelated passage (Psychological Science, vol 19, p 49).

So will the field of moral neuroscience change our laws, ethics and mores? The growing use of brain scans in courtrooms, societal precedents like the destigmatisation of addiction, and studies like those described above seem to say the answer is yes. And this makes sense. For laws and mores to persist, they must accord with our understanding of behaviour. For example, we know that young children have limited moral understanding and self-control, so we do not hold them criminally accountable for their behaviour. To the extent that neuroscience changes our understanding of human behaviour - and misbehaviour - it seems destined to alter society's standards of morality.

Martha J. Farah is the director of the Center for Neuroscience and Society at the University of Pennsylvania in Philadelphia. Her new book is Neuroethics (MIT Press, 2010)

Thursday, October 21, 2010

"Wet Computer" Literally Simulates Brain Cells

Many next-gen supercomputers try to imitate how brain cells communicate and build digital versions of neural networks. Now the BBC brings word of the most ambitious project yet -- a "wet computer" that will literally simulate neurons and signal processing on the chemical level.

By Jeremy Hsu, Popular Science

The $2.6 million effort aims to do what existing computers can't, including control tiny molecular robots or direct chemical assembly of nanogears. It may also aid the rise of intelligent drugs that react smartly to chemical signals from the human body.

The biologically inspired computer does not harness living cells. Instead, it will use chemical analogues that spontaneously form coatings similar to biological cell membranes and can even pass signals from one chemical cell to another.

Such chemical cells can also enter a "refractory period" after receiving a chemical signal. No outside signals can influence a cell during that period, so this self-regulating behaviour prevents an unchecked chain reaction from cascading through many connected cells. That level of organization suggests such chemical cells could form networks that function like a brain.

Wednesday, October 20, 2010

Extinguishing Fear

Erasing frightening memories may be possible during a brief period after recollection.

By Molly Webster | Thursday, April 22, 2010

When we learn something, the event must be imprinted on our brain for it to become a memory, a phenomenon known as consolidation. In turn, every time we retrieve a memory, it can be reconsolidated, meaning more information can be added to it. Now psychologist Liz Phelps of New York University and her team report using this "reconsolidation window" as a drug-free way to erase fearful memories in humans. Although techniques for overcoming fearful memories have existed for some time, these methods do not erase the initial, fearful memory. Rather, they leave participants with two memories, one scary and one not, either of which may be called up when a trigger presents itself. But Phelps's new experiment, which confirms earlier studies in rats, suggests that when a memory is changed during the so-called reconsolidation window, the original one is erased.

Using a mild electric shock, Phelps's team taught 65 participants to fear certain colored squares as they appeared on a screen. Normally, to overcome this type of fear, researchers would show participants the feared squares again without delivering a shock, in an effort to create a safe memory of the squares. Phelps's group did that, but in some cases the investigators asked subjects to contemplate their fearful memory for at least 10 minutes before they saw the squares again. These participants actually replaced their old fearful memory with a new, safe one. When they saw the squares paired with shocks again up to a year later, they were slow to relearn their fear of the squares. In contrast, subjects who created a safe memory of the squares without first contemplating their fearful memory for 10 minutes immediately reactivated their older, fearful memory when they saw a square and received a shock.

The researchers suspect that after a memory is called up, it takes about 10 minutes before the window of opportunity opens for the memory to be reconsolidated, or changed, in a meaningful way, Phelps explains. "But there is some combination of spacing and timing that we need to figure out," she adds; the scientists do not yet know how long the window lasts. Even more intriguing is the role contemplation plays: does sitting and thinking about the fearful memory make it more malleable than simply recalling it? Although questions remain, Phelps and her colleagues hope their work will eventually help people with debilitating phobias or perhaps even post-traumatic stress disorder.