A Scientific Study of the Human Mind and the Understanding of Human Behavior through the analysis and research of Meta Psychology.
Wednesday, December 29, 2010
Slipping the 'Cognitive Straitjacket' of Psychiatric Diagnosis
By Steven E. Hyman
It can fairly be said that modern psychiatric diagnosis was “born” in a 1970 paper on schizophrenia.
The authors, Washington University psychiatry professors Eli Robins and Samuel B. Guze, rejected the murky psychoanalytic diagnostic formulations of their time. Instead, they embraced a medical model inspired by the careful 19th-century observational work of Emil Kraepelin, long overlooked during the mid-20th-century dominance of Freudian theory. Mental disorders were now to be seen as distinct categories, much as different bacterial and viral infections produce characteristic diseases that can be seen as distinct “natural kinds.”
Disorders, Robins and Guze argued, should be defined based on phenomenology: clinical descriptions validated by long-term follow-up to demonstrate the stability of the diagnosis over time. With scientific progress, they expected fuller validation of mental disorders to derive from laboratory findings and studies of familial transmission.
This descriptive approach to psychiatric diagnosis -- based on lists of symptoms, their timing of onset, and the duration of illness -- undergirded the third edition of the American Psychiatric Association’s widely disseminated and highly influential Diagnostic and Statistical Manual of Mental Disorders (DSM-III), published in 1980. Since then, the original DSM-III has yielded two relatively conservative revisions, and the DSM-5 is now under construction. Sadly, it is clear that the optimistic predictions of Robins and Guze have not been realized.
Four decades after their seminal paper, there are still no widely validated laboratory tests for any common mental illness. Worse, an enormous number of family and genetic studies have not only failed to validate the major DSM disorders as natural kinds, but instead have suggested that they are more akin to chimaeras. Unfortunately for the multitudes stricken with mental illness, the brain has not given up its secrets easily.
That is not to say that we have made no progress. DNA research has begun to illuminate the complex genetics of mental illness. But what it tells us, I would argue, is that, at least for the purposes of research, the current DSM diagnoses do not work. They are too narrow, too rigid, altogether too limited. Reorganization of the DSM is hardly a panacea, but science cannot thrive if investigators are forced into a cognitive straitjacket.
Before turning to the scientific evidence of fundamental problems with the DSM, let’s first take note of an important problem that the classification has produced for clinicians and patients alike: An individual who receives a single DSM diagnosis very often meets criteria for multiple additional diagnoses (so-called co-occurrence or “comorbidity”), and the pattern of diagnoses often changes over the lifespan. Thus, for example, children and adolescents with a diagnosis of an anxiety disorder often manifest major depression in their later teens or twenties. Individuals with autism spectrum disorders often receive additional diagnoses of attention deficit hyperactivity disorder, obsessive-compulsive disorder, and tic disorders.
Of course, there are perfectly reasonable explanations for comorbidity. One disorder could be a risk factor for another just as tobacco smoking is a risk factor for lung cancer. Alternatively, common diseases in a population could co-occur at random. The problem with the DSM is that many diagnoses co-occur at frequencies far higher than predicted by their population prevalence, and the timing of co-occurrence suggests that one disorder is not likely to be causing the second. For patients, it can be confusing and demoralizing to receive multiple and shifting diagnoses; this phenomenon certainly does not increase confidence in their caregivers.
Family studies and genetics shed light on the apparently high rate of co-occurrence of mental disorders and suggest that it is an artifact of the DSM itself. Genetic studies focused on finding variations in DNA sequences associated with mental disorders have repeatedly found shared genetic risks for both schizophrenia and bipolar disorder. Other studies have found different sequence variations within the same genes to be associated with schizophrenia and autism spectrum disorders.
An older methodology, the study of twins, continues to provide important insight into this muddy genetic picture. Twin studies generally compare the concordance for a disease or other trait within monozygotic twin pairs, who share 100% of their DNA, versus concordance within dizygotic twin pairs, who share on average 50% of their DNA. In a recent article in the American Journal of Psychiatry, a Swedish team of researchers led by Paul Lichtenstein studied 7,982 twin pairs. They found a heritability of 80% for autism spectrum disorders, but also found substantial sharing of genetic risk factors among autism, attention deficit hyperactivity disorder, developmental coordination disorder, tic disorders, and learning disorders.
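To make the twin logic concrete, the classical Falconer approximation estimates heritability as twice the difference between the monozygotic and dizygotic twin correlations. The sketch below is only an illustration of that arithmetic using hypothetical correlation values; studies such as Lichtenstein's typically fit more elaborate structural-equation (ACE) models, and shared risk across diagnoses is assessed with cross-twin, cross-trait extensions that this simple formula does not capture.

```python
# Minimal sketch of the classical twin-design arithmetic (Falconer's formula).
# The correlations below are hypothetical, chosen only to show how a
# heritability estimate near 80% could arise; the actual study used more
# sophisticated structural-equation (ACE) modeling.

def falconer_decomposition(r_mz, r_dz):
    """Split trait variance into additive genetic (A), shared-environment (C),
    and non-shared-environment (E) components from twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability: MZ pairs share ~100% of DNA, DZ ~50%
    c = 2 * r_dz - r_mz     # shared (family) environment
    e = 1 - r_mz            # non-shared environment and measurement error
    return {"A (heritability)": round(a, 2), "C": round(c, 2), "E": round(e, 2)}

# Hypothetical values: MZ twins correlate 0.85 on a trait, DZ twins 0.45.
print(falconer_decomposition(r_mz=0.85, r_dz=0.45))
# {'A (heritability)': 0.8, 'C': 0.05, 'E': 0.15}
```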
In another recent article in the American Journal of Psychiatry, Marina Bornovalova and her University of Minnesota colleagues studied 1,069 pairs of 11-year-old twins and their biological parents. They found that parent-child resemblance was accounted for by shared genetic risk factors: in parents, they gave rise to conduct disorder, adult antisocial behavior, alcohol dependence, and drug dependence; in the 11-year-olds these shared factors were manifest as attention deficit hyperactivity disorder, conduct disorder, and oppositional-defiant disorder. (Strikingly, attention deficit hyperactivity disorder appears in both the autism spectrum cluster and the disruptive disorder cluster.)
These and many other studies call into question two of the key validators of descriptive psychiatry championed by Robins and Guze. First, DSM disorders do not breed true. What is transmitted across generations is not discrete DSM categories but, perhaps, complex patterns of risk that may manifest as one or more DSM disorders within a related cluster. Second, instead of long-term stability, symptom patterns often change over the life course, producing not only multiple co-occurring diagnoses but also different diagnoses at different times of life.
How can these assertions be explained? In fairness to Robins and Guze, they could not have imagined the extraordinary genetic complexity that produces the risk of many common human ills, including mental disorders. What this means is that common mental disorders appear to be due to different combinations of genes in different families, acting in combination with epigenetic factors -- changes in gene expression that occur without any change in the underlying DNA sequence -- and non-genetic factors.
In some families, genetic risk for mental disorders seems to be due to many, perhaps hundreds, of small variations in DNA sequence -- often single “letters” in the DNA code. Each may add only a very small increment of risk, but in infelicitous combinations these variants can lead to illness. In other families, there may be background genetic risk, but the coup de grace arrives in the form of a relatively large DNA deletion, duplication, or rearrangement. Such “copy number variants” may occur de novo in apparently sporadic cases of schizophrenia or autism.
In sum, it appears that no gene is either necessary or sufficient for risk of a common mental disorder. Moreover, a given set of genetic risks may produce different symptoms depending on broad genetic background, early developmental influences, life stage, or diverse environmental factors.
The complex nature of genetic risk offers a possible explanation for comorbidity: what the DSM treats as discrete disorders, categorically separate from health and from each other, are not, in fact, discrete. Instead, schizophrenia, autism-spectrum disorders, certain anxiety disorders, obsessive-compulsive disorder, attention deficit hyperactivity disorder, mood disorders, and others represent families of related disorders with heterogeneous genetic risk factors underlying them. I would hypothesize that what is shared within disorder families, such as the autism spectrum or the obsessive-compulsive disorder spectrum, are abnormalities in neural circuits that underlie different aspects of brain function, from cognition to emotion to behavioral control, and that these circuit abnormalities do not respect the narrow symptom checklists within the DSM.
DSM-III had many important strengths, but I would argue that part of what went wrong with it was a fairly arbitrary decision: the promulgation of a large number of disorders, despite the early state of the science, and the conceptualization of each disorder as a distinct category. That decision eschewed the possibility that some diagnoses are better represented in terms of quantifiable dimensions, much like the diagnoses of hypertension and diabetes, which are based on measurements on numerical scales.
These fundamental missteps would not have proven so problematic but for the human tendency to treat anything with a name as if it is real. Thus, a scientifically pioneering diagnostic system that should have been treated as a set of testable hypotheses was instead virtually set in stone. DSM categories play a controlling role in clinical communication, insurance reimbursement, regulatory approval of new treatments, grant reviews, and editorial policies of journals. As I have argued elsewhere, the excessive reliance on DSM categories, which are poor mirrors of nature, has limited the scope and thus the utility of scientific questions that could be asked. We now face a knotty problem: how to facilitate science so that DSM-6 does not emerge a decade or two from now as a trivially revised descendant of DSM-III, but without disrupting the substantial clinical and administrative uses to which the DSM system is put.
I believe that the most plausible mechanism for repairing this plane while it is still flying is to give new attention to overarching families of disorders, sometimes called meta-structure. In previous editions of the DSM, the chapters were almost an afterthought compared with the individual disorders. It should be possible, without changing the criteria for specific diagnoses, to create chapters of disorders that co-occur at very high rates and that appear to share genetic risk factors based on family, twin, and molecular genetic studies.
This will not be possible for the entire DSM-5, but it would be possible for certain neurodevelopmental disorders, anxiety disorders, the obsessive-compulsive disorder spectrum, so-called externalizing or disruptive disorders (such as antisocial personality disorder and substance use disorders), and others. Scientists could then be invited by funding agencies and journals to be agnostic to the internal divisions within each large cluster, to ignore the over-narrow diagnostic categories. The resulting data could then yield a very different classification by the time the DSM-6 arrives.
Psychiatry has been overly optimistic about progress before, but I would predict that neurobiologically based biomarkers and other objective tests will emerge from current research, along with a greater appreciation of the role of neural circuits in the origins of mental disorders. I would also predict that discrete categories will give way, where appropriate, to quantifiable dimensions. At the very least, the science of mental disorders should be freed from the unintended cognitive shackles bequeathed by the DSM-III experiment.
Thursday, December 9, 2010
Can Psychological Trauma Be Inherited?
Senior News Editor, PsychCentral
An emerging topic of investigation looks to determine whether post-traumatic stress disorder (PTSD) can be passed to subsequent generations.
Scientists are studying groups with high rates of PTSD, such as the survivors of the Nazi death camps. The adjustment problems of survivors’ children — the so-called “second generation” — are a topic of study for researchers.
Studies have suggested that some symptoms or personality traits associated with PTSD may be more common in the second generation than in the general population.
It has been assumed that these transgenerational effects reflected the impact of PTSD upon the parent-child relationship rather than a trait passed biologically from parent to child.
However, Dr. Isabelle Mansuy and colleagues provide new evidence in the current issue of Biological Psychiatry that some aspects of the impact of trauma cross generations and are associated with epigenetic changes, i.e., the regulation of the pattern of gene expression, without changing the DNA sequence.
They found that early-life stress induced depressive-like behaviors and altered behavioral responses to aversive environments in mice.
Importantly, these behavioral alterations were also found in the offspring of males subjected to early stress even though the offspring were raised normally without any stress. In parallel, the profile of DNA methylation was altered in several genes in the germline (sperm) of the fathers, and in the brain and germline of their offspring.
“It is fascinating that clinical observations in humans have suggested the possibility that specific traits acquired during life and influenced by environmental factors may be transmitted across generations. It is even more challenging to think that when related to behavioral alterations, these traits could explain some psychiatric conditions in families,” said Dr. Mansuy.
“Our findings in mice provide a first step in this direction and suggest the intervention of epigenetic processes in such phenomenon.”
“The idea that traumatic stress responses may alter the regulation of genes in the germline cells in males means that these stress effects may be passed across generations. It is distressing to think that the negative consequences of exposure to horrible life events could cross generations,” commented Dr. John Krystal, editor of Biological Psychiatry.
“However, one could imagine that these types of responses might prepare the offspring to cope with hostile environments. Further, if environmental events can produce negative effects, one wonders whether the opposite pattern of DNA methylation emerges when offspring are reared in supportive environments.”
To erase a bad memory, first become a child
It adds new meaning to getting in touch with your inner child. Temporarily returning the brain to a child-like state could help permanently erase a specific traumatic memory. This could help people with post-traumatic stress disorder and phobias.
At the Society for Neuroscience conference in San Diego last month, researchers outlined the ways in which they have managed to extinguish basic fear memories.
Most methods rely on a behavioral therapy called extinction, in which physicians repeatedly deliver threatening cues in safe environments in the hope of removing fearful associations. While this can alleviate symptoms, in adults the original fear memory still remains. This means it can potentially be revived in the future.
A clue to permanent erasure comes from research in infant mice, in which extinction therapy completely erases the fear memory so that it cannot be retrieved. Identifying the relevant brain changes in rodents between early infancy and the juvenile stage may help researchers recreate aspects of the child-like system and induce relapse-free erasure in people.
One of the most promising techniques takes advantage of a brief period in which the adult brain resembles that of an infant, in that it is malleable. The process of jogging a memory, called "reconsolidation", seems to make it malleable for a few hours. During this time, the memory can be adapted and even potentially deleted.
Daniela Schiller at New York University and her colleagues tested this theory by presenting volunteers with a blue square at the same time as administering a small electric shock. When the volunteers were subsequently shown the blue square alone, the team measured tiny changes in sweat production, a well-documented fear response.
A day later, Schiller reminded some of the volunteers of the fear memory just once by showing them the blue square without a shock, making the memory active. During this window of reconsolidation, the researchers tried to manipulate the memory by repeatedly exposing the volunteers to the blue square alone.
These volunteers produced the sweat response significantly less a day later compared with those who were given extinction therapy without any reconsolidation (Nature, DOI: 10.1038/nature08637).
What's more, their memory loss really was permanent. Schiller later recalled a third of the volunteers from her original experiment. "A year after fear conditioning, those that had [only] extinction showed an elevated response to the square, but those with extinction during reconsolidation showed no fear response," she says.
The loss in infant mice of the ability to erase a fearful memory coincides with the appearance in the brain of the perineuronal net (PNN). This is a highly organised glycoprotein structure that surrounds small, connecting neurons in areas of the brain such as the amygdala, the area responsible for processing fear.
This points to a possible role for the PNN in protecting fear memories from erasure in the adult brain. Cyril Herry at the Magendie Neurocentre in Bordeaux, France, and colleagues reasoned that by destroying the PNN you might be able to return the system to an infant-like state. They gave both infant and juvenile rats fear conditioning followed by extinction therapy, then tested whether the fear could be retrieved at a later date. Like infant rats, juvenile rats with a destroyed PNN were not able to retrieve the memory.
Since the PNN can grow back, Herry suggests that in theory you could temporarily degrade the PNN in humans to permanently erase a specific traumatic memory without causing any long-term damage to memory.
"You would have to identify a potential source of trauma, like in the case of soldiers going to war," he says. "These results suggest that if you inject an enzyme to degrade the PNN before a traumatic event you would facilitate the erasure of the memory of that event afterwards using extinction therapy."
For those who already suffer from fear memories, Roger Clem at Johns Hopkins University School of Medicine in Maryland suggests focusing instead on the removal of calcium-permeable AMPA receptors from neurons in the amygdala - a key component of infant memory erasure. Encouraging their removal in adults may increase our ability to erase memories, he says.
"There is a group who do not respond [to traditional trauma therapy]," says Piers Bishop at the charity PTSD Resolution. "A drug approach to memory modification could be considered the humane thing to do sometimes."
Wednesday, October 27, 2010
Can Meditation Change Your Brain?
From CNN’s Dan Gilgoff
Can people strengthen the brain circuits associated with happiness and positive behavior, just as we’re able to strengthen muscles with exercise?
Richard Davidson, who for decades has practiced Buddhist-style meditation – a form of mental exercise, he says – insists that we can.
And Davidson, who has been meditating since visiting India as a Harvard grad student in the 1970s, has credibility on the subject beyond his own experience.
A trained psychologist based at the University of Wisconsin, Madison, he has become the leader of a relatively new field called contemplative neuroscience – the brain science of meditation.
Over the last decade, Davidson and his colleagues have produced scientific evidence for the theory that meditation – the ancient Eastern practice of sitting, usually accompanied by focusing on certain objects – permanently changes the brain for the better.
“We all know that if you engage in certain kinds of exercise on a regular basis you can strengthen certain muscle groups in predictable ways,” Davidson says in his office at the University of Wisconsin, where his research team has hosted scores of Buddhist monks and other meditators for brain scans.
“Strengthening neural systems is not fundamentally different,” he says. “It’s basically replacing certain habits of mind with other habits.”
Contemplative neuroscientists say that making a habit of meditation can strengthen brain circuits responsible for maintaining concentration and generating empathy.
One recent study by Davidson’s team found that novice meditators stimulated their limbic systems – the brain’s emotional network – during the practice of compassion meditation, an ancient Tibetan Buddhist practice.
That’s no great surprise, given that compassion meditation aims to produce a specific emotional state of intense empathy, sometimes called “loving-kindness.”
But the study also found that expert meditators – monks with more than 10,000 hours of practice – showed significantly greater activation of their limbic systems. The monks appeared to have permanently changed their brains to be more empathetic.
An earlier study by some of the same researchers found that committed meditators experienced sustained changes in baseline brain function, meaning that they had changed the way their brains operated even outside of meditation.
These changes included ramped-up activation of a brain region thought to be responsible for generating positive emotions, called the left-sided anterior region. The researchers found this change in novice meditators who’d enrolled in a course in mindfulness meditation – a technique that borrows heavily from Buddhism – that lasted just eight weeks.
But most brain research around meditation is still preliminary, waiting to be corroborated by other scientists. Meditation’s psychological benefits and its use in treatments for conditions as diverse as depression and chronic pain are more widely acknowledged.
Serious brain science around meditation has emerged only in about the last decade, since the birth of functional MRI allowed scientists to begin watching the brain and monitoring its changes in something close to real time.
Beginning in the late 1990s, a University of Pennsylvania-based researcher named Andrew Newberg said that his brain scans of experienced meditators showed the prefrontal cortex – the area of the brain that houses attention – surging into overdrive during meditation while the brain region governing our orientation in time and space, called the superior parietal lobe, went dark.
Newberg said his findings explained why meditators are able to cultivate intense concentration while also describing feelings of transcendence during meditation.
But some scientists said Newberg was over-interpreting his brain scans. Others said he failed to specify the kind of meditation he was studying, making his studies impossible to reproduce. His popular books, like Why God Won’t Go Away, caused more eye-rolling among neuroscientists, who said he hyped his findings to goose sales.
“It caused mainstream scientists to say that the only work that has been done in the field is of terrible quality,” says Alasdair Coles, a lecturer in neurology at England’s University of Cambridge.
Newberg, now at Thomas Jefferson University and Hospital in Philadelphia, stands by his research.
And contemplative neuroscience has gained more credibility in the scientific community since his early scans.
One sign of that is increased funding from the National Institutes of Health, which has helped establish new contemplative science research centers at Stanford University, Emory University, and the University of Wisconsin, where the world’s first brain imaging lab with a meditation room next door is now under construction.
The NIH could not provide numbers on how much it gives specifically to meditation brain research, but its grants in complementary and alternative medicine – which encompass many meditation studies – have risen from around $300 million in 2007 to an estimated $541 million in 2011.
“The original investigations by people like Davidson in the 1990s were seen as intriguing, but it took some time to be convinced that brain processes were really changing during meditation,” says Josephine Briggs, Director of the NIH’s National Center for Complementary and Alternative Medicine.
Most studies so far have examined so-called focused-attention meditation, in which the practitioner concentrates on a particular subject, such as the breath. The meditator monitors the quality of attention and, when it drifts, returns attention to the object.
Over time, practitioners are supposed to find it easier to sustain attention during and outside of meditation.
In a 2007 study, Davidson compared the attentional abilities of novice meditators to experts in the Tibetan Buddhist tradition. Participants in both groups were asked to practice focused-attention meditation on a fixed dot on a screen while researchers ran fMRI scans of their brains.
To challenge the participants’ attentional abilities, the scientists interrupted the meditations with distracting sounds.
The brain scans found that both experienced and novice meditators activated a network of attention-related regions of the brain during meditation. But the experienced meditators showed more activation in some of those regions.
The inexperienced meditators, meanwhile, showed increased activation in brain regions that have been shown to negatively correlate with sustaining attention. Experienced meditators were better able to activate their attentional networks to maintain concentration on the dot. They had, the study suggested, changed their brains.
The fMRI scans also showed that experienced meditators had less neural response to the distracting noises that interrupted the meditation.
In fact, the more hours of experience a meditator had, the scans found, the less active his or her emotional networks were during the distracting sounds, making it easier to maintain focus.
More recently, contemplative neuroscience has turned toward compassion meditation, which involves generating empathy through objectless awareness; practitioners call it non-referential compassion meditation.
New neuroscientific interest in the practice comes largely at the urging of the Dalai Lama, the spiritual and political leader of Tibetan Buddhists, for whom compassion meditation is a time-worn tradition.
The Dalai Lama has arranged for Tibetan monks to travel to American universities for brain scans and has spoken at the annual meeting of the Society for Neuroscience, the world’s largest gathering of brain scientists.
A religious leader, the Dalai Lama has said he supports contemplative neuroscience even though scientists are stripping meditation of its Buddhist roots, treating it purely as a mental exercise that more or less anyone can do.
“This is not a project about religion,” says Davidson. “Meditation is mental activity that could be understood in secular terms.”
Still, the nascent field faces challenges. Scientists have scanned just a few hundred brains on meditation to date, which makes for a pretty small research sample. And some scientists say researchers are overeager to use brain science to prove that meditation “works.”
“This is a field that has been populated by true believers,” says Emory University scientist Charles Raison, who has studied meditation’s effect on the immune system. “Many of the people doing this research are trying to prove scientifically what they already know from experience, which is a major flaw.”
But Davidson says that other types of scientists also have deep personal interest in what they’re studying. And he argues that that’s a good thing.
“There’s a cadre of grad students and post docs who’ve found personal value in meditation and have been inspired to study it scientifically,” Davidson says. “These are people at the very best universities and they want to do this for a career.
“In ten years,” he says, “we’ll find that meditation research has become mainstream.”
Monday, October 25, 2010
Morality: My brain made me do it
By Martha J. Farah, 22 October 2010
AS SCIENCE exposes the gears and sprockets of moral cognition, how will it affect our laws and ethical norms?
We have long known that moral character is related to brain function. One remarkable demonstration of this was provided by Phineas Gage, a 19th-century construction foreman injured in an explosion. After a large iron rod was blown through his head, destroying bits of his prefrontal cortex, Gage was transformed from a conscientious, dependable worker to a selfish and erratic character, described by some as antisocial.
Recent research has shown that psychopaths, who behave antisocially and without remorse, differ from the rest of us in several brain regions associated with self-control and moral cognition (Behavioral Sciences and the Law, vol 26, p 7). Even psychologically normal people who merely score higher in psychopathic traits show distinctive differences in their patterns of brain activation when contemplating moral decisions (Molecular Psychiatry, vol 14, p 5).
The idea that moral behaviour is dependent on brain function presents a challenge to our usual ways of thinking about moral responsibility. A remorseless murderer is unlikely to win much sympathy, but show us that his cold-blooded cruelty is a neuropsychological impairment and we are apt to hold him less responsible for his actions. Presumably for this reason, fMRI evidence was introduced by the defence in a recent murder trial to show that the perpetrator had differences in various brain regions which they argued reduced his culpability. Indeed, neuroscientific evidence has been found to exert a powerful influence over decisions by judges and juries to find defendants "not guilty by reason of insanity" (Behavioral Sciences and the Law, vol 26, p 85).
Outside the courtroom, people tend to judge the behaviour of others less harshly when it is explained in light of physiological, rather than psychological processes (Ethics and Behavior, vol 15, p 139). This is as true for serious moral transgressions, like killing, as for behaviours that are merely socially undesirable, like overeating. The decreased moral stigma surrounding drug addiction is undoubtedly due in part to our emerging view of addiction as a brain disease.
What about our own actions? Might an awareness of the neural causes of behaviour influence our own behaviour? Perhaps so. In a 2008 study, researchers asked subjects to read a passage on the incompatibility of free will and neuroscience from Francis Crick's book The Astonishing Hypothesis (Simon and Schuster, 1995). This included the statement, " 'You', your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules." The researchers found that these people were then more likely to cheat on a computerised test than those who had read an unrelated passage (Psychological Science, vol 19, p 49).
So will the field of moral neuroscience change our laws, ethics and mores? The growing use of brain scans in courtrooms, societal precedents like the destigmatisation of addiction, and studies like those described above seem to say the answer is yes. And this makes sense. For laws and mores to persist, they must accord with our understanding of behaviour. For example, we know that young children have limited moral understanding and self-control, so we do not hold them criminally accountable for their behaviour. To the extent that neuroscience changes our understanding of human behaviour - and misbehaviour - it seems destined to alter society's standards of morality.
Martha J. Farah is the director of the Center for Neuroscience and Society at the University of Pennsylvania in Philadelphia. Her new book is Neuroethics (MIT Press, 2010)
Thursday, October 21, 2010
"Wet Computer" Literally Simulates Brain Cells
By Jeremy Hsu, Popular Science
The $2.6 million “wet computer” effort aims to do what existing computers can’t, including controlling tiny molecular robots or directing the chemical assembly of nanogears. It may also aid the rise of intelligent drugs that react smartly to chemical signals from the human body.
The biologically inspired computer does not harness living cells. Instead, it will use chemical versions of cells that still spontaneously form coatings similar to biological cell walls, and can even pass signals between the chemical cells.
Such chemical cells can also undergo a "refractory period" after receiving a chemical signal. No outside signals can influence the cells during that period, and so the self-regulating system prevents an unchecked chain reaction from triggering many connected cells. That level of organization means that such chemical cells could form networks that function like a brain.
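As a rough intuition for why a refractory period prevents an unchecked chain reaction, the toy simulation below passes signals through a small random network of "cells" that ignore input for a few steps after firing. It is purely illustrative: the network size, connectivity, and update rule are invented for this sketch and are not taken from the project described above.

```python
# Toy illustration (not the actual project): cells that become refractory
# after firing keep a single trigger from cascading through the whole network.

import random

random.seed(1)

N_CELLS = 20
REFRACTORY_STEPS = 3  # steps during which a cell ignores incoming signals

# Each cell signals three randomly chosen other cells.
neighbors = {
    i: random.sample([j for j in range(N_CELLS) if j != i], 3)
    for i in range(N_CELLS)
}

refractory = [0] * N_CELLS  # countdown until each cell may fire again
active = {0}                # trigger a single cell to start

for step in range(10):
    # Cells that just fired enter their refractory period.
    for cell in active:
        refractory[cell] = REFRACTORY_STEPS
    # Signals propagate only to neighbors that are not refractory.
    next_active = {nb for cell in active for nb in neighbors[cell]
                   if refractory[nb] == 0}
    # Everyone's refractory clock ticks down by one step.
    refractory = [max(0, r - 1) for r in refractory]
    print(f"step {step}: {len(active)} cell(s) firing")
    active = next_active
```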
Wednesday, October 20, 2010
Extinguishing Fear
By Molly Webster, April 22, 2010
When we learn something, for it to become a memory, the event must be imprinted on our brain, a phenomenon known as consolidation. In turn, every time we retrieve a memory, it can be reconsolidated—that is, more information can be added to it. Now psychologist Liz Phelps of New York University and her team report using this “reconsolidation window” as a drug-free way to erase fearful memories in humans. Although techniques for overcoming fearful memories have existed for some time, these methods do not erase the initial, fearful memory. Rather they leave participants with two memories—one scary, one not—either of which may be called up when a trigger presents itself. But Phelps’s new experiment, which confirms earlier studies in rats, suggests that when a memory is changed during the so-called reconsolidation window, the original one is erased.
Using a mild electric shock, Phelps’s team taught 65 participants to fear certain colored squares as they appeared on a screen. Normally, to overcome this type of fear, researchers would show participants the feared squares again without giving them a shock, in an effort to create a safe memory of the squares. Phelps’s group did that, but in some cases investigators asked subjects to contemplate their fearful memory for at least 10 minutes before they saw the squares again. These participants actually replaced their old fearful memory with a new, safe memory. When they saw the squares again paired with shocks up to a year later, they were slow to relearn their fear of the squares. In contrast, subjects who created a safe memory of the squares without first contemplating their fearful memory for 10 minutes immediately reactivated their older, fearful memory when they saw a square and got a shock.
The researchers suspect that after calling up a memory, it takes about 10 minutes before the window of opportunity opens up for the memory to be reconsolidated, or changed, in a meaningful way, Phelps explains. “But there is some combination of spacing and timing that we need to figure out,” she adds—the scientists do not yet know how long the window lasts. Even more intriguing is the role contemplation plays—does sitting and thinking about the fearful memory make it more malleable than does simply recalling it? Although questions remain, Phelps and her colleagues hope their work will eventually help people with debilitating phobias or perhaps even post-traumatic stress disorder.
Why "Magical Thinking" Works for Some People
By Piercarlo Valdesolo
Tuesday, October 19, 2010
Ray Allen’s pregame routine never changes. A nap from 11:30am to 1:00pm, chicken and white rice for lunch at 2:30, a stretch in the gym at 3:45, a quick head shave, then practice shots at 4:30. The same amount of shots must be made from the same spots every day – the baselines and elbows of the court, ending with the top of the key. Similar examples of peculiar rituals and regimented routines in athletics abound. Jason Giambi would wear a golden thong if he found himself in a slump at the plate, and Moises Alou, concerned about losing his dexterous touch with the bat, would frequently urinate on his hands. This type of superstitious behavior can veer from the eccentric to the pathological, and though many coaches, teammates and fans snicker and shake their heads, a new study headed by Lysann Damisch at the University of Cologne and recently published in the journal Psychological Science suggests that we should all stop smirking and start rubbing our rabbit’s foot.
When it comes to superstitions, social scientists have generally agreed on one thing: they are fundamentally irrational. “Magical thinking” (as it has been called) is defined as the belief that an object, action or circumstance not logically related to a course of events can influence its outcome. In other words, stepping on a crack cannot, given what we know about the principles of causal relations, have any direct effect on the probability of your mother breaking her back. Those who live in fear of such a tragedy are engaging in magical thought and behaving irrationally.
Yet in their study, Damisch and colleagues challenge the conclusion that superstitious thoughts bear no causal influence on future outcomes. Of course, they were not hypothesizing that the trillions of tiny cracks upon which we tread every day are imbued with some sort of sinister spine-crushing malevolence. Instead, they were interested in the types of superstitions that people think bring them good luck. The lucky hats, the favorite socks, the ritualized warmup routines, the childhood blankies. Can belief in such charms actually have an influence over one’s ability to, say, perform better on a test or in an athletic competition? In other words, is Ray Allen’s performance on the basketball court in some ways dependent on eating chicken and rice at exactly 2:30? Did Jason Giambi’s golden thong actually have a hand in stopping a hitless streak?
To initially test this possibility, experimenters brought participants into the lab and told them that they would be doing a little golfing. They were to see how many of 10 putts they could make from the same location. The manipulation was simply this: when experimenters handed the golf ball to the participant they either mentioned that the ball “has turned out to be a lucky ball” in previous trials, or that the ball was simply the one “everyone had used so far”. Remarkably, the mere suggestion that the ball was lucky significantly influenced performance, causing participants to make almost two more putts on average.
Why? Surely it couldn’t be that the same golf ball becomes lucky at the experimenter’s suggestion – there must be an explanation grounded in the psychological influence that belief in lucky charms has on the superstitious. In a follow-up experiment the researchers hypothesized that this kind of magical thinking can actually increase participants’ confidence in their own capabilities. That is, believing in lucky charms would increase participants’ “self-efficacy,” and it is this feeling of “I can do this,” not any magical properties of the object itself, that predicts success. To test this, they had participants bring in their own lucky charms from home and assigned them to either a condition where they would be performing a task in the presence of their charm, or a condition where the experimenter removed the charm from the room before the task. Participants rated their perceived level of self-efficacy and then completed a memory task that was essentially a variant of the game Concentration.
And, indeed, the participants who were in the presence of their charm performed better on the memory task and reported increased self-efficacy. A final study sought to determine exactly how the increased confidence that comes along with a lucky charm influences performance. Specifically, was it making participants set loftier goals for themselves? Was it increasing their persistence on the task? Turns out, it’s both. Participants in the charm-present conditions reported setting higher goals on an anagram task and demonstrated increased perseverance on the task (as measured by the amount of time they spent trying to solve it before asking for help).
So what does this all mean? Should you start scouring the earth for four-leaf clovers? Establish a quirky early morning pre-work routine to increase your productivity? Sadly, if you believe the results reported in this article, none of that will do you any good. The influence of the charm depends crucially on your belief in its inherent powers. Once you acknowledge that performance is a function of what goes on in your brain rather than a product of any mystical properties of the object itself, it becomes useless. That feeling of “I can do this” will wither away as soon as you realize that nothing external, nothing mystical, will influence how you perform – it’s just you and your abilities. Like the science of astronomy strips the starry night of its magic, the science of the mind strips your superstitions of their power. You’d be better off following the model of Walt Whitman: throw on your lucky fedora and forget you ever read this article.
Tuesday, October 12, 2010
Moonlighting as an Alchemist (Conjurer of Chemicals)
Sir Isaac Newton was a towering genius in the history of science, he knew he was a genius, and he didn’t like wasting his time. Born on Dec. 25, 1642, the great English physicist and mathematician rarely socialized or traveled far from home. He didn’t play sports or a musical instrument, gamble at whist or gambol on a horse. He dismissed poetry as “a kind of ingenious nonsense,” and the one time he attended an opera he fled at the third act. Newton was unmarried, had no known romantic liaisons and may well have died, at the age of 84, with his virginity intact. “I never knew him to take any recreation or pastime,” said his assistant, Humphrey Newton, “thinking all hours lost that were not spent on his studies.”
No, it wasn’t easy being Newton. Not only did he hammer out the universal laws of motion and gravitational attraction, formulating equations that are still used today to plot the trajectories of space rovers bound for Mars; and not only did he discover the spectral properties of light and invent calculus. Sir Isaac also had a whole other full-time career, a parallel intellectual passion that he kept largely hidden from view but that rivaled and sometimes surpassed in intensity his devotion to celestial mechanics. Newton was a serious alchemist, who spent night upon dawn for three decades of his life slaving over a stygian furnace in search of the power to transmute one chemical element into another.
Newton’s interest in alchemy has long been known in broad outline, but the scope and details of that moonlighting enterprise are only now becoming clear, as science historians gradually analyze and publish Newton’s extensive writings on alchemy — a million-plus words from the Newtonian archives that had previously been largely ignored.
Speaking last week at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, William Newman, a professor of the history and philosophy of science at Indiana University in Bloomington, described his studies of Newton’s alchemical oeuvre, and offered insight into the central mystery that often baffles contemporary Newton fans. How could the man who vies in surveys with Albert Einstein for the title of “greatest physicist ever,” the man whom James Gleick has aptly designated “chief architect of the modern world,” have been so swept up in what looks to modern eyes like a medieval delusion? How could the ultimate scientist have been seemingly hornswoggled by a totemic pseudoscience like alchemy, which in its commonest rendering is described as the desire to transform lead into gold? Was Newton mad — perhaps made mad by exposure to mercury, as some have proposed? Was he greedy, or gullible, or stubbornly blind to the truth?
In Dr. Newman’s view, none of the above. Sir Isaac the Alchemist, he said, was no less the fierce and uncompromising scientist than was Sir Isaac, author of the magisterial Principia Mathematica. There were plenty of theoretical and empirical reasons at the time to take the principles of alchemy seriously, to believe that compounds could be broken down into their basic constituents and those constituents then reconfigured into other, more desirable substances.
Miners were pulling up from the ground twisted bundles of copper and silver that were shaped like the stalks of a plant, suggesting that veins of metals and minerals were proliferating underground with almost florid zeal.
Pools found around other mines seemed to have extraordinary properties. Dip an iron bar into the cerulean waters of the vitriol springs of modern-day Slovakia, for example, and the artifact will emerge agleam with copper, as though the dull, dark particles of the original had been elementally reinvented. “It was perfectly reasonable for Isaac Newton to believe in alchemy,” said Dr. Newman. “Most of the experimental scientists of the 17th century did.”
Moreover, while the alchemists of the day may not have mastered the art of transmuting one element into another — an ordeal that we have since learned requires serious equipment like a particle accelerator, or the belly of a star — their work yielded a bounty of valuable spinoffs, including new drugs, brighter paints, stronger soaps and better booze. “Alchemy was synonymous with chemistry,” said Dr. Newman, “and chemistry was much bigger than transmutation.”
For Newton, alchemy may also have proved bigger than chemistry. Dr. Newman argues that Sir Isaac’s alchemical investigations helped yield one of his fundamental breakthroughs in physics: his discovery that white light is a mixture of colored rays, and that a sunbeam prismatically fractured into the familiar rainbow suite called Roy G. Biv can with a lens be resolved into a tidy white sunbeam once again. “I would go so far as to say that alchemy was crucial to Newton’s breakthroughs in optics,” said Dr. Newman. “He’s not just passing light through a prism — he’s resynthesizing it.” Consider this a case of “technology transfer,” said Dr. Newman, “from chemistry to physics.”
The conceptual underpinning to the era’s alchemical fixation was the idea of matter as hierarchical and particulate — that tiny, indivisible and semipermanent particles come together to form ever more complex and increasingly porous substances, a notion not so different from the reality revealed by 20th-century molecular biology and quantum physics.
With the right solvents and the perfect reactions, the researchers thought, it should be possible to reduce a substance to its core constituents — its corpuscles, as Newton called them — and then prompt the corpuscles to adopt new configurations and programs. Newton and his peers believed it was possible to prompt metals to grow, or “vegetate,” in a flask. After all, many chemical reactions were known to leave lovely dendritic residues in their wake. Dissolve a pinch of silver and mercury in a solution of nitric acid, drop in a lump of metal amalgam, and soon a spidery, glittering “Tree of Diana” will form on the glass. Or add iron to hydrochloric acid and boil the solution to dryness. Then prepare a powdery silicate mix of sand and potassium carbonate. Put the two together, and you will have a silica garden, in which the ruddy ferric chloride rises and bifurcates, rises and bifurcates, as though it were reaching toward sunlight and bursting into bloom.
Add to this the miners’ finds of tree- and rootlike veins of metals and alchemists understandably concluded that metals must be not only growing underground, but ripening. Hadn’t twined ores of silver and lead been found? Might not the lead be halfway to a mature state of silverdom? Surely there was a way to keep the disinterred metal root balls sprouting in the lab, coaxing their fruit to full succulent ripeness as the noblest of metals — lead into silver, copper to gold?
Well, no. If mineral veins sometimes resemble botanical illustrations, blame it on Earth’s molten nature and fluid mechanics: when seen from above, a branching river also looks like a tree.
Yet the alchemists had their triumphs, inventing brilliant new pigments, perfecting the old — red lead oxide, yellow arsenic sulfide, a little copper and vinegar and you’ve got bright green verdigris. Artists were advised, forget about mixing your own colors: you can get the best from an alchemist. The chemistry lab replaced the monastery garden as a source of new medicines. “If you go to the U.K. today and use the word ‘chemist,’ the assumption is that you’re talking about the pharmacist,” said Dr. Newman. “That tradition goes back to the 17th century.”
Alchemists also became expert at spotting cases of fraud. It was a renowned alchemist who proved that the “miraculous” properties of vitriol springs had nothing to do with true transmutation. Instead, the water’s vitriol, or copper sulfate, would cause iron atoms on the surface of a submerged iron rod to leach into the water, leaving pores that were quickly occupied by copper atoms from the spring.
“There were a lot of charlatans, especially in the noble courts of Europe,” said Dr. Newman. Should an alchemist be found guilty of attempting to deceive the king, the penalty was execution, and in high gilded style. The alchemist would be dressed in a tinsel suit and hanged from a gallows covered in gold-colored foil.
Newton proved himself equally intolerant of chicanery, when, in his waning years, he took a position as Master of the Mint. “In pursuing clippers and counterfeiters, he called on long-nurtured reserves of Puritan anger and righteousness,” writes James Gleick in his biography of Newton.
“He was brutal,” said Mark Ratner, a materials chemist at Northwestern University. “He sentenced people to death for trying to scrape the gold off of coins.” Newton may have been a Merlin, a Zeus, the finest scientist of all time. But make no mistake about it, said Dr. Ratner. “He was not a nice guy.”
Monday, September 6, 2010
Forget What You Know About Good Study Habits
By Benedict Carey
Every September, millions of parents try a kind of psychological witchcraft, to transform their summer-glazed campers into fall students, their video-bugs into bookworms. Advice is cheap and all too familiar: Clear a quiet work space. Stick to a homework schedule. Set goals. Set boundaries. Do not bribe (except in emergencies).
And check out the classroom. Does Junior’s learning style match the new teacher’s approach? Or the school’s philosophy? Maybe the child isn’t “a good fit” for the school.
Such theories have developed in part because of sketchy education research that doesn’t offer clear guidance. Student traits and teaching styles surely interact; so do personalities and at-home rules. The trouble is, no one can predict how.
Yet there are effective approaches to learning, at least for those who are motivated. In recent years, cognitive scientists have shown that a few simple techniques can reliably improve what matters most: how much a student learns from studying.
The findings can help anyone, from a fourth grader doing long division to a retiree taking on a new language. But they directly contradict much of the common wisdom about good study habits, and they have not caught on.
For instance, instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing.
“We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”
Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.
Ditto for teaching styles, researchers say. Some excellent instructors caper in front of the blackboard like summer-theater Falstaffs; others are reserved to the point of shyness. “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”
But individual learning is another matter, and psychologists have discovered that some of the most hallowed advice on study habits is flat wrong. For instance, many study skills courses insist that students find a specific place, a study room or a quiet corner of the library, to take their work. The research finds just the opposite. In one classic 1978 experiment, psychologists found that college students who studied a list of 40 vocabulary words in two different rooms — one windowless and cluttered, the other modern, with a view of a courtyard — did far better on a test than students who studied the words twice, in the same room. Later studies have confirmed the finding, for a variety of topics.
The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.
“What we think is happening here is that, when the outside context is varied, the information is enriched, and this slows down forgetting,” said Dr. Bjork, the senior author of the two-room experiment.
Varying the type of material studied in a single sitting — alternating, for example, among vocabulary, reading and speaking in a new language — seems to leave a deeper impression on the brain than does concentrating on just one skill at a time. Musicians have known this for years, and their practice sessions often include a mix of scales, musical pieces and rhythmic work. Many athletes, too, routinely mix their workouts with strength, speed and skill drills.
The advantages of this approach to studying can be striking, in some topic areas. In a study recently posted online by the journal Applied Cognitive Psychology, Doug Rohrer and Kelli Taylor of the University of South Florida taught a group of fourth graders four equations, each to calculate a different dimension of a prism. Half of the children learned by studying repeated examples of one equation, say, calculating the number of prism faces when given the number of sides at the base, then moving on to the next type of calculation, studying repeated examples of that. The other half studied mixed problem sets, which included examples of all four types of calculations grouped together. Both groups solved sample problems along the way, as they studied.
A day later, the researchers gave all of the students a test on the material, presenting new problems of the same type. The children who had studied mixed sets did twice as well as the others, outscoring them 77 percent to 38 percent. The researchers have found the same in experiments involving adults and younger children.
“When students see a list of problems, all of the same kind, they know the strategy to use before they even read the problem,” said Dr. Rohrer. “That’s like riding a bike with training wheels.” With mixed practice, he added, “each problem is different from the last one, which means kids must learn how to choose the appropriate procedure — just like they had to do on the test.”
These findings extend well beyond math, even to aesthetic intuitive learning. In an experiment published last month in the journal Psychology and Aging, researchers found that college students and adults of retirement age were better able to distinguish the painting styles of 12 unfamiliar artists after viewing mixed collections (assortments, including works from all 12) than after viewing a dozen works from one artist, all together, then moving on to the next painter.
The finding undermines the common assumption that intensive immersion is the best way to really master a particular genre, or type of creative work, said Nate Kornell, a psychologist at Williams College and the lead author of the study. “What seems to be happening in this case is that the brain is picking up deeper patterns when seeing assortments of paintings; it’s picking up what’s similar and what’s different about them,” often subconsciously.
Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out.
“With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”
When the neural suitcase is packed carefully and gradually, it holds its contents for far, far longer. An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.
No one knows for sure why. It may be that the brain, when it revisits material at a later time, has to relearn some of what it has absorbed before adding new stuff — and that that process is itself self-reinforcing.
“The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”
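The scheduling idea itself is simple enough to sketch in a few lines of Python. The intervals below (today, three days, one week, three weeks) are illustrative choices, not values prescribed by the studies described here; the point is only that the same total study time is split across widening gaps instead of spent in one sitting.

```python
from datetime import date, timedelta

def spaced_schedule(start, gaps_in_days=(0, 3, 7, 21)):
    """Return the dates of successive short study sessions for one topic.

    The gaps are arbitrary examples; any schedule that spreads the sessions
    out over days and weeks captures the spacing effect described above.
    """
    return [start + timedelta(days=g) for g in gaps_in_days]

if __name__ == "__main__":
    for i, session in enumerate(spaced_schedule(date.today()), start=1):
        print(f"Session {i}: {session.isoformat()}")
```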
That’s one reason cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.
Dr. Roediger uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle alters that property: “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.
In one of his own experiments, Dr. Roediger and Jeffrey Karpicke, also of Washington University, had college students study science passages from a reading comprehension test, in short study periods. When students studied the same material twice, in back-to-back sessions, they did very well on a test given immediately afterward, then began to forget the material.
But if they studied the passage just once and did a practice test in the second session, they did very well on one test two days later, and another given a week later.
“Testing has such bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”
Of course, one reason the thought of testing tightens people’s stomachs is that tests are so often hard. Paradoxically, it is just this difficulty that makes them such effective study tools, research suggests. The harder it is to remember something, the harder it is to later forget. This effect, which researchers call “desirable difficulty,” is evident in daily life. The name of the actor who played Linc in “The Mod Squad”? Francie’s brother in “A Tree Grows in Brooklyn”? The name of the co-discoverer, with Newton, of calculus?
The more mental sweat it takes to dig it out, the more securely it will be subsequently anchored.
None of which is to suggest that these techniques — alternating study environments, mixing content, spacing study sessions, self-testing or all the above — will turn a grade-A slacker into a grade-A student. Motivation matters. So do impressing friends, making the hockey team and finding the nerve to text the cute student in social studies.
“In lab experiments, you’re able to control for all factors except the one you’re studying,” said Dr. Willingham. “Not true in the classroom, in real life. All of these things are interacting at the same time.”
But at the very least, the cognitive techniques give parents and students, young and old, something many did not have before: a study plan based on evidence, not schoolyard folk wisdom, or empty theorizing.
Monday, August 30, 2010
An idle brain may be the self's workshop
The resting brain is anything but idle — that simple proposition would be clear if you could peer into Mike Mrazek's noggin as he putters around his kitchen preparing his daily morning feast of scrambled eggs, oatmeal and fresh fruit.
As he plods through his quotidian ritual of gathering ingredients, cutting, chopping, bringing the pan to the correct temperature and boiling water for tea, Mrazek's thoughts, too, are something of a scrambled feast, as he later recounts.
Childhood memories jostle against thoughts of his girlfriend's progress on a cross-country journey.
Reflections on the tomatoes in his garden give way to a rehearsal of a meeting he's having later on at the university.
A flashback to his sister teasing him about his breakfast routine turns into an observation he could make while leading a meditation session in the evening.
Until recently, scientists would have found little of interest in the purposeless, mind-wandering spaces between Mrazek's conscious breakfast-making tasks — they were just the brain idling between meaningful activity.
But in the span of a few short years, they have instead come to view mental leisure as important, purposeful work — work that relies on a powerful and far-flung network of brain cells firing in unison.
Neuroscientists call it the "default mode network."
Individually, the brain regions that make up that network have long been recognized as active when people recall their pasts, project themselves into future scenarios, impute motives and feelings to other people, and weigh their personal values.
But when these structures hum in unison — and scientists have found that when we daydream, they do just that — they function as our brain's "neutral" setting. Understanding that setting may do more than lend respectability to the universal practice of zoning out: It may one day help diagnose and treat psychiatric conditions as diverse as Alzheimer's disease, autism, depression and schizophrenia — all of which disrupt operations in the default mode network.
Beyond that lies an even loftier promise. As neuroscientists study the idle brain, some believe they are exploring a central mystery in human psychology: where and how our concept of "self" is created, maintained, altered and renewed.
After all, though our minds may wander when in this mode, they rarely wander far from ourselves, as Mrazek's mealtime introspection makes plain.
That's in sharp contrast to the pattern struck by the brain when hard at work: In this mode, introspection is suppressed while we attend to pressing business — we "lose ourselves" in work. As we do so, scientists see the default mode network go quiet and other networks come alive.
Neuroscientists have long resisted discussions of "self" as either hopelessly woolly-headed or just too difficult to tackle, says Jonathan Schooler, a psychologist at UC Santa Barbara who studies the wandering mind (with the assistance of Mrazek, a graduate student he advises).
But now, he says, research on the default mode network and mind-wandering has helped focus neuroscientists' attention on our rich inner world and raised the prospect that our sense of self, our existence as a separate being, can be observed, measured and discussed with rigor.
The idea that there may be a physical structure in the brain in which we unconsciously define who we are "would warm Freud's heart," says Dr. Marcus E. Raichle, a neurologist at Washington University in St. Louis who has pioneered work in this fledgling field. Sigmund Freud, the Austrian father of modern psychiatry, spoke exhaustively of the power of the unconscious mind in shaping our behavior and often surmised that the workings of that force would someday be revealed by scientists.
"People talk about the self and ask how it achieves some realization in the brain," Raichle says. The default mode network, he adds, "seems to be a critical element of that organization. It captures many of the features of how we think of ourselves as the self."
Changing thinking
In the last two decades, neuroscientists have identified many regions of the brain that are activated during purposeful tasks — when we count, navigate our environment, process input from our senses or perform complex motor skills.
But until very recently, the ebb and flow of thoughts — the stream of consciousness that makes Mrazek human and whose content is unique to him among humans — was the dead zone. Like geneticists who for years dismissed genetic material with no known function as "junk DNA," neuroscientists spent years dismissing the "idle" brain as just that: idle, its content just so much meaningless filler.
But in 2001, Raichle and his team began publishing neuroimaging studies that suggested otherwise.
During tasks requiring focused attention, regions specialized to the tasks at hand became active in the subjects whose brains were being scanned. But as those men and women mentally relaxed between tasks inside the scanners, Raichle saw that the specialized regions went quiet — and a large and different cluster of brain structures consistently lighted up.
Raichle became particularly interested in a portion of the brain called the medial parietal cortex, which appeared to serve as a central hub of this activity. He knew the area tended to become active when a person recalled his past.
And his work uncovered another key node in this curious circuit: the medial prefrontal cortex, a uniquely human structure that comes alive when we try to imagine what others are thinking.
Each region, Raichle realized, had a feature in common — it was focused on the self, and on the personal history and relationships by which we define ourselves as individuals.
As studies continued, scientists noticed some interesting facts.
They saw that the brain parts constituting the default mode network are uniquely vulnerable to the tangles, plaques and metabolic disturbances of Alzheimer's disease — an illness that starts by stealing one's memory and eventually robs its victims of their sense of self.
This, Raichle and colleagues would argue, suggests how important the default mode network is in making us who we are.
They saw that when operating, this network guzzles fuel at least as voraciously as do the networks that are at work when we engage in hard mental labor. That, along with other evidence, suggests to Raichle that when the default mode network is engaged, there's more than a mental vacation taking place.
So what is it doing?
Working vacation
Raichle suspects that during these moments of errant thought, the brain is forming a set of mental rules about our world, particularly our social world, that help us navigate human interactions and quickly make sense of and react to information — about a stranger's intentions, a child's next move, a choice before us — without having to run a complex and conscious calculation of all our values, expectations and beliefs.
Raichle says such mental shortcuts are necessary because the brain cannot possibly take in all the detail available to our senses at any given moment. The default mode network, he proposes, keeps a template handy that lets us assume a lot about ourselves and the people and environment we interact with.
Raichle points to another odd distinction of the default mode network — one that suggests it plays a central role in our functioning. Its central hub has two separate sources of blood supply, making it far less vulnerable than most other regions of the brain to damage from a stroke.
"That's an insurance policy: This area is critically important," he says.
Neuroscientists suspect that the default mode network may speak volumes about our mental health, based on studies in the last three years that suggest it is working slightly differently in people with depression, autism and other disorders.
That fact underscores a point: Just as sleep appears to play an important role in learning, memory consolidation and maintaining the body's metabolic function, some scientists wonder whether unstructured mental time — time to zone out and daydream — might also play a key role in our mental well-being. If so, that's a cautionary tale for a society that prizes productivity and takes a dim view of mind-wandering.
Such social pressure, Schooler says, overlooks the lessons from studies on the resting brain — that zoning out and daydreaming, indulged in at appropriate times, might serve a larger purpose in keeping us healthy and happy.
"People have this fear of being inadequately engaged, and as a consequence they overlook how engaging their own minds can be," Schooler says. "Each one of us can be pretty good company to ourselves if we allow our minds to go there."
Saturday, August 28, 2010
You were anesthetized during surgery: Does that mean you forgot everything that happened?
Wednesday, August 11, 2010
Self-Serve Brains
Bruce Bower
The concept of identity theft assumes an entirely new meaning for people with brain injuries that rob them of their sense of self—the unspoken certainty that one exists as a person in a flesh-bounded body with a unique set of life experiences and relationships. Consider the man who, after sustaining serious brain damage, insisted that his parents, siblings, and friends had been replaced by look-alikes whom he had never met. Everyone close to him had become a familiar-looking stranger. Another brain-injured patient asserted that his physicians, nurses, and physical therapists were actually his sons, daughters-in-law, and coworkers. He identified himself as an ice skater whom he had seen on a television program.
The sense of "I" can also go partially awry. After a stroke had left one of her arms paralyzed, a woman reported that the limb was no longer part of her body. She told a physician that she thought of the arm as "my pet rock."
Other patients bequeath their physical infirmities to phantom children. For instance, a woman blinded by a brain tumor became convinced that it was her child who was sick and blind, although the woman had no children.
These strange transformations and extensions of personal identity are beginning to yield insights into how the brain contributes to a sense of self, says neuroscientist Todd E. Feinberg of Beth Israel Medical Center in New York City. Thanks to technology that literally gets inside people's heads, researchers now are probing how the brain contributes to a sense of self and to perceptions of one's body and its control. Scientists expect their efforts to shed light on the vexing nature of consciousness, as well as on the roots of mental disorders, such as schizophrenia, that are characterized by disturbed self-perception.
I spy
Scholars have argued for more than 300 years about whether a unified sense of self exists at all. A century ago, Sigmund Freud developed his concept of ego, a mental mechanism for distinguishing one's body and thoughts from those of other people. Around the same time, psychologist William James disagreed, writing that each person's "passing states of consciousness" create a false sense that an "I" or an ego runs the mental show.
Researchers still debate whether the self is the internal engine of willful behavior or simply a useful fiction that makes a person feel responsible for his or her actions. Some investigators argue that each person harbors many selves capable of emerging in different situations and contexts.
Regardless of philosophical differences, Feinberg notes, evidence suggests that the brain's right hemisphere often orchestrates basic knowledge about one's self, just as the left hemisphere usually assumes primary responsibility for language.
Disorders of the self caused by brain damage fall into two main categories, Feinberg proposes. Some patients lose their personal connection to significant individuals or entities, such as the man who thought everyone he knew was a familiar stranger and the woman who regarded her lifeless arm as a pet rock. Other patients perceive personal connections where they don't exist, such as the man who saw his medical caretakers as family and coworkers and the woman who mentally conceived a phantom daughter.
In both categories, Feinberg says, "right brain damage is much more likely than left brain damage to cause lasting disturbances of the normal relationship between individuals and their environments."
Other neuroscientists take a similar view. According to brain-imaging studies conducted during the past three years by researchers including Jean Decety and Jessica A. Sommerville, both of the University of Washington in Seattle, a right brain network located mainly in the frontal lobe organizes neural efforts aimed at discerning one's body and thoughts. That network overlaps a brain circuit that plays a role in identifying others, perhaps contributing to the two-sided nature of the self as "special and social, unique and shared," Decety and Sommerville said in a seminal 2003 article.
The right me
In order to coordinate the relationship between the self and the world, the brain takes sides, according to work by Feinberg and Julian Paul Keenan of Montclair State University in New Jersey. They analyzed patterns of brain damage in 29 previously published cases of disordered selves. Injury to the frontal region of the right hemisphere occurred in 28 people, compared with left-frontal damage in 14.
Ten of the patients had also incurred injuries to other parts of the right brain, compared with three individuals who displayed damage in other left brain areas, Feinberg and Keenan report in the December 2005 Consciousness and Cognition. Research in the past decade on the recognition of one's face reached similar conclusions. In a study directed by Keenan, adults with no known brain impairment viewed images that gradually transformed from their own faces into the face of a famous person such as Marilyn Monroe or Bill Clinton. Participants alternated using their left or right hands to hit keys that indicated whether they saw themselves or a famous person in each composite image.
When responding with their left hands, volunteers identified themselves in composite images more often than when they used their right hands. Since each side of the brain controls movement on the opposite side of the body, the left-handed results implicated the right brain in self-recognition.
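The composite stimuli in these studies can be pictured as simple cross-fades between two photographs. The sketch below shows one way such a series might be generated with ordinary alpha blending in Python's Pillow library; the published experiments used their own morphing procedures, and the file names here are placeholders rather than anything from the original work.

```python
from PIL import Image

def composite_series(self_path, famous_path, steps=10):
    """Blend a participant's photo into a famous face in fixed steps.

    Both images are converted to RGB and matched in size, since
    Image.blend requires identical modes and dimensions.
    """
    own = Image.open(self_path).convert("RGB")
    famous = Image.open(famous_path).convert("RGB").resize(own.size)
    # alpha = 0.0 is entirely the participant's face; 1.0 is entirely the famous face.
    return [Image.blend(own, famous, alpha=i / (steps - 1)) for i in range(steps)]

if __name__ == "__main__":
    # "self.jpg" and "famous.jpg" are hypothetical input files.
    for idx, frame in enumerate(composite_series("self.jpg", "famous.jpg")):
        frame.save(f"composite_{idx:02d}.jpg")
```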
Similar findings came from epileptic patients who underwent a medical procedure in which one brain hemisphere at a time was anesthetized. Keenan and his colleagues showed each patient an image that blended features of his or her own face with facial features of a famous person and later asked whose face the patient had seen. When tested with only the right brain awake, most patients reported that they had seen their own faces. When only the left brain was active, they usually recalled having seen the famous face.
A brain-scan investigation of 10 healthy adults, published in the April 15, 2005 NeuroImage, also implicates the right hemisphere in self-recognition. A team led by Lucina Uddin of the University of California, Los Angeles showed volunteers a series of images that, to varying degrees, blended their own faces with those of same-sex coworkers. Participants pressed keys indicating whether they saw themselves or a coworker in each image.
Pronounced blood flow, a sign of heightened neural activity, appeared in certain parts of the right hemisphere only when the participants recognized themselves, Uddin's group reports. Previous studies in monkeys indicated that these areas of the brain contain so-called mirror neurons, which respond similarly when an animal executes an action or observes another animal perform the same action.
A right brain network of these mirror neurons maintains an internal self-image for comparison with faces that one sees, Uddin and her colleagues propose.
Still, not everyone regards the right brain as central to the self. Todd F. Heatherton of Dartmouth College in Hanover, N.H., and his coworkers reported in 2003 on a patient who had had surgery to disconnect the bundle of nerve fibers that connects the neural hemispheres. That split-brain patient recognized himself in images that blended his features with those of one of the researchers only when the images appeared in his right visual field and were thus handled by his left brain.
"Recognition of the self is one of the most basic, yet poorly understood, cognitive operations," Uddin says.
Losing control
Chris Frith, a neuroscientist at University College London, has long wondered why people diagnosed with schizophrenia often experience their own actions as being controlled by others. A person with this severe mental disorder may report, for example, that space aliens ordered him to behave destructively.
Fifteen years ago, Frith thought that schizophrenia robbed people of the ability to monitor their intentions to act. If their behavior came as a complete surprise, they might attribute it to external forces.
Frith abandoned that idea after reading neurologists' reports of a strange condition called anarchic-hand syndrome. Damage to motor areas on one side of the brain leaves these patients unable to control the actions of the hand on the opposite side of the body. For example, when one patient tried to soap a washcloth with his right hand, his left hand, much to his chagrin, kept putting the soap back in its dish. Another patient used one hand to remove the other from doorknobs, which it repeatedly grabbed as he walked by doors.
Despite being unaware of any intention to use a hand in these ways, anarchic-hand patients don't experience their behavior as controlled by space aliens or another outside entity—they just try to correct their wayward hands.
Frith now suspects that anarchic-hand syndrome and schizophrenia's delusions of being controlled by others share a neural defect that makes it seem like one's movements occur passively. However, people with schizophrenia mistakenly perceive the passive movements as having been intentional.
In support of this possibility, Frith and his colleagues find that when shown scenes of abstract shapes moving across a computer screen, patients with schizophrenia, but not mentally healthy volunteers, attribute good and bad intentions to these shapes. Patients with schizophrenia may monitor their own actions in excruciating detail for signs of external control, Frith suggests.
In general, people rarely think about their selves but act as if such entities must exist. "The normal mark of the self in action is that we have very little experience of it," Frith says.
Harvard University psychologist Daniel Wegner goes further. Expanding the view of William James, Wegner argues that the average person's sense of having a self that consciously controls his or her actions is an illusion. This controversial proposal builds on an experiment conducted more than 20 years ago by neurophysiologist Benjamin Libet of the University of California, San Francisco.
Libet found that although volunteers' conscious decisions to perform a simple action preceded the action itself, they occurred just after a distinctive burst of electrical activity in the brain signaled the person's readiness to move. In other words, people decided to act only after their brains had unconsciously prepared them to do so.
Wegner has since performed experiments demonstrating the ease with which people claim personal responsibility for actions that they have not performed. In one study, participants looked in a mirror at the movements of an experimenter's arms situated where their own arms would be. When the arms moved according to another researcher's instructions, volunteers reported that they had willed the movements.
Feinberg says that these findings offer no reason to write off the self as a mental mirage.
Waist not
A young woman stands in neuroscientist J. Henrik Ehrsson's laboratory at University College London and places her palms on her waist. Cuffs placed over her wrists begin to vibrate tendons just under the skin, creating the sensation that her hands are bending inward. At the same time, the woman feels her waist and hips shrink by several inches to accommodate the imagined hand movements. Ehrsson's illusory instant-waist-loss program lasts only about 30 seconds.
Ehrsson and his coworkers used a brain-imaging machine to measure blood flow in the brains of 24 people as they experienced this illusion. Parts of the left parietal cortex, located near the brain's midpoint, displayed especially intense activity as volunteers felt their waists contract, the scientists report in the December 2005 PLoS Biology.
The greater the parietal response, the more waist shrinkage the individual reported.
The scientists suspect that the activated parietal areas integrate sensory information from different body parts, a key step in constructing an internal image of one's body size and shape. When the brain receives a message that the hands are bending into the waist, it adjusts the internal body image accordingly, Ehrsson's team hypothesizes.
The brain can adjust its internal body map in a matter of minutes, the experiment demonstrates. Researchers who similarly induced illusions of expanding fingers came to that same conclusion.
The possibility that the brain can redraw body image in dramatic ways resonates with neuroscientist Miguel A.L. Nicolelis of Duke University Medical Center in Durham, N.C., and his colleagues. They've found that after monkeys learn to alter their brain activity to control a robotic arm, the animals' brains show the same activity pattern as when they move their own limbs.
Nicolelis' team reported in 2003 that the researchers had implanted electrodes in the frontal and parietal lobes of the brains of two female rhesus monkeys that used a joystick to control a cursor on a computer screen. That action maneuvered a robotic arm in another room. The animals gradually learned to modulate their brain signals to reposition the cursor, without moving a muscle.
Electrode data show that, after training, many neurons that formerly emitted synchronized signals as the monkeys manually manipulated the joystick to control the robotic arm also did so when the animals performed the same task mentally. Those results appeared in the May 11, 2005 Journal of Neuroscience.
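The decoding step can be made concrete with a toy example. The sketch below fits a least-squares linear readout that maps simulated firing rates onto two-dimensional movement; the actual algorithms used in the Nicolelis experiments were considerably more sophisticated, and all of the data here are synthetic, generated only to illustrate the general idea of reading movement out of population activity.

```python
import numpy as np

# Illustrative sketch only: fake neural data, fit with an ordinary
# least-squares linear decoder from firing rates to (x, y) velocity.
rng = np.random.default_rng(0)

n_samples, n_neurons = 2000, 50
true_weights = rng.normal(size=(n_neurons, 2))            # hidden mapping to velocity
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = firing_rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit the decoder on the first half of the data, test on the second half.
train, test = slice(0, 1000), slice(1000, None)
weights, *_ = np.linalg.lstsq(firing_rates[train], velocity[train], rcond=None)
predicted = firing_rates[test] @ weights

corr = np.corrcoef(predicted[:, 0], velocity[test][:, 0])[0, 1]
print(f"Correlation between decoded and actual x-velocity: {corr:.2f}")
```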
The monkeys assimilated into their neural self-images a tool that they had learned to use proficiently, Nicolelis suggests. Apes and people possess an even stronger capacity for integrating tools into the brain's definition of self, in his view. This process may underlie the acquisition of expertise.
"Our brains' representations of our bodies are adaptable enough to incorporate any tools that we create to interact with the environment, from a robot appendage to a computer keyboard or a tennis racket," Nicolelis says.
Self doubts
Despite the proliferation of such studies, the self's special status in the brain is far from assured. After reviewing relevant brain imaging and psychology studies, neuroscientists Seth J. Gillihan and Martha J. Farah, both of the University of Pennsylvania in Philadelphia, found little compelling evidence for brain networks devoted solely to physical or psychological aspects of the self.
At most, work such as Feinberg's with brain-damaged patients indicates that singular brain networks distinguish between one's limbs and those of other people, the researchers say. There are also suggestions that other brain areas foster a sense of control over one's limb movements, Gillihan and Farah reported in the January 2005 Psychological Bulletin.
Still, much of what we typically think of as "the self" may not be assignable to brain states or structures, in their view.
Feinberg argues that each of the increasingly complex levels of the brain—including the brain stem, the limbic system, and the cortex—contributes to intentional actions and to perceiving meaning in the world, the main ingredients of an "inner I."
Brain-damaged patients vividly illustrate the self's resiliency, Feinberg adds. While injury to the right frontal brain transforms some patients' identities in odd ways, other comparably injured patients somehow maintain their old selves.
A person's coping style and emotional resources usually influence responses to right brain damage, according to Feinberg's clinical observations. For example, one patient, a young man living half a world away from his family, referred to his paralyzed left arm as his brother's arm.
Feinberg asked the man what it meant to him to possess his sibling's arm rather than his own. "It makes me feel good," the man responded, in a voice choked with emotion. "Having my brother's arm makes me feel closer to my family."