Wednesday, October 27, 2010

Can Meditation Change Your Brain?

Contemplative neuroscientists believe it can

Posted on October 27, 2010
From CNN’s Dan Gilgoff

Can people strengthen the brain circuits associated with happiness and positive behavior, just as we’re able to strengthen muscles with exercise?

Richard Davidson, who for decades has practiced Buddhist-style meditation – a form of mental exercise, he says – insists that we can.

And Davidson, who has been meditating since visiting India as a Harvard grad student in the 1970s, has credibility on the subject beyond his own experience.

A trained psychologist based at the University of Wisconsin, Madison, he has become the leader of a relatively new field called contemplative neuroscience – the brain science of meditation.

Over the last decade, Davidson and his colleagues have produced scientific evidence for the theory that meditation – the ancient Eastern practice of sitting, usually accompanied by focusing on certain objects – permanently changes the brain for the better.

“We all know that if you engage in certain kinds of exercise on a regular basis you can strengthen certain muscle groups in predictable ways,” Davidson says in his office at the University of Wisconsin, where his research team has hosted scores of Buddhist monks and other meditators for brain scans.

“Strengthening neural systems is not fundamentally different,” he says. “It’s basically replacing certain habits of mind with other habits.”

Contemplative neuroscientists say that making a habit of meditation can strengthen brain circuits responsible for maintaining concentration and generating empathy.

One recent study by Davidson’s team found that novice meditators stimulated their limbic systems – the brain’s emotional network – during the practice of compassion meditation, an ancient Tibetan Buddhist practice.

That’s no great surprise, given that compassion meditation aims to produce a specific emotional state of intense empathy, sometimes called “loving-kindness.”

But the study also found that expert meditators – monks with more than 10,000 hours of practice – showed significantly greater activation of their limbic systems. The monks appeared to have permanently changed their brains to be more empathetic.

An earlier study by some of the same researchers found that committed meditators experienced sustained changes in baseline brain function, meaning that they had changed the way their brains operated even outside of meditation.

These changes included ramped-up activation of a brain region thought to be responsible for generating positive emotions, called the left-sided anterior region. The researchers found this change in novice meditators who’d enrolled in a course in mindfulness meditation – a technique that borrows heavily from Buddhism – that lasted just eight weeks.

But most brain research around meditation is still preliminary, waiting to be corroborated by other scientists. Meditation’s psychological benefits and its use in treatments for conditions as diverse as depression and chronic pain are more widely acknowledged.

Serious brain science around meditation has emerged only in about the last decade, since the birth of functional MRI allowed scientists to begin watching the brain and monitoring its changes in close to real time.

Beginning in the late 1990s, a University of Pennsylvania-based researcher named Andrew Newberg said that his brain scans of experienced meditators showed the prefrontal cortex – the area of the brain that houses attention – surging into overdrive during meditation while the brain region governing our orientation in time and space, called the superior parietal lobe, went dark.

Newberg said his findings explained why meditators are able to cultivate intense concentration while also describing feelings of transcendence during meditation.

But some scientists said Newberg was over-interpreting his brain scans. Others said he failed to specify the kind of meditation he was studying, making his studies impossible to reproduce. His popular books, like Why God Won’t Go Away, caused more eye-rolling among neuroscientists, who said he hyped his findings to goose sales.

“It caused mainstream scientists to say that the only work that has been done in the field is of terrible quality,” says Alasdair Coles, a lecturer in neurology at England’s University of Cambridge.

Newberg, now at Thomas Jefferson University and Hospital in Philadelphia, stands by his research.

And contemplative neuroscience has gained more credibility in the scientific community since his early scans.

One sign of that is increased funding from the National Institutes of Health, which has helped establish new contemplative science research centers at Stanford University, Emory University, and the University of Wisconsin, where the world’s first brain imaging lab with a meditation room next door is now under construction.

The NIH could not provide numbers on how much it gives specifically to meditation brain research but its grants in complementary and alternative medicine – which encompass many meditation studies – have risen from around $300 million in 2007 to an estimated $541 million in 2011.

“The original investigations by people like Davidson in the 1990s were seen as intriguing, but it took some time to be convinced that brain processes were really changing during meditation,” says Josephine Briggs, Director of the NIH’s National Center for Complementary and Alternative Medicine.

Most studies so far have examined so-called focused-attention meditation, in which the practitioner concentrates on a particular subject, such as the breath. The meditator monitors the quality of attention and, when it drifts, returns attention to the object.

Over time, practitioners are supposed to find it easier to sustain attention during and outside of meditation.

In a 2007 study, Davidson compared the attentional abilities of novice meditators to experts in the Tibetan Buddhist tradition. Participants in both groups were asked to practice focused-attention meditation on a fixed dot on a screen while researchers ran fMRI scans of their brains.

To challenge the participants’ attentional abilities, the scientists interrupted the meditations with distracting sounds.

The brain scans found that both experienced and novice meditators activated a network of attention-related regions of the brain during meditation. But the experienced meditators showed more activation in some of those regions.

The inexperienced meditators, meanwhile, showed increased activation in brain regions that have been shown to negatively correlate with sustaining attention. Experienced meditators were better able to activate their attentional networks to maintain concentration on the dot. They had, the study suggested, changed their brains.

The fMRI scans also showed that experienced meditators had less neural response to the distracting noises that interrupted the meditation.

In fact, the more hours of experience a meditator had, the scans found, the less active his or her emotional networks were during the distracting sounds – and the easier it was to stay focused.

More recently, contemplative neuroscience has turned toward compassion meditation, which involves generating empathy through objectless awareness; practitioners call it non-referential compassion meditation.

New neuroscientific interest in the practice comes largely at the urging of the Dalai Lama, the spiritual and political leader of Tibetan Buddhists, for whom compassion meditation is a time-worn tradition.

The Dalai Lama has arranged for Tibetan monks to travel to American universities for brain scans and has spoken at the annual meeting of the Society for Neuroscience, the world’s largest gathering of brain scientists.

A religious leader, the Dalai Lama has said he supports contemplative neuroscience even though scientists are stripping meditation of its Buddhist roots, treating it purely as a mental exercise that more or less anyone can do.

“This is not a project about religion,” says Davidson. “Meditation is mental activity that could be understood in secular terms.”

Still, the nascent field faces challenges. Scientists have scanned just a few hundred brains on meditation to date, which makes for a pretty small research sample. And some scientists say researchers are overeager to use brain science to prove that meditation “works.”

“This is a field that has been populated by true believers,” says Emory University scientist Charles Raison, who has studied meditation’s effect on the immune system. “Many of the people doing this research are trying to prove scientifically what they already know from experience, which is a major flaw.”

But Davidson says that other types of scientists also have deep personal interest in what they’re studying. And he argues that that’s a good thing.

“There’s a cadre of grad students and post docs who’ve found personal value in meditation and have been inspired to study it scientifically,” Davidson says. “These are people at the very best universities and they want to do this for a career.

“In ten years,” he says, “we’ll find that meditation research has become mainstream.”

Monday, October 25, 2010

Morality: My brain made me do it

Understanding how morality is linked to brain function will require us to rethink our justice system, says Martha J. Farah

By Martha J. Farah, 22 October 2010

AS SCIENCE exposes the gears and sprockets of moral cognition, how will it affect our laws and ethical norms?

We have long known that moral character is related to brain function. One remarkable demonstration of this was provided by Phineas Gage, a 19th-century construction foreman injured in an explosion. After a large iron rod was blown through his head, destroying bits of his prefrontal cortex, Gage was transformed from a conscientious, dependable worker to a selfish and erratic character, described by some as antisocial.

Recent research has shown that psychopaths, who behave antisocially and without remorse, differ from the rest of us in several brain regions associated with self-control and moral cognition (Behavioral Sciences and the Law, vol 26, p 7). Even psychologically normal people who merely score higher in psychopathic traits show distinctive differences in their patterns of brain activation when contemplating moral decisions (Molecular Psychiatry, vol 14, p 5).

The idea that moral behaviour is dependent on brain function presents a challenge to our usual ways of thinking about moral responsibility. A remorseless murderer is unlikely to win much sympathy, but show us that his cold-blooded cruelty is a neuropsychological impairment and we are apt to hold him less responsible for his actions. Presumably for this reason, fMRI evidence was introduced by the defence in a recent murder trial to show that the perpetrator had differences in various brain regions which they argued reduced his culpability. Indeed, neuroscientific evidence has been found to exert a powerful influence over decisions by judges and juries to find defendants "not guilty by reason of insanity" (Behavioral Sciences and the Law, vol 26, p 85).

Outside the courtroom, people tend to judge the behaviour of others less harshly when it is explained in light of physiological, rather than psychological processes (Ethics and Behavior, vol 15, p 139). This is as true for serious moral transgressions, like killing, as for behaviours that are merely socially undesirable, like overeating. The decreased moral stigma surrounding drug addiction is undoubtedly due in part to our emerging view of addiction as a brain disease.

What about our own actions? Might an awareness of the neural causes of behaviour influence our own behaviour? Perhaps so. In a 2008 study, researchers asked subjects to read a passage on the incompatibility of free will and neuroscience from Francis Crick's book The Astonishing Hypothesis (Simon and Schuster, 1995). This included the statement, " 'You', your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules." The researchers found that these people were then more likely to cheat on a computerised test than those who had read an unrelated passage (Psychological Science, vol 19, p 49).

So will the field of moral neuroscience change our laws, ethics and mores? The growing use of brain scans in courtrooms, societal precedents like the destigmatisation of addiction, and studies like those described above seem to say the answer is yes. And this makes sense. For laws and mores to persist, they must accord with our understanding of behaviour. For example, we know that young children have limited moral understanding and self-control, so we do not hold them criminally accountable for their behaviour. To the extent that neuroscience changes our understanding of human behaviour - and misbehaviour - it seems destined to alter society's standards of morality.

Martha J. Farah is the director of the Center for Neuroscience and Society at the University of Pennsylvania in Philadelphia. Her new book is Neuroethics (MIT Press, 2010).

Thursday, October 21, 2010

"Wet Computer" Literally Simulates Brain Cells

Many next-gen supercomputers try to imitate how brain cells communicate and build digital versions of neural networks. Now the BBC brings word of the most ambitious project yet -- a "wet computer" that will literally simulate neurons and signal processing on the chemical level.

By Jeremy Hsu, Popular Science

The $2.6 million effort aims to do what existing computers can't, including control tiny molecular robots or direct chemical assembly of nanogears. It may also aid the rise of intelligent drugs that react smartly to chemical signals from the human body.

The biologically inspired computer does not harness living cells. Instead, it will use chemical counterparts that spontaneously form coatings similar to biological cell walls, and can even pass signals between the chemical cells.

Such chemical cells can also undergo a "refractory period" after receiving a chemical signal. No outside signals can influence the cells during that period, and so the self-regulating system prevents an unchecked chain reaction from triggering many connected cells. That level of organization means that such chemical cells could form networks that function like a brain.
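To see why that self-regulation matters, here is a minimal software sketch of the refractory idea – purely illustrative, not the project's actual chemistry, and every name in it (Cell, REFRACTORY_STEPS, the one-dimensional row of cells) is invented. After a cell fires, it ignores further input for a few steps, so a stimulus travels outward as a clean wave instead of echoing back and forth:

```python
# Toy model of a refractory period, for illustration only. A row of "cells"
# passes a signal to neighbors; after firing, a cell is unresponsive for a
# few steps, so the signal cannot bounce back and multiply.

REFRACTORY_STEPS = 3  # steps a cell stays unresponsive after firing


class Cell:
    def __init__(self):
        self.refractory = 0  # steps left before this cell can fire again

    def receive(self):
        """Fire on an incoming signal unless the cell is refractory."""
        if self.refractory > 0:
            return False
        self.refractory = REFRACTORY_STEPS
        return True

    def tick(self):
        """Advance one time step; the cell gradually recovers excitability."""
        if self.refractory > 0:
            self.refractory -= 1


def simulate(n_cells=11, steps=8):
    cells = [Cell() for _ in range(n_cells)]
    middle = n_cells // 2
    cells[middle].receive()              # stimulate one cell in the middle
    active = {middle}
    for step in range(steps):
        next_active = set()
        for i in active:                 # each firing cell signals both neighbors
            for j in (i - 1, i + 1):
                if 0 <= j < n_cells and cells[j].receive():
                    next_active.add(j)
        for cell in cells:
            cell.tick()
        print(f"step {step}: firing cells {sorted(next_active)}")
        active = next_active


if __name__ == "__main__":
    simulate()
```

Set REFRACTORY_STEPS to 0 and the same network lets activity echo and multiply without limit – the unchecked chain reaction the refractory period prevents.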

Wednesday, October 20, 2010

Extinguishing Fear

Erasing frightening memories may be possible during a brief period after recollection.

By Molly Webster
Thursday, April 22, 2010

When we learn something, for it to become a memory, the event must be imprinted on our brain, a phenomenon known as consolidation. In turn, every time we retrieve a memory, it can be reconsolidated—that is, more information can be added to it. Now psychologist Liz Phelps of New York University and her team report using this “reconsolidation window” as a drug-free way to erase fearful memories in humans. Although techniques for overcoming fearful memories have existed for some time, these methods do not erase the initial, fearful memory. Rather they leave participants with two memories—one scary, one not—either of which may be called up when a trigger presents itself. But Phelps’s new experiment, which confirms earlier studies in rats, suggests that when a memory is changed during the so-called reconsolidation window, the original one is erased.

Using a mild electric shock, Phelps’s team taught 65 participants to fear certain colored squares as they appeared on a screen. Normally, to overcome this type of fear, researchers would show participants the feared squares again without giving them a shock, in an effort to create a safe memory of the squares. Phelps’s group did that, but in some cases investigators asked subjects to contemplate their fearful memory for at least 10 minutes before they saw the squares again. These participants actually replaced their old fearful memory with a new, safe memory. When they saw the squares again paired with shocks up to a year later, they were slow to relearn their fear of the squares. In contrast, subjects who created a safe memory of the squares without first contemplating their fearful memory for 10 minutes immediately reactivated their older, fearful memory when they saw a square and got a shock.

The researchers suspect that after calling up a memory, it takes about 10 minutes before the window of opportunity opens up for the memory to be reconsolidated, or changed, in a meaningful way, Phelps explains. “But there is some combination of spacing and timing that we need to figure out,” she adds—the scientists do not yet know how long the window lasts. Even more intriguing is the role contemplation plays—does sitting and thinking about the fearful memory make it more malleable than does simply recalling it? Although questions remain, Phelps and her colleagues hope their work will eventually help people with debilitating phobias or perhaps even post-traumatic stress disorder.
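The timing logic here lends itself to a short sketch. What follows is a purely hypothetical toy model – the MemorySystem class, its update rules, and the use of the 10-minute estimate as the moment the window opens are all invented for illustration, not taken from the study – but it captures the claimed difference between rewriting a memory inside the reconsolidation window and merely adding a second memory outside it:

```python
# A toy sketch of the reconsolidation-window logic described above.
# Everything here is invented for illustration; it is not a model
# taken from the study itself.

WINDOW_OPENS_AFTER_MIN = 10  # researchers' rough estimate of when the window opens


class MemorySystem:
    def __init__(self):
        self.memories = ["fear: squares -> shock"]
        self.minutes_since_recall = None  # None until the memory is retrieved

    def recall_fear(self):
        # Retrieving the memory starts the clock on the reconsolidation window.
        self.minutes_since_recall = 0

    def wait(self, minutes):
        if self.minutes_since_recall is not None:
            self.minutes_since_recall += minutes

    def extinction_training(self):
        # Show the squares with no shock, creating a "safe" association.
        in_window = (self.minutes_since_recall is not None
                     and self.minutes_since_recall >= WINDOW_OPENS_AFTER_MIN)
        if in_window:
            # Inside the window, the original trace is rewritten outright.
            self.memories = ["safe: squares -> nothing"]
        else:
            # Outside it, a second memory is laid down alongside the old
            # one, and either can be triggered later.
            self.memories.append("safe: squares -> nothing")


# The two arms of the experiment, schematically:
contemplators = MemorySystem()
contemplators.recall_fear()
contemplators.wait(10)               # sit with the fearful memory first
contemplators.extinction_training()
print(contemplators.memories)        # only the safe memory remains

no_wait = MemorySystem()
no_wait.recall_fear()
no_wait.extinction_training()        # the window has not opened yet
print(no_wait.memories)              # fearful and safe memories coexist
```

The point of the sketch is only the branch: the same extinction training either overwrites the fear memory or coexists with it, depending solely on the timing of retrieval.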

Why "Magical Thinking" Works for Some People

There is actually some science behind "magical thinking" and the edge that it can give people.

By Piercarlo Valdesolo
Tuesday, October 19, 2010

Ray Allen’s pregame routine never changes. A nap from 11:30am to 1:00pm, chicken and white rice for lunch at 2:30, a stretch in the gym at 3:45, a quick head shave, then practice shots at 4:30. The same amount of shots must be made from the same spots every day – the baselines and elbows of the court, ending with the top of the key. Similar examples of peculiar rituals and regimented routines in athletics abound. Jason Giambi would wear a golden thong if he found himself in a slump at the plate, and Moises Alou, concerned about losing his dexterous touch with the bat, would frequently urinate on his hands. This type of superstitious behavior can veer from the eccentric to the pathological, and though many coaches, teammates and fans snicker and shake their heads, a new study headed by Lysann Damisch at the University of Cologne and recently published in the journal Psychological Science suggests that we should all stop smirking and start rubbing our rabbit’s foot.

When it comes to superstitions, social scientists have generally agreed on one thing: they are fundamentally irrational. “Magical thinking” (as it has been called) is defined as the belief that an object, action or circumstance not logically related to a course of events can influence its outcome. In other words, stepping on a crack cannot, given what we know about the principles of causal relations, have any direct effect on the probability of your mother breaking her back. Those who live in fear of such a tragedy are engaging in magical thought and behaving irrationally.

Yet in their study, Damisch and colleagues challenge the conclusion that superstitious thoughts bear no causal influence on future outcomes. Of course, they were not hypothesizing that the trillions of tiny cracks upon which we tread every day are imbued with some sort of sinister spine-crushing malevolence. Instead, they were interested in the types of superstitions that people think bring them good luck. The lucky hats, the favorite socks, the ritualized warmup routines, the childhood blankies. Can belief in such charms actually have an influence over one’s ability to, say, perform better on a test or in an athletic competition? In other words, is Ray Allen’s performance on the basketball court in some ways dependent on eating chicken and rice at exactly 2:30? Did Jason Giambi’s golden thong actually have a hand in stopping a hitless streak?

To initially test this possibility, experimenters brought participants into the lab and told them that they would be doing a little golfing. They were to see how many of 10 putts they could make from the same location. The manipulation was simply this: when experimenters handed the golf ball to the participant they either mentioned that the ball “has turned out to be a lucky ball” in previous trials, or that the ball was simply the one “everyone had used so far”. Remarkably, the mere suggestion that the ball was lucky significantly influenced performance, causing participants to make almost two more putts on average.

Why? Surely it couldn’t be that the same golf ball becomes lucky at the experimenter’s suggestion – there must be an explanation grounded in the psychological influence that belief in lucky charms has on the superstitious. In a follow-up experiment the researchers hypothesized that this kind of magical thinking can actually increase participants’ confidence in their own capabilities. That is, believing in lucky charms would increase participants’ “self-efficacy,” and it is this feeling of “I can do this,” not any magical properties of the object itself, that predicts success. To test this, they had participants bring in their own lucky charms from home and assigned them to either a condition where they would be performing a task in the presence of their charm, or a condition where the experimenter removed the charm from the room before the task. Participants rated their perceived level of self-efficacy and then completed a memory task that was essentially a variant of the game Concentration.

And, indeed, the participants who were in the presence of their charm performed better on the memory task and reported increased self-efficacy. A final study sought to determine exactly how the increased confidence that comes along with a lucky charm influences performance. Specifically, was it making participants set loftier goals for themselves? Was it increasing their persistence on the task? Turns out, it’s both. Participants in the charm-present conditions reported setting higher goals on an anagram task and demonstrated increased perseverance on the task (as measured by the amount of time they spent trying to solve it before asking for help).

So what does this all mean? Should you start scouring the earth for four-leaf clovers? Establish a quirky early morning pre-work routine to increase your productivity? Sadly, if you believe the results reported in this article, none of that will do you any good. The influence of the charm depends crucially on your belief in its inherent powers. Once you acknowledge that performance is a function of what goes on in your brain rather than a product of any mystical properties of the object itself, it becomes useless. That feeling of “I can do this” will wither away as soon as you realize that nothing external, nothing mystical, will influence how you perform – it’s just you and your abilities. Just as the science of astronomy strips the starry night of its magic, the science of the mind strips your superstitions of their power. You’d be better off following the model of Walt Whitman: throw on your lucky fedora and forget you ever read this article.

Tuesday, October 12, 2010

Moonlighting as an Alchemist (Conjurer of Chemicals)

By NATALIE ANGIER

Sir Isaac Newton was a towering genius in the history of science, he knew he was a genius, and he didn’t like wasting his time. Born on Dec. 25, 1642, the great English physicist and mathematician rarely socialized or traveled far from home. He didn’t play sports or a musical instrument, gamble at whist or gambol on a horse. He dismissed poetry as “a kind of ingenious nonsense,” and the one time he attended an opera he fled at the third act. Newton was unmarried, had no known romantic liaisons and may well have died, at the age of 84, with his virginity intact. “I never knew him to take any recreation or pastime,” said his assistant, Humphrey Newton, “thinking all hours lost that were not spent on his studies.”

No, it wasn’t easy being Newton. Not only did he hammer out the universal laws of motion and gravitational attraction, formulating equations that are still used today to plot the trajectories of space rovers bound for Mars; and not only did he discover the spectral properties of light and invent calculus. Sir Isaac had a whole other full-time career, a parallel intellectual passion that he kept largely hidden from view but that rivaled and sometimes surpassed in intensity his devotion to celestial mechanics. Newton was a serious alchemist, who spent night upon dawn for three decades of his life slaving over a stygian furnace in search of the power to transmute one chemical element into another.

Newton’s interest in alchemy has long been known in broad outline, but the scope and details of that moonlighting enterprise are only now becoming clear, as science historians gradually analyze and publish Newton’s extensive writings on alchemy — a million-plus words from the Newtonian archives that had previously been largely ignored.

Speaking last week at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, William Newman, a professor of the history and philosophy of science at Indiana University in Bloomington, described his studies of Newton’s alchemical oeuvre, and offered insight into the central mystery that often baffles contemporary Newton fans. How could the man who vies in surveys with Albert Einstein for the title of “greatest physicist ever,” the man whom James Gleick has aptly designated “chief architect of the modern world,” have been so swept up in what looks to modern eyes like a medieval delusion? How could the ultimate scientist have been seemingly hornswoggled by a totemic pseudoscience like alchemy, which in its commonest rendering is described as the desire to transform lead into gold? Was Newton mad — perhaps made mad by exposure to mercury, as some have proposed? Was he greedy, or gullible, or stubbornly blind to the truth?

In Dr. Newman’s view, none of the above. Sir Isaac the Alchemist, he said, was no less the fierce and uncompromising scientist than was Sir Isaac, author of the magisterial Principia Mathematica. There were plenty of theoretical and empirical reasons at the time to take the principles of alchemy seriously, to believe that compounds could be broken down into their basic constituents and those constituents then reconfigured into other, more desirable substances.

Miners were pulling up from the ground twisted bundles of copper and silver that were shaped like the stalks of a plant, suggesting that veins of metals and minerals were proliferating underground with almost florid zeal.

Pools found around other mines seemed to have extraordinary properties. Dip an iron bar into the cerulean waters of the vitriol springs of modern-day Slovakia, for example, and the artifact will emerge agleam with copper, as though the dull, dark particles of the original had been elementally reinvented. “It was perfectly reasonable for Isaac Newton to believe in alchemy,” said Dr. Newman. “Most of the experimental scientists of the 17th century did.”

Moreover, while the alchemists of the day may not have mastered the art of transmuting one element into another — an ordeal that we have since learned requires serious equipment like a particle accelerator, or the belly of a star — their work yielded a bounty of valuable spinoffs, including new drugs, brighter paints, stronger soaps and better booze. “Alchemy was synonymous with chemistry,” said Dr. Newman, “and chemistry was much bigger than transmutation.”

For Newton, alchemy may also have proved bigger than chemistry. Dr. Newman argues that Sir Isaac’s alchemical investigations helped yield one of his fundamental breakthroughs in physics: his discovery that white light is a mixture of colored rays, and that a sunbeam prismatically fractured into the familiar rainbow suite called Roy G. Biv can with a lens be resolved into a tidy white sunbeam once again. “I would go so far as to say that alchemy was crucial to Newton’s breakthroughs in optics,” said Dr. Newman. “He’s not just passing light through a prism — he’s resynthesizing it.” Consider this a case of “technology transfer,” said Dr. Newman, “from chemistry to physics.”

The conceptual underpinning to the era’s alchemical fixation was the idea of matter as hierarchical and particulate — that tiny, indivisible and semipermanent particles come together to form ever more complex and increasingly porous substances, a notion not so different from the reality revealed by 20th-century molecular biology and quantum physics.

With the right solvents and the perfect reactions, the researchers thought, it should be possible to reduce a substance to its core constituents — its corpuscles, as Newton called them — and then prompt the corpuscles to adopt new configurations and programs. Newton and his peers believed it was possible to prompt metals to grow, or “vegetate,” in a flask. After all, many chemical reactions were known to leave lovely dendritic residues in their wake. Dissolve a pinch of silver and mercury in a solution of nitric acid, drop in a lump of metal amalgam, and soon a spidery, glittering “Tree of Diana” will form on the glass. Or add iron to hydrochloric acid and boil the solution to dryness. Then prepare a powdery silicate mix of sand and potassium carbonate. Put the two together, and you will have a silica garden, in which the ruddy ferric chloride rises and bifurcates, rises and bifurcates, as though it were reaching toward sunlight and bursting into bloom.

Add to this the miners’ finds of tree- and rootlike veins of metals and alchemists understandably concluded that metals must be not only growing underground, but ripening. Hadn’t twined ores of silver and lead been found? Might not the lead be halfway to a mature state of silverdom? Surely there was a way to keep the disinterred metal root balls sprouting in the lab, coaxing their fruit to full succulent ripeness as the noblest of metals — lead into silver, copper to gold?

Well, no. If mineral veins sometimes resemble botanical illustrations, blame it on Earth’s molten nature and fluid mechanics: when seen from above, a branching river also looks like a tree.

Yet the alchemists had their triumphs, inventing brilliant new pigments, perfecting the old — red lead oxide, yellow arsenic sulfide, a little copper and vinegar and you’ve got bright green verdigris. Artists were advised, forget about mixing your own colors: you can get the best from an alchemist. The chemistry lab replaced the monastery garden as a source of new medicines. “If you go to the U.K. today and use the word ‘chemist,’ the assumption is that you’re talking about the pharmacist,” said Dr. Newman. “That tradition goes back to the 17th century.”

Alchemists also became expert at spotting cases of fraud. It was a renowned alchemist who proved that the “miraculous” properties of vitriol springs had nothing to do with true transmutation. Instead, the water’s vitriol, or copper sulfate, would cause iron atoms on the surface of a submerged iron rod to leach into the water, leaving pores that were quickly occupied by copper atoms from the spring.

“There were a lot of charlatans, especially in the noble courts of Europe,” said Dr. Newman. Should an alchemist be found guilty of attempting to deceive the king, the penalty was execution, and in high gilded style. The alchemist would be dressed in a tinsel suit and hanged from a gallows covered in gold-colored foil.

Newton proved himself equally intolerant of chicanery, when, in his waning years, he took a position as Master of the Mint. “In pursuing clippers and counterfeiters, he called on long-nurtured reserves of Puritan anger and righteousness,” writes James Gleick in his biography of Newton.

“He was brutal,” said Mark Ratner, a materials chemist at Northwestern University. “He sentenced people to death for trying to scrape the gold off of coins.” Newton may have been a Merlin, a Zeus, the finest scientist of all time. But make no mistake about it, said Dr. Ratner. “He was not a nice guy.”