Sunday, December 7, 2008

Broad Use of Brain Boosters?

Use of drugs to enhance memory and concentration should be permitted, experts say.
By Emily Singer

Off-label use of stimulants, such as Ritalin, is on the rise among college students. Studies show that 5 percent to 15 percent of students use prescription drugs as study aids, and surveys suggest the practice may be common among academics as well. The trend has sparked debates over how and when these cognitive enhancers should be used. Military personnel routinely use stimulants while on active duty, but should that practice also be permitted among surgeons working long shifts? What about scientists working late nights in the lab? Or students taking exams?

A commentary appearing today online in the journal Nature advocates for broad access to brain-boosting drugs. According to the piece, written by a group of ethicists, psychologists, and cognitive neuroscientists, "cognitive enhancement, unlike enhancement for sports competitions, could lead to substantive improvements in the world." While opponents have argued that the use of performance-enhancing drugs is unfair and could undermine the value of hard work, the authors say that these drugs fall into the same category as more common efforts to increase brain function, such as drinking a cup of coffee, or getting a good night's sleep, and thus should be regulated accordingly.

One of the biggest concerns associated with broad access to these drugs is that people will feel pressured to take them to get ahead, or just to keep up. An informal survey conducted by Nature last year of 1,400 people from 60 countries found that 20 percent of respondents engaged in off-label use of drugs to enhance concentration and memory. Ritalin was the most popular, followed by Adderall. Both are prescribed for ADHD. The survey confirmed the potential for peer pressure; while 85 percent of respondents said that the use of these drugs by children under the age of 16 should be restricted, a third said they would feel pressure to give them to their children if others were using them.

The authors of the commentary also note that if cognitive enhancers are to be used more broadly, more extensive study of the risks and benefits of the drugs is sorely needed. The side effects of long-term stimulant use, especially in children, are not yet known. And the potential for dependence and abuse has not been well documented.

Saturday, December 6, 2008

They killed their neighbors: genocide's foot soldiers

By Courtney Yager
CNN

Adolf Hitler, Pol Pot, Slobodan Milosevic. They are household names, infamous for masterminding genocide. But who were the foot soldiers who did the dirty work?

In many cases they were equally notorious in their communities because they were the friends, neighbors and co-workers of those they raped, slaughtered and buried alive.

Nusreta Sivac watched ordinary people become killers while imprisoned in a concentration camp in Bosnia.

She saw prisoners beaten beyond recognition and watched camp guards force a Muslim prisoner to rape a Muslim woman in front of everyone.

She was shocked to see people she knew running the camp. "They acted as if they had never seen me before," she said. "It was difficult for me to understand how people could turn into beasts overnight."

While some perpetrators participate unwillingly -- they are forced to kill or face death themselves -- many ordinary people are manipulated into participating in the killing machine voluntarily.

Researchers say most perpetrators of genocide were not destined for murder and had never killed before.

"You don't have to be mentally ill or even innately evil or criminal. You can be ordinary, no better or worse than you or me, and commit killing or genocide," said Harvard psychiatrist Robert Lifton, who has studied Nazi doctors.

"The truth is that we all have the possibility for genocidal behavior."

Experts have reached a troubling conclusion: It was actually very easy for the architects of genocide to find more than enough ordinary people to do the killing.

Genocide is often the result of a "perfect storm." A country reeling from political and economic turmoil, a fanatical leader promising to make things better and a vulnerable population targeted for blame -- all combine in a blueprint for mass murder.

Architects of genocide use the same tools to execute their plan.

Group identity

Millions have been killed for their religion, their ethnicity or simply for being educated. Group identity is one of the foundations of genocide, and that allegiance makes it easier for extremist leaders to stoke age-old animosities between groups.

"We all divide the world into 'us' and 'them,' " said psychologist Ervin Staub, author of "The Roots of Evil: The Origins of Genocide and Other Group Violence."

"Some people are like 'us' because of nationality, religion, race, etc. Those that are not like 'us' are 'them.' "

"Group identity intensifies during difficult times," Staub said.

Jean-Bosco Bizimana, a Rwandan Hutu, slaughtered his Tutsi neighbors 14 years ago. Leaders of the genocide exploited the history of hatred between the Hutus and Tutsis to pit them against each other. But before the genocide, the two groups had overcome their hostility to live peacefully together.

"We were manipulated," Bizimana said. "The government pushed us to kill. Before that, we intermarried, we helped each other in daily life and we shared everything. We ourselves can't even believe what happened."

Bizimana's wife said her husband "would go around with the mob, and to show them he was part of it, he would kill."

Perpetrators don't want to be seen as weak, and in a mob mentality, individual guilt seems to disappear.

"People will do almost anything in a group and will do anything not to be rejected," said psychologist Philip Zimbardo, a professor emeritus at Stanford and famous for his 1971 Stanford Prison Experiment, which divided student volunteers into "prisoners" and "guards" and showed how easily people could be induced to commit sadistic acts.

"They give up a sense of personal accountability and diffuse responsibility to the leader."

Propaganda and dehumanization

Genocidal regimes use propaganda to incite hatred. During the genocide in Bosnia, for example, a fictitious news report said Muslims were feeding Serb children to animals at the Sarajevo zoo.

When people feel threatened and endangered, they can be led to kill. "Most genocides are shaped on [a perceived need for] self-defense," said Christopher Browning, a University of North Carolina history professor who studied a Nazi police battalion.

Bizimana said Rwandan government radio broadcasts led him to kill. "When instructions come from the government, we believed it was the right thing to do," he said.

"People tend to believe the world is a just place," psychologist Staub said. So the targeted group "is seen as though they did something to deserve the suffering."

The propaganda machine portrays the victim group as less than human. In Rwanda, the Hutus called their Tutsi neighbors "cockroaches." In Cambodia, the Khmer Rouge said their victims were "worms." To the Nazis, Jews were "vermin."

Dehumanization is the most powerful psychological tool used in all mass murder and genocides, Zimbardo said. "Dehumanization blurs your vision. You look at these people and you do not see them as human."

Instead, the enemy is treated as a germ -- as something to eradicate, or else face the threat of infection.

"Purification is at the heart of genocide," said Harvard's Lifton. "In that purification ... [the killers] are healing."

Recently discovered photos show Nazi officers at a retreat near Auschwitz relaxing as though they are taking a break from a routine job, not an extermination factory. "In order to carry out the function of killing, one must instill in that environment a sense of ordinariness," said Lifton.

In the end, the masterminds of genocide see their visions play out: Foot soldiers carry out the mission and entire populations are displaced or killed.

Perpetrators and victims don't realize what they're involved in until it's too late, said Ben Kiernan, director of the Genocide Studies Program at Yale University.

"It's a conspiracy, a silent secret plan to set up a situation whereby the victims, who are unsuspecting, are brought into a conflict with a large number of people, many of whom are also unsuspecting," Kiernan said.

Looking back at their crimes, some perpetrators are now sorry for their actions, including Bizimana. "What we did to them in the past was very bad," he said. "Deep in my heart, I regret it."

Bizimana has since reconciled with his surviving Tutsi neighbors, and is trying to build unity in his country.

"What happened," he vows, "will never happen again."

Tuesday, December 2, 2008

1 in 5 Young Adults Have Personality Disorder, Study Finds

AP

CHICAGO —
Almost one in five young American adults has a personality disorder that interferes with everyday life, and even more abuse alcohol or drugs, researchers reported Monday in the most extensive study of its kind.

The disorders include problems such as obsessive or compulsive tendencies and anti-social behavior that can sometimes lead to violence. The study also found that fewer than 25 percent of college-aged Americans with mental problems get treatment.

One expert said personality disorders may be overdiagnosed. But others said the results were not surprising since previous, less rigorous evidence has suggested mental problems are common on college campuses and elsewhere.

Experts praised the study's scope — face-to-face interviews about numerous disorders with more than 5,000 young people ages 19 to 25 — and said it spotlights a problem college administrators need to address.

Study co-author Dr. Mark Olfson of Columbia University and New York State Psychiatric Institute called the widespread lack of treatment particularly worrisome. He said it should alert not only "students and parents, but also deans and people who run college mental health services about the need to extend access to treatment."

Counting substance abuse, the study found that nearly half of the young people surveyed, students and non-students alike, have some sort of psychiatric condition.

Personality disorders were the second most common problem behind drug or alcohol abuse as a single category. The disorders include obsessive, anti-social and paranoid behaviors that are not mere quirks but actually interfere with ordinary functioning.

The study authors noted that recent tragedies such as fatal shootings at Northern Illinois University and Virginia Tech have raised awareness about the prevalence of mental illness on college campuses.

They also suggest that this age group might be particularly vulnerable.

"For many, young adulthood is characterized by the pursuit of greater educational opportunities and employment prospects, development of personal relationships, and for some, parenthood," the authors said. These circumstances, they said, can result in stress that triggers the start or recurrence of psychiatric problems.

The study was released Monday in Archives of General Psychiatry. It was based on interviews with 5,092 young adults in 2001 and 2002.

Olfson said it took time to analyze the data, including weighting the results to extrapolate national numbers. But the authors said the results would probably hold true today.

The study was funded with grants from the National Institutes of Health, the American Foundation for Suicide Prevention and the New York Psychiatric Institute.

Dr. Sharon Hirsch, a University of Chicago psychiatrist not involved in the study, praised it for raising awareness about the problem and the high numbers of affected people who don't get help.

Imagine if more than 75 percent of diabetic college students didn't get treatment, Hirsch said. "Just think about what would be happening on our college campuses."

The results highlight the need for mental health services to be housed with other medical services on college campuses, to erase the stigma and make it more likely that people will seek help, she said.

In the study, trained interviewers who were not psychiatrists questioned participants about their symptoms, using an assessment tool similar to the criteria doctors use to diagnose mental illness.

Dr. Jerald Kay, a psychiatry professor at Wright State University and chairman of the American Psychiatric Association's college mental health committee, said the assessment tool is considered valid and more rigorous than self-reports of mental illness. He was not involved in the study.

Personality disorders showed up in similar numbers among both students and non-students, including the most common one, obsessive compulsive personality disorder. About 8 percent of young adults in both groups had this illness, which can include an extreme preoccupation with details, rules, orderliness and perfectionism.

Kay said the prevalence of personality disorders was higher than he would expect and questioned whether the condition might be overdiagnosed.

All good students have a touch of "obsessional" personality that helps them work hard to achieve. But that's different from an obsessional disorder that makes people inflexible and controlling and interferes with their lives, he explained.

Obsessive compulsive personality disorder differs from the better known OCD, or obsessive-compulsive disorder, which features repetitive actions such as hand-washing to avoid germs.

OCD is thought to affect about 2 percent of the general population. The study didn't examine OCD separately but grouped it with all anxiety disorders, seen in about 12 percent of college-aged people in the survey.

The overall rate of other disorders was also pretty similar among college students and non-students.

Substance abuse, including drug addiction, alcoholism and other drinking that interferes with school or work, affected nearly one-third of those in both groups.

Slightly more college students than non-students were problem drinkers — 20 percent versus 17 percent. And slightly more non-students had drug problems — nearly 7 percent versus 5 percent.

In both groups, about 8 percent had phobias and 7 percent had depression.

Bipolar disorder was slightly more common in non-students, affecting almost 5 percent versus about 3 percent of students.

Wednesday, November 12, 2008

Jacking into the Brain--Is the Brain the Ultimate Computer Interface?

How far can science advance brain-machine interface technology? Will we one day pipe the latest blog entry or NASCAR highlights directly into the human brain as if the organ were an outsize flash drive?

By Gary Stix

The cyberpunk science fiction that emerged in the 1980s routinely paraded “neural implants” for hooking a computing device directly to the brain: “I had hundreds of megabytes stashed in my head,” proclaimed the protagonist of “Johnny Mnemonic,” a William Gibson story that later became a wholly forgettable movie starring Keanu Reeves.

The genius of the then emergent genre (back in the days when a megabyte could still wow) was its juxtaposition of low-life retro culture with technology that seemed only barely beyond the capabilities of the deftest biomedical engineer. Although the implants could not have been replicated at the Massachusetts Institute of Technology or the California Institute of Technology, the best cyberpunk authors gave the impression that these inventions might yet materialize one day, perhaps even in the reader’s own lifetime.

In the past 10 years, however, more realistic approximations of technologies originally evoked in the cyberpunk literature have made their appearance. A person with electrodes implanted inside his brain has used neural signals alone to control a prosthetic arm, a prelude to allowing a human to bypass limbs immobilized by amyotrophic lateral sclerosis or stroke. Researchers are also investigating how to send electrical messages in the other direction, providing feedback that enables a primate to actually sense what a robotic arm is touching.

But how far can we go in fashioning replacement parts for the brain and the rest of the nervous system? Besides controlling a computer cursor or robot arm, will the technology somehow actually enable the brain’s roughly 100 billion neurons to function as a clandestine repository for pilfered industrial espionage data or another plot element borrowed from Gibson?

Will Human Become Machine?
Today’s Hollywood scriptwriters and futurists, less skilled heirs of the original cyberpunk tradition, have embraced these neurotechnologies. The Singularity Is Near, scheduled for release next year, is a film based on the ideas of computer scientist Ray Kurzweil, who has posited that humans will eventually achieve a form of immortality by transferring a digital blueprint of their brain into a computer or robot.

Yet the dream of eternity as a Max Headroom–like avatar trapped inside a television set (or as a copy-and-paste job into the latest humanoid bot) remains only slightly less distant than when René Descartes ruminated on mind-body dualism in the 17th century. The wholesale transfer of self—a machine-based facsimile of the perception of the ruddy hues of a sunrise, the constantly shifting internal emotional palette and the rest of the mix that combines to evoke the uniquely subjective sense of the world that constitutes the essence of conscious life—is still nothing more than a prop for fiction writers.

Hoopla over thought-controlled prostheses, moreover, obscures the lack of knowledge of the underlying mechanisms of neural functioning needed to feed information into the brain to re-create a real-life cyberpunk experience. “We know very little about brain circuits for higher cognition,” says Richard A. Andersen, a neuroscientist at Caltech.

What, then, might realistically be achieved by interactions between brains and machines? Do the advances from the first EEG experiment to brain-controlled arms and cursors suggest an inevitable, deterministic progression, if not toward a Kurzweilian singularity, then perhaps toward the possibility of inputting at least some high-level cognitive information into the brain? Could we perhaps download War and Peace or, with a nod to The Matrix, a manual of how to fly a helicopter? How about inscribing the sentence “See Spot run” into the memory of someone who is unconscious of the transfer? How about just the word “see”?

These questions are not entirely academic, although some wags might muse that it would be easier just to buy a pair of reading glasses and do things the old-fashioned way. Even if a pipeline to the cortex remains forever a figment of science fiction, an understanding of how photons, sound waves, scent molecules and pressure on the skin get translated into lasting memories will be more than mere cyberpunk entertainment. A neural prosthesis built from knowledge of these underlying processes could help stroke victims or Alzheimer’s patients form new memories.

Primitive means of jacking in already reside inside the skulls of thousands of people. Deaf or profoundly hearing-impaired individuals carry cochlear implants that stimulate the auditory nerve with sounds picked up by a microphone—a device that neuroscientist Michael S. Gazzaniga of the University of California, Santa Barbara, has characterized as the first successful neuroprosthesis in humans. Arrays of electrodes that serve as artificial retinas are in the laboratory. If they work, they might be tweaked to give humans night vision.

The more ambitious goal of linking Amazon.com directly to the hippocampus, a neural structure involved with forming memories, requires technology that has yet to be invented. The bill of particulars would include ways of establishing reliable connections between neurons and the extracranial world—and a means to translate a digital version of War and Peace into the language that neurons use to communicate with one another. An inkling of how this might be done can be sought by examining leading work on brain-machine interfaces.

Your Brain on Text
Jacking text into the brain requires consideration of whether to insert electrodes directly into tissue, an impediment that might make neural implants impractical for anyone but the disabled. As has been known for nearly a century, the brain’s electrical activity can be detected without cracking bone. What looks like a swimming cap studded with electrodes can transmit signals from a paralyzed patient, thereby enabling typing of letters on a screen or actual surfing of the Web. Niels Birbaumer of the University of Tübingen in Germany, a leading developer of the technology, asserts that trial-and-error stimulation of the cortex using a magnetic signal from outside the skull, along with the electrode cap to record which neurons are activated, might be able to locate the words “see” or “run.” Once mapped, these areas could be fired up again to evoke those memories—at least in theory.

Some neurotechnologists think that if particular words reside in specific spots in the brain (which is debatable), finding those spots would probably require greater precision than is afforded by a wired swim cap. One of the ongoing experiments with invasive implants could possibly lead to the needed fine-level targeting. Philip R. Kennedy of Neural Signals and his colleagues designed a device that records the output of neurons. The hookup lets a stroke victim send a signal, through thought alone, to a computer that interprets it as, say, a vowel, which can then be vocalized by a speech synthesizer, a step toward forming whole words. This type of brain-machine interface might also eventually be used for activating individual neurons.

Still more precise hookups might be furnished by nanoscale fibers, measuring 100 nanometers or less in diameter, which could easily tap into single neurons because of their dimensions and their electrical and mechanical properties. Jun Li of Kansas State University and his colleagues have crafted a brushlike structure in which nanofiber bristles serve as electrodes for stimulating or receiving neural signals. Li foresees it as a way to stimulate neurons to allay Parkinson’s disease or depression, to control a prosthetic arm or even to flex astronauts’ muscles during long spaceflights to prevent the inevitable muscle wasting that occurs in zero gravity.

Learning the Language
Fulfilling the fantasy of inputting a calculus text—or even plugging in Traveler’s French before going on vacation—would require far deeper insight into the brain signals that encode language and other neural representations.

Unraveling the neural code is one of the most imposing challenges in neuroscience—and, to misappropriate Freud, would likely pave a royal road to an understanding of consciousness. Theorists have advanced many differing ideas to explain how the billions of neurons and trillions of synapses that connect them can ping meaningful messages to one another. The oldest is that the code corresponds to the rate of firing of the voltage spikes generated by a neuron.

Whereas the rate code may suffice for some stimuli, it might not be enough for booting a Marcel Proust or a Richard Feynman, supplying a mental screen capture of a madeleine cake or the conceptual abstraction of a textbook of differential equations. More recent work has focused on the precise timing of the intervals between each spike (temporal codes) and the constantly changing patterns of how neurons fire together (population codes).
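
To make the distinction concrete, here is a toy sketch in Python of how the same recorded activity might be summarized under each of the three coding schemes just described. The spike times and neuron names are invented for illustration; real decoding is far more involved.

    # Toy illustration of three candidate neural codes (invented numbers, not real data).
    spike_times = [0.012, 0.031, 0.047, 0.081, 0.095]  # one neuron's spikes, in seconds
    window = 0.1                                        # observation window, in seconds

    # Rate code: only the number of spikes per unit time carries the message.
    rate = len(spike_times) / window                    # 50 spikes per second

    # Temporal code: the precise intervals between successive spikes carry information.
    intervals = [round(t2 - t1, 3) for t1, t2 in zip(spike_times, spike_times[1:])]

    # Population code: the joint pattern of activity across many neurons, read out together.
    population = {"neuron_a": 50, "neuron_b": 12, "neuron_c": 0}  # firing rates across a group

    print(rate, intervals, population)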

Some help toward downloading to the brain might come from a decadelong endeavor to build an artificial hippocampus to help people with memory deficits, which may have the corollary benefit of helping researchers gain insights into the coding process. A collaboration between the University of Southern California and Wake Forest University has worked to fashion a replacement body part for this memory-forming brain structure. The hippocampus, seated deep within the brain’s temporal lobe, sustains damage in stroke or Alzheimer’s. An electronic bypass of a damaged hippocampus could restore the ability to create new memories. The project, funded by the National Science Foundation and the Defense Advanced Research Projects Agency, might eventually go further, enhancing normal memory or helping to deduce the particular codes needed for high-level cognition.

The two groups—led by Theodore W. Berger at U.S.C. and Samuel Deadwyler at Wake Forest—are preparing a technical paper showing that an artificial hippocampus took over from the biological organ the task of consolidating a rat’s memory of pressing a lever to receive a drop of water. Normally the hippocampus emits signals that are relayed to cortical areas responsible for storing the long-term memory of an experience. For the experiment, a chemical temporarily incapacitated the hippocampus. When the rat pressed the correct bar, electrical input from sensory and other areas of the cortex was channeled through a microchip, which, the scientists say, dispatched the same signals the hippocampus would have sent. A demonstration that an artificial device mimicked hippocampal output would mark a step toward deducing the underlying code that could be used to create a memory in the motor cortex—and perhaps one day to unravel ciphers for even higher-level behaviors.
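
A cartoon of that bypass logic: while the biological hippocampus is working, note what it emits for a given pattern of cortical input; once it is silenced, have the chip replay the stored response. The short Python sketch below is only an analogy under that assumption, with invented pattern names, and says nothing about the actual signal processing in the Berger and Deadwyler device.

    # Cartoon of the hippocampal-bypass idea: learn input -> output pairs while the
    # biological structure is intact, then replay them when it is chemically silenced.
    class HippocampusBypass:
        def __init__(self):
            self.mapping = {}  # cortical input pattern -> hippocampal output pattern

        def observe(self, cortical_input, hippocampal_output):
            """Training phase: record what the intact hippocampus emits for this input."""
            self.mapping[cortical_input] = hippocampal_output

        def dispatch(self, cortical_input):
            """Bypass phase: emit the stored output, or None if the pattern is unknown."""
            return self.mapping.get(cortical_input)

    chip = HippocampusBypass()
    chip.observe("lever_press_context", "consolidate_lever_memory")  # learned while intact
    print(chip.dispatch("lever_press_context"))                      # replayed during the bypass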

If the codes for the sentence “See Spot run”—or perhaps an entire technical manual—could be ascertained, it might, in theory, be possible to input them directly to an electrode array in the hippocampus (or cortical areas), evoking the scene in The Matrix in which instructions for flying a helicopter are downloaded by cell phone. Artificial hippocampus research postulates a scenario only slightly more prosaic. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” says Berger.

The seeming simplicity of the model of neural input envisaged by artificial hippocampus-related studies may raise more questions than it answers. Would such an implant overwrite existing memories? Would the code for the sentence “See Spot run” be the same for me as it is for you or, for that matter, a native Kurdish speaker? Would the hippocampal codes merge cleanly with other circuitry that provides the appropriate context, a semantic framework, for the sentence? Would “See Spot run” be misinterpreted as a laundry mishap instead of a trotting dog?

Some neuroscientists think the language of the brain may not be deciphered until understanding moves beyond the reading of mere voltage spikes. “Just getting a lot of signals and trying to understand what these signals mean and correlating them with particular behavior is not going to solve it,” notes Henry Markram, director of neuroscience and technology at the Swiss Federal Institute of Technology in Lausanne. A given input into a neuron or groups of neurons can produce a particular output—conversion of sensory inputs to long-term memory by the hippocampus, for instance—through many different pathways. “As long as there are lots of different ways to do it, you’re not even close,” he says.

The Blue Brain Project, which Markram heads, is an attempt that began in 2005 to use supercomputer-based simulations to reverse-engineer the brain at the molecular and cellular levels—modeling first the simpler rat organ and then the human version to unravel the underlying function of neural processes. The latter task awaits a computer that boasts a more than 1,000-fold improvement over the processing power of current supercomputers. The actual code, when it does emerge, may be structured very differently from what appears in today’s textbooks. “I think there will be a conceptual breakthrough that will have significant implications for how we think of reality,” Markram says. “It will be quite a profound thing. That’s probably why it’s such an intractable problem.”

The challenge involved in figuring out how to move information into the brain suggests a practical foreseeable limit for how far neurotechnology might be advanced. The task of forming the multitude of connections that make a memory is vastly different from magnetizing a set of bits on a hard disk. “Complex information like the contents of a book would require the interactions of a very large number of brain cells over a very large area of the nervous system,” observes neuroscientist John P. Donoghue of Brown University. “Therefore, you couldn’t address all of them, getting them to store in their connections the correct kind of information. So I would say based on current knowledge, it’s not possible.”

Writing to the brain may remain a dream lost in cyberspace. But the seeming impossibility does not make Donoghue less sanguine about ultimate expectations for feeding information the other way and developing brain-controlled prostheses for the severely disabled. He has been a leader in studies to implant an array of multiple electrodes into the brain that can furnish a direct line from the cortex to a prosthetic arm or even a wheelchair.

Donoghue predicts that in the next five years brain-machine interfaces will let a paralyzed person pick up a cup and take a drink of water and that, in some distant future, these systems might be further refined so that a person with an upper spinal cord injury might accomplish the unthinkable, perhaps even playing a game of basketball with prosthetics that would make a reality of The Six Million Dollar Man, the 1970s television series. Even without an information pipeline into the brain, disabled patients and basic researchers might still reap the benefits of lesser substitutes. Gert Pfurtscheller of the Graz University of Technology in Austria and his colleagues reported last year on a patient with a spinal cord injury who was able, merely by thinking, to traverse a virtual environment, moving from one end to the other of a simulated street. Duke University’s Miguel A. L. Nicolelis, another pioneer in brain-machine interfaces, has begun to explore how monkeys connected to brain-controlled prosthetic devices begin to develop a kinesthetic awareness, a sense of movement and touch, that is completely separate from sensory inputs into their biological bodies. “There’s some physiological evidence that during the experiment they feel more connected to the robots than to their own bodies,” he says.

The most important consequences of these investigations may be something other than neural implants and robotic arms. An understanding of central nervous system development acquired by the Blue Brain Project or another simulation may let educators understand the best ways to teach children and determine at what point a given pedagogical technique should be applied. “You can build an educational development program that is engineered to, in the shortest possible time, allow you to acquire certain capabilities,” Markram says. If he is right, research on neural implants and brain simulations will produce more meaningful practical benefits than dreams of the brain as a flash drive drawn from 20th-century science-fiction literature.

Note: This article was originally published with the title, "Jacking Into the Brain".

Tuesday, November 11, 2008

WW II vet held in Nazi slave camp breaks silence: 'Let it be known'

* World War II vet held in slave camp witnessed Nazi atrocities first-hand
* Anthony Acevedo, 84, was one of 350 U.S. soldiers held at Buchenwald subcamp
* Only about 165 survived captivity and their subsequent death march, he says
* Survivors signed documents never to speak; Acevedo says now people "must know"

By Wayne Drash, Thelma Gutierrez and Sara Weisfeldt

LOMA LINDA, California (CNN) -- Anthony Acevedo thumbs through the worn, yellowed pages of his diary emblazoned with the words "A Wartime Log" on its cover. It's a catalog of deaths and atrocities he says were carried out on U.S. soldiers held by Nazis at a slave labor camp during World War II -- a largely forgotten legacy of the war.

Acevedo pauses when he comes across a soldier with the last name of Vogel.

"He died in my arms. He wouldn't eat. He didn't want to eat," says Acevedo, now 84 years old. "He said, 'I want to die! I want to die! I want to die!' "

The memories are still fresh, some 60 years later. Acevedo keeps reading his entries, scrawled on the pages with a Schaeffer fountain pen he held dear.

He was one of 350 U.S. soldiers held at Berga am Elster, a satellite camp of the Nazis' notorious Buchenwald concentration camp. The soldiers, working 12-hour days, were used by the German army to dig tunnels and hide equipment in the final weeks of the war. Less than half of the soldiers survived their captivity and a subsequent death march, he says.

Acevedo shows few emotions as he scans the pages of his diary. But when he gets to one of his final entries, the decades of pent-up pain, the horror witnessed by a 20-year-old medic, are too much.

"We were liberated today, April the 23, 1945," he reads.

His body shakes, and he begins sobbing. "Sorry," he says, tears rolling down his face. "I'm sorry."

Acevedo's story is one that was never supposed to be told. "We had to sign an affidavit ... [saying] we never went through what we went through. We weren't supposed to say a word," he says.

The U.S. Army Center of Military History provided CNN a copy of the document signed by soldiers at the camp before they were sent back home. "You must be particularly on your guard with persons representing the press," it says. "You must give no account of your experience in books, newspapers, periodicals, or in broadcasts or in lectures."

The document ends with: "I understand that disclosure to anyone else will make me liable to disciplinary action."

The information was kept secret "to protect escape and evasion techniques and the names of personnel who helped POW escapees," said Frank Shirer, the chief historian at the U.S. Army Center of Military History.

Acevedo sees it differently. For a soldier who survived one of the worst atrocities of mankind, the military's reaction is still painful to accept. "My stomach turned to acid, and the government didn't care. They didn't give a hullabaloo."

It took more than 50 years, he says, before he received 100 percent disability benefits from the U.S. Department of Veterans Affairs.

Despite everything Acevedo endured during the war, little had prepared him for his own father's attitude toward his capture. "My dad told me I was a coward," he says.

"I turned around and got my duffel bag, my luggage, and said, 'This is it, Father. I'm not coming back.' So I took the train the following day, and I didn't see my parents for years, because I didn't want to see them. I felt belittled."

For decades, Acevedo followed the rules and kept his mouth shut. His four children didn't know the extent of his war experience. He says he felt stymied because of the document he signed. "You never gave it a thought because of that paper."

Now, he says it's too important to be forgotten. In recent years, he's attended local high schools to tell his story to today's generation.

"Let it be known," he says. "People have to know what happened."

Born July 31, 1924, in San Bernardino, California, Anthony C. Acevedo is what is known in today's parlance as a "citizen child" -- one who was born in the United States to parents from Mexico.

A Mexican-American, he was schooled in Pasadena, California, but couldn't attend the same classes as his white peers. "We couldn't mix with white people," he says. Both of his parents were deported to Mexico in 1937, and he went with them.

Acevedo returned to the States when he was 17, he says, because he wanted to enlist in the U.S. Army. He received medical training in Illinois before being sent to the European theater.

A corporal, he served as a medic for the 275th Infantry Regiment of the 70th Infantry Division. Acevedo was captured at the Battle of the Bulge after days of brutal firefights with Nazis who had surrounded his unit. He recalls seeing another medic, Murry Pruzan, being gunned down.

"When I saw him stretched out there in the snow, frozen," Acevedo says, shaking his head. "God, that's the only time I cried when I saw him. He was stretched out, just massacred by a machine gun with his Red Cross band."

He pauses. "You see all of them dying out there in the fields. You have to build a thick wall."

Acevedo was initially taken to a prison camp known as Stalag IX-B in Bad Orb, Germany, where thousands of American, French, Italian and Russian soldiers were held as prisoners of war. Acevedo's diary entry reads simply: "Was captured the 6th of January 1945."

For the next several months, he would be known by the Germans only as Prisoner Number 27016. One day while in Stalag IX-B, he says, a German commander gathered American soldiers and asked all Jews "to take one step forward." Few willingly did so.

Jewish soldiers wearing Star of David necklaces began yanking them off, he says. About 90 Jewish soldiers and another 260 U.S. soldiers deemed "undesirables" -- those who "looked like Jews" -- were selected. Acevedo, who is not Jewish, was among them.

They were told they were being sent to "a beautiful camp" with a theater and live shows.

"It turned out to be the opposite," he says. "They put us on a train, and we traveled six days and six nights. It was a boxcar that would fit heads of cattle. They had us 80 to a boxcar. You couldn't squat. And there was little tiny windows that you could barely see through."

It was February 8, 1945, when they arrived. The new camp was known as Berga am Elster, a subcamp of Buchenwald, the Nazi concentration camp where tens of thousands of Jews and other political prisoners were killed under Adolf Hitler's regime.

Acevedo says he was one of six medics among the 350 U.S. soldiers at Berga. Political prisoners from other countries were held at Berga separate from the Americans. "We didn't mingle with them at all," he says, adding that the U.S. soldiers worked in the same tunnels as the other political prisoners.

"We were all just thin as a rail."

The U.S. prisoners, Acevedo says, were given 100 grams of bread per week made of redwood sawdust, ground glass and barley. Soup was made from cats and rats, he says. Eating dandelion leaves was considered a "gourmet meal."

If soldiers tried to escape, they would be shot and killed. If they were captured alive, they would be executed with gunshots to their foreheads, Acevedo says. Wooden bullets, he says, were used to shatter the inside of their brains. Medics were always asked to fill the execution holes with wax, he says.

"Prisoners were being murdered and tortured by the Nazis. Many of our men died, and I tried keeping track of who they were and how they died."

The soldiers were forced to sleep naked, two to a bunk, with no blankets. As the days and weeks progressed, his diary catalogs it all. The names, prisoner numbers and causes of death are listed by the dozens in his diary. He felt it was his duty as a medic to keep track of everyone.

"I'm glad I did it," he says.

As a medic, he says, he heard of other more horrific atrocities committed by the Nazis at camps around them. "We heard about experiments that they were doing -- peeling the skins of people, humans, political prisoners, making lampshades."

He and the other soldiers were once taken to what Acevedo believes was the main camp of Buchenwald, about 30 miles (48 kilometers) from Berga. They noticed large pipes coming from one building.

"We thought we were going to be gassed when we were told to take our clothes off," he says. "We were scared. We were stripped."

"Rumors were around that this was where the political prisoners would be suffocated with gas." It turned out to be a shower, the only time during their captivity they were allowed to bathe.

The main Buchenwald camp was officially liberated on April 11, 1945. But the camp and its subcamps were emptied of tens of thousands of prisoners as American troops neared. The U.S. troops held at the Berga compound were no exception.

"Very definite that we are moving away from here and on foot. This isn't very good for our sick men. No drinking water and no latrines," Acevedo wrote in his diary on April 4, 1945.

He says they began a death march of 217 miles (349 kilometers) that would last three weeks. More than 300 U.S. soldiers were alive at the start of the march, he says; about 165 were left by the end, when they were finally liberated.

Lines of political prisoners in front of them during the march caught the full brunt of angry Nazi soldiers.

"We saw massacres of people being slaughtered off the highway. Women, children," he says. "You could see people of all ages, hanging on barbed wire."

One of his diary entries exemplifies an extraordinary patriotism among soldiers, even as they were being marched to their deaths. "Bad news for us. President Roosevelt's death. We all felt bad about it. We held a prayer service for the repose of his soul," Acevedo wrote on April 13, 1945.

It adds, "Burdeski died today."

To this day, Acevedo still remembers that soldier. He wanted to perform a tracheotomy using his diary pen to save Burdeski, a 41-year-old father of six children. A German commander struck Acevedo in the jaw with a rifle when he asked.

"I'll never forget," he says.

On a recent day, about a dozen prisoners of war held during World War II and their liberators gathered at the Jerry L. Pettis Memorial Veterans Medical Center in Loma Linda, California. Many applauded Acevedo for his heroics.

"Those of us in combat have our own heroes, and those are the medics. And that's Antonio. Thank you, Antonio," one of the men said.

The men gathered there nodded their heads. Two stood to shake Acevedo's hand.

"The people that are in this room really are an endangered species," another man said. "When they're gone, they're gone. ... That is why they should be honored and put in history for generations to come, because there are not that many of them left."

Donald George sat next to Acevedo. The two were captured about a half-mile apart during the Battle of the Bulge. "It's hard to explain how it is to be sitting with a bunch of people that you know they've been through the same thing you've been through," George said.

"Some of us want to talk about it, and some of us don't. Some of us want to cry about it once in a while, and some of us won't. But it's all there," he said.

"We still like to come and be together a couple times a month," George added, before Acevedo finished his sentence: "To exchange what you are holding back inside."

Acevedo says the world must never forget the atrocities of World War II and that for killing 6 million Jews, Hitler was the worst terrorist of all time. He doesn't want the world to ever slide backward.

His message on this Veterans Day, he says, is never to hold animosity toward anybody.

"You only live once. Let's keep trucking. If we don't do that, who's going to do it for us? We have to be happy. Why hate?" he says. "The world is full of hate, and yet they don't know what they want."

Thursday, November 6, 2008

Why Do We Forget Things?

The brain can store a vast number of memories, so why can't we find these memories when we need to? A new study provides insights into this question.

By Edward K. Vogel and Trafton Drew

Our brains are crammed with a massive amount of memories that we have formed over a lifetime of experiences. These memories range from the profound (who am I and how did I get here?) to the most trivial (the license plate of the car at a stoplight). Furthermore, our memories also vary considerably in their precision. Parents, for instance, often know the perils of a fuzzy memory when shopping for a birthday gift for their child: remembering that their son wanted the G.I. Joe with Kung Fu Grip rather than the regular G.I. Joe could make an enormous difference in how well the gift is received. Thus, the “fuzziness” of our memory can often be just as important in our daily lives as being able to remember lots and lots of information in the first place.

Different Levels of Detail for Different Types of Memory?
In the past several decades, cognitive psychologists have determined that there are two primary memory systems in the human mind: a short-term, or “working,” memory that temporarily holds information about just a few things that we are currently thinking about; and a long-lasting memory that can hold massive amounts of information gained through a lifetime of thoughts and experiences. These two memory systems are also thought to differ in the level of detail they provide: working memory provides sharp detail about the few things we are presently thinking about, whereas long-term memory provides a much fuzzier picture about lots of different things we have seen or experienced. That is, although we can hold lots of things in long-term memory, the details of the memory aren’t always crystal-clear and are often limited to just the gist of what we saw or what happened.

A recently published study by Timothy F. Brady, a cognitive neuroscientist at the Massachusetts Institute of Technology, and colleagues suggests that these long-term memories may not be nearly as fuzzy as once thought, however. In their work, the researchers asked subjects to try to remember 3,000 pictures of common objects—including items such as backpacks, remote controls and toasters—that were presented one at a time for just a few seconds each. At the end of this viewing phase, the researchers tested subjects’ memory for each object by showing them two objects and asking which one they had seen before. Not surprisingly, subjects were exceptionally good (more than 90 percent correct) even though there were thousands of objects to remember. This high success rate attests to the massive storage ability of long-term memory. What was most surprising, however, was the amazing level of detail that the subjects had for all of these memories. The subjects were just as good at telling the difference between two pictures of the same object even when the objects differed in an extremely subtle manner, such as a pair of toasters with slightly different slices of bread.

If It’s Not Fuzzy, Why Do We Still Forget Things?
This new work provides compelling evidence that the enormous amount of information we hold in long-term memory is not so uncertain after all. It seems that we actually hold representations of things we’ve seen in a fairly detailed and precise form.

Of course, this finding raises the obvious question: if our memories aren’t all that fuzzy, then why do we often forget the details of things we want to remember? One explanation is that, although the brain contains detailed representations of lots of different events and objects, we can’t always find that information when we want it. As this study reveals, if we’re shown an object, we can often be very accurate and precise at being able to say whether we’ve seen it before. If we’re in a toy store and trying to remember what it was that our son wanted for his birthday, however, we need to be able to voluntarily search our memory for the right answer—without being prompted by a visual reminder. It seems that it is this voluntary searching mechanism that’s prone to interference and forgetfulness. At least that’s our story when we come home without the Kung Fu Grip G.I. Joe.

Tuesday, October 28, 2008

Are You Evil? Profiling That Which Is Truly Wicked

A cognitive scientist employs malevolent logic to define the dark side of the human psyche

By Larry Greenemeier

TROY, N.Y.—The hallowed halls of academia are not the place you would expect to find someone obsessed with evil (although some students might disagree). But it is indeed evil—or rather trying to get to the roots of evil—that fascinates Selmer Bringsjord, a logician, philosopher and chairman of Rensselaer Polytechnic Institute's Department of Cognitive Science here. He's so intrigued, in fact, that he has developed a sort of checklist for determining whether someone is demonic, and is working with a team of graduate students to create a computerized representation of a purely sinister person.

"I've been working on what is evil and how to formally define it," says Bringsjord, who is also director of the Rensselaer AI & Reasoning Lab (RAIR). "It's creepy, I know it is."

To be truly evil, someone must have sought to do harm by planning to commit some morally wrong action with no prompting from others (whether this person successfully executes his or her plan is beside the point). The evil person must have tried to carry out this plan with the hope of "causing considerable harm to others," Bringsjord says. Finally, "and most importantly," he adds, if this evil person were willing to analyze his or her reasons for wanting to commit this morally wrong action, these reasons would either prove to be incoherent, or they would reveal that the evil person knew he or she was doing something wrong and regarded the harm caused as a good thing.
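
Read as a checklist, that working definition lends itself to a simple formal sketch. The Python fragment below is an illustrative encoding only, not Bringsjord's actual formalism or code; the field names and the is_truly_evil function are hypothetical, chosen to mirror the three conditions described above.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        planned_harm_unprompted: bool        # planned a morally wrong act with no prompting from others
        tried_to_cause_serious_harm: bool    # attempted the plan, hoping to cause considerable harm to others
        reasons_incoherent: bool             # on analysis, the agent's reasons fall apart
        knew_wrong_and_welcomed_harm: bool   # or the agent knew it was wrong and saw the harm as a good thing

    def is_truly_evil(a: Agent) -> bool:
        """Mirror the three-part definition; whether the plan succeeded is beside the point."""
        return (a.planned_harm_unprompted
                and a.tried_to_cause_serious_harm
                and (a.reasons_incoherent or a.knew_wrong_and_welcomed_harm))

    print(is_truly_evil(Agent(True, True, False, True)))  # True: all three conditions are met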

Bringsjord's research builds on earlier definitions put forth by San Diego State University philosophy professor J. Angelo Corlett as well as the late sociopolitical philosophers and psychologists Joel Feinberg and Erich Fromm, but most significantly by psychiatrist and author M. Scott Peck in his 1983 book, People of the Lie: The Hope for Healing Human Evil. After reading Peck's tome about clinically evil people, "I thought it would be interesting to come up with formal structures that define evil," Bringsjord says, "and, ultimately, to create a purely evil character the way a creative writer would."

He and his research team began developing their computer representation of evil by posing a series of questions beginning with the basics—name, age, sex, etcetera—and progressing to inquiries about this fictional person's beliefs and motivations.

This exercise resulted in "E," a computer character first created in 2005 to meet the criteria of Bringsjord's working definition of evil. Whereas the original E was simply a program designed to respond to questions in a manner consistent with Bringsjord's definition, the researchers have since given E a physical identity: It's a relatively young, white man with short black hair and dark stubble on his face. Bringsjord calls E's appearance "a meaner version" of the character Mr. Perry in the 1989 movie Dead Poets Society. "He is a great example of evil," Bringsjord says, adding, however, that he is not entirely satisfied with this personification and may make changes.

The researchers have placed E in his own virtual world and written a program depicting a scripted interview between one of the researcher's avatars and E. In this example, E is programmed to respond to questions based on a case study in Peck's book that involves a boy whose parents gave him a gun that his older brother had used to commit suicide.

The researchers programmed E with a degree of artificial intelligence to make "him" believe that he (and not the parents) had given the pistol to the distraught boy, and then asked E a series of questions designed to glean his logic for doing so. The result is a surreal simulation during which Bringsjord's diabolical incarnation attempts to produce a logical argument for its actions: The boy wanted a gun, E had a gun, so E gave the boy the gun.

By the end of the year, Bringsjord and his team hope to have completed the fourth generation of E, which will be able to use artificial intelligence and a limited set of straightforward English (no slang, for example) to "speak" with computer users.

Following the path of a true logician, Bringsjord's interest in the portrayal of virtuousness and evil in literature led to his interest in software that helps writers develop ideas and create stories; this, in turn, spurred him to develop his own software for simulating human behavior, both good and odious, says Barry Smith, a distinguished professor of bioinformatics and ontology at the State University of New York at Buffalo who is familiar with Bringsjord's work. "He's known as someone on the fringe of philosophy and computer science."

Bringsjord and Smith both have an interest in finding ways to better understand human behavior, and their work has attracted the attention of the intelligence community, which is seeking ways to successfully analyze the information it gathers on potential terrorists. "To solve problems in intelligence analysis, you need more accurate representations of people," Smith says. "Selmer is trying to build really good representations of human beings in all of their subtlety."

Bringsjord acknowledges that the endeavor to create pure evil, even in a software program, does raise ethical questions, such as how researchers could control an artificially intelligent character like E if "he" were placed in a virtual world such as Second Life, a Web-based program that allows people to create digital representations of themselves and have those avatars interact in a number of different ways.

"I wouldn't release E or anything like it, even in purely virtual environments, without engineered safeguards," Bringsjord says. These safeguards would be a set of ethics written into the software, something akin to author Isaac Asimov's "Three Laws of Robotics" that prevent a robot from harming humans, requires a robot to obey humans, and instructs a robot to protect itself—as long as that does not violate either or both of the first two laws.

Monday, October 27, 2008

Why can't I remember that thing, person, task?

By Cathryn Jakobson Ramin

I'd barely crossed the threshold of middle age. As a journalist, I was invested in staying smart and quick, mistress of my good brain and sardonic tongue. But almost overnight, I found that I was missing critical information -- the names of people and places, the titles of books and movies.

Worse, I had the attention span of a flea. I was having trouble keeping track of my calendar, and my sense of direction had disappeared. The change was so dramatic that sometimes I felt foreign to myself.

Over the course of a few years, as friends and relatives moved into their 40s and 50s, I realized that I was part of a large group of people who were struggling to keep up. I was determined to find a plausible explanation for what was happening to my brain and, by extension, to middle-aged minds in general.

As a first step, I began to study and categorize midlife mental lapses as if they were so many butterflies.

• There was Colliding-Planets Syndrome, which occurs when you fail to grasp, until too late, that you've scheduled a child's orthodontist appointment in the suburbs for the same hour as a business meeting in the city.

• Quick-Who-Is-She Dysfunction surfaces when you are face-to-face with someone whose name stubbornly refuses to come to mind.

• What-Am-I-Doing-Here Paranoia leaves you standing empty-handed in a doorway, trying to figure out what you've come for.

• The Damn-It-They-Were-Just-in-My-Hand Affliction leads to panicky moments spent looking for your favorite new sunglasses, when all the while they're on top of your head.

• And Wrong-Vessel Disorder results in placing the ice cream in the pantry rather than the freezer.

In the past decade, cognitive neuroscientists have learned that much of what we blame on fading memory in midlife can be more accurately attributed to failing attention. Physiological changes in the brain's frontal lobes make it harder to maintain attention in the face of distractions, explains Cheryl Grady, Ph.D., a neuroscientist and assistant director of the Rotman Research Institute in Toronto.

When the frontal lobes are in top form, they're adept at figuring out what's important for the job at hand and what's irrelevant blather; a sort of neural "bouncer" automatically keeps out unnecessary information. In middle age, that bouncer takes a lot of coffee breaks. Instead of focusing on the report that's due, you find yourself wondering what's for dinner. Even background noise -- the phone chatter of the co-worker in the next cubicle -- can impair your ability to concentrate on the task before you.

When the neural bouncer slacks off, the cognitive scratch pad called working memory (which allows us to manipulate and prioritize information, and remember the thread of an argument) is quickly overwhelmed. You know the feeling: You can't absorb one more shred of information, so you erect a sturdy wall, neatly deflecting your husband's announcement that he'll be working late -- an announcement you later swear he never made. "Metaphorically speaking," writes social theorist David Shenk in his book "Data Smog," "we plug up our ears, pinch our noses, cover our eyes ... and step into a bodysuit lined with protective padding."

As you age, you may also notice that information that once popped into your head in milliseconds now shows up in its own sweet time. Denise Park, Ph.D., a cognitive neuroscientist at the University of Illinois at Urbana-Champaign, has found that while processing speed begins to decline in your late 20s, typically you don't feel the effect until your 40s or 50s. And then you feel as though you're wading through mental Jell-O. It's tough to acknowledge that your brain is aging right along with your abs, but in both cases you can put up a fight.

Quick-Who-Is-She Dysfunction

One type of forgetfulness is so prevalent, not to mention demoralizing, that just about everyone over 40 complains about it. I refer to the very public cognitive failure known as blocking, or blanking, when names refuse to come to mind and words dart in and out of consciousness, hiding in dark closets just when you need them.

In his landmark book, "The Seven Sins of Memory," the eminent Harvard memory expert Daniel Schacter, Ph.D., notes that the concept of blocking exists in at least 45 languages. The Cheyenne used an expression, "Navonotootse'a," which translates as "I have lost it on my tongue." In Korean it is Hyeu kkedu-te mam-dol-da, which in English means "sparkling at the end of my tongue."

In midlife, resolving the "tip of the tongue" dilemma grows increasingly challenging. In the split second between your query -- "What do you call that sleek, dark purple vegetable?" -- and the response -- "eggplant" -- your aging brain delivers quantities of unsolicited information.

Often, notes Schacter, "people can produce virtually everything they know about a person...nearly everything they know about a word except its label." The brain volunteers words that begin with the same letter, items that are the same color or shape, and, my favorite, words with the same number of syllables -- all of which gum up the works.

Unfortunately, blocking is most common in social situations, when anxiety and distraction combine to kidnap a chunk of your already challenged working memory. Roman aristocrats avoided the problem by always traveling with a nomenclator, an alert slave whose duty it was to supply his master with the names of acquaintances as they were encountered.

In the film "The Devil Wears Prada", magazine editor Miranda Priestly relies on her young assistant, Andy Sachs, to produce the names of party guests. Absent such a companion, Barbara Wallraff, senior editor and columnist for The Atlantic, sought suggestions from her readers on how to describe what transpires when you're introducing two people but have blocked their names. One reader suggested "whomnesia." Another proposed "mumbleduction."

With planning, many instances of Quick-Who-Is-She Dysfunction can be eradicated. Before you go to see the eighth-grade play, where you will sit among people you've known since your kids were in kindergarten, take 15 minutes to look over the school directory. You may avoid the embarrassment suffered by my friend Victor, an economist, when he introduced himself to a woman at Back to School Night who reminded him that the year before, at the same event, they'd spent a pleasant hour chatting about their shared alma mater.

Writing down a few key phrases on an index card before putting yourself in a cognitively challenging situation can ward off word loss. Before heading to your book group, take a moment to review the names of the characters and the plot of the fat novel you finished two weeks ago and barely remember. The other members will thank you. If words go missing anyway, grab for a synonym. Staying on the trail like a bloodhound only exacerbates the problem.

Colliding-Planets Syndrome

To your distress, you discover that you agreed to attend your friend Sarah's 50th birthday party on the same night you're supposed to be at a convention in Las Vegas. Now, how did that happen? If I had to guess, I'd say that you said yes to Sarah's birthday ("Of course, I wouldn't miss it!") when you were nowhere near your calendar. If you want to eliminate Colliding-Planets Syndrome, that calendar must be your new best friend.

Don't get cocky and put off entering a date, even if it's just for coffee the following day. Mark A. McDaniel, Ph.D., a professor of psychology at Washington University in St. Louis and an expert in human learning and memory, found that in the face of even a brief delay, older adults have much more difficulty than younger ones keeping in mind a task to be accomplished in the future.

Refuse to agree to anything, ever, without a calendar in front of you. And don't write down cryptic things like "Starbucks," because you'll draw a blank on which café you meant, and sit for a long time in the wrong one. Where you write things down matters: Multiple calendars -- home, work, school -- can only lead to trouble.

But what about things you must remember to do in the short term, like returning the nurse-practitioner's call in 15 minutes or putting money in the parking meter in a half hour? These are what Daniel Schacter calls time-based commitments, and putting them on your calendar isn't likely to help unless you habitually check it every five minutes.

On less than an hour's notice, my most responsible friend, Jane, agreed to pick up her neighbor's son at school when she collected her own brood. Knowing she had to make it to soccer and ballet in Los Angeles traffic, she was the first in the carpool line, where she efficiently loaded her kids and took off. The neighbor's child sat waiting on a bench until teachers phoned his mother, who had nothing nice to say when she got in touch with Jane.

In midlife we have trouble remembering to do things at specific times because we're at the mercy of a million environmental distractions. One of Denise Park's studies demonstrated that elderly subjects were more likely to remember to take their medication on schedule than middle-aged subjects, because in midlife the crush of competing demands often got in the way.

To remember to make that call to the nurse practitioner, Schacter told me, you're going to need an unmistakable cue, one that will be both available and informative. An alarm clock on the desk in front of you can do the job, but under no circumstances should you permit yourself to switch off the clock and finish just one more thing before you pick up the phone. And don't count on your PDA: You've heard those bleeps and blurps so often, you've learned to ignore them.

Damn-It-They-Were-Just-in-My-Hand Affliction

Even the most meticulously managed PDA won't work if you misplace it. And as luck would have it, the items we lose most often -- keys, glasses, wallets, cell phones, planners -- are the ones that are crucial to our survival.

This abject failure to keep track of our belongings may emerge from the brain's talent for forecasting the future. The neocortex, a long-term storage facility, constantly predicts how we'll behave in specific situations, explains Jeff Hawkins in his book "On Intelligence." Instead of reinventing the wheel every time we do something familiar, the brain chooses from a library of existing patterns, based on choices we've made before. A novel event -- a man with a gun -- gets the brain's full attention, but when we're merely lugging groceries into the house, we shift into autopilot. And autopilot is the mode in which we're likely to misplace things.

The problem can be remedied, but only with a preemptive strike. Awareness is essential: When the phone rings as you're entering the house loaded down with groceries, don't drop your keys on the counter, where they will be buried in the day's mail, making you frantically late for your dinner engagement. If you can't immediately hang the keys on the hook where they belong, keep them on your person until you can; one woman I know slips them into her bra, creating a silhouette so inelegant that she can't possibly forget where she put them.

Give up your habit of tucking important items into indiscriminate pockets of your purse or briefcase. Choose one secure zone -- front, zippered -- where you always keep your boarding pass or passport, and never alter it. You'll save yourself the discomfort of searching high and low under the stern surveillance of security personnel.

Wrong-Vessel Disorder

When you lose track of what you intended to say or do, you've had what cognitive psychologists call a prospective lapse. Wrong-Vessel Disorder is a manifestation of this problem: With the best intentions, you absentmindedly place your cell phone in your briefcase, which has many of the same attributes as your purse. Saturday morning, when you reach into your bag and come up empty, you're mystified. Because you're barely conscious when it strikes, it's hard to fend off Wrong-Vessel Disorder. You just have to laugh.

But prospective failures also show up as What-Am-I-Doing-Here Paranoia: Suddenly, as if someone depressed the power button on the remote, you go blank. The minigaps, where you march purposefully to the kitchen, only to stand there and scratch your head, are irritating; the yawning caverns can really shake your confidence. Fran, the marketing director of a local bank, was bright-eyed and ready to give her quarterly presentation before the board -- until somewhere in midsentence, three out of six points eluded her, an experience that made her realize that her days of winging it were over.

Mark McDaniel observes that younger adults make use of robust working memory, relying on a little voice that automatically whispers "get milk, get milk, get milk," all the way home. In midlife that voice is easily interrupted ("Oh, look, it's raining! Now, where did I put that umbrella?") -- at least until you're in the driveway. If you can send the voice back into the game, you'll avoid a lot of extra trips to the store. I've stuck Post-it notes on the steering wheel, which makes driving awkward, but at least I don't return home with the FedEx package still beside me on the front seat.

What-Am-I-Doing-Here Paranoia

When what you forget is not a grocery item but an idea, you've no alternative but to backtrack mentally. It's vaguely amusing to do this with a friend at lunch -- "What on earth were we talking about?" -- but in a professional situation it hurts. With a little digging, you can often extract a key idea that lingers in your working memory and, from there, reconstruct the context of the discussion. In such cases, it is helpful to have a stockpile of useful phrases, conversation fillers that buy you time. "Do you see what I mean?" works well, as does my friend Jeff's old standby, delivered with the greatest sincerity: "Now that's very interesting," even when it isn't.

When a colleague stood me up for breakfast, after exchanging no fewer than nine e-mails about where and when earlier in the week, I wasn't upset -- I was as curious as a botanist who has come upon a valuable specimen. How had it happened? Had planets collided yet again? In a classic demonstration of autopilot, he'd exited the commuter train, jumped on the subway, and gone straight to work, failing to stop at the café across the street from the station where we'd planned to meet. When I phoned his cell, it took him several seconds to realize his mistake, at which point he howled in dismay.

He didn't want to talk about it, but nevertheless I probed. "Wait," I said, "let's dissect it. How did it start?"

As was his habit, he had carefully printed out his schedule the previous night before leaving work, he explained. Then he packed up his briefcase and departed, leaving the piece of paper in the printer. From that moment on, our breakfast appointment never crossed his mind. "Is this normal?" he asked. It was normal, I assured him, in that it happened regularly to people in midlife. But that didn't mean he had to sit back and take it. It was time to make a stand.

Monday, October 20, 2008

Imaging the Unconscious

Functional magnetic resonance imaging could bring psychoanalysis into the 21st century.
By Emily Singer

More than one hundred years ago, Sigmund Freud proposed his pioneering theory that hidden desires in our subconscious drive much of human behavior. While those theories have fallen out of favor in recent decades, scientists are now revisiting some of them -- with new brain imaging tools. The hope is that having a direct window into the brain's hidden processes will shed new light on anxiety disorders, and perhaps help to assess how well behavioral therapies, such as psychoanalysis, target the intricacies of the unconscious mind.

"One of the reasons people departed from Freudian concepts was because they weren't very testable," says Ronald Cohen, professor of psychiatry at Brown University in Providence, RI. "These types of [imaging] experiments would potentially be a more direct way of testing ideas that rose out of traditional psychoanalytic theory."

One of Freud's theories held that after a traumatic event, people might unconsciously associate a normally benign stimulus, say, a friendly golden retriever, with a previously fearful event, such as getting bitten by a Rottweiler. This theory seems to hold true in the case of post-traumatic stress disorder (PTSD). Harmless sights and sounds, such as a bus traveling down a street, can trigger a panic attack in someone with PTSD who was once involved in a bus crash. Furthermore, the sufferer may not immediately be able to pinpoint the cause of his or her anxiety attack.

Now scientists are using brain imaging techniques to explore how the unconscious fear signal may be turned up in people with PTSD and other anxiety disorders. To study the brain processes underlying anxiety, researchers use functional magnetic resonance imaging (fMRI) to measure a person's brain activity while he or she looks at threatening signals, such as a picture of a fearful face. These frightening pictures will spark activity in the part of the brain known as the amygdala, which is part of the evolutionarily ancient brain involved in processing emotion and fear. To study the unconscious aspects of fear and anxiety, the researchers flash the ominous picture so quickly that subjects don't consciously notice it -- the brain reacts to the image, even though the person cannot determine whether or not they actually saw it.
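
For readers curious how an image can be flashed "so quickly that subjects don't consciously notice it," the sketch below illustrates one conventional approach, backward masking, in which a very brief target picture is immediately replaced by a neutral mask, with durations rounded to whole video frames. The 60 Hz refresh rate, the specific durations and the function names are illustrative assumptions, not the parameters used in the Columbia experiments.

```python
# Hypothetical sketch of scheduling a backward-masked ("subliminal") stimulus on
# an ordinary 60 Hz display. Durations and names are illustrative only; they are
# not taken from the Columbia studies described above.

from typing import List, Tuple

REFRESH_HZ = 60
FRAME_MS = 1000.0 / REFRESH_HZ       # ~16.7 ms per video frame

def frames_for(duration_ms: float) -> int:
    """Round a desired duration to the nearest whole number of frames (at least one)."""
    return max(1, round(duration_ms / FRAME_MS))

def masked_trial(target_ms: float = 17.0, mask_ms: float = 183.0) -> List[Tuple[str, int]]:
    """One trial: a fearful face shown for about one frame, then a neutral mask.

    The target is brief enough that observers typically cannot report seeing it,
    yet brain regions such as the amygdala still respond to it.
    """
    return [("fearful_face", frames_for(target_ms)),
            ("neutral_mask", frames_for(mask_ms))]

if __name__ == "__main__":
    for stimulus, n_frames in masked_trial():
        print(f"show {stimulus} for {n_frames} frame(s) (~{n_frames * FRAME_MS:.1f} ms)")
```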

Last year, Amit Etkin and collaborators at Columbia University showed that people who score high on anxiety tests have a stronger amygdala response to fearful faces when those images are presented below the level of conscious perception than people who score lower on the tests. Their findings suggest that the way people respond unconsciously to the world around them could also affect their daily anxiety levels.

Now the Columbia researchers want to determine if this lab observation can be used therapeutically. To do so, they plan to study 25 people with generalized anxiety disorder, first to determine whether this exaggerated amygdala response is present in people with the disorder, then to see if cognitive behavioral therapy -- one of the best-established forms of talk therapy -- can reduce the exaggerated unconscious response.

"We can use imaging as a way of evaluating the outcome of therapy," says Eric Kandel, a Nobel-prize winning neuroscientist at Columbia University who's collaborating on the project. "Maybe we can take people who have a large [anxiety] signal and turn it down as the result of a therapeutic experience," he says.

People with PTSD show a similar exaggerated amygdala response to fearful faces. Jorge Armony, the Canada Research Chair in Affective Neuroscience at McGill University in Montreal, is studying both PTSD patients and people who have recently experienced a traumatic event and may develop PTSD. Armony and his team want to see if they can use the amygdala signal and other factors to predict who is vulnerable to the disorder and who will be resistant to therapy. "After 6 to 12 months, some people recover -- what's the difference between people who recover and people who don't?" Armony asks.

While fMRI measures of unconscious processes are useful for studying populations of people with an illness, they're not yet precise enough to diagnose an individual with a particular disorder, says Armony. "We can say that [statistically] a person with PTSD will have an exaggerated amygdala response, but that doesn't mean that everyone will have it."

Hans Breiter, a neuroscientist at Harvard Medical School, one of the first researchers to study amygdala activity with fMRI in the mid-1990s, agrees that a more extensive evaluation of the neurological changes in psychiatric disorders is necessary before the technique can have clinical applications. "This approach is promising and is the right first step, but scientists will need to study larger numbers of people with fMRI to get a better sense of the variability in brain functions that underlie anxiety and depression," he says. "They may have very different [brain activity patterns] and may have very different therapeutic needs." He predicts those larger-scale studies will happen within the next five years.

Breiter and other scientists are optimistic that fMRI can one day be used to evaluate the benefits of therapy, but they say it's unclear which brain signals -- conscious or unconscious -- will be the most effective measure.

"The question still remains, how important are these subconscious phenomena?" says Cohen at Brown. "From a cognitive behavioral perspective, the conscious aspects of depression and anxiety are more important."

Both Etkin at Columbia and Armony at McGill are also using fMRI to study conscious processes, such as attention, in people with anxiety disorders; and they plan to examine how these different factors may be important in different anxiety-related diseases, such as depression and eating disorders.

"There's information processing going on in the brain that's completely outside of awareness, which previously we could only investigate with psychoanalysis," says Tom Insel, director of the National Institutes of Mental Health in Bethesda, MD. "Now you can track [those processes] with neuro-imaging -- a tool that may be much more compelling."

Man 'roused from coma' by a magnetic field

NewScientist.com

A volunteer models the TMS device, which is worn over the front of the head to stimulate the underlying brain tissue.

JOSH VILLA was 26 and driving home after a drink with a friend on 28 August 2005 when his car mounted the kerb and flipped over. Villa was thrown through the windscreen, suffered massive head injuries and fell into a coma.

Almost a year later, there was little sign of improvement. "He would open his eyes, but he was not responsive to any external stimuli in his environment," says Theresa Pape of the US Department of Veterans Affairs in Chicago, who helped treat him.

Usually there is little more that can be done for people in this condition. Villa was to be sent home to Rockford, Illinois, where his mother, Laurie McAndrews, had volunteered to care for him.

But Pape had a different suggestion. She enrolled him in a six-week study in which an electromagnetic coil was held over the front of his head to stimulate the underlying brain tissue. Such transcranial magnetic stimulation (TMS) has been investigated as a way of treating migraine, stroke, Parkinson's disease and depression, with some promising results, but this is the first time it has been used as a potential therapy for someone in a coma-like state.

The rapidly changing magnetic fields that the coil creates can be used either to excite or inhibit brain cells - making it easier or harder for them to communicate with one another. In Villa's case, the coil was used to excite brain cells in the right dorsolateral prefrontal cortex. This area has strong connections to the brainstem, which sends out pulses to the rest of the brain that tell it to pay attention. "It's like an 'OK, I'm awake' pulse," says Pape.
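
As a rough sense of the physics, Faraday's law says that a magnetic field changing at rate dB/dt induces an electric field of about (r/2) x dB/dt along a circular path of radius r. The sketch below plugs in assumed, order-of-magnitude TMS values for the field swing, rise time and path radius; none of these figures come from Pape's study.

```python
# Back-of-the-envelope sketch: electric field induced in tissue by a rapidly
# changing magnetic field (Faraday's law). For a circular path of radius r in a
# uniform field changing at dB/dt, the induced field is E = (r / 2) * dB/dt.
# The numbers below are assumed, order-of-magnitude TMS values, not data from
# the study described in the article.

def induced_e_field(radius_m: float, delta_b_tesla: float, rise_time_s: float) -> float:
    """Induced electric field (V/m) along a circular path of the given radius."""
    db_dt = delta_b_tesla / rise_time_s   # rate of change of the magnetic field (T/s)
    return 0.5 * radius_m * db_dt

if __name__ == "__main__":
    # Assumptions: ~1.5 T field swing, ~0.1 ms rise time, ~1 cm path in cortex.
    e = induced_e_field(radius_m=0.01, delta_b_tesla=1.5, rise_time_s=1e-4)
    print(f"Induced field: roughly {e:.0f} V/m")   # ~75 V/m with these assumptions
```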

At first, there was little change in Villa's condition, but after around 15 sessions something happened. "You started talking to him and he would turn his head and look at you," says McAndrews. "That was huge."

Villa started obeying one-step commands, such as following the movement of a thumb, and began speaking single words. "They were very slurred but they were there," says Pape, who presented her findings this month at an international meeting on brain stimulation at the University of Göttingen, Germany. "He'd say like 'erm', 'help', 'help me'."

After the 30 planned sessions the TMS was stopped. Without it, Villa became very tired and his condition declined a little, but he was still much better than before. Six weeks later he was given another 10 sessions, but there were no further improvements and he was sent home, where he remains today.

Villa is by no means cured. But he is easier to care for and can interact with visitors such as his girlfriend, who has stuck by him following the accident. "When you talk to him he will move his mouth to show he is listening," McAndrews says. "If I ask him, 'Do you love me?' he'll do two slow eye blinks, yes. Some people would say it's not much, but he's improving and that's the main thing."

John Whyte of the Moss Rehabilitation Research Institute in Philadelphia, Pennsylvania, cautions that as intriguing as Villa's case is, it alone does not show that TMS is a useful treatment. "Even after eight months, it is not uncommon for patients to transition from the vegetative to the minimally conscious state without any particular intervention," he points out. He says TMS merits further investigation, along with other experimental treatments such as drugs which have temporarily roused three men from a coma, and deep brain stimulation, an invasive technique that roused a man out of a minimally conscious state.

"This is the first and very interesting use of repetitive TMS in coma," says Steven Laureys of the Coma Research Group at the University of Liège in Belgium. Our understanding of disorders of consciousness is so limited that even a single study can provide new insights, he says.

Pape acknowledges that further studies are needed to demonstrate that TMS really is beneficial, though she is convinced that it helped Villa. He had only been given a 20 to 40 per cent chance of long-term recovery, and until he was given TMS his functioning had not improved since about four months after the accident. What's more, after the 15th TMS session, he improved incrementally with each session - further evidence that TMS was the cause.

Pape hopes to begin treating a second patient in a coma-like state later this year. This time she plans to adjust the number of pulses of TMS in each train, and to alter the gap between pulses to see if there is an optimum interval.

McAndrews is also in no doubt that her son's quality of life has improved as a result of TMS. "Before I felt like he was not responsive, that he was depressed almost. Now you move him around and he complains - he can show emotions on that level."

See "Editorial: improving the lot of coma patients"
A gentle current helps when words are hard to find

People with Alzheimer's disease got better at a word-recognition task after their brains were stimulated with an electric current.

Like transcranial magnetic stimulation or TMS (see main story), transcranial direct current stimulation (tDCS) aims to activate or inhibit areas of the brain by making it easier or harder for the brain cells to fire. While TMS involves holding a current-carrying coil over the subject's head, tDCS, which has previously shown promise in treating pain and depression (New Scientist, 5 April 2006, p 34), uses electrodes to send a current of 1 to 2 milliamps through the skull.
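
For a sense of scale, the sketch below divides the quoted 1 to 2 milliamps by an assumed 35 cm2 sponge-electrode area (a commonly used size, but one not stated in the article) to estimate the current density at the scalp.

```python
# Back-of-the-envelope sketch: current density under a tDCS electrode.
# The 1-2 mA currents come from the article; the 35 cm^2 electrode area is an
# assumed, commonly used sponge size and is not stated in the text.

def current_density(current_ma: float, electrode_area_cm2: float = 35.0) -> float:
    """Current density in mA/cm^2 for a given current and electrode area."""
    return current_ma / electrode_area_cm2

if __name__ == "__main__":
    for current in (1.0, 2.0):   # milliamps, as quoted in the article
        print(f"{current:.0f} mA over 35 cm^2 -> {current_density(current):.3f} mA/cm^2")
```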

In Alzheimer's, the temporoparietal areas of the brain, which are involved in memory and recognition, are known to be less active than in healthy people. So Alberto Priori at the University of Milan, Italy, and his colleagues used tDCS to stimulate these areas. They asked 10 people with mild to moderate Alzheimer's to perform a word-recognition and a visual-attention task, before and after receiving tDCS or a sham treatment.

With tDCS, word recognition improved by 17 per cent, but there was no improvement in visual attention. Word recognition worsened when tDCS was used to inhibit neurons, and there was no change when the sham treatment was applied (Neurology, vol 71, p 493).

"Our findings are consistent with evidence that tDCS improves cognitive functions in healthy subjects and in patients with neurological disorders," Priori says. He is now running a larger study to confirm the results, and to find out how long the improvement lasts.


From issue 2678 of New Scientist magazine, 15 October 2008, pages 8-9

Thursday, October 9, 2008

Natural-Born Liars

Why do we lie, and why are we so good at it? Because it works

By David Livingstone Smith

Deception runs like a red thread throughout all of human history. It sustains literature, from Homer's wily Odysseus to the biggest pop novels of today. Go to a movie, and odds are that the plot will revolve around deceit in some shape or form. Perhaps we find such stories so enthralling because lying pervades human life. Lying is a skill that wells up from deep within us, and we use it with abandon. As the great American observer Mark Twain wrote more than a century ago: "Everybody lies ... every day, every hour, awake, asleep, in his dreams, in his joy, in his mourning. If he keeps his tongue still his hands, his feet, his eyes, his attitude will convey deception." Deceit is fundamental to the human condition.

Research supports Twain's conviction. One good example was a study conducted in 2002 by psychologist Robert S. Feldman of the University of Massachusetts Amherst. Feldman secretly videotaped students who were asked to talk with a stranger. He later had the students analyze their tapes and tally the number of lies they had told. A whopping 60 percent admitted to lying at least once during 10 minutes of conversation, and the group averaged 2.9 untruths in that time period. The transgressions ranged from intentional exaggeration to flat-out fibs. Interestingly, men and women lied with equal frequency; however, Feldman found that women were more likely to lie to make the stranger feel good, whereas men lied most often to make themselves look better.

In another study a decade earlier by David Knox and Caroline Schacht, both now at East Carolina University, 92 percent of college students confessed that they had lied to a current or previous sexual partner, which left the husband-and-wife research team wondering whether the remaining 8 percent were lying. And whereas it has long been known that men are prone to lie about the number of their sexual conquests, recent research shows that women tend to underrepresent their degree of sexual experience. When asked to fill out questionnaires on personal sexual behavior and attitudes, women wired to a dummy polygraph machine reported having had twice as many lovers as those who were not wired -- suggesting that, without the prospect of being caught out, women understate their experience. It's all too ironic that the investigators had to deceive subjects to get them to tell the truth about their lies.

These references are just a few of the many examples of lying that pepper the scientific record. And yet research on deception is almost always focused on lying in the narrowest sense-literally saying things that aren't true. But our fetish extends far beyond verbal falsification. We lie by omission and through the subtleties of spin. We engage in myriad forms of nonverbal deception, too: we use makeup, hairpieces, cosmetic surgery, clothing and other forms of adornment to disguise our true appearance, and we apply artificial fragrances to misrepresent our body odors. We cry crocodile tears, fake orgasms and flash phony "have a nice day" smiles. Out-and-out verbal lies are just a small part of the vast tapestry of human deceit.

The obvious question raised by all of this accounting is: Why do we lie so readily? The answer: because it works. The Homo sapiens who are best able to lie have an edge over their counterparts in a relentless struggle for the reproductive success that drives the engine of evolution. As humans, we must fit into a close-knit social system to succeed, yet our primary aim is still to look out for ourselves above all others. Lying helps. And lying to ourselves--a talent built into our brains--helps us accept our fraudulent behavior.

Passport to Success

If this bald truth makes any one of us feel uncomfortable, we can take some solace in knowing we are not the only species to exploit the lie. Plants and animals communicate with one another by sounds, ritualistic displays, colors, airborne chemicals and other methods, and biologists once naively assumed that the sole function of these communication systems was to transmit accurate information. But the more we have learned, the more obvious it has become that nonhuman species put a lot of effort into sending inaccurate messages.

The mirror orchid, for example, displays beautiful blue blossoms that are dead ringers for female wasps. The flower also manufactures a chemical cocktail that simulates the pheromones released by females to attract mates. These visual and olfactory cues keep hapless male wasps on the flower long enough to ensure that a hefty load of pollen is clinging to their bodies by the time they fly off to try their luck with another orchid in disguise. Of course, the orchid does not "intend" to deceive the wasp. Its fakery is built into its physical design, because over the course of history plants that had this capability were more readily able to pass on their genes than those that did not. Other creatures deploy equally deceptive strategies. When approached by a would-be predator, the harmless hog-nosed snake flattens its head, spreads out a cobralike hood and, hissing menacingly, pretends to strike with maniacal aggression, all the while keeping its mouth discreetly closed.

These cases and others show that nature favors deception because it provides survival advantages. The tricks become increasingly sophisticated the closer we get to Homo sapiens on the evolutionary chain. Consider an incident between Mel and Paul:

Mel dug furiously with her bare hands to extract the large succulent corm from the rock-hard Ethiopian ground. It was the dry season and food was scarce. Corms are edible bulbs somewhat like onions and are a staple during these long, hard months. Little Paul sat nearby and surreptitiously observed Mel's labors. Paul's mother was out of sight; she had left him to play in the grass, but he knew she would remain within earshot in case he needed her. Just as Mel managed, with a final pull, to yank her prize out of the earth, Paul let out an ear-splitting cry that shattered the peace of the savannah. His mother rushed to him. Heart pounding and adrenaline pumping, she burst upon the scene and quickly sized up the situation: Mel had obviously harassed her darling child. Shrieking, she stormed after the bewildered Mel, who dropped the corm and fled. Paul's scheme was complete. After a furtive glance to make sure nobody was looking, he scurried over to the corm, picked up his prize and began to eat. The trick worked so well that he used it several more times before anyone wised up.

The actors in this real-life drama were not people. They were chacma baboons, described in a 1987 magazine article by primatologists Richard W. Byrne and Andrew Whiten of the University of St. Andrews in Scotland and later recounted in Byrne's 1995 book The Thinking Ape (Oxford University Press). In 1983 Byrne and Whiten began noticing deceptive tactics among the mountain baboons in Drakensberg, South Africa. Catarrhine primates, the group that includes the Old World monkeys, apes and ourselves, are all able to tactically dupe members of their own species. The deceptiveness is not built into their appearance, as with the mirror orchid, nor is it encapsulated in rigid behavioral routines like those of the hog-nosed snake. The primates' repertoires are calculated, flexible and exquisitely sensitive to shifting social contexts.

Byrne and Whiten catalogued many such observations, and these became the basis for their celebrated Machiavellian intelligence hypothesis, which states that the extraordinary explosion of intelligence in primate evolution was prompted by the need to master ever more sophisticated forms of social trickery and manipulation. Primates had to get smart to keep up with the snowballing development of social gamesmanship.

The Machiavellian intelligence hypothesis suggests that social complexity propelled our ancestors to become progressively more intelligent and increasingly adept at wheeling, dealing, bluffing and conniving. That means human beings are natural-born liars. And in line with other evolutionary trends, our talent for dissembling dwarfs that of our nearest relatives by several orders of magnitude.

The complex choreography of social gamesmanship remains central to our lives today. The best deceivers continue to reap advantages denied to their more honest or less competent peers. Lying helps us facilitate social interactions, manipulate others and make friends.

There is even a correlation between social popularity and deceptive skill. We falsify our résumés to get jobs, plagiarize essays to boost grade-point averages and pull the wool over the eyes of potential sexual partners to lure them into bed. Research shows that liars are often better able to get jobs and attract members of the opposite sex into relationships. In other work, Feldman demonstrated that the adolescents who are most popular in their schools are also better at fooling their peers. Lying continues to work. Although it would be self-defeating to lie all the time (remember the fate of the boy who cried, "Wolf!"), lying often and well remains a passport to social, professional and economic success.

Fooling Ourselves
Ironically, the primary reason we are so good at lying to others is that we are good at lying to ourselves. There is a strange asymmetry in how we apportion dishonesty. Although we are often ready to accuse others of deceiving us, we are astonishingly oblivious to our own duplicity. Experiences of being a victim of deception are burned indelibly into our memories, but our own prevarications slip off our tongues so easily that we often do not notice them for what they are.

The strange phenomenon of self-deception has perplexed philosophers and psychologists for more than 2,000 years. On the face of it, the idea that people can con themselves seems as nonsensical as cheating at solitaire or embezzling money from one's own bank account. But the paradoxical character of self-deception flows from the idea, formalized by the French polymath René Descartes in the 17th century, that human minds are transparent to their owners and that introspection yields an accurate understanding of our own mental life. As natural as this perspective is to most of us, it turns out to be deeply misguided.

If we hope to understand self-deception, we need to draw on a more scientifically sound conception of how the mind works. The brain comprises a number of functional systems. The system responsible for cognition--the thinking part of the brain--is somewhat distinct from the system that produces conscious experiences. The relation between the two systems can be thought of as similar to the relation between the processor and monitor of a personal computer. The work takes place in the processor; the monitor does nothing but display information the processor transfers to it. By the same token, the brain's cognitive systems do the thinking, whereas consciousness displays the information that it has received. Consciousness plays a less important role in cognition than previously expected.

This general picture is supported by a great deal of experimental evidence. Some of the most remarkable and widely discussed studies were conducted several decades ago by neuroscientist Benjamin Libet, now professor emeritus at the University of California at San Francisco. In one experiment, Libet placed subjects in front of a button and a rapidly moving clock and asked them to press the button whenever they wished and to note the time, as displayed on the clock, the moment they felt an impulse to press the button. Libet also attached electrodes over the motor cortex, which controls movement, in each of his subjects to monitor the electrical tension that mounts as the brain prepares to initiate an action. He found that our brains begin to prepare for action just over a third of a second before we consciously decide to act. In other words, despite appearances, it is not the conscious mind that decides to perform an action: the decision is made unconsciously. Although our consciousness likes to take the credit (so to speak), it is merely informed of unconscious decisions after the fact. This study and others like it suggest that we are systematically deluded about the role consciousness plays in our lives. Strange as it may seem, consciousness may not do anything except display the results of unconscious cognition.

This general model of the mind, supported by various experiments beyond Libet's, gives us exactly what we need to resolve the paradox of self-deception--at least in theory. We are able to deceive ourselves by invoking the equivalent of a cognitive filter between unconscious cognition and conscious awareness. The filter preempts information before it reaches consciousness, preventing selected thoughts from proliferating along the neural pathways to awareness.

Solving the Pinocchio Problem
But why would we filter information? Considered from a biological perspective, this notion presents a problem. The idea that we have an evolved tendency to deprive ourselves of information sounds wildly implausible, self-defeating and biologically disadvantageous. But once again we can find a clue from Mark Twain, who bequeathed to us an amazingly insightful explanation. "When a person cannot deceive himself," he wrote, "the chances are against his being able to deceive other people." Self-deception is advantageous because it helps us lie to others more convincingly. Concealing the truth from ourselves conceals it from others.

In the early 1970s biologist Robert L. Trivers, now at Rutgers University, put scientific flesh on Twain's insight. Trivers made the case that our flair for self-deception might be a solution to an adaptive problem that repeatedly faced ancestral humans when they attempted to deceive one another. Deception can be a risky business. In the tribal, hunter-gatherer bands that were presumably the standard social environment in which our hominid ancestors lived, being caught red-handed in an act of deception could result in social ostracism or banishment from the community, to become hyena bait. Because our ancestors were socially savvy, highly intelligent primates, there came a point when they became aware of these dangers and learned to be self-conscious liars.

This awareness created a brand-new problem. Uncomfortable, jittery liars are bad liars. Like Pinocchio, they give themselves away by involuntary, nonverbal behaviors. A good deal of experimental evidence indicates that humans are remarkably adept at making inferences about one another's mental states on the basis of even minimal exposure to nonverbal information. As Freud once commented, "No mortal can keep a secret. If his lips are silent, he chatters with his fingertips; betrayal oozes out of him at every pore." As our anxiety rises, we may automatically raise the pitch of our voice, blush, break out into the proverbial cold sweat, scratch our nose or make small movements with our feet as though barely squelching an impulse to flee.

Alternatively, we may attempt to rigidly control the tone of our voice and, in an effort to suppress telltale stray movements, raise suspicion by our stiff, wooden bearing. In any case, we sabotage our own efforts to deceive. Nowadays a used-car salesman can hide his shifty eyes behind dark sunglasses, but this cover was not available during the Pleistocene epoch. Some other solution was required.

Natural selection appears to have cracked the Pinocchio problem by endowing us with the ability to lie to ourselves. Fooling ourselves allows us to selfishly manipulate others around us while remaining conveniently innocent of our own shady agendas.

If this is right, self-deception took root in the human mind as a tool for social manipulation. As Trivers noted, biologists propose that the overriding function of self-deception is the more fluid deception of others. Self-deception helps us ensnare other people more effectively. It enables us to lie sincerely, to lie without knowing that we are lying. There is no longer any need to put on an act, to pretend that we are telling the truth. Indeed, a self-deceived person is actually telling the truth to the best of his or her knowledge, and believing one's own story makes it all the more persuasive.

Although Trivers's thesis is difficult to test, it has gained wide currency as the only biologically realistic explanation of self-deception as an adaptive feature of the human mind. The view also fits very well with a good deal of work on the evolutionary roots of social behavior that has been supported empirically.

Of course, self-deception is not always so absolute. We are sometimes aware that we are willing dupes in our own con game, stubbornly refusing to explicitly articulate to ourselves just what we are up to. We know that the stories we tell ourselves do not jibe with our behavior, or they fail to mesh with physical signs such as a thumping heart or sweaty palms that betray our emotional states. For example, the students described earlier, who admitted their lies when watching themselves on videotape, knew they were lying at times, and most likely they did not stop themselves because they were not disturbed by this behavior.

At other times, however, we are happily unaware that we are pulling the wool over our own eyes. A biological perspective helps us understand why the cognitive gears of self-deception engage so smoothly and silently. They cleverly and imperceptibly embroil us in performances that are so skillfully crafted that the act gives every indication of complete sincerity, even to the actors themselves.