Close to 10 percent of men and women in America are now taking drugs to combat depression. How did a once rare condition become so common?
By Charles Barber
I am thinking of the Medicated Americans, those 11 percent of women and 5 percent of men who are taking antidepressants.
It is Sunday night. The Medicated American—let’s call her Julie, and let’s place her in Winterset, Iowa—is getting ready for bed. Monday morning and its attendant pressures—the rush to get out of the house, the long commute, the bustle of the office—loom. She opens the cabinet of the bathroom vanity, removes a medicine bottle and taps a pill into her palm. She fills a glass of water, places the colorful pill in her mouth and swallows. The little pill could be any one of 30 available drugs used as antidepressants—such as Prozac or Zoloft or Paxil or Celexa or Lexapro or Luvox or Buspar or Nardil or Elavil or Sinequan or Pamelor or Serzone or Desyrel or Norpramin or Tofranil or Adapin or Vivactil or Ludiomil or Endep or Parnate or Remeron. The pill makes a slight flutter as it passes down her throat.
Julie examines her face in the mirror and sighs. She hopes that by some Monday morning in the future—if not tomorrow morning, then some mythical, brilliant and shimmering Monday morning a month from now, or two months from now, or three—the pills will have worked some kind of inexorable magic. Corrected a chemical imbalance, or something, as the Zoloft commercial had said. “Zoloft, a prescription medicine, can help. It works to correct chemical imbalances in the brain,” the voiceover on the ad had intoned. Julie didn’t know she had a chemical imbalance, nor does she actually know what one is, and it had never really occurred to her that she could have a mental illness (could she?). But she does hope, fervently, that her life will become a little easier, a little less stressed—soon. She hopes, desperately, that the pills will make her feel better—that the little white powder hidden in the green capsule will dissolve in her stomach, enter her bloodstream, travel to her brain and do something. Brushing her teeth, she hopes that one day she will simply feel better.
Mental Illness by the Numbers
If statistics serve, we know a number of things about the Medicated American. We know there is a very good chance she has no psychiatric diagnosis. A study of antidepressant use in private health insurance plans by the New England Research Institute found that 43 percent of those who had been prescribed antidepressants had no psychiatric diagnosis or any mental health care beyond the prescription of the drug. We know she is probably female: twice as many psychiatric drugs are prescribed for women as for men, reported a 1991 study in the British Journal of Psychiatry. Remarkably, in 2002 more than one in three doctor’s office visits by women involved the prescription of an antidepressant, either for the writing of a new prescription or for the maintenance of an existing one, according to the Centers for Disease Control and Prevention.
We know that most likely a psychiatrist did not prescribe her antidepressants: family doctors frequently now prescribe such medications. We know that Julie in Iowa was far more likely to ask her doctor for an antidepressant after having seen it advertised on TV or in print; one fifth of Americans have asked their doctor for a drug after they have seen it advertised. And when Julie asked for her antidepressant, her doctor was likely to comply with the request, even if he or she felt ambivalent about the choice of drug or diagnosis.
It is unlikely that the doctor spent much time talking to Julie about the nature of the drugs, the common side-effect profiles and the remote but potentially dangerous side effects. Based on taped sessions, a 2006 study at the University of California, Los Angeles, showed that when prescribing a new medicine, two thirds of doctors said nothing to the patient about how long to take the medication, and almost half did not indicate the dosage amount and frequency. Only about a third of the time did doctors talk about adverse side effects. In the case of antidepressants, failure to review possible side effects and to monitor the patient’s progress in the weeks and months after starting the drugs is deeply irresponsible. A 2004 study in the Journal of the American Medical Association stated that “the risk of suicidal behavior is increased in the first month after starting antidepressants, especially during the first one to nine days.” Worse, there is no longer any need to deal with an actual physician: all these drugs are readily available, with a few clicks and a credit card.
We further know that Julie’s managed care insurance was more than happy to cover the prescription, especially if it meant that the company did not have to pay for therapy, which Julie is less and less likely, and less and less able, to pursue—an unsurprising fact given that there are only about 40,000 psychiatrists in the country. As a result, after starting antidepressants and taking them for three months, three quarters of adults and more than half of children do not see a doctor or therapist specifically for mental health care, found a study by Medco Health Solutions. Another report, referenced in the New York Times, found that only 20 percent of people who take antidepressants have any kind of follow-up appointment to monitor the medication.
Between 1987 and 1997, while the rate of pharmacological treatment for depression doubled, the number of psychotherapy visits for depression decreased, as cited in a study in the January 9, 2002, issue of the Journal of the American Medical Association. These days only about 3 percent of the population receives therapy from a psychiatrist, psychologist or social worker, according to a 2006 study in Archives of General Psychiatry. The strong likelihood is that the fluttering of the pill down her throat will be the extent of Julie’s mental health treatment.
A Growing Trend
Antidepressant SSRIs (selective serotonin reuptake inhibitors) were first approved as treatment for clinical depression, and other uses were steadily added during the 1990s: indications came, one after the other, for obsessive-compulsive disorder, eating disorders, anxiety and premenstrual dysphoric disorder. The drugs were also used for paraphilias, sexual compulsions and body dysmorphic disorder. With each new use, the market got bigger, lines between distress and disease got blurrier, and the drugs began to be prescribed for problems beyond those indicated by the Food and Drug Administration. As a result, a good number of Americans are now taking SSRIs for non-FDA-approved uses, termed “off-label” prescriptions. A 2006 study found that three quarters of people prescribed antidepressant drugs receive the medications for a reason not approved by the FDA. This practice is legal and intended to give physicians the flexibility to prescribe the drugs that are best suited to their patients’ needs. The problem is that “most off-label drug mentions have little or no scientific support,” says study co-author Jack Fincham of the University of Georgia College of Pharmacy. “And when I say most, it’s like 70 to 75 percent. Many patients have no idea that this goes on and just assume that the physician is writing a prescription for their indication.”
So, if not for a severe mental illness, why exactly is Julie taking the antidepressants? One reason traces to the existence of the catchall term “depression.” Depression, once considered a rare disease usually associated with elderly women, is overwhelmingly the mental health diagnosis of choice of our time. About 40 percent of mental health complaints result in its diagnosis, according to the CDC. Martin E. P. Seligman of the University of Pennsylvania, perhaps America’s most influential academic psychologist, has stated: “If you’re born around World War I, in your lifetime the prevalence of depression, severe depression, is about 1 percent. If you’re born around World War II, the lifetime prevalence of depression seemed to be about 5 percent. If you were born starting in the 1960s, the lifetime prevalence seemed to be between 10 and 15 percent, and this is with lives incomplete.” (When entire life spans are ultimately taken into account, the rate could grow further.) Moreover, Seligman notes, the age of onset of the first depressive episode has dropped. A generation or two ago the onset of depression purportedly occurred on average at age 34 or 35; recent studies have found the mean age for the first bout of depression to be 14 years old.
It is as if from the early 1990s on (nicely coinciding with the mass penetration of Prozac), we have been living in the Age of Depression—just as Valium arrived in, or helped to create, the Age of Anxiety. In contemporary America, it has been broadly accepted for some time that everybody, at some level, is depressed at least some of the time. As Americans have become more aware of their feelings in the past few therapy-oriented decades, it has become acceptable and eminently appropriate to say when someone asks how you are feeling (particularly if it’s late March): “A little depressed.” Or to respond to the query, “How was the movie the other day?”: “A little depressing.” Or to say in response to “How did you feel about last year’s minuscule raise?”: “Depressed.”
But to anyone reasonably experienced in the mental health field, there is depression and then there is Depression. The first type is a terribly broad and bland term, indicating “the blues,” “feeling down,” “bummed out,” “in the dumps,” “low,” “a little tired,” “not quite myself,” each a standard part of the daily human predicament. Major depressive disorder, however, is a harrowing and indisputably profound and serious medical condition. To confuse the two, depression with Depression, is to equate a gentle spring rain with a vengeful typhoon.
A true diagnosis of major depression involves some combination of most of the following: inability to feel pleasure of any kind whatsoever, loss of interest in everything, extreme self-hatred or guilt, inability to concentrate or to do the simplest things, sleeping all the time or not being able to sleep at all, dramatic weight gain or loss, and wanting to kill yourself or actually trying to kill yourself. Truly depressed people do not smile or laugh; they may not talk; they are not fun to be with; they do not wish to be visited; they may not eat and have to be fed with feeding tubes so as not to die; and they exude a palpable and monstrous sense of pain. It is a thing unto itself, an undeniably physical and medical affliction and not, as psychiatrist Paul McHugh writes, “just the dark side of human emotion.”
One feels such patients’ anguish at a primal, physiological level. “Very often patients with major depression will say the emotional pain they feel is worse than the pain of any physical illness,” said J. John Mann, chief of neuroscience at the New York Psychiatric Institute, in a 1997 article in BrainWork. Many depressed people really, really want to die, and thinking about dying, or planning their death, takes up a great deal of their time. So horrific is the incapacitation that the highest risk of suicide actually comes when people are feeling slightly better. In the throes of an episode, depressed patients are too depleted even to muster the energy to kill themselves. I thought I knew the difference between the blues and major depression until I saw the disease in its full and malicious force. The only treatments are hospitalization, supervision, rest, quiet, sedatives, sleep medications, an appropriate level of antidepressants and electroshock therapy. Despite its side effects (such as short-term memory loss), electroshock therapy remains the single most effective treatment for major depression.
What modern psychiatry has done, I am convinced, is to conflate and confuse the two, Depression and depression. David Healy, in Let Them Eat Prozac (NYU Press, 2004), calls it “a creation of depression on so extraordinary and unwarranted a scale as to raise questions about whether pharmaceutical and other health care companies are more wedded to making profits from health than contributing to it.” A 2007 study at New York University showed that about a quarter of people who appear to be depressed, and are treated as such, are in fact dealing with the aftermath of a recent emotional blow, such as the end of a marriage, the loss of a job or the collapse of a business.
Each successive edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) has proclaimed an ever increasing number of diagnoses that cover an ever widening terrain of normal, if painful, human behavior. DSM-I, published in 1952, covered some 150 diagnoses. DSM-IV, which came out in the 1990s, had more than 350. The next version, DSM-V, due in 2011, will introduce even more.
In contrast, large percentages of people with severe and persistent mental illness get no care whatsoever. “The majority of those with a diagnosable mental disorder [are] not receiving treatment,” wrote the U.S. surgeon general in a 1999 report. Studies published in 1985, 2000 and 2001 found that 50, 42 and 46 percent, respectively, of people with serious mental illness were receiving no treatment for their conditions. A massive study in the early 2000s on the prevalence of mental illness led by health care policy researcher Ronald C. Kessler of Harvard Medical School, in collaboration with the World Health Organization, revealed that in developed countries 35 to 50 percent of people with serious cases had not been treated in the previous year; in poor countries the figure was 80 percent. A separate study, published in 2002, found that of those in the U.S. receiving treatment for serious mental illness, only 40 percent were receiving what is considered minimally adequate treatment. Of all those with serious mental disorders, then, only 15 percent were getting the high-quality care they needed.
The same tragic imbalance exists in the research world. Although people with severe mental illness account for more than half of the direct costs associated with all mental illness, only about a third of National Institute of Mental Health research awards from 1997 to 2002 went to the study of serious mental illness.
The slippery slope that psychiatry has traversed—jettisoning the impoverished mentally ill for the cash-carrying worried well—can perhaps be traced to a single word choice in DSM-III, the totally revised diagnostic manual of 1980. But for the selection of that one word, the recent history of psychiatry might be entirely different.
The prevailing term to describe specific psychiatric conditions in DSM-I in 1952 was an odd one: “reaction.” Schizophrenia, for example, was described as a “schizophrenic reaction.” Depression was a “depressive reaction.” The concept of “reaction” derived from psychoanalytic thinking, and, as such, mental torment was thought to come about as a result of a reaction to environmental, psychological and biological problems. By DSM-II, in 1968, the term “reaction” had been tossed aside. DSM-II described depression in more psychological terms such as depressive neurosis and depressive psychosis.
DSM-III, which was the brainchild of one man, Robert Spitzer of Columbia, was an attempt to strike a middle ground between the psychoanalytic camp, which had no interest in biology, and the budding brain scientists, who were starting to gain traction as psychiatric drugs were becoming more prevalent and often successfully treating people with severe mental illness. Spitzer, who is probably, after Sigmund Freud, the most influential psychiatrist of the 20th century, worked on DSM-III for six years, often up to 80 hours a week. To appease both groups, Spitzer brought a centrist, “theory-neutral” approach to his work. He based diagnoses not on theories and traditions about how they might have arisen but on objective observation and symptom lists, on the “here and now.” Although this strategy was no doubt well intentioned, the lack of theoretical constraint meant that just about any painful and unhappy human predicament could be entertained for inclusion.
Spitzer presided over an extraordinary expansion of the DSM. “Bob never met a new diagnosis that he didn’t at least get interested in,” said Allen Frances, a psychiatrist who worked closely with Spitzer on DSM-III, in a 2005 interview with the New Yorker. “Anything, however against his own leanings that might be, was a new thing to play with, a new toy.” Spitzer was a technician of diagnosis and loved to compose symptom lists, sometimes drawing them up on the spot. It should be noted that in his centrist approach, Spitzer also presided over many positive developments. For example, he removed homosexuality as a diagnosis, which had been notoriously included in DSM-II. Spitzer also excised “hysterical personality” disorder—which had become unfairly identified with female instability. (The word “hysteria” itself derives from the Greek word for uterus—hence the term “hysterectomy.”)
The word that Spitzer settled on, to cover the vast majority of all the roughly 300 diagnoses, was “disorder.” “Disorder” was not entirely new: it had appeared briefly in earlier editions of the DSM to describe general categories of distress. The problem is that “disorder,” so bland and toothless, so appeasing to all parties, has little meaning. There are few constraints on the word “disorder.” Just about everything can be a disorder.
Spitzer’s word choice created the slippery slope that psychiatry occupies today. Had Spitzer settled on, say, the word “disease” instead, it is conceivable that the course of modern psychiatry would have been different. Diseases are scary, upsetting, painful, often chronic and potentially lethal. You stay in bed with diseases. People do not like to be around you when you have a disease. You generally do not look well when you have a disease.
I think we have got to get beyond the absurd vapidity of disorder categories such as “phase of life problem” and “sibling relational problem.” We should get a little more specific about Julie’s angst. Let us take the daring step of calling life problems what they are and what they were up until about 20 years ago: life problems.