The RCT Model
Until recently, the most “scientifically rigorous” answer to the question of determining effectiveness has been to adopt the model of research applied within the field of medicine—run randomized controlled clinical trials (RCTs) to establish evidence-based (also called empirically supported or research-supported) treatments that work better than a placebo. Just as in medical research, the goal has been to match a particular treatment regimen to a particular disorder. Aaron Beck, the founder of Cognitive Therapy, had begun testing his systematic treatment for depression as far back as 1973. But clinical trials didn’t begin to become the driving force they are today until 1993, when the APA established a task force within its Society of Clinical Psychology to identify criteria for—and then review and provide a running list of—treatments that could be shown to be effective for a variety of diagnosable disorders found in the DSM–IV.
Before that task force could issue its first list of “empirically supported” treatments, however, a study appeared that, while lacking most essentials of gold-standard RCT research, powerfully established the effectiveness of psychotherapy in the public mind. In November 1995, a widely circulated survey of clients’ perceptions of their treatment done by a team at Consumer Reports (CR) gave psychotherapy’s legitimacy a huge boost. According to a sample of 2,900, psychotherapy seemed to work, whatever Hans Eysenck once thought. CR polled 178,000 readers, of whom only 7,000 responded. Of them, fewer than 3,000 had consulted some type of mental health provider. Nonetheless, probably no single publication has so influenced the popular perception of the field. But what did it prove scientifically?
Commenting on the report in the December 1995 issue of American Psychologist, Martin E. P. Seligman, APA president and psychology professor at the University of Pennsylvania, provided his own verdict on what CR had to say:
The study is not without flaws, the chief one being the limited meaning of its answer to the question, “Can psychotherapy help?” This question has three possible kinds of answers. The first is that psychotherapy does better than something else, such as talking to friends, going to church, or doing nothing at all. Because it lacks comparison groups, the CR study only answers this question indirectly. The second possible answer is that psychotherapy returns people to normality or more liberally to within, say, two standard deviations of the average. The CR study, lacking an untroubled group and lacking measures of how people were before they became troubled, does not answer this question. The third answer is, “Do people have fewer symptoms and a better life after therapy than they did before?” This is the question that the CR study answers with a clear “yes.”
Even more important, Seligman concluded, “Consumer Reports has provided empirical validation of the effectiveness of psychotherapy. Prospective and diagnostically sophisticated surveys, combined with the well-normed and detailed assessment used in efficacy studies, would bolster this pioneering study. They would be expensive, but, in my opinion, very much worth doing.” In short, still more science was needed.
That was the mid-1990s, a time that had recently witnessed the ascendancy of psychopharmacology and managed care—features of the therapeutic landscape that have increased in strength and dominance to this day. Since then, expanded numbers of treatments have been deemed “well-established” after being tested in at least two randomized clinical trials, each originating with a different team of investigators. There are now treatments that have successfully run this research gauntlet and been approved for a wide range of DSM disorders.
Those who champion the “medical model” research approach believe it’s already demonstrated beyond question just how potent and reliable psychotherapy has become in relieving human distress. Steven Hollon, a professor of psychology at Vanderbilt University, comments that, 30 years ago, “we didn’t even know the names for some of the disorders” that can now be treated effectively. By comparison, today there’s “hard empirical evidence” that Cognitive-Behavioral Therapy (CBT), interpersonal psychotherapy, and behavioral activation work for a number of specific disorders. Indeed, he says, you can treat pretty much any nonpsychotic disorder just as well with psychosocial interventions as you can with medication, and the results will not only be as good, but provide broader benefits, such as the longer-lasting effects from therapies like CBT that help patients learn how to help themselves.
Dianne Chambless, professor of psychology and director of clinical training at the University of Pennsylvania, and the researcher who oversaw the initial APA Society of Clinical Psychology task force, also sees progress over the past decades. “I think we’ve really improved upon our ability to treat a number of different disorders, or have effective treatments” for them, including anxiety disorders and interventions for bipolar disorders, she says. “Our ability to treat all the anxiety disorders with CBT is leaps and bounds above what it was 30 years ago.”
What works, according to clinical studies? From the start, behavior therapies of different kinds (including Cognitive Therapy) have been, and continue to be, the most prominently represented types of therapy on the list of effective treatments. Dissenters have suggested that this is because they’ve had the longest history of testing, but advocates of evidence-based treatments say it’s because they work. University researchers and insurance companies looking for accountability have been among the staunchest of those advocates.
There’s also a connection to managed care. With insurers today reimbursing patients for fewer sessions than they did 30 years ago, studies are now looking at treatment courses that are shorter than they used to be. Previously, “brief therapy” might have meant 24 to 30 sessions. Clinical trials today tend to focus on the impact of 4, 8, or 12 sessions. The good news: success is achieved in fewer sessions than in the past. Today, psychotherapy not only has more empirically proven treatments, but is demonstrably more efficient. We now know that “You don’t have to beat around the bush and first establish a good working relationship before getting to work,” says Hollon. “You just get to work.”
Effectiveness, efficiency, endurance: sounds like a winning trifecta. But not everyone agrees that evidence-based testing has provided the evidence its supporters say it has.
Rocking the Boat
In 2002, Bruce Wampold rocked the boat of clinical testing big-time with the publication of The Great Psychotherapy Debate. In it, this professor of counseling psychology at the University of Wisconsin delivered an in-depth critique of the medical model of psychotherapy, and did so not by the usual route of rejecting the very idea of protocol-driven therapy and appealing to notions that make therapists feel warm and fuzzy, like the need for “authenticity” and “connection,” but by invoking, of all things, statistics. Deploying an impressive array of sophisticated meta-analyses of hundreds of efficacy studies conducted through the decades, Wampold concluded that there was “little or no evidence that any one treatment for any particular disorder is better than another treatment intended to be therapeutic.”
What he did find evidence for is the importance of the alliance between patient and therapist. That alliance, together with the psychotherapist’s empathy for the client and the patient’s positive expectation for treatment, comprises the three key factors leading to a successful outcome, he said. In other words, it’s not the treatment or the theoretical model that makes the major difference. Rather, the characteristics of the therapist and the client, independent of specific treatment approaches, and the relationship factors mediating their connection have far more impact than the “treatment factors” themselves. Perhaps the old-fashioned clinicians who’d long stressed the human, nontechnical elements of the therapeutic encounter—what are often referred to as the “common factors” underlying effective therapy—were right after all.
The Great Psychotherapy Debate was the equivalent of a devastating body blow to those favoring the medical model of psychotherapy, with its emphasis on finding the right treatment for any given DSM disorder. By all means, follow the science, Wampold urged, but also make sure you know what the science is saying: the medical model may not be the best model for understanding how psychotherapy works. His research showed that the “medicine” of a particular psychotherapeutic treatment isn’t the transportable object that, say, a dose of penicillin is. Then again, neither can a skilled psychotherapist be mass-produced to deliver the key ingredients of empathy and therapeutic alliance that make the difference in what Wampold terms the “contextual model” of psychotherapy.
If, as Wampold asserted, treatment methods were a far less potent factor in influencing the effectiveness of psychotherapy than had been previously thought, what had all the research on empirically supported approaches actually proven? Did study after study showing a certain method’s superiority to a placebo necessarily mean that the field had gone as far as its proponents like Chambless and Hollon believed? Did the widespread conviction that psychotherapy had come a long way since the days of Hans Eysenck need to be reconsidered?
Making Therapists Better
Well, yes and no. Scott Miller, a founder of the International Center for Clinical Excellence, says, “I agree with the trend toward evidence-based practice. It’s just that what’s being advocated by some is decidedly not ‘evidence-based practice’—at least not according to the APA’s definition, or that of the Institute of Medicine.” He explains that, somehow, evidence-based practice has come to mean using a specific method for a specific diagnosis, when it actually means using approaches supported by the best evidence in a treatment informed by client preferences, cultural context, and feedback. He adds, “There’s no proof that clinicians who use particular models applied to specific diagnoses achieve better results. What’s more, available data indicate that real-world clinicians achieve the same degree of treatment success as those in randomized clinical trials. So what’s all the fuss about?”
Going beyond a critique of empirically supported treatments, Miller makes the larger point that, far from establishing the scientific bona fides of its methods, the psychotherapy community, in fact, hasn’t had the public health impact it promised three decades ago. He points out that, by most measures, the sheer number of people suffering from mental health problems is on the increase. Anyone who reads the headlines—high unemployment, poverty, returning veterans—knows those needs aren’t diminishing. Miller also wonders how it’s possible that a profession can amass so many new clinical techniques and approaches, and yet see a rise in the number of people suffering.
Most troubling of all, he says, is that, despite all the expansion of therapeutic knowledge and the enormous amount of research activity in the field, overall, the average positive impact of psychotherapy hasn’t changed since the first meta-analyses done in the 1970s. Somewhere between 66 and 75 percent of patients appear to improve from treatment. Nevertheless, Miller remains a believer in the effectiveness of therapy. “When you look at the statistics, the success rate for psychotherapy is on par with common medical procedures like coronary bypass surgery,” he says. Yet his reading of the research literature leaves him unconvinced that even a drastic increase in the number of therapists offering specific treatments for specific disorders would have any real effect on therapy’s success rate.
Miller is just one of a number of critics of traditional therapy outcome research who believe that evidence-based investigators too often have been studying the wrong things. Others include Wampold and Brigham Young University psychology professor Michael Lambert. What sets Miller apart is his crusader’s zeal for shaking up the status quo and his polemicist’s flair for presenting his argument in the most provocative light. He insists that to improve the quality of care in the field it’s not the treatments or protocols that need adjusting, it’s the attitudes and skills of individual practitioners that need to change. For psychotherapy to get “better,” he believes, therapists themselves need to get better at getting their clients better.
In making his case, Miller begins with what initially seems like an odd analogy to—of all things—airline safety. But fasten your seat belts, and let him explain. Over the past 35 years, he says, the number of American commercial airline crashes has decreased significantly, yet the technology of the planes themselves hasn’t changed dramatically. What’s changed is pilot training—to include a focus on safety and an emphasis on maintaining standards of professional excellence in individual performance.
What that has to do with psychotherapy is this: in contrast to airline pilots and members of other highly skilled professions, in the field of psychotherapy, Miller says, “we’ve applied nothing that’s been learned from the literature on excellence to strengthening our expertise and skills.” Moreover, the pathway for psychotherapists to achieve excellence, he believes, isn’t illuminated particularly by clinical research studies documenting the efficacy of a specific treatment or technique. Instead, the way forward lies, first, in studying the traits shared by the most effective psychotherapists (in the same way that, say, the medical or management or legal professions seek to identify the traits associated with best practices in their fields), and then, getting others up to speed. This, he says, can be accomplished with evidence that comes not from clinical trials of specific protocols, but evidence generated by therapists themselves, based on their own strengths—and weaknesses.