In the last year, the influence of psychotherapy outcome research seems to have passed a tipping point: no longer something discussed only in obscure journals or deployed to decorate an academic résumé, it has become a source of vital information with a crucial bearing on what actually goes on in therapists’ offices. What was once of interest only to a small circle of academics whose careers hinged on publishing little-read studies on treatment outcome is now part of the national debate about healthcare reform, not to mention the focus of increasing scrutiny by multibillion-dollar insurance companies looking to exert ever more control over which procedures they’ll reimburse. It seems likely that in the not-too-distant future, only research-supported treatments will qualify for insurance reimbursement. That President Obama himself regularly refers to “evidence-based” healthcare is a clear indicator of the growing role research results will play in the future of psychotherapy.
The first major effort to list what were then called “empirically validated” psychotherapies (the official term has since morphed, first to empirically supported, and now to research-supported) was undertaken in 1996 by a task force headed by University of Pennsylvania professor Dianne Chambless within the Division of Clinical Psychology of the American Psychological Association. The task force decided that, to be officially recognized as effective, an approach needed to yield results superior to no treatment, or at least equal to those of proven existing treatments, in at least two randomized clinical trials originating from different groups of investigators. Chambless and her colleagues chose the disorders in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) as the focal problems against which to assess the impact of treatments. The task force produced two initial lists of approved treatments: one labeled “well established,” and one, more tentatively, called “probably efficacious.” Both lists consisted almost entirely of behavior therapies, the kind then most studied within academia. Accordingly, most nonacademic clinical practitioners regarded the lists as irrelevant to their work and livelihoods, and largely ignored them.
Since 1996, the short list of empirically validated treatments originally provided by the Division of Clinical Psychology (though never endorsed by the American Psychological Association as a whole) has been vastly expanded and augmented by numerous other lists from organizations such as the Society of Clinical Child and Adolescent Psychology, the Cochrane Collaboration in Great Britain, the American Psychiatric Association, and evidence-based medicine websites such as UpToDate.com. These lists mostly agree about which treatments qualify; most are cognitive-behavioral approaches. Today, lists of treatments that have achieved “well established” status have grown exponentially, with approved approaches now in place for almost every DSM disorder, including agoraphobia, generalized anxiety disorder, depression, borderline personality disorder, sexual dysfunction, and alcohol abuse. Several states, including Washington, have taken the next step of mandating the use of only research-supported therapies in some of their mental health and juvenile justice programs. In addition, several countries, including Germany and the Netherlands, have tied payment for services to the practice of research-based treatments. Not surprisingly, increasing numbers of insurance companies are requiring, or considering requiring, that practitioners use research-supported methods in order to be reimbursed.
The good news in all this for psychotherapy is that our field has established a track record of broad empirical legitimacy, which will be crucial if we’re to continue to have a place in the healthcare system. Study after study has shown that, across a wide array of DSM disorders, psychotherapies work at least as well as, and often better than, medications. Nevertheless, some critics insist that whether an approach wins approval as research-based has less to do with its effectiveness than with how frequently it’s been studied. As a result, they maintain, innovative methods and potentially valuable clinical tools not recognized by academic researchers may be harder to incorporate into practice, restricting the flexibility and creativity of ordinary clinicians. In their view, given today’s constraints on research funding, the current system risks creating a closed circle of favored approaches: attention becomes increasingly limited to whatever has already been demonstrated to be effective, leaving new methods, however potentially valuable, outside the purview of research investigation.
An Alternative Paradigm
In 2001, Bruce Wampold of the University of Wisconsin, one of the foremost critics of lists of established clinical practices, published The Great Psychotherapy Debate, a book that raised fundamental questions about the entire enterprise of psychotherapy outcome research. He argued that, despite the field’s emphasis on comparing methods, the evidence showed that differences between treatment approaches accounted for little of the overall impact of successful treatment. Instead, what his meta-analyses revealed was that the characteristics of the therapist and the client, independent of specific treatment approaches, and the relationship factors mediating their connection had far more impact than the treatment factors themselves. He suggested that there’s no evidence that research-supported therapies work better than what he called bona fide treatments (real therapies conducted by real therapists, rather than treatments in which therapists do virtually nothing), and that meta-analyses find no differences in outcome when bona fide treatments are compared with one another.
Wampold’s elaborate statistical method may have given his critique special weight, but many others have voiced reservations about the field’s way of bestowing its official seal of approval on a select group of approaches. Among the commonest criticisms are that studies of empirically approved therapies ignore cultural or, for that matter, any sort of client factors, lack attention to differences between therapists who apply the treatments (in some prominent studies, the difference in impact between therapists has been much greater than that between treatments), almost exclusively focus on short-term therapies, and consistently find that the treatment favored by the investigator does better than other treatments—the notorious “allegiance effect.”
An even broader objection is that the research literature is unrepresentative of the kinds of problems most people bring to therapy. Most clients seek psychotherapy not because of distinct DSM-IV disorders, but because of less clearly defined problems in everyday living: relationship conflicts, multiple problems that fit no diagnostic category, and situational stresses that trigger strong emotional reactions. By emphasizing DSM disorders, critics say, the lists slant the practice of psychotherapy away from what most people actually seek therapy for and toward a “medical” model disconnected from the realities of clinical practice.
The mental health profession owes much to the researchers who’ve established its empirical track record. No doubt, the willingness of insurance companies to reimburse for the sometimes hard-to-define services psychotherapists provide has more to do with studies on DSM disorders than with any interest in clients’ personal growth or relational harmony. Nevertheless, a vocal dissident group within the research community has defended the scientific foundations of the field even while questioning the bona fides of current “research-supported” approaches. They’ve called for a shift away from a narrow focus on contrasting treatments toward highlighting the distinctive features of successful therapy that cut across different approaches.
The critiques of lists of research-supported therapies have fueled a parallel movement to build an alternative catalogue of the aspects of psychotherapy that are evidence-based, not disorder by disorder, but in terms of well-established trends across studies, asking what aspects of psychotherapy have been empirically demonstrated to make a difference. The core notion behind this initiative is that therapists should be trained in and practice these core skills, rather than specific, evidence-based therapy models. Following from this perspective, the American Psychological Association’s Division of Psychotherapy, under the leadership of John Norcross, is in the process of reevaluating the evidence for such treatment-bridging factors, summarized in the 2002 book Psychotherapy Relationships That Work. His task force is concerned with determining the factors that make the biggest difference in therapy outcome: client variables such as stage of change, therapist variables such as empathy and positive regard, and relationship variables such as the quality of the client–therapist alliance and goal consensus.
Others have come to believe the best way to bring a stronger empirical orientation into the psychotherapy field is to institute a systematic way for clients to provide ongoing feedback to clinicians about their perceived progress. This point of view stresses the use of research methodology within ongoing therapy on a session-by-session basis, so as to track change, or the lack of it. Systems for providing such feedback have been developed by psychologists Michael Lambert, Kenneth Howard, William Pinsof, Scott Miller, Barry Duncan, Leonard Bickman, and others. Miller and Duncan use brief scales asking clients how they’re doing and how they feel about the therapy, which they review with the client regularly as therapy progresses. Lambert’s measure, the OQ-45, is completed by clients before each session and computer scored, generating regular feedback that lets therapists know how a case is progressing compared with similar clients at a similar stage of treatment. Lambert has already shown that feedback to therapists about cases going worse than expected can have a considerable positive effect on therapy outcome.
So what does all this say about what the ordinary clinician, not trained in the close analysis of clinical research, needs to know? For most client problems, psychotherapy, as a broad activity, is well established as effective, and therapy studies suggest that differences in treatment matter far less than therapist, client, and relationship factors. For some problems widely acknowledged as difficult to treat, however, such as panic disorder, obsessive-compulsive disorder, borderline personality disorder, adolescent substance use and delinquency, bipolar disorder, and schizophrenia, the jury is no longer out: certain research-supported therapies have clearly shown themselves to achieve superior outcomes. For example, panic disorder responds well to treatments such as David Barlow and Michelle Craske’s cognitive-behavioral Panic Control Treatment; obsessive-compulsive disorder to Edna Foa’s Exposure and Ritual Prevention Treatment; borderline personality disorder to Marsha Linehan’s Dialectical Behavior Therapy; and adolescent substance abuse to such family treatments as Howard Liddle’s Multidimensional Family Therapy, Jose Szapocznik’s Brief Strategic Family Therapy, and James Alexander’s Functional Family Therapy. These are problems that clinicians widely regard as difficult to treat, and the limited outcome data we have suggest that other forms of treatment not designed to deal with the special aspects of these problems yield poor outcomes and high rates of recidivism. Until proven otherwise, the research-supported treatments should be regarded as treatments of choice for these problems.
At the same time, it’s important to scrutinize empirical results with a critical and discerning eye, since the certainties we look for from research sometimes fade over time, especially when short-term findings are compared with long-term results. For example, it has long been believed, both by researchers and in the popular media, that stimulant medications are proven beyond a doubt to be extremely effective in treating ADHD in children. However, prominent ADHD researcher William Pelham, after reviewing the most recent findings from the best long-term outcome studies, concluded that a decade after treatment began, the only difference between children treated with stimulants and those who weren’t was a two-inch difference in average height, a side effect of the medication. Despite positive short-term effects, drugs such as Concerta showed no demonstrated impact on long-term functioning in academic performance, behavior, or symptoms when measured over a decade.
As therapists, we live in a world of evidence and accountability, one in which a persuasive statement of theory, a case study, or a videotape demonstrating a dramatic one-session “cure” is no longer enough. Given the trend toward ever closer scrutiny of what we do, we all need to become more attuned to the difference between what’s known and what’s claimed to be known. Although science can and should inform practice, psychotherapy remains a complex interpersonal activity, mediated by a vast array of variables, so clinical acumen will always be essential in navigating the process. In 2005, an American Psychological Association task force on evidence-based practice was convened by then APA president Ronald Levant. It consisted of eminent scholars identified with research-supported therapies, such as David Barlow and Steven Hollon; prominent critics of those therapies, such as Bruce Wampold and Drew Westen; and practicing clinicians. The task force produced a consensus statement (albeit one that parties on all sides found less than their own ideal vision) holding that research evidence needs to be incorporated into treatment, but tempered by clinical judgment and a deep respect for the client’s values and preferences. Ultimately, psychotherapy is a moral activity, filled with value-laden decisions about how to live life, not just how to feel better in the short term or how to be less symptomatic.
Despite the polemics among assorted camps of researchers, there seems to be ample room for a view that emphasizes attention to the common factors within therapy practice while employing specific treatment methods with some specific clinical problems. At the most practical level, session-by-session tracking of progress within therapy is not only becoming well established as clinically useful, but is likely soon to become part of standard practice. Accountability to clients and to third-party payers is a reality of our lives, and it seems clear that the days in which therapists could ignore research findings and continue to do therapy as they were originally trained to, indifferent to the emergence of new evidence, have passed forever.
Hubble, Mark A., Barry L. Duncan, and Scott D. Miller, eds. The Heart and Soul of Change: What Works in Therapy. Washington, D.C.: American Psychological Association, 1999.
Nathan, Peter E., and Jack M. Gorman, eds. A Guide to Treatments that Work, 3rd edition. London: Oxford University Press, 2007.
Norcross, John C., ed. Psychotherapy Relationships that Work: Therapist Contributions and Responsiveness to Patients. London: Oxford University Press, 2002.
Wampold, Bruce E. The Great Psychotherapy Debate: Models, Methods, and Findings. Mahwah, N.J.: Lawrence Erlbaum Associates, 2001.
Westen, Drew, Catherine M. Novotny, and Heather Thompson-Brenner. “The Empirical Status of Empirically Supported Psychotherapies: Assumptions, Findings, and Reporting in Controlled Clinical Trials.” Psychological Bulletin 130, no. 4 (July 2004): 631-63.
Jay Lebow, PhD, is a former contributing editor to the Psychotherapy Networker and clinical professor at Northwestern University. He’s also senior therapist and research consultant at the Family Institute at Northwestern University.