For nearly 50 years, cognitive behavioral therapy (CBT) has claimed greater scientific authority than any of the vast legion of other psychotherapy approaches, on the strength of having more research demonstrating its effectiveness than any other therapeutic method. Increasingly, that track record of empirical evidence has been acknowledged, and even translated into requirements by government funders and insurance companies that therapists use CBT if they want to be reimbursed. But recent developments have raised questions about whether CBT's effectiveness and scientific bona fides have been overstated.
Developed largely within university settings concerned with quantifiable research results, CBT has been the focus of far more studies than any other therapy model. Almost 90 percent of the approaches deemed empirically supported by the American Psychological Association’s Division 12 Task Force on Psychological Interventions involve cognitive behavioral treatments. More than 269 meta-analyses have been conducted on CBT, and a 2008 survey by a team of Boston University researchers identified 1,165 CBT outcome studies with a wide range of clients, including those suffering from depression, bipolar disorder, eating disorders, criminal behavior, and chronic pain and fatigue.
But recent findings about the effectiveness of CBT have made waves among psychotherapy outcome researchers. A 2013 meta-analysis published in Clinical Psychology Review comparing CBT to other therapies reported that it had failed to “provide corroborative evidence for the conjecture that CBT is superior to bona fide non-CBT treatments.” Another meta-analysis, conducted by researchers at the United Kingdom’s University of Hertfordshire and published in Psychological Medicine in 2009, concluded that CBT has only a weak effect in treating depression and is ineffective in reducing symptoms of schizophrenia or bipolar disorder. In November 2014, an 8-week clinical study conducted by Sweden’s Lund University concluded that CBT was no more effective than mindfulness-based therapy for those suffering from depression and anxiety.
The latest blow to CBT’s claims to therapeutic supremacy came with the publication this past May of a meta-analysis conducted by psychologists Tom Johnsen, of UiT, the Arctic University of Norway, and Oddgeir Friborg, of the University of Tromsø, titled “The Effects of Cognitive Behavioral Therapy as an Anti-Depressive Treatment Is [sic] Falling.” Published in the APA’s Psychological Bulletin, the study tracked 70 CBT outcome studies conducted between 1977 and 2014—between the heyday of CBT founders Aaron Beck and Albert Ellis and the most recent studies. Johnsen and Friborg concluded that “the effects of CBT have declined linearly and steadily since its introduction, as measured by patients’ self-reports, clinicians’ ratings, and rates of remission.” According to Johnsen, even the rosy quantitative findings about CBT in its early days should be taken with a grain of salt. “Just seeing a decrease in symptoms,” he says, “doesn’t translate into greater wellbeing.”
Trying to explain the decline, Johnsen and Friborg suggest that an important factor is the differences among the varying forms of CBT used in the studies over the years. Today, they argue, there are two types of CBT: the “pure” form created by Beck and Ellis, reflecting the protocol-driven, highly goal-oriented, more standardized approach they first popularized, and the looser, more integrative approach of modern practice. Newer approaches, they believe, often stray from CBT’s original tenets, which included explicitly outlining the treatment agenda at the start of therapy, regularly soliciting client feedback, and assigning homework after every session. According to Johnsen and Friborg, “Proper training, considerable practice, and competent supervision are very important to provide CBT in an efficacious manner. . . . Therapists who frequently depart from the [Beck] manual demonstrate poorer treatment effects than therapists who follow it.”
Another hit to CBT’s reputation came in 2012 from Sweden’s National Board of Health and Welfare, which, after placing CBT at the top of a list of recommended treatments for depression and anxiety, concluded after a two-year trial period that CBT had no noticeable advantage over alternative therapies and that increasing numbers of clients were dropping out of treatment after finding it ineffective. By that time, more than two billion Swedish kronor had been spent in financial incentives to therapists who made CBT their preferred mode of treatment.
Rolf Holmqvist, a professor of clinical psychology at Sweden’s Linköping University, who was consulted on how to revamp the board’s therapeutic guidelines, believes that part of the problem was an overenthusiastic embrace of the approach as a kind of clinical magic bullet. “The right thing to do would’ve been to look at several different types of treatments,” he says. “If you’re working with a client and you’re not seeing results, you need to change perspectives from the approach the therapist wants to follow to what the client actually needs.”
Scott Miller—a psychologist who runs the International Center for Clinical Excellence and spent time in Sweden during the period when the National Board of Health and Welfare was trying to incentivize practitioners to use CBT—believes that the fundamental problem had less to do with CBT itself than with a misguided notion about the factors that make psychotherapy effective. “Our field struggles with the notion that treatments work like medicine,” says Miller. “It’s as if people coming to therapy have a variety of infections that different psychotherapy models will attack like antibiotics. But the truth is that there isn’t any evidence that one therapeutic method achieves better results than any other.”
Nevertheless, Miller argues, government officials and insurance companies keep being drawn to approaches that promise to offer methodical, step-by-step procedures, which, if accurately carried out, will have systematically predictable results. “Saying that CBT works in that way makes for a simple, reassuring story,” Miller adds, “but it’s misleading and keeps us from advancing as a field. It ignores all we’ve learned about the key variable that research has shown over and over accounts most for positive results: the therapeutic relationship.”
Some critics of the method have seized on the recent negative findings to argue that alternative therapies are just as effective as CBT, or even better, but its supporters counter that there are plenty of reasons to question those findings. Steve Hollon, a psychologist at Vanderbilt University who specializes in treating depression, argues that because the conditions of replicated trials can differ so wildly from the originals, it’s unsurprising that the results differ too. He agrees with Johnsen and Friborg that studies conducted under Beck’s supervision, for instance, might have been more concerned with methodological fidelity. “It may be that the more recent studies don’t have the same methodological rigor,” Hollon says. “It may be that we’re just seeing the more variable results you’re going to get in the real world.”
Another important factor may be the placebo effect. Johnsen and Friborg suggest that the passage of time may have diminished the boost the placebo effect gave to CBT results in the earlier years. “The placebo effect is typically stronger for newer treatments,” they note. “As time passes and experience with therapy is gained, the strong initial expectations of dramatic improvements wane.” Adds Jesse Wright, a psychiatrist and director of the University of Louisville’s Depression Center, who’s taught courses worldwide on CBT since 1980, “You see these patterns fairly often. For instance, if we look at the effectiveness of antidepressants compared to placebos, you see a similar slope over time. Does this mean antidepressants aren’t useful? This is mostly a matter of time going by and people’s expectations changing.”
Part of Johnsen and Friborg’s analysis focused on the relative effectiveness of new CBT practitioners against those who were veterans and presumably more skilled. “The competence of the therapist probably exerts more influence on how treatment works,” they write. “Patients receiving CBT from experienced psychologists had a more pronounced reduction in depressive symptoms compared with patients receiving CBT from psychology students with less experience doing therapy.” Miller, too, concludes that therapy outcomes depend on the personal impact of the clinician more than they do on treatment method. “We know the truth,” he says, “and we’ve known it for decades. Methods don’t always work the way they’re promised to work. What really makes a difference in psychotherapy is the therapist’s ability to craft a relationship. But this doesn’t sound sexy. To many, it seems like you’re saying psychotherapy is just about having a good friend.”
As much as we’d like research to provide tidy conclusions and confer legitimacy on our preferred treatment methods, it often just adds to our questions about how to understand what goes on in the consulting room. But in the end, both CBT’s advocates and its critics can agree on two things: no form of psychotherapy offers a reliable miracle cure, and it’s never easy making neat science out of the often nebulous encounter we call psychotherapy.
Chris Lyford is the Senior Editor at Psychotherapy Networker. Previously, he was Assistant Director and Editor of The Atlantic Post, where he wrote and edited news pieces on the Middle East and Africa. He also formerly worked at The Washington Post, where he wrote local feature pieces for the Metro, Sports, and Style sections. Contact: firstname.lastname@example.org.