The Tyranny of Time

How Long Does Effective Therapy Really Take?

Magazine Issue
March/April 2020

It’s been a long time since a systematic study asked clients whether they were actually getting what they needed from their psychotherapy. To be precise, it’s been 25 years since Consumer Reports conducted just such a study. By far the most cited study on the topic, it gathered data from more than 4,000 respondents who’d sought formal help for stress or emotional problems in the previous three years.

What the study concluded came as a shock to academic researchers who studied and promoted brief therapies—therapy takes time. Meaningful change began at about the six-month mark, and clients who stayed in therapy for a year did substantially better. Those who stayed for two years improved still more. There was an unmistakable dose–response curve: “The longer people stayed in therapy, the more they improved.” This dovetailed with earlier research that also found a dose–response relation between amount of therapy and amount of improvement.

“The majority were highly satisfied with the care they received,” Consumer Reports reported. “Most had made strides toward resolving the problems that led to treatment, and almost all said life had become more manageable. This was true for all the conditions we asked about, even among people who had felt the worst at the beginning.” But people had poorer outcomes when the type or duration of therapy was restricted by their health plans. “This suggests that limited mental health insurance coverage, and the new trend in health plans—emphasizing short-term therapy—may be misguided.”

A noteworthy feature of the Consumer Reports study—besides being conducted by an unbiased consumer organization with no ax to grind about psychotherapy—is that it did not prejudge what people wanted from their therapy. The three outcome questions left it to respondents to decide whether they’d gotten what they needed: 1) How much did treatment help with the specific problem that led you to therapy? 2) Overall, how satisfied were you with the therapist’s treatment of your problems? 3) How would you rate your overall emotional state compared to when you started treatment, from very poor (I barely managed to deal with things) to very good (life was much the way I liked it to be)? This is different from how most psychotherapy research is done. More often, academic researchers decide up front, without consulting clients, what their therapy is meant to accomplish. This is a crucial point, and we’ll return to it.

Research findings are most powerful when multiple, independent data sources converge on the same conclusions. The Consumer Reports study obtained information from clients. But what do expert therapists say about how long therapy takes? Surprisingly, academic researchers rarely ask this question either. Generally, psychotherapy researchers have been remarkably uninterested in the input of real-world therapists. Across the psychotherapy research literature, the voices of clients and therapists are rarely heard. The voices heard loudest are those of career academics with little real-world psychotherapy experience as clients or as therapists.

A research team led by Emory University psychologist and neuroscientist Drew Westen asked a sample of 270 highly experienced psychotherapists (average 18 years’ practice experience, about two-thirds psychologists and one-third psychiatrists) to describe their last completed therapies “in which patient and therapist agreed that the outcome was reasonably successful.” The therapists were asked about successfully completed therapies with three clients: one with clinically significant depression, one with clinically significant panic, and one with clinically significant anxiety without panic. The therapists represented diverse theoretical orientations, with roughly equal numbers describing themselves as cognitive behavioral (CBT), psychodynamic, or eclectic.

The median number of sessions ranged from 52 (for panic) to 75 (for depression), or about a year or more of weekly therapy, roughly the same duration the Consumer Reports study found most helpful for most people.

The researchers also looked at durations of therapy separately for therapists of different theoretical orientations. Not surprisingly, psychodynamic therapists tended to have the longest treatments and CBT therapists the briefest, with eclectic therapists somewhere in between. But even the CBT therapists reported that successful therapy took much longer than the 8 to 16 sessions common in research trials. The mean number of sessions for successful real-world CBT ranged from 33 to 44—a little over six months of weekly therapy to almost a year. Those time frames may be starting to sound familiar.

The survey also asked a more nuanced question than how long successful treatments had lasted: “At what point did you see clinically significant change in the primary symptoms with which the patient presented?” In other words, how long did it take, not for clients to get well, but just to improve enough to make a difference? The median number of sessions ranged from 16 to 20.

Academic researchers promoting brief manualized therapies tell us therapy is finished in 8 to 12 sessions. But if we believe the expert therapists—psychologists and psychiatrists of diverse theoretical orientations with an average of 18 years of practice experience—meaningful therapy has barely started.

The studies we’ve described are surveys that rely on clients’ and therapists’ retrospective reports. The findings are suggestive, all the more so because the information obtained from clients and therapists converges. But surveys, no matter how rigorously conducted, are not conclusive.

Another way to study how long effective therapy takes would be to track client progress in real time, session by session. Hypothetically, this could be done by repeatedly administering a validated, standardized assessment instrument throughout the therapy. Such a study could settle important questions beyond a reasonable doubt: Is there really a dose–response relation between the amount of therapy and improvement? If so, just how much “dose” is needed to make a meaningful difference for a meaningful number of people?

As it happens, this hypothetical study was conducted by a research team led by eminent psychotherapy researcher Michael Lambert of Brigham Young University and published in the Journal of Consulting and Clinical Psychology. The study had a nationwide sample of over 10,000 therapy clients—yes, over 10,000—who were assessed session-to-session with the Outcome Questionnaire-45 (OQ-45), a validated outcome instrument with established norms. Its 45 questions span three domains: symptoms, interpersonal relations, and quality of life and work. The outcome of interest in this study was not whether clients got fully well, but the lower bar of “clinically significant change,” defined quantitatively from the OQ-45 norms.

Like the Consumer Reports study, this study also found a dose–response relation between therapy sessions and improvement. In this case, the longer therapy continued, the more clients achieved clinically significant change. So just how much therapy did it take? It took 21 sessions, or about six months of weekly therapy, for 50 percent of clients to see clinically significant change. It took more than 40 sessions, almost a year of weekly therapy, for 75 percent to see clinically significant change.

Information from the surveys of clients and therapists turned out to be pretty spot on. Three independent data sources converge on similar time frames. Every client is different, and no one can predict how much therapy is enough for a specific person, but on average, clinically meaningful change begins around the six-month mark and grows from there. And while some people will get what they need with less therapy, others will need a good deal more.

This is consistent with what clinical theorists have been telling us for the better part of a century. It should come as no surprise. Nothing of deep and lasting value is cheap or easy, and changing oneself and the course of one’s life may be most valuable of all.

Consider what it takes to master any new and complex skill, say learning a language, playing a musical instrument, learning to ski, or becoming adept at carpentry. With six months of practice, you might attain beginner-level proficiency, at best. If someone promised to make you an expert in six months, you’d suspect they were selling snake oil. Meaningful personal development takes time and effort. Why would psychotherapy be any different?

The Evidence-Based Therapy Myth

You’d think that researchers seeking to advance scientific knowledge of psychotherapy would use this information to design psychotherapy research. The findings set general parameters around what makes most sense to study: for most people, six months of therapy is a starting point for meaningful change, and studies of therapies of 12 months or longer should be the norm. Clients wouldn’t be obligated to continue therapy longer than they needed or wanted for the sake of a research protocol, but if we were serious about studying therapies that could offer meaningful help to meaningful numbers of people, those would be reasonable starting parameters.

And yet this is not how most academic research is conducted, nor the message most people are getting about therapy. Norms regarding therapy duration now tend to be calibrated by the cost-cutting agenda of health insurance companies, and by academic research based on assumptions that don’t realistically reflect how psychotherapy works. In the psychotherapy research world, the lion’s share of research focuses on brief, fixed-duration therapies conducted by following step-by-step instruction manuals—known as manualized therapy. With the exception of a small number of longer-term treatments for borderline personality disorder, manualized therapies are almost always 16 sessions or fewer, and 8 to 12 sessions is typical.

Academic researchers initially referred to these instruction-manual therapies as empirically validated therapies (EVTs), then empirically supported therapies (ESTs), and currently evidence-based therapies. These terms, especially evidence-based therapy, are problematic because they’re misleading. People take them to mean that clients get well. That is not so. Rather, evidence-based therapy means the treatment is delivered in a standardized way by following an instruction manual, and it has been studied using specific research methods. It doesn’t refer to the percentage of people who get well, or how much they improve, or whether they themselves consider their therapy successful.

To sidestep the built-in confusion between the methods used to study these therapies and their benefits to clients, we’ll avoid the ambiguous term evidence-based therapy and instead refer to the treatments descriptively as instruction-manual therapies.

The Road to Progress

In the psychotherapy research world, studies using randomized controlled trial (RCT) research designs are given the highest stamp of approval. The method involves random assignment of research subjects to a treatment or control group (usually no intervention, or a placebo intervention not meant to target the client’s problems). The treatment must be conducted by following a manual to ensure it’s delivered in a uniform way by all therapists in the study. Researchers select subjects with a specific DSM diagnosis, say generalized anxiety disorder, major depressive disorder, or PTSD, and the disorder is the focus of the treatment. The desired outcome is defined by the researchers, not by clients or therapists, and is measured by scores on symptom checklists based on the DSM diagnosis.

This medicalized approach to pigeonholing client problems into a single psychiatric diagnostic category, and defining progress solely in terms of symptoms, allows psychotherapy to be compared with medications in RCTs and treated as if it were equivalent to taking a medication. Because it’s often considered sufficient to evaluate the effects of medication in a short 8- or 12-week span, the same assumption is applied to instruction-manual therapy.

Most research trials therefore study 8-to-12-week treatments. These abbreviated models of care are spreading in the mental health field. The website for the Society of Clinical Psychology, dedicated to raising public awareness of instruction-manual therapy, lists 85 approved treatments. Each is aimed at a single, specific DSM diagnosis, and most have a 12-session format, like Cognitive Processing Therapy (CPT) and Prolonged Exposure (PE) for PTSD.

Nowhere are instruction-manual therapies pushed more than in the US Department of Veterans Affairs, the largest provider of mental health services in the world. A few years ago, when officials from the Institute of Medicine showed up at the Bronx Veterans Affairs Medical Center to prod the staff to use instruction-manual therapies like PE, Rachel Yehuda, one of the psychologists in attendance, remembers being told, “Not offering PE for PTSD is like not offering insulin for diabetes.”

Brief, instruction-manual therapies are also fast becoming the standard of care for health insurance companies. Lyra Health, a behavioral health care company that provides mental health care for employees of big corporations like eBay and Amgen, boldly stated on its website that “[instruction-manual therapies] should be used as first-line intervention, since they offer the best prospect of helping clients with common mental health problems to feel better and recover. Using anything else risks exposing the client to a potentially harmful treatment and deprives the client of treatment that has been rigorously tested and proven to work.”

This brings us to the all-important question: Do brief, instruction-manual therapies offer meaningful help? Before digging into the research, let’s start with the perspective of a psychologist interviewed for this article who was trained to provide instruction-manual therapy to combat veterans in a large Midwestern city. We’ll use the pseudonym Dr. Cameron, because she feared her 10-year reputation in the VA could be tarnished if she identified herself. Asked to describe a typical case of PTSD she’d treated with instruction-manual therapy, she immediately recollected Ryan, a 30-year-old former combat engineer, who’d been deployed three times in the Middle East to detect and detonate roadside bombs.

Ryan diligently used his allotted 12 sessions of CPT to identify and correct his “stuck points,” or unreasonable automatic thoughts about a roadside bombing incident. Over time, he rewrote his story about what had happened (his “impact statement”) so he was less apt to judge himself harshly for not being able to prevent harm to fellow soldiers. When the 12 sessions were up, Dr. Cameron felt compelled to keep meeting with Ryan because he was still in severe distress. Walking to the store, he’d forget where he was because he was so caught up reliving awful combat memories. He felt like an alien being, a hollowed-out person disconnected from those around him. He badly wanted to feel like “part of society,” to “get along better with other people,” “be a better husband to my wife,” and “just feel alive again.”

Dr. Cameron believed Ryan’s difficulties were worsened by an insecure attachment style, which his military training had reinforced. “He’s so approach-avoidant, so uncomfortable being vulnerable and relying on anybody else,” she said. “His needs for independence, self-reliance, and being a ‘tough guy’ are exaggerated. He equates being emotional with being weak.” Midway through the interview Dr. Cameron remarked, “With clients like Ryan, the real story only comes out after about a year of weekly therapy sessions because, honestly speaking, it takes that long to establish the sort of trust traumatized vets need to start opening up.”

In Ryan’s case, it took a year of therapy before he broke down and cried, confessing to Dr. Cameron that he was often suicidal and that he kept his wife in the dark about how miserable he was. Dr. Cameron summed up her view of instruction-manual trauma treatments this way: “Many of us in the VA know they don’t fit the population because the cases we see are always more complicated. These treatments seldom, if ever, lead to real progress. Six months, a year, two years after they’re over, clients show up in our clinic again because they’re not doing well.”

It turns out that Ryan’s case is not unusual. Research published in the Journal of the American Medical Association by psychologist Maria Steenkamp and her associates raises eyebrows: two out of three veterans treated with so-called evidence-based therapies for PTSD still have PTSD after completing treatment.

Research findings likewise cast doubt on claims that instruction-manual therapies help depressed clients over the long haul. One of us (Jonathan) recently reviewed the research on instruction-manual therapies in a paper titled “Where Is the Evidence for ‘Evidence-Based’ Therapy?” Across research spanning 30 years, the percentage of depressed clients who got well and stayed well, even for brief follow-up periods of 12 to 18 months, hovered around 25 percent.

The American Psychological Association came to much the same conclusion, though you’d need to go to some length to find that out. In its recently published clinical practice guidelines for depression that recommend only brief, instruction-manual therapies, the following caveats are buried in the fine print of the 168-page document initially posted for public comment: “Overall, [instruction-manual] treatments for depression have a modest impact on alleviating symptoms.” “Evidenced best practices are not universally synonymous with the patient’s personal idea of treatment success,” resulting in treatment dropout and “the potential for life-long negative associations and treatment alienation.”

Buried still deeper is the jaw-dropper: “Four decades of psychotherapy research have shown that, after treatment completion, more than half of patients remain depressed, and of those who improve by the end of treatment, about 40 percent experience a relapse.”

A little arithmetic yields the following evidence-based conclusion. If roughly half of clients don’t improve, and about 40 percent of the half who do improve go on to relapse (another 20 percent of the total), then about 70 percent either never get better or quickly lose their gains. In other words, 7 out of 10 people who receive so-called evidence-based therapy for depression do not improve, or they relapse quickly.

The Master Clinician’s Secret

One reason brief instruction-manual therapies don’t really help with long-term improvement is that they gloss over the mix of attachment insecurities, personality quirks, and psychological defenses that make most clients in the everyday world of clinical practice anxious and depressed.

Limiting research to “pure” cases of a diagnosis and restricting the meaning of outcome to symptom checklists make for elegant experimental research designs but miss the complexity of the psychological difficulties most clients experience. It’s the rare client who sits down and lists off symptoms in lock-step with a DSM diagnosis. Instead, we hear about a person’s shame at, once again, losing his temper with his daughter; or confusion about a boyfriend’s possessiveness; or someone’s bedeviling attraction to unavailable women; or a proneness to slip and say off-putting things during intimate moments that spoil a potential for closeness; or any number of difficulties recognizing and expressing difficult emotions.

In fact, most problems therapists treat are embedded in, and inseparable from, personality—a person’s characteristic and enduring patterns of thinking, feeling, fantasizing, desiring, fearing, coping, defending, attaching, relating, and experiencing self and others. Research suggests that as many as half of therapy clients with primary diagnoses of common disorders like depression or generalized anxiety also meet formal DSM criteria for personality disorders or have subthreshold personality pathology. All have personality styles that bear on treatment and recovery.

This is the secret known to master clinicians: meaningful and lasting psychological change comes from focusing not on symptoms, but on the personality patterns that underlie them. This is not just clinical wisdom; it’s an empirical finding. According to research in the American Journal of Psychiatry, the personality issues most often reported by experienced clinicians as clinically significant concerns include problems with intimacy, relatedness, or commitment in close relationships; difficulty with assertiveness or expression of anger or aggression; problems with separation, abandonment, or rejection; problems with self-esteem (such as feeling inadequate or incompetent); problems with authority; shyness or difficulty getting close to people or making friends; and perfectionism or high self-criticism.

We suspect these therapy issues sound familiar to most therapists.

What’s a “Good” Outcome?

A basic flaw in the logic of instruction-manual therapies is the assumption that clients come to therapy primarily for symptom reduction. Often, they want something else and something more. Probe them on how they themselves define a good outcome and they refer to things like new ways of relating to others, improved self-understanding, improved self-worth, and self-acceptance. That was the conclusion of a recent study in the journal Psychotherapy Research, based on post-therapy interviews with clients. Narrowly defining progress in terms of symptom reduction, as most research on instruction-manual therapy does, not only ignores what clients want from therapy, but leads to false conclusions about treatment success.

In a newly published study, researchers at Ghent University in Belgium met with clients after they’d completed 16 to 20 sessions of therapy for major depression. About half the clients classified as improved or recovered, based on their Beck Depression Inventory-II scores, reported they needed more therapy: they hadn’t benefited in the ways they most wanted, such as making progress toward becoming the person they wanted to be, becoming more independent, tackling fears and challenges, gaining insight, or becoming more self-accepting. The authors offer words of caution for psychotherapists tempted to rely on scores on a symptom checklist as proof of progress: “‘Good outcome’ as defined based on the statistical interpretation of symptoms scales does not necessarily reflect good psychotherapy effects in patients’ experiences.”

What clients seek from therapy squares with the type of changes therapists desire for their clients. Psychologist David Orlinsky spearheaded an international survey of over 5,000 mental health professionals of diverse theoretical backgrounds who were asked to rank-order treatment goals they most wanted their clients to realize. The top of the list, by far, was “have a strong sense of self-worth and identity.” Next was “improve the quality of their relationships.” A close third was “understand their feelings, motives, and/or behavior,” and “integrate excluded or segregated aspects of experience” came after that. “Experience a decrease in their symptoms” ranked a distant fifth.

When we honor what patients want from psychotherapy and what expert therapists prioritize, the nature of therapy changes.

Dr. Cameron’s shrewd comment—“it takes a year of therapy for the real story to come out”—is not so off the mark. Just this week, Debra (a client of Enrico’s) blurted out, “I need to get serious in therapy.” A 40-year-old actress, she’s been in weekly therapy for about a year and a half. As with most clients, no single diagnosis captures her difficulties. She has intense abandonment fears, which make her possessive of her husband and fearful she’ll lose him. She wishes she were more assertive with overbearing bosses. She regrets being overly trusting with a string of business agents whose decisions benefited them financially and hurt her. She has a terrible fear of conflict and is overly “nice” to ensure that others don’t get mad at her.

Debra doubled down in therapy after her husband’s sudden announcement that he needed time apart. He thought temporarily moving out might give him space to rediscover his love for her. Truth be told, Debra had made strides in therapy: tapping into hurt and anger related to childhood emotional neglect, realizing that her fears of abandonment arose from umpteen experiences of her parents threatening divorce and running off for days after knock-down-drag-out fights, and generally gaining access to a range of feelings that allowed her to feel more alive and less anxious and depressed.

Debra’s epiphany was that she remained all too often emotionally withdrawn and overly nice with her husband. Fearing rejection, she made herself blandly agreeable, but this contributed to the emotional flat-lining of her marriage. Her announcement—“I need to get serious”—reflected a newfound commitment to make room for the full range of her feelings at home, both loving and not-so-loving ones.

In our experience, this sort of second wind in longer-term therapy, or access to deeper change processes and renewed energy to pursue life goals, is more common than not. Some research substantiates this. In a year-long study of community-based psychodynamic psychotherapy with 65 clients, Refael Yonatan-Leus at Hebrew University of Jerusalem and colleagues found that treatment becomes incrementally more effective as it progresses over time with experienced therapists. As therapy unfolds, clients learn to confidently expect understanding and sensitivity, which in turn fuel greater emotional truth-telling and unguarded disclosure.

Sadly, legions of clients don’t stick with therapy long enough to reap the benefits. Early dropout is a neglected topic in our field. A 2010 article in the American Journal of Psychiatry reported on a sample of 30,000 psychotherapy clients and found that nearly 40 percent had dropped out within the first two sessions, and 80 percent before attending 10 sessions. A mere nine percent attended 20 or more sessions. Within the VA system, psychotherapy dropout rates are staggering. A recent analysis of PTSD patients in VA clinics in New England by psychiatrist Bradley V. Watts showed that only two percent of veterans with PTSD get an “adequate dose” of psychotherapy, defined as just eight sessions or more.

It’s tempting to wonder whether these dropout rates reflect how poorly our cherished profession trains therapists to hone the human qualities that make meaningful therapy possible, qualities backed by mountains of research: empathy, understanding, genuine regard, and skill at building an alliance around the shared work of therapy. Clients need to know their therapist not only cares enough to listen, but knows how to listen carefully enough to hear what matters. Clients need plenty of space and time to tell their agonizing life stories in the nonlinear, scattershot way distressed humans are apt to do. Silences leave room for deeper feelings and realizations to bubble up. Subtle head nods, wry smiles, and knowing groans by the therapist are reminders to clients that they are being heard.

Nowadays, the scales are tilted toward beginning therapists learning short-term, technique-heavy, instruction-manual therapies that leave them itching to get clients “back on track” with the agenda in the treatment manual. When therapy is brief and follows a strict agenda—correcting negative thoughts, explaining symptoms, psychoeducating clients on their emotions—doesn’t it make sense that many clients will feel there’s little room to get to the heart of what’s truly eating away at them?

We worry scripted instruction-manual therapies are turning clients off, not on, to therapy, at a time when the dropout numbers are already too high. If the typical anxious and depressed client struggling with self-esteem and interpersonal issues is to stick with therapy for the time needed for real and lasting change, we need to hear the voices of those in the profession who emphasize training therapists to be better listeners. Scott Miller, a passionate scholar of “what works” in psychotherapy, pulls no punches here. “In psychotherapy, who provides the treatment is between five and nine times more important than what particular treatment approach is provided,” he says—which is why he counsels beginning therapists to work on relationship skills.

This is wise counsel. In our everyday practices, we therapists appreciate the need for our clients to use therapy to become less reactive and more personally and socially confident. We grasp the value our clients place on refraining from repeating tired old habits laid down in childhood and gaining an understanding of why they feel and act as they do. We’re often surprised, at first, by how naïve clients are about the length of therapy needed to make real progress in these areas. But there’s one thing most of us are not surprised by: when you put in the years of personal therapy and self-examination necessary to be good enough at embodying sustained empathy, genuine regard, and careful and caring listening, most clients keep coming back. Encountering someone who is patient, doesn’t leap in to provide answers and fix problems, and lets them untangle their issues at their own pace frees clients up to accept that real and lasting change takes time.

 

Resources

A bibliography of studies referenced in this article is available at psychotherapynetworker.org/shedler.

illustration: debrahardesty/illustrationsource.com

Jonathan Shedler

Jonathan Shedler, PhD, is known internationally as an author, consultant, and master clinician and teacher. His article “The Efficacy of Psychodynamic Psychotherapy” won worldwide acclaim for establishing psychodynamic therapy as an evidence-based treatment. He practices in San Francisco and provides consultation to clinicians around the globe.

Enrico Gnaulati

Enrico Gnaulati, PhD, is a clinical psychologist who’s been in private practice for more than 25 years and authored four books, including his latest, Flourishing Love: A Secular Guide to Lasting Intimate Relationships.