This article’s title poses a challenging question, to be sure—one filled with ambiguity and open to multiple answers. Getting better than what? Getting better, in what way? Getting better, according to whom? And the real kicker: Can we get better—and how? But if these tough questions are to be asked, there would seem to be no more fitting occasion than this magazine’s 30th anniversary and the opportunity it provides to reflect on an era in the field of psychotherapy during which systematic efforts to quantify and measure the key factors in the psychotherapeutic process received more attention than ever before.
In a sense, the story of how to assess the effectiveness of therapy and how it might be improved began in 1952, 30 years before the first issue of this magazine appeared. In a classic paper that year, the outspoken British psychologist Hans Eysenck, a champion of behavior therapy and one of the field’s leading provocateurs, took on psychotherapy. A staunch believer in science, he’d later be the subject of bomb threats and publicly punched in the nose by a protestor for his controversial views on genetics and IQ differences. The paper that concerns us here, “The Effects of Psychotherapy: An Evaluation,” asserted there was no proof that psychotherapy worked. On the contrary, Eysenck claimed that surveys showed roughly two-thirds of patients suffering from clinical neuroses improved within two years, whether or not they were treated by a psychotherapist.
“In the absence of agreement between fact and belief,” he proclaimed, “there is urgent need for a decrease in the strength of belief, and for an increase in the number of facts available. Until such facts as may be discovered in a process of rigorous analysis support the prevalent belief in therapeutic effectiveness of psychological treatment, it seems premature to insist on the inclusion of training in such treatment in the curriculum of the clinical psychologist.” Eysenck concluded that the shortcomings of data “highlight the necessity of properly planned and executed experimental studies into this important field.”
The first blast had been fired, and it was up to the profession to answer the challenge. Whatever its origins in Freud’s grand speculations and couch-based methodology, psychotherapy’s modern quest for scientific legitimacy may be said to have begun here.
From the get-go, however, measurement issues loomed: How do you go about proving psychotherapy really is more effective than a placebo or more helpful than a friendly, sympathetic listener? How do you determine objective measures with which to identify, define, and quantify the variable, sometimes intangible-seeming factors that contribute to a successful course of treatment? Moreover, as you go about establishing the science of psychotherapy, what happens to the intuitive art of psychotherapy? Until recently, this divide between objective science and intuitive art, which characterizes the uneasy relationship between psychotherapy’s researchers and practitioners, has appeared unbridgeable.
A 2009 press release from the Association for Psychological Science, an organization formed by researchers who felt the American Psychological Association (APA) had become more of a guild than a scientific organization, carried the headline “Where’s the science? The sorry state of psychotherapy.” The release’s lead paragraph reads like an indictment: “The prevalence of mental health disorders in this country has nearly doubled in the past 20 years. Who is treating all these patients? Clinical psychologists and therapists are charged with the task, but many are falling short by using methods that are out of date and lack scientific rigor.” It goes on to cite a study suggesting that, in the absence of such scientific rigor, “six out of every seven sufferers were not getting the best care available from their clinicians.”
Ouch. Listen closely and you can hear the echo of Hans Eysenck and déjà vu all over again. So what is the science in psychotherapy? And why is it so important? Set aside obvious reasons of public trust and professional credibility, and, simply put, the quest for psychotherapeutic treatments that can be proven to work is especially important in the current era of accountability and third-party reimbursement. Today, more than ever, time is money, and measured results help determine the allotment of both time and money for patient services and for the professionals rendering those services. In other words, the stakes riding on the answer to the question of psychotherapy’s effectiveness could scarcely be higher.
The RCT Model
Until recently, the most “scientifically rigorous” answer to the question of determining effectiveness has been to adopt the model of research applied within the field of medicine—run randomized controlled trials (RCTs) to establish evidence-based (also called empirically supported or research-supported) treatments that work better than a placebo. Just as in medical research, the goal has been to match a particular treatment regimen to a particular disorder. Aaron Beck, the founder of Cognitive Therapy, had begun testing his systematic treatment for depression as far back as 1973. But clinical trials didn’t begin to become the driving force they are today until 1993, when the APA established a task force within its Society of Clinical Psychology to identify criteria for—and then review and provide a running list of—treatments that could be shown to be effective for a variety of diagnosable disorders found in the DSM–IV.
Before that task force could issue its first list of “empirically supported” treatments, however, a study appeared that, while lacking most essentials of gold-standard RCT research, powerfully established the effectiveness of psychotherapy in the public mind. In November 1995, a widely circulated survey of clients’ perceptions of their treatment, conducted by a team at Consumer Reports (CR), gave psychotherapy’s legitimacy a huge boost. According to a sample of 2,900, psychotherapy seemed to work, whatever Hans Eysenck once thought. CR polled 178,000 readers, of whom only 7,000 responded. Of them, fewer than 3,000 had consulted some type of mental health provider. Nonetheless, probably no single publication has so influenced the popular perception of the field. But what did it prove scientifically?
Commenting on the report in the December 1995 issue of American Psychologist, Martin E. P. Seligman, APA president and psychology professor at the University of Pennsylvania, provided his own verdict on what CR had to say:
The study is not without flaws, the chief one being the limited meaning of its answer to the question, “Can psychotherapy help?” This question has three possible kinds of answers. The first is that psychotherapy does better than something else, such as talking to friends, going to church, or doing nothing at all. Because it lacks comparison groups, the CR study only answers this question indirectly. The second possible answer is that psychotherapy returns people to normality or more liberally to within, say, two standard deviations of the average. The CR study, lacking an untroubled group and lacking measures of how people were before they became troubled, does not answer this question. The third answer is, “Do people have fewer symptoms and a better life after therapy than they did before?” This is the question that the CR study answers with a clear “yes.”
Even more important, Seligman concluded, “Consumer Reports has provided empirical validation of the effectiveness of psychotherapy. Prospective and diagnostically sophisticated surveys, combined with the well-normed and detailed assessment used in efficacy studies, would bolster this pioneering study. They would be expensive, but, in my opinion, very much worth doing.” In short, still more science was needed.
That was the mid-1990s, a time that had recently witnessed the ascendancy of psychopharmacology and managed care—features of the therapeutic landscape that have increased in strength and dominance to this day. Since then, expanded numbers of treatments have been deemed “well-established” after being tested in at least two randomized clinical trials, each originating with a different team of investigators. There are now treatments that have successfully run this research gauntlet and been approved for a wide range of DSM disorders.
Those who champion the “medical model” research approach believe it’s already demonstrated beyond question just how potent and reliable psychotherapy has become in relieving human distress. Steven Hollon, a professor of psychology at Vanderbilt University, comments that, 30 years ago, “we didn’t even know the names for some of the disorders” that can now be treated effectively. By comparison, today there’s “hard empirical evidence” that Cognitive-Behavioral Therapy (CBT), interpersonal psychotherapy, and behavioral activation work for a number of specific disorders. Indeed, he says, you can treat pretty much any nonpsychotic disorder just as well with psychosocial interventions as you can with medication, and the results will not only be as good, but provide broader benefits, such as the longer-lasting effects from therapies like CBT that help patients learn how to help themselves.
Dianne Chambless, professor of psychology and director of clinical training at the University of Pennsylvania, and the researcher who oversaw the initial APA Society of Clinical Psychology task force, also sees progress over the past decades. “I think we’ve really improved upon our ability to treat a number of different disorders, or have effective treatments” for them, including anxiety disorders and interventions for bipolar disorders, she says. “Our ability to treat all the anxiety disorders with CBT is leaps and bounds above what it was 30 years ago.”
What works, according to clinical studies? From the start, behavior therapies of different kinds (including Cognitive Therapy) have been, and continue to be, the most prominently represented types of therapy on the list of effective treatments. Dissenters have suggested that this is because they’ve had the longest history of testing, but advocates of evidence-based treatments say it’s because they work. University researchers and insurance companies looking for accountability have been among the staunchest of those advocates.
There’s also a connection to managed care. With insurers today reimbursing fewer sessions than they did 30 years ago, studies are now looking at treatment courses that are shorter than they used to be. Previously, “brief therapy” might have meant 24 to 30 sessions. Clinical trials today tend to focus on the impact of 4, 8, or 12 sessions. The good news: success is achieved in fewer sessions than in the past. Today, psychotherapy not only has more empirically proven treatments, but is demonstrably more efficient. We now know that “You don’t have to beat around the bush and first establish a good working relationship before getting to work,” says Hollon. “You just get to work.”
Effectiveness, efficiency, endurance: sounds like a winning trifecta. But not everyone agrees that evidence-based testing has provided the evidence its supporters say it has.
Rocking the Boat
In 2002, Bruce Wampold rocked the boat of clinical testing big-time with the publication of The Great Psychotherapy Debate. In it, this professor of counseling psychology at the University of Wisconsin delivered an in-depth critique of the medical model of psychotherapy, and did so not by the usual route of rejecting the very idea of protocol-driven therapy and appealing to notions that make therapists feel warm and fuzzy, like the need for “authenticity” and “connection,” but by invoking, of all things, statistics. Deploying an impressive array of sophisticated meta-analyses on hundreds of efficacy studies conducted through the decades, Wampold concluded that there was “little or no evidence that any one treatment for any particular disorder is better than another treatment intended to be therapeutic.”
What he did find evidence for is the importance of the alliance between client and therapist. That alliance, together with the therapist’s empathy for the client and the client’s positive expectation for treatment, constitutes the three key factors leading to a successful outcome, he said. In other words, it’s not the treatment or the theoretical model that makes the major difference. Rather, the characteristics of the therapist and the client, independent of specific treatment approaches, and the relationship factors mediating their connection have far more impact than the “treatment factors” themselves. Perhaps the old-fashioned clinicians who’d long stressed the human, nontechnical elements of the therapeutic encounter—what are often referred to as the “common factors” underlying effective therapy—were right after all.
The Great Psychotherapy Debate was the equivalent of a devastating body blow to those favoring the medical model of psychotherapy, with its emphasis on finding the right treatment for any given DSM disorder. By all means, follow the science, Wampold urged, but also make sure you know what the science is saying: the medical model may not be the best model for understanding how psychotherapy works. His research showed that the “medicine” of a particular psychotherapeutic treatment isn’t the transportable object that, say, a dose of penicillin is. Then again, neither can a skilled psychotherapist be mass-produced to deliver the key ingredients of empathy and therapeutic alliance that make the difference in what Wampold terms the “contextual model” of psychotherapy.
If, as Wampold asserted, treatment methods were a far less potent factor in influencing the effectiveness of psychotherapy than had been previously thought, what had all the research on empirically supported approaches actually proven? Did study after study showing a certain method’s superiority to a placebo necessarily mean that the field had gone as far as its proponents like Chambless and Hollon believed? Did the widespread conviction that psychotherapy had come a long way since the days of Hans Eysenck need to be reconsidered?
Making Therapists Better
Well, yes and no. Scott Miller, a founder of the International Center for Clinical Excellence, says, “I agree with the trend toward evidence-based practice. It’s just that what’s being advocated by some is decidedly not ‘evidence-based practice’—at least not according to the APA’s definition, or that of the Institute of Medicine.” He explains that, somehow, evidence-based practice has come to mean using a specific method for a specific diagnosis, when it actually means using approaches supported by the best evidence in a treatment informed by client preferences, cultural context, and feedback. He adds, “There’s no proof that clinicians who use particular models applied to specific diagnoses achieve better results. What’s more, available data indicate that real-world clinicians achieve the same degree of treatment success as those in randomized clinical trials. So what’s all the fuss about?”
Going beyond a critique of empirically supported treatments, Miller makes the larger point that, far from establishing the scientific bona fides of its methods, the psychotherapy community, in fact, hasn’t had the public health impact it promised three decades ago. He points out that, by most measures, the sheer number of people suffering from mental health problems is on the increase. Anyone who reads the headlines—high unemployment, poverty, returning veterans—knows those needs aren’t diminishing. Miller also wonders how it’s possible that a profession can amass so many new clinical techniques and approaches, and yet see a rise in the number of people suffering.
Most troubling of all, he says, is that, despite all the expansion of therapeutic knowledge and the enormous amount of research activity in the field, overall, the average positive impact of psychotherapy hasn’t changed since the first meta-analyses done in the 1970s. Somewhere between 66 and 75 percent of patients appear to improve from treatment. Nevertheless, Miller remains a believer in the effectiveness of therapy. “When you look at the statistics, the success rate for psychotherapy is on par with common medical procedures like coronary bypass surgery,” he says. Yet his reading of the research literature leaves him unconvinced that even a drastic increase in the number of therapists offering specific treatments for specific disorders would have any real effect on therapy’s success rate.
Miller is just one of a number of critics of traditional therapy outcome research who believe that evidence-based investigators too often have been studying the wrong things. Others include Wampold and Brigham Young University psychology professor Michael Lambert. What sets Miller apart is his crusader’s zeal for shaking up the status quo and his polemicist’s flair for presenting his argument in the most provocative light. He insists that to improve the quality of care in the field it’s not the treatments or protocols that need adjusting, it’s the attitudes and skills of individual practitioners that need to change. For psychotherapy to get “better,” he believes, therapists themselves need to get better at getting their clients better.
In making his case, Miller begins with what initially seems like an odd analogy to—of all things—airline safety. But fasten your seat belts, and let him explain. Over the past 35 years, he says, the number of American commercial airline crashes has decreased significantly, yet the technology of the planes themselves hasn’t changed dramatically. What’s changed is pilot training—to include a focus on safety and an emphasis on maintaining standards of professional excellence in individual performance.
What that has to do with psychotherapy is this: in contrast to airline pilots and members of other highly skilled professions, Miller says, in the field of psychotherapy “we’ve applied nothing that’s been learned from the literature on excellence to strengthening our expertise and skills.” Moreover, the pathway for psychotherapists to achieve excellence, he believes, isn’t illuminated particularly by clinical research studies documenting the efficacy of a specific treatment or technique. Instead, the way forward lies, first, in studying the traits shared by the most effective psychotherapists (in the same way that, say, the medical or management or legal professions seek to identify the traits associated with best practices in their fields), and then, getting others up to speed. This, he says, can be accomplished with evidence that comes not from clinical trials of specific protocols, but evidence generated by therapists themselves, based on their own strengths—and weaknesses.
It’s known as feedback.
Instead of focusing on evidence about “better” treatments garnered from clinical trials, he says, psychotherapists should seek out systematic feedback about their own performance and case outcomes in the form of simple questionnaires that patients can fill out prior to each session. Miller is part of a group that’s developed the four-question Outcome Rating Scale (ORS), an easy-to-use instrument for tracking treatment progress, as well as a therapeutic alliance checklist that helps therapist and client make sure their goals for therapy are aligned. It’s his contention that those answers—about the client’s well-being and sense of progress—are a form of practical research that enables therapists to gauge how they’re doing, session by session, and patient by patient. By regularly and systematically putting that mirror up to themselves, psychotherapists can become aware immediately of clinical missteps and errors in judgment that might otherwise go undetected until a client dropped out or ended treatment without deriving the anticipated benefits.
Clinicians like Miller and Lambert, his graduate-school mentor and the pioneer of developing feedback measures, think that regularly assessing therapeutic progress is fundamental to helping clinicians more dependably steer their course in therapy. They believe that such systems are essential because therapists too often rely solely on their intuitive judgments about which interventions will work or when to alter treatment tactics, despite the well-established fact that intuition by itself is notoriously unreliable, even for veteran clinicians. Everyone needs outside norms, baselines, and reference points by which to double-check those judgments; feedback instruments provide them. At the same time, feedback helps the therapist align with the client’s goals and then match treatments to them. It provides another tool for “listening” to the client’s response style in treatment. If, for instance, a client doesn’t like feeling that he’s being “told” what to do, such feedback can give you the heads-up that you need to try a less directive approach.
A crucial consideration in improving overall levels of treatment success, according to Lambert, is spotting problems early in the process. “Therapists are overly optimistic about their ability to help patients, and they ignore, or even have a positive view about, people getting worse, in that they believe, erroneously, that in order to get better, you first have to get worse,” he explains. “When they see a patient getting worse, that doesn’t alarm them.” Lambert’s system doesn’t allow therapists not to be alarmed.
His Outcome Questionnaire (OQ) is longer than Miller’s (45 questions, measuring symptoms, relationship problems, and social-role function). Both the OQ and Miller’s measure can be scored electronically, and are designed to send the therapist an alert when the measurements fall below a certain level. That serves as a wake-up call to tell therapists to pay attention, treatment is off track, and they should reevaluate and modify what they’re doing. The OQ’s own track record is impressive. In eight studies (six of them published) so far, the failure rate of therapists using the OQ declined to 6 percent, in comparison with a failure rate of 21 percent among therapists not using the feedback measure. The briefer ORS has been tested in three major studies, and a recent meta-analysis completed by Lambert shows that both measures improve outcomes.
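The mechanics behind such an alert can be pictured as a simple comparison between a client’s observed scores and an expected-improvement trajectory. The sketch below is purely illustrative: the curve, the cutoff values, and the function names are all invented for the example, and the actual OQ and ORS systems rely on empirically derived norms and expected-change trajectories rather than a formula like this one.

```python
# Illustrative sketch of threshold-based "off-track" alerting, loosely
# modeled on feedback systems like the OQ and ORS. All numbers here are
# invented; real instruments use empirically derived norms. Higher
# scores are assumed (hypothetically) to mean better functioning.

def expected_score(intake_score: float, session: int) -> float:
    """Hypothetical expected-improvement curve: gains that taper off
    as treatment progresses (diminishing returns per session)."""
    return intake_score + 10 * (1 - 0.8 ** session)

def off_track(intake_score: float, session: int, observed: float,
              tolerance: float = 5.0) -> bool:
    """Raise a 'wake-up call' when the observed score falls more than
    `tolerance` points below the expected trajectory for this session."""
    return observed < expected_score(intake_score, session) - tolerance

# A client who starts at 40 and is still at 40 by session 6 triggers an
# alert, because the expected curve has pulled well ahead of the
# observed score.
print(off_track(40, 6, 40))   # prints True
print(off_track(40, 6, 47))   # prints False: close enough to expected
```

The point of the sketch is the design choice rather than the numbers: once “expected progress” is written down explicitly, a failure to improve becomes something software can flag, rather than something a therapist’s optimism can explain away.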
Still, some clinicians remain skeptical that closely attending to clients’ feedback is the magic bullet that some of its advocates seem to claim. “One of the problems with the feedback-informed approach is that it too often seems to operate as a kind of customer-is-always-right model,” says William Doherty of the University of Minnesota. “But what if the complaints the client has about how therapy is going are a reflection of the very problem that brought him into therapy in the first place?”
The Context of Practice
If the question of the effectiveness of psychotherapy and how to increase it hinges on the debate between the evidence-based traditionalists and the feedback-informed insurgents, some believe the simplest resolution is, essentially, to split the difference and put aside an unproductive disagreement about whose type of research is superior. Such is the view of John Norcross, professor of psychology and distinguished university fellow at the University of Scranton. Why take sides or engage in polarizing arguments, he asks, about what’s more important: the treatment method or the therapist–client relationship? “Sensible people don’t have a debate on all this,” he says.
What is sensible, he believes, is to take the approach that, when it comes to treatment choices, different strokes work for different folks. Every type of therapy has its inadequacies and won’t work 100 percent of the time for 100 percent of people, he says. “There isn’t one method. There are multiple methods.” Whatever the method, what’s most important is to go beyond either/or debates. In fact, Norcross has coauthored a new book with Michael Lambert, Evidence-Based Therapy Relationships, in which the two deplore the “culture wars in psychotherapy” that pit polarized camps of evidence-based treatment champions against those who advocate the overarching importance of the therapist–client relationship. Such squabbles only distract from the shared goal of all, the authors say, which is “to provide the most efficacious psychological services to our patients.”
Norcross prefers to emphasize the importance of the two types of research in helping psychotherapy continue to progress. “It’s the mutual interplay between both of these—between practice and research—that’s leading to more effective and more efficient psychotherapy,” he notes. As he and Lambert write in their book: “Decades of psychotherapy research consistently attest that the patient, the therapist, their relationship, the treatment method, and the context all contribute to treatment success (and failure). . . . We should be looking at all of these determinants and their optimal combinations.”
But others believe that reducing the argument about psychotherapy’s effectiveness to a debate between two research models ignores far more crucial considerations. Looking at the broader issues of how clinicians are trained and the incentives currently offered for therapists to further develop their skills, they insist that it’s important to grasp the everyday context in which most therapists practice. Why don’t overall success rates in our profession appear to be improving, despite all the new information coming into the field about the brain, mindfulness, and the mind-body connection, and all the research results regularly reported in the journals? Because, they say, this information isn’t being conveyed to therapists in ways that help them improve their actual performance with clients.
“In our field, there are model-specific skills—the procedures you need to learn to do EMDR or CBT or EFT,” says William Doherty. “We go to didactic workshops to keep up with new developments, but that’s done largely through lecture, with minimal opportunity to see how people actually employ these tools in their work. There are also the generic skills that cut across models that the research says are fundamental to helping build alliances with our clients and achieving good outcomes. But once you’re out of your initial grad-school training, how can you develop those skills? Peer consultation too often leads only to the discussion of cases at a theoretical level, or at the level of abstract strategy.” There’s also an isolation factor endemic to the field, he notes. “People don’t actually get to see each other’s work and learn about the nuances of dealing with the unpredictable things that happen in therapy. Most of us aren’t part of communities of practice in which the norm is close examination of what we actually do with our clients.”
Putting it more pointedly, professor emeritus Jay Efran of Temple University says, “How do you improve as a therapist? You can’t read how to do it in a book. When you think about how little real incentive there is in our field to improve our skills, it’s hard to escape the conclusion that, in some way, the attitude is it really doesn’t matter. Think about it. If you’re a surgeon, you’re regularly held accountable in a way therapists aren’t.”
Viewed in this way, the discussion about evidence-based practice versus what the feedback-informed advocates like to call “practice-based evidence” seems too narrow and rarefied, ignoring too much of the nitty-gritty reality of how most therapists ply their trade. Until we look more closely at the actual context of practice, it’s unlikely that psychotherapy will change markedly.
“Where are the incentives for improving our therapeutic outcomes, or even to become more aware of how we’re doing?” asks Doherty, echoing Efran’s point. “If you look at it broadly, most of us don’t practice in a context that offers a stimulating or effective learning environment for improving our skills. For most of us, therapy is a private art form, done behind closed doors in our solo practices or in group practices where there’s little coordination or shared discussion of the challenging cases we’re facing. I think too many therapists feel that there’s no real system around them. If this field is to do a better job of serving the clients who come to us, we need a much more radical solution than just having more clinicians do more evidence-based therapy.”
In fact, some believe that the innovations most likely to influence the future of the field may come not so much from theoreticians, clinical innovators, or psychotherapy researchers, but from advances that make it easier for therapists to learn and master their craft. “We don’t need some great new therapeutic breakthroughs to make great strides in improving the quality of our outcomes as a profession,” says Susanne Bargmann, a Danish psychotherapist and trainer who helped create the International Center for Clinical Excellence (ICCE), the world’s largest web-based therapist learning community—currently used by 4,000 psychotherapists—dedicated to improving the standards of practice in the field. “We could raise the level of our clinical work enormously if we simply took more time to review our cases, especially when we’re stuck, and got concrete help when we made mistakes or had questions,” she says. “A big part of the barrier to doing that is one of attitude. Right now, too many therapists think that what they already do is perfect, or else that it’s too dangerous to acknowledge your clinical shortcomings. But, actually, the only time you ever learn something new is when you make a mistake.”
There are many resources available on the ICCE website that Bargmann developed with Scott Miller and others, but the centerpiece is the Fit-Outcomes management system, a specially programmed database of more than 100,000 cases with which practitioners can determine whether their current cases are on track. Including both outcome measures and session-by-session feedback scores, the cases loaded onto the site are categorized by their location on the spectrum from clinical failure to success. To determine whether their current cases are headed toward positive or negative outcomes, members compare them with those in the database by entering their own session-by-session feedback measures for the therapeutic alliance and overall client functioning. Beginning with the second session (so that patterns of progress, stagnation, or decline can be determined from the get-go), therapists can see whether a case is likely to have a positive or negative conclusion. When a case is progressing, a green lamp lights up, while no progress is indicated by a yellow lamp. A red lamp is the attention-getting symbol indicating that a case seems headed for an unsuccessful outcome.
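In effect, the three lamps classify each case’s trajectory against normative bands. A toy version of that classification logic follows; the thresholds are invented for illustration, since the actual Fit-Outcomes system derives its bands from its database of more than 100,000 prior cases rather than from fixed cutoffs like these.

```python
# Toy version of a green/yellow/red progress display of the kind the
# Fit-Outcomes system is described as using. The +/-5 thresholds are
# invented; the real system compares a case against session-by-session
# norms drawn from its database of prior cases.

def progress_lamp(change_since_intake: float) -> str:
    """Map a client's change score onto a traffic-light status."""
    if change_since_intake >= 5:       # clear improvement: on track
        return "green"
    if change_since_intake > -5:       # stagnation: neither better nor worse
        return "yellow"
    return "red"                       # deterioration: likely headed for failure

print(progress_lamp(8), progress_lamp(0), progress_lamp(-7))
# prints: green yellow red
```

However the bands are derived, the design idea is the same: reduce a complex clinical trajectory to a signal simple enough that a busy practitioner cannot fail to notice a case in trouble.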
Of course, the system is far from foolproof and offers no crystal ball. But it provides the kind of normative data previously unavailable to most clinicians, data that practitioners can use to chart the course of cases, especially challenging ones, adding a new dimension to their tool kit. Beyond the color-coded alerts that lend a sense of accountability and urgency to cases in trouble, a key element of the Fit-Outcomes management system is the opportunity for members to send posts to the community of fellow practitioners to share frustrations, ask questions, and get new ideas any time of the day or night. Wherever you are on the planet, no matter how geographically remote, you can ask for help with challenging clients through ICCE. “Being a solo practitioner can be very isolating,” says Australian psychologist and ICCE member Vanessa Spiller. “Having a supportive, like-minded community in which I can ask questions and present ideas and thoughts, and have people critically review these, has been very helpful. It’s been great to be able to access this ‘oasis of international expertise,’ providing me with peers willing to critically review my work, identify some of my unquestioned assumptions, and make specific suggestions for changes I can implement and objectively evaluate.”
Like the ICCE system, a database called the Systemic Therapy Inventory for Change (STIC) being developed by William Pinsof of the Family Institute at Northwestern University tracks the session-by-session progress of therapy. But instead of relatively simple questionnaires, the STIC features an initial assessment of 30 different personality, behavior, and relationship dimensions and collects information not only on individual clients, but on every participant in couples or family therapy. Because it's scored and displayed on a computer, with easy-to-read graphics, it can be completed relatively quickly. Clients fill out STIC measures throughout treatment, and the results are e-mailed directly to therapists, who can consult them before a session and get an instant sense of what's happened between sessions and whether a case is progressing. The STIC doesn't dictate a particular intervention, but it gives the therapist information that might otherwise be missed, especially about potentially damaging ruptures that have taken place in the alliance with clients.
"The therapist may have pushed too hard or responded in a way that a client didn't perceive as empathic in a previous session," says Pinsof. "Typically, the client might not say anything about this, but the STIC gives the therapist a way of finding out that, for example, a client's trust in the therapist declined in the last session. Knowing this beforehand enables the therapist to point to the STIC scores and say, 'It looks like something happened between us last time' and bring that into the therapeutic conversation." Used as a tool for monitoring and repairing the therapeutic relationship in this way, Pinsof believes, the STIC amplifies the client's voice and equips the clinician with a kind of sixth sense, helping to overcome the blind spots that are inevitable in every human relationship. He insists that the STIC deepens the therapy process and empowers both therapists and clients, rather than taking away therapists' autonomy the way a therapeutic protocol can.
Technology and the Future
While technology often is seen as a depersonalizing force in our lives, some are beginning to argue that the digital revolution may be the primary means by which the standard of care within the field will rise to a new level. The expanded video capabilities of the Web are already opening up learning and training opportunities that can help therapists further develop their clinical skills. Cognitive therapy pioneer Donald Meichenbaum is developing a website that'll offer video demonstrations of what he considers the core skills required for effective practice, along with assessment instruments to determine which of those skills a given practitioner might need to improve. The Networker regularly broadcasts video interviews with accomplished experts on a range of clinical topics, like couples therapy, trauma, and mindfulness practice, that focus directly on the fundamentals of clinical craft too often ignored in therapists' training. Its goal is to radically expand the range of observable clinical role models available to practitioners around the world. The ease of video recording today makes it possible for clinicians to conveniently review their own sessions, either with colleagues or supervisors, and zero in on the nuances of session management and intervention that go beyond generalized discussion. This provides the kind of immediate feedback that the literature on human performance and mastery tells us is necessary to change behavior and enhance skill.
Some of the new digital systems being developed offer possibilities that might have seemed like science fiction just a few years ago. For example, Pinsof is now working on an adjunct to STIC, called the Integrative Therapy Session Report, which will gather measures of therapists’ techniques session by session. “After a session, we ask therapists to detail what specific interventions and client strategies they used. This additional information is integrated with the ordinary STIC data to show how the progress of a case—or lack of it—can be related to the therapist’s interventions. As more data are collected, this will give us a road map of how a broad range of clients with all kinds of characteristics responded to different kinds of interventions at different stages of the therapy process. So when a therapist is stuck with a particular client or couple or family, she’ll be able to see how a sample of thousands of past clients with matching characteristics responded to various treatment options.” Pinsof likens his feedback system to an X-ray, blood analysis, or MRI in medicine, and considers his feedback and reporting instruments as sources of vital information that, one day, will be part of every therapist’s essential tool kit, ensuring greater accountability and better care.
So it appears that whether therapy progresses to a new level of effectiveness may be determined not by some game-changing discovery of new methods, but by whether we can overcome our time-honored distrust of the very concept of "research." "Most therapists today see research as something that's intimidating and controlling, and don't believe they can integrate scientific data into their work without compromising their clinical intuition and judgment," says Pinsof. But as technology makes it possible to make immediate, practical use of data in managing the ongoing therapeutic relationship, therapists will be increasingly encouraged to become more discerning investigators of their own practices and more attuned collaborators with their clients, especially when things are looking grim or uncertain in the consulting room.
As 21st-century technology increasingly makes itself felt in psychotherapy, it seems that the pathway toward enhanced effectiveness will require the field's practitioners to bring together knowledge domains often treated as distinct. Regardless of how high-tech and data-driven some of the tools practitioners use in their pursuit of clinical excellence become, however, the therapist's demanding, evolving craft will remain one in which both art and science are inextricably intertwined.
Illustration © Richard Tuschman
Diane Cole is the author of the memoir After Great Pain: A New Life Emerges and writes for The Wall Street Journal and many other publications.