Is Psychotherapy Getting Better?


It’s known as feedback.

Instead of focusing on evidence about “better” treatments garnered from clinical trials, he says, psychotherapists should seek out systematic feedback about their own performance and case outcomes in the form of simple questionnaires that patients can fill out prior to each session. Miller is part of a group that’s developed the four-question Outcome Rating Scale (ORS), an easy-to-use instrument for tracking treatment progress, as well as a therapeutic alliance checklist that helps therapist and client make sure their goals for therapy are aligned. It’s his contention that those answers—about the client’s well-being and sense of progress—are a form of practical research that enables therapists to gauge how they’re doing, session by session, and patient by patient. By regularly and systematically putting that mirror up to themselves, psychotherapists can become aware immediately of clinical missteps and errors in judgment that might otherwise go undetected until a client dropped out or ended treatment without deriving the anticipated benefits.

Clinicians like Miller and Lambert, his graduate-school mentor and a pioneer in developing feedback measures, think that regularly assessing therapeutic progress is fundamental to helping clinicians steer their course in therapy more dependably. They believe such systems are essential because therapists too often rely solely on intuitive judgments about which interventions will work or when to alter treatment tactics, despite the well-established fact that intuition by itself is notoriously unreliable, even for veteran clinicians. Everyone needs outside norms, baselines, and reference points by which to double-check those judgments; feedback instruments provide them. At the same time, feedback helps the therapist align with the patient’s goals and then match treatments to them. It provides another tool for “listening” to the patient’s response style in treatment. If, for instance, your client doesn’t like feeling that he’s being “told” what to do, such feedback can give you the heads-up that you need to try a less directive approach.

A crucial consideration in improving overall levels of treatment success, according to Lambert, is spotting problems early in the process. “Therapists are overly optimistic about their ability to help patients, and they ignore, or even have a positive view about, people getting worse, in that they believe, erroneously, that in order to get better, you first have to get worse,” he explains. “When they see a patient getting worse, that doesn’t alarm them.” Lambert’s system doesn’t allow therapists not to be alarmed.

His Outcome Questionnaire (OQ) is longer than Miller’s (45 questions, measuring symptoms, relationship problems, and social-role function). Both the OQ and Miller’s measure can be scored electronically, and both are designed to send the therapist an alert when the measurements fall below a certain level. That serves as a wake-up call telling therapists to pay attention: treatment is off track, and they should reevaluate and modify what they’re doing. The OQ’s own track record is impressive. In eight studies (six of them published) so far, the failure rate of therapists using the OQ declined to 6 percent, compared with a failure rate of 21 percent among therapists not using the feedback measure. The briefer ORS has been tested in three major studies, and a recent meta-analysis completed by Lambert shows that both measures improve outcomes.
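To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of session-by-session scoring and alert logic an electronically scored measure like the OQ or ORS might rely on. The item structure, score range, cutoff, and deterioration rule below are illustrative assumptions made for the sketch, not the instruments’ actual scoring algorithms.

```python
# Hypothetical sketch of a feedback-alert system, loosely modeled on the idea
# behind electronically scored measures such as the ORS or OQ. The score range,
# cutoff, and deterioration rule are illustrative assumptions, not the
# instruments' real scoring rules.

from dataclasses import dataclass, field

ALERT_CUTOFF = 25        # assumed "clinical concern" threshold on a 0-40 scale
DETERIORATION_DROP = 5   # assumed drop from intake that should trigger a warning


@dataclass
class ClientRecord:
    name: str
    session_scores: list = field(default_factory=list)  # one total score per session

    def add_session(self, item_scores):
        """Sum the per-item ratings (e.g., four 0-10 items) into a session total."""
        total = sum(item_scores)
        self.session_scores.append(total)
        return total

    def check_alerts(self):
        """Return any wake-up-call messages for the most recent session."""
        alerts = []
        latest = self.session_scores[-1]
        intake = self.session_scores[0]
        if latest < ALERT_CUTOFF:
            alerts.append(f"{self.name}: latest score {latest} is below cutoff {ALERT_CUTOFF}")
        if latest <= intake - DETERIORATION_DROP:
            alerts.append(f"{self.name}: score has dropped {intake - latest} points since intake")
        return alerts


# Usage: record two sessions and see whether the case is flagged as off track.
client = ClientRecord("Client A")
client.add_session([7, 8, 6, 7])   # intake total: 28
client.add_session([5, 6, 4, 5])   # later session total: 20
for message in client.check_alerts():
    print("ALERT:", message)
```

In practice, systems like Lambert’s compare a client’s trajectory against norms built from large datasets of prior cases; the fixed thresholds here simply stand in for that comparison.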

Still, some clinicians remain skeptical that closely attending to clients’ feedback is the magic bullet that some of its advocates seem to claim. “One of the problems with the feedback-informed approach is that it too often seems to operate as a kind of customer-is-always-right model,” says William Doherty of the University of Minnesota. “But what if the complaints the client has about how therapy is going are a reflection of the very problem that brought him into therapy in the first place?”

The Context of Practice

If the question of the effectiveness of psychotherapy and how to increase it hinges on the debate between the evidence-based traditionalists and the feedback-informed insurgents, some believe the simplest resolution is, essentially, to split the difference and put aside an unproductive disagreement about whose type of research is superior. Such is the view of John Norcross, professor of psychology and distinguished university fellow at the University of Scranton. Why take sides or engage in polarizing arguments, he asks, about what’s more important: the treatment method or the therapist–client relationship? “Sensible people don’t have a debate on all this,” he says.

What is sensible, he believes, is to take the approach that, when it comes to treatment choices, different strokes work for different folks. Every type of therapy has its inadequacies and won’t work 100 percent of the time for 100 percent of people, he says. “There isn’t one method. There are multiple methods.” Whatever the method, what’s most important is to go beyond either/or debates. In fact, Norcross has coauthored a new book with Michael Lambert, Evidence-Based Therapy Relationships, in which the two deplore the “culture wars in psychotherapy” that pit polarized camps of evidence-based treatment champions against those who advocate the overarching importance of the therapist–client relationship. Such squabbles only distract from the shared goal of all, the authors say, which is “to provide the most efficacious psychological services to our patients.”

Norcross prefers to emphasize the importance of the two types of research in helping psychotherapy continue to progress. “It’s the mutual interplay between both of these—between practice and research—that’s leading to more effective and more efficient psychotherapy,” he notes. As he and Lambert write in their book: “Decades of psychotherapy research consistently attest that the patient, the therapist, their relationship, the treatment method, and the context all contribute to treatment success (and failure). . . . We should be looking at all of these determinants and their optimal combinations.”

But others believe that reducing the argument about psychotherapy’s effectiveness to a debate between two different research models ignores far more crucial considerations. Looking at the broader issues of how clinicians are trained and the incentives currently offered for therapists to further develop their skills, they insist that it’s important to grasp the everyday context in which most therapists practice. Overall success rates in our profession don’t appear to be improving despite all the new information coming into the field about the brain, mindfulness, and the mind-body connection, and all the research results regularly reported in the journals. To understand why, they say, you must grasp that this information isn’t being conveyed to therapists in ways that help them improve their actual performance with clients.

“In our field, there are model-specific skills—the procedures you need to learn to do EMDR or CBT or EFT,” says William Doherty. “We go to didactic workshops to keep up with new developments, but that’s done largely through lecture, with minimal opportunity to see how people actually employ these tools in their work. There are also the generic skills that cut across models that the research says are fundamental to helping build alliances with our clients and achieving good outcomes. But once you’re out of your initial grad-school training, how can you develop those skills? Peer consultation too often leads only to the discussion of cases at a theoretical level, or at the level of abstract strategy.” There’s also an isolation factor endemic to the field, he notes. “People don’t actually get to see each other’s work and learn about the nuances of dealing with the unpredictable things that happen in therapy. Most of us aren’t part of communities of practice in which the norm is close examination of what we actually do with our clients.”

Putting it more pointedly, professor emeritus Jay Efran of Temple University says, “How do you improve as a therapist? You can’t read how to do it in a book. When you think about how little real incentive there is in our field to improve our skills, it’s hard to escape the conclusion that, in some way, the attitude is it really doesn’t matter. Think about it. If you’re a surgeon, you’re regularly held accountable in a way therapists aren’t.”

Viewed in this way, the discussion about evidence-based practice versus what the feedback-informed advocates like to call “practice-based evidence” seems too narrow and rarefied, ignoring too much of the nitty-gritty reality of how most therapists ply their trade. Until we look more closely at the actual context of practice, it’s unlikely that psychotherapy will change markedly.

“Where are the incentives for improving our therapeutic outcomes, or even to become more aware of how we’re doing?” asks Doherty, echoing Efran’s point. “If you look at it broadly, most of us don’t practice in a context that offers a stimulating or effective learning environment for improving our skills. For most of us, therapy is a private art form, done behind closed doors in our solo practices or in group practices where there’s little coordination or shared discussion of the challenging cases we’re facing. I think too many therapists feel that there’s no real system around them. If this field is to do a better job of serving the clients who come to us, we need a much more radical solution than just having more clinicians do more evidence-based therapy.”
