Researchers like Michael Lambert and the late Ken Howard have shown that therapists who receive structured feedback from or about clients after each session have fewer premature terminations and stronger therapeutic alliances—factors that generally lead to more successful therapy outcomes. To assist this process, clinician–researchers such as Lambert, Scott Miller, and Barry Duncan have developed short questionnaires for clients to complete, providing quick snapshots of therapeutic progress at every session. But the brevity of these questionnaires may also be a limitation.

Such forms focus on one or two dimensions of therapy—the alliance and/or how clients assess their own progress—mere snippets of one client’s subjective experience. Longer forms could give therapists much more information about what’s really happening in therapy. However, since few therapists take the time to use even the short forms already available, preferring to rely on their own instincts, perceptions, and emotionally charged clinical moments, it seems a stretch to suppose they’d welcome longer questionnaires.

What if there were a multidimensional feedback system that took clients just five to seven minutes to fill out, covered more of the complexities of therapy, had a systemic perspective, fed the information to therapists in user-friendly graphs on the Internet that required just a couple of minutes to examine, and revealed information not always apparent in sessions? Family therapist and researcher William Pinsof, president of The Family Institute at Northwestern University in Evanston, Illinois, thinks he’s found just that, which he calls a real game-changer for therapy. He and colleagues have developed the Systemic Therapy Inventory of Change (STIC), which is filled out by clients online, allowing clinicians to quickly and efficiently tap into a mother lode of information about a session, the client, and the client’s family. It’s especially useful for couples and family therapists, who can see at a glance what’s happening with each client and what’s going on among family members from each individual’s different perspective. “STIC tells you about the therapist–client alliance, but also gives a complex picture of where the trouble is and where the strengths are,” says Pinsof. “Other feedback measures mostly just give you an idea of how clients feel and how they think the therapist is doing.”

Let’s say a depressed wife has a joyous breakthrough in therapy, but in going over the data before the next session, the therapist sees that the husband’s self-ratings of satisfaction with the therapist took a nosedive in the same session. Or, curiously, their ratings of how close they feel toward each other, which had been approaching congruence during the last few sessions, suddenly diverged after her breakthrough: the wife feels closer, but the husband now feels more distant. When there’s some kind of anomalous spike on the graphs such as this, Pinsof will often just show the information to clients and ask what they think explains it, which helps create a process of collaborative hypothesizing with clients.

Family Institute therapists have already begun using STIC. The Institute has bought 20 iPads and mounted them in therapists’ waiting areas. Institute clinicians go to the STIC website (access isn’t available to the public), and look at their clients’ graphs before their sessions. “It gives you so much data, it’s like having MRIs of clients’ brains,” says Doug Breunlin, the Director of the Institute’s Marriage and Family Therapy Program at Northwestern.

With a large new grant, Pinsof is rolling out a multisite, multicountry research program to further test STIC with more than 400 therapists and more than 4,000 cases. He says that the resulting database will add another useful dimension to STIC by allowing therapists to see how other therapists around the world intervened in cases with similar dynamics.

For therapists who are used to navigating by their own observations and intuitions, the prospect of using STIC may seem at first like being asked to wear a straitjacket while swimming in a deep ocean of information. They may feel they’re being compelled to ignore clinical hunches while they “read” multiple graphs and factor in dozens of variables before making a move. But therapists can use their intuitions to home in on which graphs to examine. Instead of constricting therapy, Pinsof says, “STIC gives therapists more clinically relevant information than they’ve ever had access to before.” Speaking as both a therapist and a researcher, he adds, “I think it offers a glimpse of what the more science-based therapy practice of the future will be like.”

Therapy Skill and Clinical Research

Even a clinical model with solid empirical support for its effectiveness is only as good as the therapist practicing it. You can train therapists to follow treatment manuals and protocols more or less accurately, but how well they deliver the treatment is often what makes the difference between success, mediocrity, and even failure. Therapy researchers describe the degree of precision with which practitioners follow the prescribed steps of a model as “adherence,” and refer to the skill with which therapists practice the model as “fidelity.” Thus the correct intervention, delivered awkwardly, incompletely, or at the wrong moment, can score high on adherence and low on fidelity.

The skill or fidelity with which a model—even a well-researched and demonstrably effective one—is used can have a major impact on whether the model makes it into the big leagues of widespread practice or languishes at the margins. And the success or failure of make-or-break trials of an approach may hinge on circumstances surrounding the trial that have nothing to do with the inherent worth of the model itself.

Case in point: in the last 30 years, Brief Strategic Family Therapy (BSFT) has acquired solid empirical support for its effectiveness with substance-abusing adolescents and their families, especially in the Hispanic community. But the intricacy of the model requires considerable training for therapists and an investment of agency time, so cash-strapped community agencies—the likeliest venues for treating inner-city, substance-abusing adolescents—need more dramatic proof before committing to it. Therefore, when one of its primary developers, University of Miami psychologist Jose Szapocznik, landed a major grant from the National Institute on Drug Abuse (NIDA) to measure BSFT’s effectiveness against agency treatment-as-usual with 480 families in eight community agencies in the mainland United States and Puerto Rico, he hoped the results would convince agencies to use the approach more often. However, the study didn’t live up to expectations, although several of the outcomes were slightly better than those of the usual treatments. What went wrong may reveal more about the kinds of research problems that block widespread adoption of complex treatment approaches than about the qualities of BSFT.

The researchers ran into trouble from the outset. The grant was conducted at community agencies that work with NIDA’s Clinical Trial Network. This network supplied the therapists, who, typically, were already providing clinical services at the agencies. “Normally when we go into an agency and train therapists,” Szapocznik says, “we choose people who already have experience working with adolescents or families, and who have a certain level of clinical skill.” This time, the researchers were told which therapists to train. Because of the limited pool of 79 therapists offered for the study, Szapocznik’s team could exclude only 2. It wasn’t that the therapists selected were bad, but family therapy in general, and particularly BSFT, requires an orientation and set of observational and interactional skills that not only take substantial time to acquire, but frequently run counter to the previous training of clinicians who haven’t practiced family therapy.

The study design influenced the outcome in another way. No-shows are a common occurrence in therapy with poorer urban families. “In every evidence-based family therapy model,” Szapocznik points out, “to achieve the best results, it’s important to take hold of a family and not let them go.” BSFT therapists work hard to bring all relevant family members to sessions, sometimes using their free time between sessions for calls and reminders. However, the therapists in the study were under agency pressure to fill their “empty” hours with billable tasks. The result was that 66 percent of the therapy sessions lacked at least one key family member—not because the therapists didn’t know the model required full attendance, but because circumstances militated against strict adherence to BSFT principles.

Still, Szapocznik’s analysis of his therapists’ adherence found that, on a 5-point scale, 90 percent of the BSFT therapists were in the acceptable 3- to 4-point range. Three is considered minimum adherence. What really undermined results, however, was the lack of clinical competency in the way the model was delivered. Training therapists who have no family therapy background in BSFT “is like teaching someone to ice skate,” says Szapocznik. “You can train them to skate, but you can’t expect triple axels.”

A subsequent analysis of a subset of the BSFT cases by independent researchers Michael Rohrbaugh and Varda Shoham, who are on the faculty of the University of Arizona, suggests that the fidelity of the BSFT therapists was considerably below the quality Szapocznik would have wanted. Among the most common fidelity-related omissions were therapists’ failure to engage key family members or to treat communications and interactions in terms of three people—both of which are cornerstones of family therapy.

Even given the difficulties with adherence and fidelity, BSFT still outperformed treatment-as-usual. Szapocznik points out that therapists with the highest adherence ratings had the best outcomes, perhaps the most positive finding for BSFT’s effectiveness in the study. Meanwhile, Szapocznik is conducting a four- to five-year follow-up study, hoping that BSFT’s gains will prove more durable than those of the usual agency treatments. And better days may be ahead for family therapy and clinical trials. The National Institute of Mental Health has set aside approximately $1.5 million in grants this year to study and improve treatment fidelity. Some of the studies developed from these grants may lead to suggestions for how to ensure that therapists participating in clinical trials genuinely have the skills to correctly follow the model under investigation.

Chasing Therapy “Facts”

Despite the millions of dollars spent on research, psychotherapy will always have one foot on the bedrock of science and the other in the loam of subjective truth. Therapy has always been both enmeshed with and an arbiter of the shifting landscape of cultural values and ideas.

Remember when most therapists “knew” that it’s always healthy to express and vent emotions? That notion has now been convincingly disproven, but in the buttoned-down 1950s, it may indeed have been healthier, even necessary, for people to free up and express their emotions. Did the shift in social attitudes toward more acceptance of emotional expression perhaps come about in part because legions of past therapists helped make emotional catharsis a mainstream ideal? Other widely believed “facts” among the public and therapists that are unsupported by research include the beliefs that there’s a common pattern to working through grieving, that men and women are biologically engineered to relate and communicate differently, and that extramarital sex always signals intimacy problems in marriages.

Unlike laws and empirically derived theories in the physical sciences, psychotherapeutic “truths” can change with bewildering speed. This has been especially true during the digital information age, when ideas and values mutate and propagate with unprecedented frequency.

In the May 2011 Smithsonian Magazine, science journalist James Gleick discusses memes—units of information or ideas, not necessarily factual but assumed true, that spread like viruses, infiltrating the metaphorical DNA of our perspectives and discourses. Many float in and out of therapists’ offices. Years ago, psychologist Martin Seligman pointed out that the belief that childhood events have a greater influence on our personality development than adult events doesn’t have nearly enough empirical evidence to support its entrenched position in therapy, yet the belief persists. We once “knew” that our ways of being in the world were determined by the struggles between id, ego, and superego. Later we believed our thought patterns created, influenced, and could ultimately solve our difficulties. Now we believe that our problems derive from unfortunate genetic or neurotransmitter configurations, underregulated amygdalas, and overdeveloped prefrontal cortexes.

Everyone has a need to create paradigms and then assume they are truths. In The Belief Instinct, social psychologist Jesse Bering points out that the brain is biologically predisposed to make sense of things—to find order and meaning. We can’t avoid creating meaning. Unfortunately, as media philosopher Marshall McLuhan once pointed out, we also can never totally understand the environment of which we’re a part.

In his 1997 book Cultures of Healing, psychologist Robert Fancher, deconstructing psychoanalytic therapy, Cognitive-Behavioral Therapy, and biological psychiatry, insisted that their explanations of personality and the process of change were philosophical, not scientific. In a 2010 interview with David Van Nuys, Fancher broadened his notion to include other therapies. Therapists, he said, “tend to find something that makes us feel secure, and then we socialize patients into it.” We ought not to delude ourselves or our clients, he insists, that we “know” all about them and how they can change.

In his essay, “The Autobiography of a Theory,” psychologist George Kelly described beginning to feel uncomfortable with his Freudian “insights.” So he started fabricating new ones for clients, some of them “preposterous.” His only criteria for the insights: “that the explanation account for the crucial facts as the client saw them, and that it carry implications for approaching the future in a different way.” Many of the explanations, including the preposterous ones, worked surprisingly well. Kelly noted that his experiment shouldn’t justify an “anything goes” philosophy, but that both therapists and clients should beware of allowing their own “orthodoxies” to constrict therapy.

What should therapists do, given that we often—Fancher would say fundamentally—traffic in ideas masquerading as facts? The solution may lie in having a coherent but flexible vision of therapy; perhaps adopting the same stance toward therapy that therapy students are taught in multiculturalism classes. At one time, such classes focused on defining the characteristics of different groups, a practice that closed off curiosity and inquiry and engendered stereotyping. Now students are taught to be acutely aware of their own assumptions, biases, and orthodoxies, not to mistake them for facts, and to be willing and able to step outside what they assume they know to learn from their clients.


Family Therapy:

For an overview of BSFT and adherence difficulties, see Journal of Consulting and Clinical Psychology 79, no. 1 (February 2011): 43-53.

Therapy “Facts”:

For Fancher interview, see

For the Kelly essay, see Clinical Psychology and Personality: The Selected Papers of George Kelly, Brendan Maher, ed. New York: Wiley, 1969.

Illustration © Ralph Butler

Garry Cooper

Garry Cooper, LCSW, is a therapist in Oak Park, Illinois.

