Can AI Make Us Better Therapists?

Using New Technology for Supervision

Magazine Issue
January/February 2024

Recently, sitting across the table from my friend and colleague April at our favorite Filipino restaurant, I listened to her worry about all the reasons artificial intelligence (AI) was bad for our profession. Will people really turn to a human if a “humanly enough” chatbot is available? Will human-to-human therapy become an experience available only to a privileged few? What new restrictions and expectations will insurance companies put in place to dictate how we work?

I get where she’s coming from. Having become a therapist after more than 20 years of working in IT, I’m familiar with how advances in technology can bring unexpected consequences. If you’ve played around with some of the new AI tools available, you know they’re eerily good at generating content. Ask ChatGPT to create a treatment plan for a new college student suffering from anxiety, and it will hit all the right (if somewhat basic) targets. Enter some rudimentary information about a therapy session and ask it to create a SOAP note, and you’ll likely be surprised at the quality. An early-career therapist I spoke with even acknowledged (a little sheepishly) that she’s found AI useful for generating treatment-planning ideas and for taking some of the intimidation out of learning documentation. “I can just enter a little bit of information, and it comes back with all these cool ideas that I can handpick from.”

Companies are already capitalizing on these new opportunities. AutoNotes offers clinical language and structure for treatment plans and case notes. Mentalyc transcribes the audio of an uploaded session and generates documentation for you, all housed in a HIPAA-compliant environment. Limbic Access integrates AI chatbots to triage clients, with real people overseeing the process. The Deliberate AI platform tracks client change over time by gathering and analyzing biomarkers.

As a faculty member teaching marriage and family therapy students in the classroom and a supervisor supporting and advising beginning therapists in their clinical work, I, unlike April, am excited about the possibilities of AI. I think we can teach students to use these new tools ethically: not to replace our work as therapists, but to support it, just as we might incorporate any new technology into the creative work of therapy.

In fact, as April and I were debating the merits and pitfalls of AI in the therapy world, I began to imagine how it could really help a mid-career therapist like her. She’s been fully licensed for seven years and works in a small group private practice. She complies with all regulatory board requirements and faithfully attends to her professional development, earning the required CEUs and training in new techniques she can incorporate into her practice. Twice a month, she joins her two-hour consult group, where she and three colleagues take turns presenting challenging cases.

But what about all the clients who never get presented in a case consultation? Once we’re no longer operating under supervision, we have few opportunities to get detailed feedback on all our clients. Seeking out professional opinions takes concerted energy and time, both of which are in short supply for many busy practitioners. What if AI offered a virtual consulting tool that could learn what April is doing in the room with clients and then give her feedback? In addition to basic statistics about how often she uses paraphrasing, reflecting, or other counseling skills, it could offer conceptual ideas about what she might do differently with a particular client (which she could, of course, bring to her consult group or choose to ignore).

Mentalyc is already tracking speaking time versus silence in session, and Lyssn has been shown to detect empathy from audio recordings. Personally, I’d be interested to see how adherent I am to my primary theoretical approaches, or how much I integrate other techniques. I like to think I already pay close attention to these details, but it doesn’t hurt to have another listening ear. A blind spot is a blind spot, after all.

“But what about the underlying source of all that content?” April countered at our dinner. “What Silicon Valley dude is deciding what material represents the type of ‘good’ therapy we should be doing? Or what machine is deciding? And, I mean, how good could it really be at analyzing therapy?” I understand her concern. Bias in AI is a considerable problem, meriting a careful look at how companies are curating source data and what steps they’re taking to avoid perpetuating damaging stereotypes.

“Before incorporating AI into my work in a serious and consistent way, I expect companies to articulate how, exactly, they’re addressing representational harms,” I told her.

As we wrapped up our remaining adobo chicken and lumpia to go, April and I exchanged a look of agreement: we both have a ways to go in understanding, implementing, and accepting AI in our offices. When I consider the complexity of subtle verbal intonation and nonverbal cues that make up some of the artistry of our work, I admit to moments of skepticism that technology can grasp all that nuance. And that’s to say nothing of the exponentially more complicated dynamics at play when you’re meeting with couples and families. But I’m not afraid of the challenge, and I’m not afraid of the success either. Call me naive, but I think AI is here to stay, and I’m down for that.

Photo by ThisIsEngineering/Pexels

Heather Hessel

Heather Hessel, PhD, LMFT, is an assistant professor in the Department of Counseling, Rehabilitation, and Human Services at the University of Wisconsin-Stout. She has an active research agenda focusing on emerging adulthood, extended family relationships, and the intersection of technology and clinical work. Her clinical practice includes working with individuals, couples, and families.