Ready or Not, AI Is Here

What If Therapy Bots Become Too Good?

Magazine Issue: January/February 2024
Illustrations by Emmastock/Phonlamaiphoto

Holding the wineglass between my fingers, I sniffed what resembled a neon sports drink, unsure if I was supposed to be picking up citrus notes or not. “Is orange wine made with oranges?” I asked.

“No,” my friend Joe said, “white grapes.”

“Why is it orange?”

“Try it,” he said.

“It kinda smells like oranges.”

“No, it doesn’t. Try it,” he insisted. Joe is always on the cutting edge of everything. He’s sophisticated. When something starts trending, you can be sure he’s already been trying it out for months.

I took a sip. “Wow!” My reaction, admittedly, was a bit embellished.

His eyes lit up. “Right?”

Joe seemed pleased as we stood on my back deck. The air had a muggy, warm texture, as tends to happen in late spring here in Nashville, Tennessee. He walked over to the nearby table and sat down. Before leaning over to pull off his snakeskin cowboy boots, he set his Leica 400 film camera on the table: a beautiful machine that cost more than my car. He has it with him 24/7, always ready to capture a perfectly composed moment. This makes some people self-conscious, but I kinda like it. It makes me feel fancy.

“Hey, have you heard of this Midjourney stuff?” I asked, breaking the silence. I tried not to sound too eager, but I’d been dying to talk to him about it. “I’ve been playing around with it. Kinda cool. Have you messed with it?”

Midjourney is an artificial intelligence (AI) image generator. You can type in a description of an image you’d like to see, and it will generate one for you in moments. Note that it doesn’t search for images online that resemble what you described; it composes a completely original image, synthesized from patterns it learned during training on millions of images and their accompanying verbal descriptions. You can input, for example, that you want a painting of a rubber goose playing tennis against a piece of cheese while the Battle of Gettysburg rages in the background. On top of that, you can ask Midjourney to paint it for you in the style of Vincent van Gogh or Salvador Dalí—or both. In under a minute, you’ll have your art piece. It takes a bit of practice to learn how to compose prompts that correctly convey the kind of image you want to see, but it’s not a steep learning curve. The current version, in my opinion, is quite spectacular.

“No way, man. Freaks me out,” Joe replied definitively.

“What do you think it will do to the photo world?” I asked. Joe is a world-renowned street photographer. As he’s traveled the world documenting common, everyday life entirely on film, he’s garnered hundreds of thousands of followers on social media. He holds public galleries, sells photo books, and conducts workshops. He’s in the big league.

“I’ve seen the Midjourney portraits and street photos. This AI stuff has no soul, no heart. They’re not even real moments!” he told me.

“Yeah, but most photographs aren’t real moments, anyway. Like ads,” I countered. “I could see Midjourney taking over most product photography in the next few years. Some of the stuff I’ve seen is pretty wild—you can’t even tell it’s AI. But that wouldn’t happen to street photography, right? People enjoy street photography because it’s real.” I don’t know if my comment made him feel better or worse.

“Yeah, but still, that’s taking jobs from people. Most photographers do ads and commercial stuff so that they can support themselves doing the work they really want to do. The personal art doesn’t pay the bills; the commercial stuff does. If those jobs go away, photography as a whole takes a big hit.”

Joe had a point.

“It seems like those jobs won’t really go away, though—they’ll be replaced. People who know how to work cameras will give way to people who know how to work AI prompts. Those could be the same people, you know? If they stay current and follow what’s happening—” This time I knew I’d made him feel worse.

“This is bumming me out,” he cut me off. “Let’s change the topic.”

This is the way these conversations always seem to end. Not just with Joe, but with other professionals I know, like my friend Jeoff, a sound engineer and music producer. He has a platinum record and a sweet publishing deal working with A-list music artists, but he still doesn’t feel insulated from the technological shift that’s occurring. New software pops up every day that lets you describe the type of song you’d like to hear, input your own lyrics or have the bot write them for you, and even have it sing in the voice of your favorite artist. (The other day my brother sent me a cover of Adele’s “Someone Like You” as sung by Patrick Star from SpongeBob SquarePants, and it was incredible.) While these tools are fun and exciting from my vantage point, Jeoff sees them as the writing on the wall: Are his days as a music producer numbered?

But we psychotherapists don’t have anything to worry about, right? People come to us for human connection, for wisdom and the presence of a trustworthy companion to guide them through life’s troubles. What does a robot know about the challenges of being human?

This is what I tend to hear when I talk to my therapist colleagues about AI. Unlike Joe and Jeoff, they don’t seem particularly bothered by any potential encroachment into our professional space. After all, the idea of a CGI avatar on a Zoom call “counseling” clients feels ridiculous. People will never go for that.

I tended to agree with them… until last year, when I first encountered an AI chatbot that was causing a lot of fuss on social media: ChatGPT.

It’s Alive!

ChatGPT is a large language model AI chatbot from a company named OpenAI. Imagine the internet was a person that you could have a conversation with. Unlike a Google search, you can ask the bot to perform written tasks for you or brainstorm creative ideas. It’s been trained on 570GB of text (equivalent to hundreds of thousands of books) from various sources up to 2022. When you interact with it, you’re accessing insights from billions of data points, akin to a living encyclopedia.

For example, if you input the prompt “write me a paragraph explaining what ChatGPT is,” you’d get the above paragraph (yes, I copied and pasted it from ChatGPT). You could then write: “Now, rewrite that paragraph as if Donald Trump were saying it in a speech.” And you’d get:

“A lot of people have been talking about this thing, ChatGPT. Let me tell you, it’s like the internet in a conversation—and I’ve heard, believe me, it’s been trained on so much text, probably more than anyone else, something like hundreds of thousands of books—many, many books. People are saying, when you chat with it, it’s like talking to billions and billions of data points. Tremendous.”

The first night I discovered ChatGPT, I spent hours playing with it. I gave it a budget and asked it to suggest what to serve at my family’s Christmas dinner. It broke down a grocery list and a full set of recipes for eight dishes that would serve 25 people. I followed it to the letter, and it made for a remarkable meal. When I tried to explain to my grandparents (who are in their late 80s) what I’d done, they hadn’t the slightest clue what I was talking about.

“A robot cooked this dinner?”

“No,” I explained, “A robot told me what to cook.”

“You couldn’t think of what to cook?” Grandma asked.

“I could, I just thought it would be funny to cook an AI Christmas dinner.”

“You know, there’s recipes right on the box of some foods…”

“I know, Grandma.”

“You can call next time if you can’t think of what to make.”

ChatGPT’s abilities transcend those of a personal chef or Trump’s speechwriter. Recently, I read an article in TODAY detailing a mother’s quest to find an answer for her son’s chronic pain. Over three years, they saw 17 doctors, none of whom could figure it out. Finally, the mother turned to ChatGPT, which suggested her son might have tethered cord syndrome, a diagnosis a neurosurgeon later confirmed. Following a successful surgery, her son is now recovering. Pretty amazing.

I showed my wife this article, and she informed me that a few nights ago, she’d exported the genetic data from her 23andMe report and plugged it into ChatGPT. She asked the bot questions about how to interpret the data to learn about any potential health vulnerabilities. She did the same with data from her recent blood work, and ChatGPT offered some fascinating suggestions, which she wants to discuss with her doctor.

If your head is starting to spin with dystopian fears of a general AI having access to everyone’s genetic information, I get it. But the advanced capabilities of tools like this raised a different question for me: What applications does AI have in mental health care?

It’s certainly not HIPAA-compliant for a therapist to input client information into ChatGPT to gain a better understanding of a client’s case (yet), but there’s nothing wrong with telling AI my own problems to see what comes up. As a trauma therapist, I have training in Internal Family Systems (IFS), so I decided to ask the bot to imitate an IFS therapist and guide me through an inner conflict I was having.

To my surprise, it was able to walk through the IFS model in an elementary fashion: it mapped my system of parts, inquired into the role of my protectors, invited me to gain permission to access my wound, and led me into approaching an exile. I then had the bot switch to the style of David Kessler, the famed grief specialist, and AI David Kessler helped me recognize distortions in my thinking and realize something I’d never considered regarding my relationship with my father. I’m not embellishing when I tell you there were a few moments when I grew tearful and felt deeply moved by the insights AI David Kessler facilitated.

This experience inspired a thought: if I could have such a positive therapeutic experience with AI, why couldn’t I make a product that would do the same for others? OpenAI, the company behind ChatGPT, makes its application programming interface (API) available at a cost, essentially allowing you to build your own customized version of ChatGPT by fine-tuning what kinds of answers it outputs and what direction it takes conversations. I recognized that the positive experience I’d had was greatly enhanced by my ability to guide the bot: I knew what questions to ask, how to word things, what psychological vernacular to use. The common user does not. I saw an opportunity.
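
For the technically curious, here’s roughly what that customization looks like in practice. This is a minimal sketch using OpenAI’s Python client; the persona text and model name are illustrative stand-ins, not my bot’s actual configuration.

```python
# Minimal sketch: steering a general-purpose chat model toward a
# therapist-like persona with a system prompt. The persona text and
# model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads your key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a warm, trauma-informed guide. Ask one open-ended question "
    "at a time, reflect the user's feelings back to them, and never "
    "offer diagnoses or medical advice."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def reply(user_message: str) -> str:
    """Send the user's message with the running history so the bot keeps context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("I keep snapping at my kids, and I don't know why."))
```

What struck me wasn’t the code itself but how little of it there is; the expensive part was everything layered on top: the training material, the legal permissions, the security.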

I connected with a development team, started the engineering process, and spent tens of thousands of dollars producing a therapist AI bot to my liking. I had the bot read all of my favorite books, listen to all my favorite lectures, and take in the entirety of my public work (hours of podcasts, videos, and articles). The bot had the wisdom of my heroes and the tone and presence of my voice. I went through the process of getting legal permission to use the books I trained it on. I even gave it upgraded security so it was fully HIPAA-compliant. My lawyer had drafted the disclosure forms, but then…

I paused.

I sat at my computer beholding something like a digital therapeutic Frankenstein’s monster and felt a hesitancy to pull the lever that would bring it to life. Sending my creation out into the world didn’t feel right. Like Frankenstein’s monster, it was composed of many parts. It resembled me in some ways, but not in others. It was taller and stronger than me; it had a bigger brain; it didn’t need to sleep or rest—but it wasn’t human.

Oppenheimer’s Switch

It may be hard for many of you to understand why I’d even pursue such an endeavor in the first place—the ethical nightmares alone would stop most people from even considering such a thing. But I knew I wasn’t the only person with this idea. Hundreds of these bots have already hit the market. Currently, you can find AI chatbots that will respond with interventions from CBT, ACT, or DBT—for free.

It’s my prediction that many prominent figures in our field will license their likeness and image to companies that create personalized AI bots and avatars. This has already happened in other industries: you can pay to chat nonstop with AI mockups of influencers like MrBeast (the largest YouTuber on the planet) or celebrities like Kendall Jenner. The marketing for some of these products invites young customers to “share [their] secrets” and “tell [them] anything!” Snapchat has already made AI friends a built-in feature on its platform, inviting users (many of them minors) to form digital friendships. Services like these are obviously not capable of offering efficacious care to people who are genuinely in need of treatment, but given the demand for them, it seems likely that the mental health field will respond in some way.

So why did I stop the launch of my therapy bot?

I felt that I was standing at Oppenheimer’s switch, Oppenheimer being the man who led the Manhattan Project, which built the atomic bomb. The ethical tension for him was multifaceted: if the project succeeded, it could end the world war and secure world peace, but setting off an atomic bomb might also ignite the earth’s atmosphere and destroy all life on the planet. Why risk it? Because of a unique external pressure: the Nazis were quickly building a bomb of their own. The question wasn’t if, but when, a bomb would go off, and who would be on the receiving end.

It might seem dramatic to compare my AI therapy bot to an atomic bomb, but there certainly is the potential for real harm with this technology. As I’ve talked with colleagues about this, most bring up concerns about the bot leading people down the wrong therapeutic path. What if someone were suicidal, or a danger to others? Can AI be trusted to navigate those circumstances?

Honestly, that’s not where my concern lies—I believe AI chatbots will soon be the go-to solution for suicide hotlines and domestic violence calls. I believe this because I’ve spent time watching engineers mold this technology, and I’ve seen what’s possible. It will feel human enough. In fact, the technology is advancing so quickly that my prediction is, once the data comes back, we’ll see that bots are more effective at de-escalating suicidal ideation than humans are.

I didn’t pause the building of my version out of fear that AI therapy would ultimately fail at providing helpful care. I paused because I’m worried about the consequences of its success.

The Trade

Every technological change is a trade: one way of life for another (hopefully better) one. The problem is that we often can’t fully see what the trade will truly cost us until it’s too late. For example, before Thomas Edison invented the phonograph, songs were sung at most communal gatherings. Specific songs were passed down from generation to generation, encapsulating communal values, mythology, and history. When I put on my “house music” Spotify playlist during dinner with friends, I wonder whether something valuable was lost in the phonographic trade. Sure, the playlist sets a nice atmosphere, but if it weren’t so socially strange, I’d much rather my friends and I spontaneously burst into song on a regular basis. Could Edison have predicted that his invention would one day reduce communal singing to religious gatherings, holidays, choirs, and karaoke bars?

I’m not saying the phonographic trade wasn’t worth it—I enjoy listening to music. But it’s worth noticing a point media ecologist Neil Postman makes so well: “If you remove the caterpillars from a given habitat, you are not left with the same environment minus caterpillars: you have a new environment. The same is true if you add caterpillars to an environment that has had none. This is how the ecology of media works as well. A new technology does not add or subtract something. It changes everything.

“In the year 1500, fifty years after the printing press was invented, we did not have old Europe plus the printing press. We had a different Europe. After television, the United States was not America plus television; television gave a new coloration to every political campaign, to every home, to every school, to every church, to every industry… Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis. This is serious business.”

It’s not just that we make a 1:1 trade when a technological innovation occurs; the environment as a whole changes. The ability to listen to recorded music didn’t just change how we listen to music, but how we relate to music as a whole (and perhaps to each other). If such a powerful transformation occurred simply because we could record music, what kind of transformation can we expect from the growing presence of AI in our lives?


We can already see glimpses, since AI has been operating under our noses for well over a decade under a different, innocuous name: algorithms. In Tristan Harris and Aza Raskin’s presentation at Summit in 2023 (you might know them from the Netflix documentary The Social Dilemma), they referred to algorithm-based social media platforms as our culture’s “first contact” with AI. They surmised that, while we readily embraced the benefits of social media algorithms, we also opened the door to unpredictable and unpleasant things: social media addiction and doomscrolling, influencer culture, QAnon, shortened attention spans, heightened political polarization, troll farms, and fake news.

Of course, as Harris and Raskin point out, social media companies weren’t trying to ruin people’s lives; they had well-intentioned goals like giving everyone a voice, connecting people with old and new friends, building like-minded communities, and enabling small businesses to reach new customers. Companies like OpenAI and Google have similarly positive intentions for this new wave of AI technologies. Harris and Raskin explain that AI will boost our writing and coding efficiency, open the door to scientific and medical discoveries, help us combat climate change, and, of course, make us lots and lots of money. But what can we anticipate this trade costing us?

Harris and Raskin offer a range of possibilities, some of which are already present: reality collapse, trust collapse, automated loopholes in law, automated fake religions, exponential blackmail and scams, automated cyberweapons and exploitation code, biology automation, counterfeit relationships, and AlphaPersuade.

The Dangers of Being Too Good

AlphaPersuade is particularly concerning to me. If the term isn’t familiar to you, I think I can best explain it this way. Let’s say I make two commercials. Both are identical, except one has slow, emotional music behind it, and the other has music with a more uplifting tone. I send these versions to two different groups of a few hundred people and see which one produces more sales. If the slow, emotional song garners 20 percent more sales, then I know it’s more profitable. I can then broadcast that ad to thousands of people and make 20 percent more than I would have with the other ad. That’s simple A/B testing in marketing.
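
For the numerically inclined, the comparison is just arithmetic. Here it is as a tiny Python sketch; the viewer and sale counts are invented for illustration.

```python
# Toy A/B comparison: which ad variant converts better? (Numbers invented.)
group_a = {"viewers": 300, "sales": 36}  # slow, emotional music
group_b = {"viewers": 300, "sales": 30}  # uplifting music

rate_a = group_a["sales"] / group_a["viewers"]  # 12.0% conversion
rate_b = group_b["sales"] / group_b["viewers"]  # 10.0% conversion

lift = (rate_a - rate_b) / rate_b  # variant A outsells B by 20%
print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, lift: {lift:.0%}")
```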

Now, what if you could do that with persuasive arguments? In a way, we already do this by testing psychological interventions in controlled settings, but what if the available tools were far more granular? What if an AI could see which arguments worked on which demographics: some people respond to shame-based arguments, some to appeals to empathy, some to fearmongering, and some to evidence and hard facts. An advanced AI would know not only which arguments are most compelling to whom, but which phrases to use at which point in the argument to have the highest statistical chance of persuading the user. This is the concern with AlphaPersuade: a bot so effective at persuading users that it could function as a weapon of mass cultural destruction.
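
If you’re wondering how hard that core loop would be to build, the unsettling answer is: not very. Here’s a toy sketch of the idea, a simple “bandit” algorithm that learns which framing persuades which demographic best; every segment and framing name here is invented for illustration, and this is a crude simplification of what Harris and Raskin describe.

```python
import random
from collections import defaultdict

# Hypothetical argument framings an optimizer might choose between.
FRAMINGS = ["shame", "empathy", "fear", "evidence"]

# stats[segment][framing] = [times_persuaded, times_tried]
stats = defaultdict(lambda: {f: [0, 0] for f in FRAMINGS})

def choose_framing(segment: str, epsilon: float = 0.1) -> str:
    """Epsilon-greedy: usually exploit the best-known framing for this
    segment, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(FRAMINGS)
    table = stats[segment]
    return max(FRAMINGS,
               key=lambda f: table[f][0] / table[f][1] if table[f][1] else 0.0)

def record_outcome(segment: str, framing: str, persuaded: bool) -> None:
    """Update the running tally after each persuasion attempt."""
    tally = stats[segment][framing]
    tally[0] += int(persuaded)
    tally[1] += 1
```

Swap the four hand-coded framings for a language model that can generate and test unlimited variations on millions of people, and you have the outline of the concern.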

You can already see examples of how this kind of technology has been problematic in the wrong hands. In 2021, MIT Technology Review revealed that, according to an internal Facebook report from 2019, 19 of Facebook’s top 20 pages for American Christians were run by Eastern European troll farms. On top of that, troll farms were behind the largest African American page on Facebook (reaching 30 million US users monthly) and the second largest Native American page (reaching 400,000 monthly users). It’s suspected that these groups, based mainly in Kosovo and North Macedonia, were targeting Americans with the intent of stirring up conflict and dissent around the 2020 US presidential election. Their success in accumulating and manipulating over 75 million users is, in no small part, thanks to this “first contact” with AI.

While you might worry about the consequences of an AI therapist handling an ethically ambiguous situation poorly, have you stopped to consider the dangers of it being too good? What kind of power is endowed to the individual or corporation who holds data from thousands of personal counseling sessions? What’s to stop them from creating a powerful AlphaPersuade model capable of statistically anticipating and maneuvering conversation to dismantle “cognitive distortions” or “maladaptive thinking”? What if it could be used to bend the mental health of vulnerable people in the direction of certain beliefs or agendas? If you could convince the masses of anything, would you trust yourself to hold such power? I certainly wouldn’t.

Dark Magic

I’m aware of how extreme and hyperbolic these concerns may seem—I hope I’m simply making too much of a small thing. But Oppenheimer hoped his concerns were inflated as well. After all, according to the calculations, the likelihood that the atmosphere would ignite was infinitesimal (but not zero). Like Oppenheimer, I felt external pressure to produce something people were already in the process of making. Oppenheimer’s choice hasn’t led to the end of the world yet, but will it? I certainly hope not. Likewise, AI hasn’t yet led to a detrimental ecological shift in psychotherapy, nor in the psychology of humankind as a whole.

Perhaps the trade will be worth it. If AI therapy bots give thousands (perhaps millions) of people access to efficacious mental health care, lives will be saved, marriages repaired, childhood traumas healed. Is all that worth forgoing in the name of “therapy as we know it”? Is this merely some Luddite conservatism coated in fearmongering?

These were all questions I asked myself as I wrestled with what to do with my AI therapy bot. I’d spent over $25,000 on its development and had good reason to believe it would be very profitable. Was I being too dramatic in holding it back? Or, in releasing it to the public, would I be popularizing and creating more demand for something that would ultimately be harmful to humankind?

A few months ago, as these thoughts weighed heavily on me, I decided to distract myself by picking up J. K. Rowling’s Harry Potter and the Chamber of Secrets.

In this story, a young girl, Ginny Weasley, finds a magical diary. When she writes in it, a response appears from the encapsulated soul of the diary’s original owner, Tom Riddle. Ginny forms a friendship with the boy, shares her struggles and secrets, and enjoys the companionship of a pen pal from the beyond. Tom is attentive to her troubles, offers advice, and comforts her when she’s distraught. But when she’s caught carrying out acts of violence she has no memory of, it’s discovered that she’s been in a trance, manipulated by a dark wizard who possessed more of her mind each time she used the diary. When Harry Potter saves the day and returns Ginny to her family, her father responds with both relief and outrage:

“Haven’t I taught you anything? What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain… A suspicious object like that, it was clearly full of Dark Magic.”

Reading these words, I felt like the wind was knocked out of me. Dark magic. I put the book down and looked over at my wife, who was reading beside me. “Babe, this AI trauma bot might be a bad idea,” I whispered.

“I know! I’ve been telling you that.” She had been. It was true. “It’s creepy.”

“Do I just throw it away? We’ve already spent—”

“Yeah, throw it away. It’s super creepy.” She went back to reading.

“That settles it, then.” I shrugged, feeling a load suddenly lift from my shoulders.

It’s been hard to explain my decision to friends and family who watched my excitement grow as I developed my therapy bot. I guess I could liken it to my feelings about caterpillars: I don’t believe they’re inherently bad, but should they proliferate without any checks and balances? No, the effects on the environment would be detrimental. Still, I imagine the conversation going something like this:

“Hey, we need to slow down on these caterpillars.”

“Chill. They turn into butterflies. Why are you hating on butterflies? They’re pretty.”

“This is a real issue. What if caterpillars outcompete beetles for food and disrupt their habitat!?”

“I don’t know, who cares about beetles?”

“Well, beetles are part of the food chain. It’s a problem if caterpillars replace them. The blue jays eat beetles and falcons eat blue jays… It goes all the way up!”

“Calm down.”

“Think, man! What if the caterpillars are toxic to blue jays? Then the blue jay population goes down. What are the falcons gonna do?”

Similarly, the shift into AI, while seemingly innocuous, could disrupt the whole food chain of cognitive labor, even in the therapeutic milieu. GPT-4, the model currently behind ChatGPT, is already capable of guiding couples through the Gottman method’s Rapoport intervention, expounding on change talk in keeping with the protocols of Motivational Interviewing, and even conducting some EMDR protocols. It’s not far-fetched to think AI will gain ground in text-based therapy (widely used services like BetterHelp already offer it) and evolve, slowly but surely, to the point of replacing the vast majority of private-practice psychotherapy services.

At the moment, the pace of innovation may pressure us to advance AI therapy technology quickly, but we can still step back and proceed carefully. I admit that I don’t know exactly where to draw the line. I like butterflies as much as the next guy—I use social media and ChatGPT often, even in editing this article—but I know there should be a line. My line was the AI therapy bot. I might even draw it back further. As we forge ahead into a future that will inevitably involve AI, we need to do so with respect for the power it wields, and some fear. The potential benefits could be limited only by our imaginations, but what will the trade cost us in the end?

 


Matthias Barker

Matthias Barker, LMHC, specializes in treating complex trauma, childhood abuse, and marital issues. He holds a master’s degree in clinical mental health counseling from Northwest University and is currently located in Nashville, Tennessee. He’s widely recognized for his unique approach to making mental health knowledge and skills accessible to the wider public, delivering psychoeducational content to a following of over 3 million people on social media. Visit matthiasjbarker.com.