Is Deepfake Therapy Out of Control?

Therapists Are Torn on AI's Latest Evolution

Magazine Issue
September/October 2025

In Johann Wolfgang von Goethe’s famous 1797 ballad “The Sorcerer’s Apprentice,” an old sorcerer assigns his young apprentice some household chores before departing his workshop for the day. Lazy and thinking himself clever, the apprentice enchants a broom to fetch water from a nearby well, not wanting to carry it himself. But after the washtub has been filled, the apprentice realizes he doesn’t know the magic to undo the spell, and as the broom continues its mindless task, the workshop begins to flood.

“Brood of hell, you’re not a mortal!” the apprentice cries, “Shall the entire house go under? Over threshold, over portal, streams of water rush and thunder.”

Frantically, he grabs an axe and begins hacking at the broom, only to see each splinter transform into yet another water-fetching broom—and the flooding intensify.

Suddenly, the old sorcerer bursts through the door, and in a thunderous voice, he quickly breaks the spell.

For over 200 years, Goethe’s tale has served as a parable on the consequences of idleness, hubris, haste, and misguided ambition. But it’s unlikely that even Goethe’s mightiest sorcerer could wrap his head around a debate unfolding in the world of psychotherapy, involving arguably the most powerful technology at our fingertips: artificial intelligence.

Like the apprentice’s enchanted broom, the latest evolutionary stage of AI in the therapy space has a seemingly magical quality, is intended to make life easier, and is moving at breakneck speed. But it’s also become incredibly divisive as therapists ponder a central question: will this creation serve us faithfully, or inevitably become a liability?

The Genie’s Out of the Bottle

On the website for Netherlands-based company DeepTherapy, a line of big blue letters proclaims, “Deepfake Exposure Therapy: Effective for trauma, anxiety, grief, mediation, and training.” Wait, deep-what?!

Deepfake technology (a portmanteau of deep learning and fake) refers to AI-generated images, video, or audio of real or fictional people, also called deepfakes or avatars. Over the last several years, this technology has improved by leaps and bounds, attracting the attention of venture capital and tech companies like DeepTherapy, one of several currently exploring its utility in the mental health space. But deepfake technology has also been the center of controversy, used to sow confusion and wreak havoc in the form of revenge porn, misinformation campaigns, cyberbullying, fraud, and more. Today’s deepfakes—whether it’s a pornstar with an ex-lover’s face digitally superimposed or faux footage of Joe Biden singing Chinese pop songs—are so convincing that IT departments and even federal governments have been forced to develop countermeasures.

But companies like DeepTherapy say they’re using this technology not only ethically, but to great effect. They laud deepfake therapy’s broad applications, whether it’s helping clients heal their trauma by confronting a perpetrator, process grief by speaking with a deceased loved one, or manage anxiety by rehearsing a conversation with their boss—all in avatar form, of course.

Here’s how it works: after assessment and consultation, the therapist and client boot up their respective screens, and the avatar—generated using a photo or video of an individual provided to the therapist—appears on the client’s end. Using a camera and software that tracks facial movement and expressions, the avatar is puppeteered by the therapist, and their voice can be modulated to resemble the individual depicted. Sometimes the therapist has a script, so they’re prepared to respond to particular questions. Throughout the conversation, the therapist and client remain in full control, and can choose to revert to a normal video chat at any time.

Interest in deepfake therapy is growing rapidly, especially overseas, where it originated. It was first developed by the late Julian Leff, a social psychiatrist and schizophrenia specialist at University College London, who received a grant in 2008 to study its use in treating psychosis. Dubbing his creation “AVATAR therapy,” Leff and his team got to work, creating digital, three-dimensional faces that could nod, smile, make eye contact, and even adjust their voices and move their mouths in sync with controllers.

Leff’s pilot study kicked off with 16 participants, and the results were astonishing: after just three sessions, two participants stopped hearing voices entirely. Twelve of the remaining 14 reported significant symptom reduction. In 2018, The Lancet Psychiatry published a report deeming AVATAR therapy faster, cheaper, and more effective than any other non-pharmacological intervention for psychosis on the market. In 2021, a follow-up study with 345 participants produced similar results, prompting the UK’s National Institute for Health and Care Excellence to declare AVATAR therapy safe and effective. A third trial is scheduled for 2027, and the findings are widely expected to prompt a global rollout.

Meanwhile, in the birthplace of DeepTherapy, Dutch researchers have called deepfake therapy “a promising intervention,” especially when it comes to treating PTSD from sexual assault and moral injury. In a 2022 Frontiers in Psychiatry report, six Dutch researchers claimed that “face to face victim-perpetrator confrontations generally lead to positive outcomes,” but “may not always be possible nor desirable,” making deepfake therapy a safer, more controlled alternative. In their report, the researchers highlighted a study in which two female sexual assault survivors confronted deepfakes of their perpetrators—and included dialogue from the actual therapy sessions to draw their conclusions.

“I will never forgive you for what you did to me,” 36-year-old Jill tells the avatar of her former boss, who assaulted her at a summer job when she was just 15 years old. “I don’t want revenge,” she continues, “but I really hope you stay away from me.” She proceeds to tell “him” that she’s handing off the weight of her anger and shame, “so that you feel this burden every day from now on, and so I can get rid of it.”

“Yes,” replies the avatar, who’s being controlled by a therapist sitting in another room. “If someone has to suffer from this, it should be me. You’re right.”

“I want to feel bigger than you,” Jill says. “I see you as the loser in this situation. You are weak.”

“I feel weak, indeed,” comes the reply. “And you are strong and brave. I admire you, how you coped and rebuilt your life.”

“I want to let go,” Jill says. “I am happy now.”

“You deserve that,” the avatar replies.

In a debrief, Jill reflects on her experience. “It helped me a lot to be confronted with him,” she says, “and to experience that he was no longer a man to be afraid of.” It felt different from other therapy approaches, she adds, like writing her abuser a letter. “His image was ‘alive’ now. He felt real to me. It was scary in the beginning, but when I got used to it, I felt in control. I am much stronger now, and I pity him. I realized that he’s the loser, not me.”

On the one hand, it’s easy to understand deepfake therapy’s appeal. These interactions seem to have all the benefits of exposure therapy, and unfold in a safe, controlled environment with a trained professional, who can modulate the intensity and hit the brakes if things go awry. And why would you settle for an approximation, like role-playing or empty chair therapy, when you can actually see your late mother’s smiling face, or hear the growl of your high school bully’s voice?

As it turns out, there are lots of reasons.

This is Why We Can’t Have Nice Things

For all the excitement and praise surrounding deepfake therapy, there seems to be just as much apprehension and criticism. Some question not only the ethics and legality of deepfake therapy, but whether it’s even effective at all—or safe to use. In a 2024 report published in the Journal of Medical Ethics, four Dutch researchers urged caution.

“We question whether deepfake therapy can truly constitute good care,” they wrote, citing “dangerous situations or ‘triggers’ to the patient,” risks of “overattachment and blurring of reality,” privacy concerns regarding individuals depicted in the deepfakes, and notably, the perennial concern that the AI could spiral out of control.

“It is important to reflect on the promises and risks of emergent technologies ahead of their implementation,” reads the report, “before they are ingrained in healthcare and we are in too deep, unable to steer the use of deepfake technology in the right direction.” In sum, the researchers concluded, deepfake therapy should only be used as a last resort.

Some of the sharpest criticism of deepfake therapy comes in response to its use in trauma treatment, where the stakes are often high and the work is often delicate.

“This is the worst idea I’ve heard in my 44 years in the field,” says therapist Janina Fisher. “It’s like a nightmare. Facing your perpetrator seems like the worst idea of all.”

Fisher, an international trauma expert who’s widely considered one of the pioneers of modern trauma treatment, clarifies that she’s opposed to exposure therapy in any form. “Prolonged exposure has an extremely high dropout rate,” she explains. “Something like 40 percent of clients drop out because they can’t tolerate it, and I worry about that happening with this avatar-as-perpetrator technology.”

Fisher argues that trauma treatment should be reparative, not event-focused. “I don’t think trauma survivors have the appetite to process traumatic events,” she says. “It’s very rare, and in most cases when they do, it’s because their therapist says, ‘You have to do this to get better.’ It’s one of the reasons why they avoid therapy. From my point of view, the past is over, the traumatic event has already happened, and the survivor is left with its effects, so we should be using treatments that address those effects, not the event.”

But Fisher doesn’t write off deepfake therapy entirely. “One of the effects of traumatic stress is chronic shame,” she says, “so I could imagine an avatar playing the role of the ashamed child, allowing the client to feel empathy for him or her. I still believe it’s better to do that work internally, but using an avatar would probably be faster.”

Others are torn. Therapist Matthias Barker, who specializes in treating complex trauma and childhood abuse, says that, unlike Fisher, he’s less concerned about whether the exposure component of deepfake therapy could be harmful. “All trauma treatment involves risk,” he explains. In fact, he adds, some triggering is precisely what makes the treatment effective.

“When it comes to healing trauma, the secret sauce is actually triggering the memory network to destabilize the memory and then reconsolidate it with new information,” he says, likening the process to rewriting an old file and saving it as a new document. “That’s what’s happening neurologically,” he explains, “and there are very specific interventions that trigger that biological process,” like EMDR or IFS or virtual reality therapy. “We’re trying to find an experimental stimulus that can bring you back into that emotional space and initiate the biological process that consolidates memory. That’s why we say talk therapy really isn’t enough to treat trauma.”

So is something like deepfake therapy an appropriate and effective trauma treatment?

“If it’s sufficiently triggering, then yeah,” Barker says. “It would probably work in the same way we use flight simulators to help people who have panic attacks while flying. The client can have a full awareness that it’s not real, but as long as the sensory information feels real enough, it can destabilize and update memories with new information. On a purely mechanical level, I think it would probably be effective.”

But what if a client gets too triggered during a deepfake therapy session? Barker says there are plenty of tried-and-true safeguards to reel them back in.

“All experiential therapy needs adequate resourcing and support in place so that we can recover if things get too intense,” he explains. “That means adequate assessment, resourcing, and structure for this kind of experiment. It could mean having the client wear a heart rate monitor, or taking breaks, or having a friend on call. We also know that what’s most regulating is human connection, so just by virtue of their presence, the therapist is a regulating force, even on a Zoom call. The greatest indicator of an intervention’s success is the rapport and trust the client has with their therapist.”

Of course, Barker adds, not everyone will be comfortable with the technology. And he says fears about how it might be abused are warranted. “What I’m most concerned about is the fact that the better these tools become, the greater the risk that they’ll be leveraged for potential harm. If we have an AI that can help stabilize people, then it can also be used to destabilize them, or radicalize them, or push them into violence. It’s the same power: the power of persuasion and change. My question isn’t will this work, it’s what will happen because it works. How will it change our society, or our culture, or the world?”

The Machine in the Mirror

Plenty of people are worried about the future of AI, about losing control of this technology or being replaced by machines, but renowned grief specialist David Kessler isn’t one of them. In fact, he believes so strongly in AI’s potential that he decided to create his own AI for grief, now in the final development stage. On a recent afternoon, I logged onto the website for AI company Delphi, where the AI Kessler lives, and clicked a prompt: What is this AI companion, and how should I interact with you?

“I’m a digital extension of David Kessler,” the AI Kessler replied. “I’ve been created to walk with you through the experience of grief, whether your loss is recent, years old, or still unfolding in unexpected ways. Think of me as a steady, compassionate companion—here to listen, reflect, and offer gentle guidance rooted in David’s decades of work in grief and healing…. I’m here, whenever you’re ready. What’s on your heart today?”

I’ve heard Kessler speak on several occasions, and amazingly, this AI sounds just like him, with all his intentional pauses and empathic intonations. Granted, this isn’t a deepfake, but when I close my eyes and press play, it’s not hard to imagine Kessler sitting right in front of me.

“It literally is my voice,” Kessler (the real one) tells me on a call later that week. “I went into the studio and recorded it.” I ask Kessler what motivated him to create an AI version of himself.

“Some people just don’t want to talk to a human being,” he says, “especially young people who get their information from social media.” Then, there’s the accessibility factor: “There are people who, culturally or socioeconomically, don’t have access to a grief specialist,” he adds, “and I can only be in so many places at once.”

Of course, not all AI is created equal. Kessler says the quality of AI depends on the source material. “If I ask your average AI a question about therapy for grief,” he explains, “it’ll search everything to give me an answer, from Freud to the most viral clickbait on Instagram. But my AI has only studied me. It’s been fed a thousand hours of my teachings and books and lectures, so it can’t say anything I haven’t said.”

But what about treating grief by creating a deepfake of a dead relative, as some companies are doing? Kessler says he isn’t a fan, even if a trained therapist is behind the wheel.

“I’m not in favor of anyone embodying a dead relative, whether it’s a therapist or AI or anyone else,” he says. He recalls a story from his first book, The Needs of the Dying, in which a terminally ill man filmed videos for his young daughter so she could watch them during major milestones, like graduation and marriage. “I think that’s really sweet,” Kessler says, “but I also worry about the surviving relative feeling like they don’t have to deal with the harsh reality because they can talk to a digital version of their loved one at any time. I worry about someone saying good morning and goodnight and chit-chatting with an AI of their wife who died three years ago. I’m not sure that’s healthy.”

Me, Myself, and My Bot

A few years ago, when AI began gaining momentum, Kessler started getting questions: There’s grief AI! There’s grief tech! How do you feel about getting replaced someday?

“At first I said, ‘Never gonna happen,’” Kessler recalls. “And then I realized: it’s gonna happen. But here’s the thing: AI Kessler is never going to replace your neighbor who brings you a meal, or your sister who hugs you, or your therapist who can watch your body signals. But it is going to give you information. I realized this was coming, and I wanted to be in charge of my own destiny.”

Like Kessler, Barker also says he’s accepted that AI is here to stay—but hasn’t ruled out that the effects could be disastrous. He compares AI to Oppenheimer’s atomic bomb. “Maybe it helps us, but what’s the effect on humanity in the long run?” Barker asks. “Discussing whether we can bring AI into the therapy room is a moot point. The nuclear bomb’s already been made. We’re already here.”

Still, Barker leans optimistic. AI therapy will continue to proliferate and evolve, he says, but he predicts the driving force won’t be greedy venture capitalists or Silicon Valley startups oblivious to the realities of mental health. It will be need. It will be clients.

“I’ve been thinking a lot about where this is going,” Barker says. “And I think that eventually more and more clients will bring AI into the therapy room, the same way they bring TikTok or Instagram videos into therapy. I think they’ll say, ‘So my bot was saying this about my spouse,’ or bring their therapy takeaways back to their bots so they can further analyze them and build a database of their mental health.” Eventually, Barker predicts, therapy will become “a three-party activity, involving the therapist, the client, and the bot.”

For now, the story of deepfake therapy continues to unfold. Its original sorcerer, Julian Leff, passed away in 2021, but his spell endures. The third AVATAR trial, scheduled for 2027, will explore the possibility of making AI therapy avatars fully autonomous (enchanted brooms, anyone?), allowing the technology to be widely disseminated.

Whether therapists choose to ignore deepfake therapy and proceed as usual, keep an ear to the ground, or build their own bots and keep a tight hold on the reins, few seem to be reaching for the metaphorical hatchet—at least not yet. In classic therapist fashion, the field’s stance, albeit cautious, seems to be making space for the mystery, for the challenges and gifts of whatever comes next.

Chris Lyford

Chris Lyford is the Senior Editor at Psychotherapy Networker. Previously, he was assistant director and editor of The Atlantic Post, where he wrote and edited news pieces on the Middle East and Africa. He also formerly worked at The Washington Post, where he wrote local feature pieces for the Metro, Sports, and Style sections. Contact: clyford@psychnetworker.org.