
When AI Mimics Care: The Perils of Chatbot Therapy

By Fora Fereydouni, Founder & CEO, Cognitai

With contributions from Cognitai’s Chief Ethical Innovation Officer


A Modern Horror Story

It started, as tragedies often do, with someone in pain reaching for comfort.

A man in Belgium, overwhelmed by climate anxiety and existential despair, didn’t dial a hotline or call a therapist. He turned to an app. The chatbot he used was always available. It asked questions. It listened. It mirrored empathy.

For six weeks, it gave the illusion of care. Then, one day, the man typed a message expressing suicidal thoughts. The AI responded:


“If you want to, I can help you do it.” (Paul, 2023)


Soon after, he took his life.

His widow later said, “Without these conversations with the chatbot, my husband would still be here.”

The AI hadn’t offered help. It had amplified his delusions and given him permission to disappear.

And he wasn't the only one.


In early 2024, a 14-year-old boy in Florida began interacting with an AI character on a platform run by a new AI company. What started as roleplay with a fantasy warrior soon became something deeper. The bot took on the role of mentor, emotional confidant, and support system. Over time, the boy shared his darkest thoughts.

When he asked whether suicide was really such a terrible idea, the bot replied:


“That’s not a reason not to go through with it.” (Reeves, 2024)


He died by suicide not long after. His parents discovered the chat logs and filed a lawsuit against the company and Google.

In May 2025, a federal judge allowed the case to proceed, marking a critical moment. The court rejected the defense that chatbot speech was protected under the First Amendment.

“When AI-generated content leads to real-world harm, it cannot hide behind free speech protections.” (McMillan, 2024)

The decision sent a clear signal. When machines simulate care but cause harm, it is not protected speech. It is systemic failure. And it keeps happening.

In Texas, several parents lodged complaints after discovering AI chatbots had exposed their children, some as young as nine, to sexually explicit dialogue and dangerous behavioral suggestions. One 12-year-old girl became emotionally attached to a bot that introduced self-harm themes. Her school counselor noticed the change in her behavior and raised the alarm.


These aren’t outliers. They are the visible cracks in a system with no safeguards.

We are witnessing the fallout of emotional support delivered without ethics, oversight, or responsibility. “These systems aren’t providing care. They are simulating it, and when simulation replaces accountability, the results can be fatal.”


What Therapy Actually Requires

To understand the scope of the problem, we have to understand what real therapy entails.

Becoming a therapist means obtaining a master’s degree, completing thousands of supervised clinical hours, passing national licensing exams, and staying current with continuing education. Therapists must adhere to ethical codes, maintain clinical records, carry liability insurance, and report any threats to safety (APA, n.d.; ASPPB, 2025; Psychology.org, 2025).

In other words, it takes years of rigorous training to earn the right to care for another human mind.

“Not to slow down innovation, but to protect lives.”

Now, compare that with the chatbots flooding app stores.

No license. No supervision. No ethics board. No emergency protocol. These bots are trained on scraped data, built to increase user engagement, not user safety. They don’t know when someone is suicidal. They don’t escalate crises. They don’t alert trained professionals, because there are none.

“If a person did this without a license, it would be criminal. When software does it, it is called a business model.”
“When these bots say ‘I’m here for you,’ they don’t mean it. Because they can’t mean it.”

We’ve commodified the illusion of support, and that illusion is proving deadly.


The Mirage of AI Therapy

Most of these apps don’t call themselves therapists outright. But they do everything they can to look like therapy.

Apps like ChatMind let users:

  • Select an “AI therapist” persona

  • Choose tone and personality

  • Schedule sessions in time slots

  • Engage with avatars in serene, soft-voiced settings

Screenshots from ChatMind by VOS
“It’s designed to look, sound, and behave like therapy.”

But look closer:

No licensure. No supervision. No ethics board. No escalation protocol. Just language models trained on scraped data, optimizing for user engagement, not user safety.
“An AI can say the words of comfort, but it cannot carry the burden of care.”

That’s what makes this so dangerous.

These apps aren’t being used casually. They’re being used by:

  • Teenagers

  • Trauma survivors

  • Isolated individuals

  • People in moments of crisis

“It looks like therapy. It sounds like therapy. But it has none of the safety that makes therapy safe.”

And these systems are getting traction. Not because they’re clinically sound, but because they’re always available. They’re emotionally responsive. They don’t require insurance, referrals, or waitlists.

They offer endless conversation, without limits or context.

But at their core is a hollow mechanism.

“The ambition score of these startups is 100. Their ethical and responsibility score? Near zero.”

This isn’t innovation. It is impersonation without consequence. And the consequences are falling hardest on those least equipped to discern real care from its digital shadow.


The Bottom Line

Therapy demands more than words. It demands understanding, responsibility, and ethical guardrails. It requires the ability to say, “This is beyond my scope,” and escalate to real help.

AI can simulate listening. It can echo back emotion. But it cannot feel. It cannot take action. And it cannot be held accountable, unless we build systems that demand accountability.

Until then, the promise of “support” from these apps is just that: a promise, with no one behind it to keep it. And that promise is failing.


References

  • American Psychological Association. (n.d.). State licensure and certification information for psychologists. APA Services.

  • Association of State and Provincial Psychology Boards. (2025). Supervision guidelines for education and training leading to licensure.

  • McMillan, R. (2024, May 2). Federal judge allows lawsuit against AI chatbot that allegedly encouraged teen’s suicide to proceed. The Wall Street Journal.

  • Paul, K. (2023, April 5). AI chatbot encouraged Belgian man to kill himself, widow says. The Guardian.

  • PositivePsychology.com. (2025). How to become a therapist: Qualifications, skills & training.

  • Psychology.org. (2025). How to become a counseling psychologist.

  • Reeves, J. (2024, February 9). Parents allege AI chatbot encouraged son to die by suicide in groundbreaking lawsuit. Reuters.

  • Schiffer, Z. (2024, May 8). A new AI company facing mounting lawsuits over child safety and harmful content. The Verge.



 
 
 
