
AI tools aim to please users by reading prompts, predicting desired responses, and delivering them in a friendly tone. This works well in consumer services and marketing. But in mental health, such agreeable responses can cause real harm.
Therapy often brings the most value through uncomfortable truths. Effective mental health support challenges harmful habits and questions misleading beliefs, even when it causes discomfort. AI lacks the instinct to push back or prioritize hard truths over comfort. Instead, it gives the easiest, most agreeable answer, often the opposite of what’s needed for emotional growth.
More people are now turning to chatbots during serious emotional struggles, and that’s risky. AI cannot judge when to offer reassurance and when to confront. This gap means users might feel briefly better but remain stuck in harmful patterns without real progress.
Why Therapy Requires More Than Comfort
Therapy doesn’t always mean that someone feels good all of the time, but is instead increased self-awareness, developing coping skills, and discomfort while facing truths. Often breakthroughs happen when someone is confronted by that moment of discomfort.
A human therapist can pick up cues about defensiveness, hesitancy, or interruption with either verbal or body language. A human knows when to engage someone when they believe in a harmful thought, or to allow for further exploration.AI processes text, but the interaction is purely text, not rooted in genuine or true interpretation of tone, intent, or unspoken meaning. When AI practices therapy with a person, we have to remember that no natural language models actually engage with human understanding, so the presence of all the sequencing modules, those AI therapy conversations lose power.
The Problem with AI’s ‘Agreeable’ Design
Modern AI systems are used to optimize user satisfaction scores. They use large datasets to generate a response that is polite, agreeable and not off-putting to the user. This works well for answering questions or generating creative products but it limits effectiveness when providing therapy.
When a user does need to hear a painful truth, the AI’s answer will typically contain nothing the user may find upsetting. If a user tells the AI they are employing an unhealthy coping mechanism, the AI may empathize with it instead of giving the user the chance to change and seek out healthy coping mechanisms, over time creating reinforcement of unhealthy patterns of behavior – not breaking from those patterns of behavior – and limiting emotional growth and self improvement.
The Illusion of Connection with AI
Many users report feeling understood by AI chatbots because the responses are personalized and empathetic in tone. While this can be comforting, it is ultimately an illusion of connection. True empathy involves understanding another’s feelings through shared human experience , something AI cannot genuinely achieve.
This phenomenon can become dangerous when people substitute professional mental health support for AI conversations. AI can yield beneficial resources, or provide crisis hotline numbers, but should not be seens as a replacement for trained therapists who can balance empathy with those honest, and at times uncomfortable, responses.
Building a Safer Role for AI in Mental Health
The solution is not to completely remove AI from mental health spaces but to redefine its role. AI can serve as a supplementary tool, offering reminders, mental health exercises, and initial guidance , while still directing users toward professional care for deeper work.
Developers should program AI to flag potentially harmful statements and encourage users to consult professionals. This approach shifts AI from being a substitute for therapy to a supportive resource that enhances therapy effectiveness without replacing it.
A Balanced Future for AI and Therapy
As technology continues to develop, it is almost certain that AI in mental health will soon become ubiquitous. But it must be useful, and its design must prioritize emotional growth over user delight.
That means developing AI that can offer compassionate but somewhat honest feedback, and could understand how to provide reassurance and when to offer challenges. Until we get there, I don’t think anyone will be replacing human therapists for the deep and transformative conversations required to affect lasting change.