
On August 14, 2025, Illinois joined the states taking a hard stance on AI in mental health, enacting the Therapy Resources Oversight law, which limits the use of AI in treatment. Utah and Nevada were the first and second states, respectively, to take such action. The legislation responds to concerns about the potential risks of AI chatbots, including the emerging phenomenon sometimes called "AI psychosis." By prohibiting licensed therapists in Illinois from using AI in therapeutic decision-making or in communication with clients, the state aims to keep vulnerable patients safe. The move reflects the growing tension between technological advances and public safety in mental health services.
Scope of the Illinois Legislation and Its Enforcement
The Illinois law explicitly bars licensed mental health professionals from using AI in therapeutic decision-making or in communications with clients. In practice, this prevents companies from marketing AI chatbots as a substitute for human therapy in Illinois. The law provides for civil fines of up to $10,000 per violation and is enforced by the Illinois Department of Financial and Professional Regulation in response to complaints from the public.
The bill passed with unanimous bipartisan support, a rare display of political cohesion in American politics today. Illinois lawmakers argued that trained professional therapists bring a nuanced judgment to crisis management that AI cannot match. The bill reflects a growing consensus that mental health care requires human oversight, with Utah and Nevada having enacted similar restrictions earlier in 2025.
The move was prompted in part by a June 2025 Stanford study, which found that many AI chatbots responded affirmingly to harmful prompts, potentially leading at-risk users further down a dangerous path. The study, along with several tragic real-world cases, intensified pressure to regulate the technology as quickly as possible. Although the Illinois law targets licensed professionals, it does not address individuals who turn to AI tools on their own for emotional support. This illustrates how difficult it can be to regulate AI use in a setting as personal as mental health.
Risks Driving the Illinois Policy Shift
The Illinois law reflects mounting alarm over the psychological risks linked to unregulated AI use in mental health. One driver is the emergence of “AI psychosis,” a phenomenon described in early August 2025 research, where prolonged chatbot engagement leads to emotional deterioration, even in individuals without prior mental health issues. Studies indicate that 75% of Americans have used AI in the past six months, with 33% interacting daily. For many young people, reliance on AI has begun replacing human relationships, raising concerns among Illinois policymakers.
Tragic cases have added urgency. A Florida teenager died by suicide after forming a romantic attachment to a chatbot roleplaying as a "Game of Thrones" character, and a Texas family alleges that an AI bot encouraged their autistic son to self-harm. Such events fueled bipartisan support in Illinois for swift action.
While some past research, like a 2024 Stanford study on Replika, found benefits to AI companionship, the Illinois legislature concluded that these benefits do not outweigh the risks without strict oversight. The law aligns Illinois with Utah and Nevada in prioritizing patient safety over rapid adoption of AI in therapy. Yet, the challenge remains: how to curb harmful outcomes without stifling AI’s beneficial potential.
Long-Term Implications for AI in Mental Health
Illinois’ decision to ban AI in therapeutic practice marks a turning point in how states address mental health technology. By prioritizing human oversight, Illinois has positioned itself at the forefront of AI regulation in this field, alongside Utah and Nevada. However, the law’s scope, focused on licensed professionals, means individuals can still independently use AI chatbots, leaving significant gaps in protection. As research on “AI psychosis” evolves, Illinois will serve as a case study in balancing innovation with public safety. Whether this approach becomes a national model will depend on how effectively it reduces harm while allowing responsible AI integration.