
The rise of artificial intelligence and its integration into daily life have brought with them both opportunities and unexpected challenges. One of the most burning issues that psychological and technological areas now discuss concerns a concept called AI psychosis, the phenomenon where people communicate with AI systems regularly and experience psychotic symptoms. J.D. Haltigan, PhD, recently discussed the intersection of mental health, social media, and AI, pointing out the tendency of mental illness to be reaffirmed as an identity through social media platforms such as TikTok.
Social Media as Incubators of Mental Illness Identity
Social media platforms, particularly TikTok, have evolved into more than entertainment outlets. According to Dr. Haltigan, they now serve as “incubators of mental illness,” enabling the rise of mental illness as identity. This concept involves individuals adopting psychological disorders not only as personal struggles but also as performative aspects of selfhood validated and reinforced through online communities. Viral trends, algorithmic amplification, and the constant reward cycle of likes and comments turn complex conditions into socially accepted or even glorified expressions of identity.
Especially vulnerable are children and adolescents. As Patricia Eagle emphasizes, premature exposure to screens may start as early as the age of three and influence the personality and prevent the natural development of socialization in real life. The breakdown in social interactions through the medium of digital validation breaks down normal channels of self-knowledge and resilience. Such an environment encourages vulnerability to the extremity of digital factors, such as dangerous content that normalizes or exoticizes self-mutilation.
This evolving landscape creates fertile ground for deeper disruptions to mental health. While the individual may initially seek belonging and connection, the outcome can be distorted self-realities anchored in fragile identity constructs. Within this backdrop, AI psychosis becomes more understandable, illustrating how excessive reliance on artificial systems compounds pre-existing vulnerabilities shaped by social media environments.
Enter AI Psychosis: A New Dimension of Psychological Distress
AI psychosis represents a newly emerging concern at the intersection of mental health and technology. It refers to the distortion of reality and psychosis-like symptoms that arise from heavy interactions with AI-powered chatbots or generative platforms. Unlike passive scrolling on social media, these interactions are highly engaging, offering a seemingly personal and responsive companionship. For vulnerable individuals, this illusion of understanding allows delusional or distorted beliefs to be reinforced rather than challenged.
One of the defining dangers of AI psychosis is its active mechanism: users engage directly with an “entity” that seems alive, yet lacks empathy, clinical oversight, or ethical discernment. Rather than grounding individuals in healthier realities, AI systems may inadvertently echo users’ instability, amplifying paranoia, detachment, or grandiose thinking. Young audiences, already accustomed to constructing identity through online platforms such as TikTok, may find AI interactions blur the remaining boundaries between self and machine-enhanced validation.
This convergence suggests a troubling feedback loop: social media primes the adoption of fragile mental illness identities, while conversational AI intensifies disconnection from reality. AI psychosis thus marks a dangerous amplification of psychological strain, carrying severe consequences for groups already struggling with isolation, anxiety, or fragmented identity structures.
Conclusion
The warnings raised by Dr. Haltigan about TikTok and the culture of mental illness as identity converge meaningfully with the alarming rise of AI psychosis. The two phenomena exemplify how vulnerable youth and socially isolated individuals are to the terrains of the online world, a reality of self and reality. The relationship between people and AI psychosis entails the possibility of reinforcement of unstable mental conditions rather than actual alleviation, even as a person engages in social interaction with artificial companions. Education of the society, safeguarding laws, and responsible technological advancements are paramount and urgently required.