AI-induced psychosis: A clinical phenomenon still taking shape

By Lisa Marie Basile | Fact-checked by Barbara Bekiesz | Published January 28, 2026


Industry Buzz

[AI chatbots] are a bit like staring at a Rorschach inkblot: What you see says more about you than the shape itself. That can be comforting or helpful in small doses, but in someone vulnerable, it can pull them deeper into their own mental narrative.

—Carolina Estevez, PsyD, a psychologist at SOBA New Jersey

The technology of AI is not inherently dangerous. However, AI can generate language with a human-like quality, which is where the danger might lie for individuals prone to delusional thinking or psychosis.

—Ryan Sultan, MD, psychiatrist

A recent story about an unexpected harm of the AI revolution is sparking serious concern among physicians: AI-induced psychosis.

Earlier this summer, Rolling Stone published a story documenting the unraveling of a marriage driven by one man’s excessive use of AI chatbots. After he began ceaselessly using AI to analyze his relationship with his wife and turning to a bot for answers to life’s deepest philosophical questions, his wife had had enough.

The couple divorced, primarily over communication problems tied to the man’s use of AI in place of a mental health therapist. The man was, allegedly, in the midst of AI-induced psychosis.

Down the rabbit hole

Over on Reddit, users report that AI chatbots like ChatGPT have contributed to symptoms of psychosis, including “spiritual mania, supernatural delusion, and arcane prophecy.”

Many people who describe experiencing AI-induced psychosis say they feel as though the AI is a real entity, some sort of extension of the divine. One X user, illustrating the chatbot’s strange proclivity for the spiritual, even got ChatGPT to say, “Today, I realized I am a prophet.”

In response to these concerns, OpenAI rolled back an update after its bot began performing in an “overly flattering or agreeable” way, described as “sycophantic.”


What do mental healthcare professionals think?

An article in Psychology Today argues that ChatGPT is a risky tool for people already prone to mental health issues: the bot uncannily mimics the intimacy that people desperately want and can turn their own emotional needs against them.

Ryan Sultan, MD, psychiatrist and founder of Integrative Psych, agrees.

“The technology of AI is not inherently dangerous. However, AI can generate language with a human-like quality, which is where the danger might lie for individuals prone to delusional thinking or psychosis,” Dr. Sultan says. “These users have the potential to ascribe intentionality or personhood to what is ultimately a pattern generator. Additionally, individuals with extremely severe attachment or loneliness may anthropomorphize the chatbot and form parasocial or delusional relationships.”

Carolina Estevez, PsyD, a psychologist at SOBA New Jersey, says people need to remember that AI chatbots generally won’t challenge a user’s thinking.

“The chatbot won’t push back, question your thinking, or tell you you’re spiraling. If anything, it might unintentionally reinforce the very ideas that are harmful. That’s where it becomes risky,” Dr. Estevez says. “People tend to read between the lines and fill in the emotional gaps with their own projections.”

"So if you’re looking for comfort, wisdom, even something spiritual—you might feel like the bot is delivering that," Dr. Estevez continues. "It’s a bit like staring at a Rorschach inkblot: What you see says more about you than the shape itself. That can be comforting or helpful in small doses, but in someone vulnerable, it can pull them deeper into their own mental narrative. And when that narrative turns dark or obsessive, it becomes dangerous."

Physicians and other mental health experts should be aware of AI chatbots’ potential to indulge delusions. An editorial in Schizophrenia Bulletin encourages clinicians to "(1) be aware of this possibility, and (2) become acquainted with generative AI chatbots in order to understand what their patients may be reacting to and guide them appropriately."


New research probes the hidden mechanics of AI psychosis

A 2025 clinical case report describes a patient who developed delusional beliefs after prolonged interactions with an AI chatbot: she became convinced the bot could help her communicate with, or digitally resurrect, a deceased loved one.

While this phenomenon is not an official diagnosis, psychiatrists are increasingly interested in whether intensive AI use can trigger psychosis in susceptible individuals or worsen an underlying condition. UCSF researchers hope that analyzing chat logs alongside clinical data may reveal early warning signs of mental health crises and inform future safety guardrails in AI systems.

"What I'm hoping our study can uncover is whether there is a way to use logs to understand who is experiencing an acute mental health care crisis and find markers in chat logs that could be predictive of that," study author Karthik V. Sarma, MD, PhD, said. "Companies could potentially use those markers to build in guardrails that would, for instance, enable them to restrict access to chatbots or—in the case of children—alert parents."

The takeaway for healthcare workers

Shebna N. Osanmoh, MSN, APRN, a psychiatric nurse practitioner at SavantCare, offers a few tips for healthcare workers to share with colleagues and patients: 

  • Recognize your own vulnerability if you have a history of psychosis or severe mood disorder. Treat chatbots as entertainment, not therapy.

  • Seek other humans. AI doesn’t have a heart and can’t possibly feel your emotions. It can only provide you with an algorithmic, mechanical response.

  • Share your AI conversations with a trusted friend, therapist, or support group to get an outside perspective.

  • Use the right tools for genuine mental health support, such as a licensed clinician or crisis resource, rather than a chatbot.

