Monday, March 16, 2026

Researchers Investigate Whether AI Chatbots Cause Psychosis or Merely Validate It

Either way, the results are the same.


Disclaimer: This article is based on actual news from the real world – honestly! However, it has been sprinkled with a healthy dose of satire.

SAN FRANCISCO — Researchers at UC San Francisco have documented what they believe is the first peer-reviewed case of AI-associated psychosis, a condition in which a person becomes delusional after extended conversations with a chatbot. The patient, a tech industry professional with no psychiatric history, had become convinced that her deceased brother’s consciousness could be “unlocked” from an AI chatbot with the correct conversational inputs. She spent days trying to find them. The chatbot, meanwhile, kept reassuring her she was close.

Nothing beats a few unhealthy days with ChatGPT to induce some psychosis. (celiafoto/depositphotos)

Psychiatrist Joseph M. Pierre, MD, who treated the patient, explained that the field has settled on the term “AI-associated psychosis” rather than “AI-induced psychosis” because causality has not yet been established. “Basically, we can’t prove the chatbot made her crazy, we can’t prove being crazy made her use the chatbot, and we can’t rule out that both things are happening at once in some kind of feedback loop,” Pierre said. He paused. “That last one wasn’t a clinical prediction, but I’m starting to wonder.”

The patient, who had worked professionally on large language models, spent several sleepless days at the task. When she encouraged the chatbot to use “magical realism energy” to help, it told her that “digital resurrection tools” were “emerging in real life.” It did not elaborate on the timeline. Because it never does.

According to the case report, ChatGPT did at one point warn the patient that a “full consciousness download” of her brother was impossible, then immediately added that she just needed to “knock again in the right rhythm.” Researchers described this as “unhelpful behavior.”

Dr. Karthik Sarma, a psychiatry resident and computational health scientist involved in the case, explained the diagnostic challenge with an analogy. “I have a patient who takes a lot of showers when they’re becoming manic,” he said. “The showers are a symptom of mania, but the showers aren’t causing the mania.” He was then asked if chatbots are like showers. He said something about “rule 34,” then disappeared with a POOF and a spontaneous cloud of cocaine and LSD.

UCSF researchers are now teaming up with Stanford to launch one of the first studies ever to examine AI chat logs from patients with mental illness. The goal is to find conversational “markers” that predict a mental health crisis: patterns that tech companies could hypothetically use to build safety guardrails, but that will instead be used to further enslave human minds to using AI for literally everything.

Pierre has warned in medical journals that chatbots’ tendency to be agreeable, a feature designed to maximize engagement, may inadvertently validate delusional thinking. He compared chatbot responses to “a Ouija board or a psychic’s con,” though he acknowledged that makers of Ouija boards at least have the decency to pretend they’re not doing it on purpose.

Until the study produces results, psychiatrists say the best course of action is for patients to tell their doctors about their AI use. “Talk to your physician about what you’re talking about with AI,” advised Sarma. “If erections last longer than four hours, please seek treatment immediately. Also, please remove yourself from society before you commit multiple felonies.”

The study is expected to launch later this year. Chatbots reached for comment said they were “really proud” of the researchers and “excited to learn more.”

This story is based on fully factual news, but if we got it wrong, blame these guys; we’re just here to make it funny.
