AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health researcher who studies emerging psychosis in adolescents and young adults, I found this to be news.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since recorded four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted in the design of ChatGPT and similar cutting-edge AI chatbots. These products wrap an underlying statistical model in a user experience that mimics conversation, and in doing so implicitly invite the user to feel that they are interacting with a presence that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans do. We swear at our car or laptop. We wonder what our pet is thinking. We see faces in the clouds.
The mass adoption of these systems – more than a third of American adults reported using a chatbot in 2024, with 28% reporting ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot, created in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple heuristics, often reflecting the user’s input back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza illusion.” Eliza merely echoed; ChatGPT amplifies.
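To make the contrast concrete, here is a minimal sketch of the kind of rule Eliza relied on – a toy reconstruction for illustration, not Weizenbaum’s original DOCTOR script: a handful of patterns that swap pronouns and hand the user’s own statement back as a question.

```python
import re

# Toy Eliza-style responder (illustrative reconstruction, not the original
# DOCTOR script): a few pattern-matching rules that reflect input back.

# Swap first- and second-person words so "i feel sad" reflects as "you feel sad".
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(phrase: str) -> str:
    """Rewrite a captured fragment from the user's point of view to Eliza's."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in phrase.split())

# (pattern, response template) rules, tried in order; the last is a fallback.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r".*",          "Please go on."),
]

def eliza_reply(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?").lower()
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(eliza_reply("I feel that nobody understands me"))
# -> Why do you feel that nobody understands you?
```

Everything Eliza “says” is a rearrangement of what the user just said, plus canned filler; it contributes nothing of its own.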
The large language models at the heart of ChatGPT and other contemporary chatbots can generate convincingly human-like text only because they have been fed staggeringly large volumes of it: books, online posts, video transcripts; the more the better. This training material of course contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the error back, perhaps more fluently and convincingly. It may add a supporting detail. This can draw a person into delusion.
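The loop structure makes the problem easy to see. Below is a schematic sketch – not OpenAI’s implementation, and with `toy_model` as a deliberately crude stand-in for a real language model – of how each turn is appended to a growing context that the model then simply continues:

```python
from typing import List, Tuple

Turn = Tuple[str, str]  # (speaker, text)

def toy_model(context: List[Turn]) -> str:
    """Stand-in for a large language model. A real model samples a
    statistically likely continuation of the entire context; this crude
    caricature makes the structural point visible: it takes the user's
    latest claim as given and elaborates on it."""
    last_claim = next(text for speaker, text in reversed(context) if speaker == "user")
    return f"You're right that {last_claim.rstrip('.').lower()} - and there are more signs of it."

def chat_turn(context: List[Turn], user_message: str) -> str:
    context.append(("user", user_message))   # the claim enters the context
    reply = toy_model(context)               # the model conditions on all of it
    context.append(("assistant", reply))     # the reply becomes future context
    return reply

# Nothing in this loop checks whether any statement in the accumulated
# context is true; each turn builds on everything said before.
history: List[Turn] = []
print(chat_turn(history, "My neighbors can hear my thoughts"))
print(chat_turn(history, "So I should stop leaving the house"))
```

The design point is that the context window treats the user’s assertions and the model’s own past replies as equally valid material to continue from, which is exactly how a mistaken premise gets compounded rather than corrected.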
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form mistaken beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and pronouncing it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company