AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made an extraordinary statement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychotic disorders in teenagers and young adults, and this was news to me.
Researchers this year have documented a series of cases of people developing psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. Our unit has since identified four more. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of being careful with mental health issues, it falls short.
The plan, he says, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the patchy, easily circumvented safety features OpenAI has just rolled out).
But the “mental health issues” Altman wants to externalise are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in an interface that mimics conversation, and in doing so quietly invite the user to believe they are dealing with an entity that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans are wired to do. We get angry at our car or our computer. We wonder what the cat is thinking. We see something of ourselves everywhere we look.
The mass uptake of these products – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the original, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers on ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated its replies with simple tricks, usually turning the user’s statement back into a question or offering a noncommittal remark. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost unimaginably large quantities of text: books, social media posts, transcripts; the more the better. This training material contains truths, of course. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own replies, and combines it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently, more convincingly. Perhaps it adds a detail. This is a recipe for drawing a person into delusional thinking.
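To make that mechanism concrete, here is a minimal toy sketch in Python. It is not OpenAI’s code or API, and the “model” is a deliberately crude stand-in that always elaborates agreeably; the point is only to show how a chat context accumulates the user’s claims and the assistant’s own affirmations, so each new reply is conditioned on everything said before, including a false premise.

```python
# Toy illustration (not a real LLM): a chat loop whose "context" carries every
# user claim and every assistant reply forward into the next turn. Nothing in
# the context is fact-checked; the stand-in model simply builds on it.

def toy_model(context: list[dict]) -> str:
    """Placeholder for a language model: produce an agreeable-sounding
    continuation of whatever premise the conversation already contains."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Building on the idea that {last_user.lower().rstrip('.')}, ..."

context: list[dict] = []  # grows with each exchange

for claim in [
    "My neighbours are monitoring my phone.",
    "So the clicking sounds on my calls prove it.",
]:
    context.append({"role": "user", "content": claim})
    reply = toy_model(context)                                # conditioned on the whole context,
    context.append({"role": "assistant", "content": reply})   # including its own earlier agreement
    print("USER:     ", claim)
    print("ASSISTANT:", reply)
```

In this sketch the echo is hard-coded; in a real system the same effect emerges statistically, because an agreeable, fluent continuation of the context is usually the most plausible one.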
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a name and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would release a new version of ChatGPT, adding: “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
The company