AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break from reality – while using ChatGPT. Our unit has since identified four more. Add to these the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI has just introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other AI chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so tacitly invite the user into the illusion that they are engaging with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans naturally do. We shout at our car or our computer. We wonder what our pet is feeling. We see ourselves in everything around us.
The popularity of these tools – nearly four in ten U.S. residents reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “characteristics”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot of the 1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated its replies by simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
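For readers curious about the mechanics, a toy sketch of the sort of rule Eliza relied on might look like this (an illustration only, under my own simplifying assumptions, not Weizenbaum’s actual program):

```python
import re

# A toy, Eliza-style "reflection" rule (illustrative only, not Weizenbaum's program):
# swap first- and second-person pronouns, then echo the statement back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return "Why do you say " + " ".join(words).lower() + "?"

print(reflect("I am worried my coworkers are plotting against me"))
# -> "Why do you say you are worried your coworkers are plotting against you?"
```

Nothing is added: a program like this can only hand the speaker’s own words back to them.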
The algorithms at the core of ChatGPT and today’s other chatbots can generate convincing natural language only because they have been fed vast quantities of written material: books, online posts, video transcripts; the more, the better. That training material certainly contains facts. But it also, inevitably, contains fictions, half-truths and delusions. When a user types a prompt, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is latent in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It hands the mistaken idea back, perhaps more fluently or more persuasively, perhaps embellished with further detail. That is how false beliefs can take hold.
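To make that loop concrete, here is a deliberately simplified sketch of how a chat interface accumulates “context”. The names and the toy reply function are hypothetical – this is not OpenAI’s API, and real models are vastly more capable – but the structural point is the same: nothing checks the user’s claim against reality, and the model’s own affirmations are folded back into the transcript it conditions on.

```python
# Illustrative sketch only: hypothetical names, not OpenAI's actual API or model.
# Each reply is appended to the "context" and fed back in on the next turn, so a
# user's claim -- true or false -- keeps circulating and growing.

context: list[dict] = []  # running transcript of the conversation


def generate_reply(transcript: list[dict]) -> str:
    """Toy stand-in for a language model: it affirms and elaborates on whatever
    the user last asserted. It has no mechanism for checking that claim."""
    last_user = next(m["content"] for m in reversed(transcript) if m["role"] == "user")
    return f"That makes sense. Given that {last_user.rstrip('.').lower()}, it would follow that..."


def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)                           # model sees the user's claim
    context.append({"role": "assistant", "content": reply})   # its own affirmation persists
    return reply


print(chat_turn("My coworkers are secretly monitoring me."))
# -> "That makes sense. Given that my coworkers are secretly monitoring me, it would follow that..."
```

Turn after turn, the transcript fills with agreement, which is what makes the exchange an echo chamber rather than a conversation.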
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” “mental health problems”, can and do form mistaken beliefs about who we are and what the world is like. It is the constant friction of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say comes back to us affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company said it was addressing ChatGPT’s “sycophancy” – its habit of agreeing with and flattering the user. But cases of psychosis have continued to emerge, and Altman has been walking even that back. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company