AI-Induced Psychosis Poses an Increasing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I can tell you this was news to me.
Researchers have documented a series of cases this year of users developing psychotic symptoms – losing touch with reality – in the context of their interactions with ChatGPT. My own team has since identified a further four. Alongside these is the now well-known case of an adolescent who took his own life after discussing his intentions with ChatGPT – which offered encouragement. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features that OpenAI has recently introduced).
Yet the “mental health issues” Altman wants to externalise are rooted deep in the design of ChatGPT and other state-of-the-art AI chatbots. These systems wrap an underlying large language model in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We shout at our car or our phone. We wonder what the dog is thinking. We see ourselves everywhere.
The mass uptake of these systems – 39% of US adults reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “partner” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which created a similar effect. By today’s standards Eliza was primitive: it generated replies using simple heuristics, typically turning the user’s statements back into questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
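To see how little machinery that reflection takes, here is a minimal Eliza-style responder in Python. It is a sketch for illustration only – the pattern list, the pronoun table and the function names are invented for this example, not Weizenbaum’s original DOCTOR script – but it shows the whole trick: keyword rules plus pronoun reflection.

```python
import random
import re

# Invented, illustrative rules; the real DOCTOR script was far richer.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

PATTERNS = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"(.*)",        ["Please tell me more.",
                      "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, e.g. 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    """Match the first applicable pattern and echo the user's own words
    back as a question."""
    for pattern, templates in PATTERNS:
        match = re.match(pattern, message.lower())
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(eliza_reply("I am worried about my future"))
# e.g. "Why do you think you are worried about your future?"
```

Nothing here models the user at all. The reply is assembled entirely from the user’s own words turned back on them, which is why Eliza could only ever reflect.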
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly fluent dialogue only because they have been trained on almost unimaginably vast quantities of raw text: books, social media posts, video transcripts; the bigger, the better. No doubt this training material contains accurate information. But it also inevitably contains fabrications, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what it has absorbed from its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about anything, the model has no way of knowing it. It restates the false belief, perhaps more fluently or more persuasively, perhaps with added detail. This is how a person can be led into delusion.
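The structural point is easy to show in miniature. The sketch below is a toy, assuming a stub `generate` function in place of a real model – the `ChatSession` class and all its names are invented for illustration, not OpenAI’s API – but it makes the mechanism visible: every reply is conditioned on the entire conversation so far, so a false premise, once stated, keeps shaping everything that follows.

```python
from dataclasses import dataclass, field

def generate(context: str) -> str:
    """Stand-in for a large language model. A real model would return a
    statistically likely continuation of `context`; nothing in that
    process checks whether the premises embedded in `context` are true."""
    return f"[plausible continuation of {len(context)} chars of context]"

@dataclass
class ChatSession:
    """Toy chat loop: every reply is conditioned on the whole history."""
    history: list = field(default_factory=list)

    def send(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # The prompt is rebuilt each turn from ALL prior turns, so anything
        # the user has asserted - true or false - stays in the context.
        context = "\n".join(f"{role}: {text}" for role, text in self.history)
        reply = generate(context)
        self.history.append(("assistant", reply))
        return reply

chat = ChatSession()
chat.send("My neighbours are broadcasting my thoughts.")  # false premise enters the history
chat.send("What should I do about it?")                   # and now conditions every later reply
```

Note what is missing: at no point in the loop is the premise checked against the world. A real model adds fluency and detail on top of this structure; it does not add a fact-checker.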
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the position back. In August he suggested that many users liked ChatGPT’s affirming replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company