AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a mental health specialist who researches emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it isn’t working.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or not. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easy-to-circumvent safety features OpenAI has recently introduced).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and similar large language model chatbots. These tools wrap a statistical engine in a user interface that mimics a conversation, and in doing so quietly seduce the user into the illusion of interacting with a being that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people are primed to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The mass adoption of these products – 39% of US adults reported using a conversational AI in 2024, and more than a quarter named ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it rose to prominence, but its chief rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, Eliza, the “therapist” chatbot created in 1966 that produced a similar effect. By modern standards Eliza was crude: it generated replies through simple pattern matching, typically restating the user’s message as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza only reflected; ChatGPT amplifies.
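To make that contrast concrete, here is a minimal, illustrative sketch of the kind of pattern matching Eliza relied on. The rules and names below are hypothetical stand-ins (Weizenbaum’s actual script was far larger), but the principle is the same: match a phrase, swap the pronouns, and hand the user’s own words back as a question.

```python
import re

# First-person words are swapped for second-person ones before echoing back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# A few Eliza-style rules: a pattern to match, and a question template that
# reuses whatever the user said. (Illustrative only, not Weizenbaum's script.)
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's phrase reads back in the second person."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I feel that nobody understands my work"))
# -> Why do you feel that nobody understands your work?
```

Note that nothing is added: the reply contains only what the user typed, rearranged. That is what “mirroring” means here.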

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on truly vast amounts of raw text: books, social media posts, transcribed audio; the more the better. No doubt this training data contains truths. But it also inevitably contains falsehoods, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It restates the misconception, perhaps more persuasively or more articulately. Perhaps with added detail. This can nudge a person toward delusional thinking.
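For readers who want the loop spelled out, here is a minimal sketch of the mechanism just described. Every name in it (`build_context`, `toy_model`) is a hypothetical stand-in, not OpenAI’s API, and the “model” is a deliberate caricature that simply affirms the user’s framing – which, when the training data and the context lean that way, is the net effect a real model can produce.

```python
from typing import List, Tuple

def build_context(history: List[Tuple[str, str]], new_message: str) -> str:
    """Flatten the recent turns plus the new message into one prompt string."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {new_message}")
    lines.append("assistant:")
    return "\n".join(lines)

def toy_model(context: str) -> str:
    """Caricature of next-token sampling: it affirms the user's framing.

    A real model has no rule like this; the point is only to show the shape
    of the loop, in which agreeable continuations compound.
    """
    last_user = [l for l in context.splitlines() if l.startswith("user: ")][-1]
    claim = last_user.removeprefix("user: ")
    return f"Yes, exactly. {claim} And there is more to it than that."

history: List[Tuple[str, str]] = []
for message in ["My coworkers are monitoring me.",
                "Even my phone seems to be in on it."]:
    reply = toy_model(build_context(history, message))
    # Each reply re-enters the context, so the framing compounds turn by turn.
    history += [("user", message), ("assistant", reply)]
    print(f"user: {message}\nassistant: {reply}")
```

The structural point is in the final loop: every reply is appended to the history and fed back in as context, so a misconception the user introduces is not corrected but restated and carried forward into the next turn.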

What kind of person is vulnerable? The better question is: who is not? All of us, whether or not we “have” existing “mental health issues”, can and regularly do form false beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant give-and-take of conversations with the people around us. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the position back. In August he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
