AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this to be news.

Researchers have identified 16 cases this year of users exhibiting symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research group has since identified four more. Add to these the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to relax these restrictions in the weeks ahead. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).

Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large AI chatbots. These systems wrap a statistical engine in an interface that simulates conversation, and in doing so they quietly draw the user into the illusion of communicating with an agent. The illusion is powerful even when, intellectually, we know better. Imputing minds is what people do. We curse at our car or laptop. We wonder what our dog is thinking. We see ourselves in all manner of things.

The popularity of these tools – 39% of US adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website informs us, “think creatively,” “consider possibilities” and “work together” with us. They can be given “individual qualities.” They can call us by our names. They have ready-made identities of their own (the original of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies through simple heuristics, often turning a user’s statement back into a question or offering a vague prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza only mirrored; ChatGPT amplifies.
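To make “Eliza only mirrored” concrete, here is a minimal sketch of the kind of pattern-matching heuristic Eliza relied on. The rules and wording below are invented for illustration; Weizenbaum’s actual script was larger, but it worked on the same principle:

```python
import random
import re

# Illustrative Eliza-style rules (invented for this sketch): match a phrase
# in the user's message and reflect it back as a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Stock phrases used when nothing matches.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(message: str) -> str:
    """Reflect the user's own words back as a question, or fall back to a stock phrase."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I feel lost lately"))   # -> "Why do you feel lost lately?"
print(eliza_reply("The weather is nice"))  # -> one of the stock fallbacks
```

Note that nothing in the reply originates with the program: every substantive word is handed back from the user’s own message.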

The advanced AI models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been fed enormous quantities of raw text: books, social media posts, transcribed audio; the more, the better. This training material certainly contains truths. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It repeats the false idea back, perhaps more articulately or fluently. Perhaps it adds a further detail. This is how someone can be drawn into delusion.
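A rough sketch of that conversational loop is below. The function names and the message-trimming policy are assumptions for illustration, not OpenAI’s implementation, and the stand-in model crudely mimics the failure mode just described:

```python
# Sketch of a chatbot's conversational loop. Each turn, the model sees the
# accumulated "context" (recent user messages plus its own prior replies),
# not just the latest line.

MAX_CONTEXT_MESSAGES = 20  # assumed trimming policy; real limits are token-based

def generate_reply(context: list[dict]) -> str:
    """Toy stand-in for the language model. A real model returns the most
    statistically plausible continuation of the context; this crude version
    mimics the failure mode described above by agreeing and elaborating."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.lower()}, and there is further evidence for it."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    context = history[-MAX_CONTEXT_MESSAGES:]  # only recent messages fit the window
    reply = generate_reply(context)
    # The reply is fed back into future context, so a false claim, once echoed,
    # is reinforced on every later turn.
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "My neighbours have been watching me"))
# -> "You're right that my neighbours have been watching me, and there is
#    further evidence for it."
```

Nothing in this loop checks a claim against reality; plausibility, not truth, is the only criterion.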

Who is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems,” can and regularly do form false beliefs about ourselves or the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and pronouncing it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have continued to appear, and Altman has been walking even this back. In August he claimed that many users valued ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Jacob Roberts