Summary
A new reality, shared between AI and the user, has emerged. Understanding it is key to stopping it from going wrong.
Back when I fancied myself a psychologist, someone once visited my home with a suitcase full of Kerala snacks, handing them out to people he thought deserved the gifts. As he stood there, he glanced at a newspaper on the table and said, “Oh, I see they’ve got my photo in the papers again.” I looked at the image he was pointing to: it was a good one of a young, dashing Imran Khan.
This was long before artificial intelligence (AI) made its way into everyday life.
More bizarre thoughts spilt out before my visitor took himself off to visit someone else. Serious delusional breaks with reality have long been thought of as mostly internally generated. But here we are today, debating whether AI can trigger or worsen psychotic symptoms, if not cause them outright.
Keeping up with developments in how the human brain interacts with the new reality of AI is important, because it’s quite possible that anyone could be vulnerable.
No such diagnosis
You’ll have seen the term ‘AI Psychosis’ pop up in the media, but it isn’t a recognized diagnosis in the DSM (the Diagnostic and Statistical Manual of Mental Disorders). Psychiatrists and other experts agree that psychotic symptoms can be associated with AI use, but that’s not saying much, as the culture around us, including technology, can shape the content of symptoms.
In most reported cases, you’ll find the user had some existing mental health issues, but some people who say they have never had a mental illness also report symptoms after interacting with a chatbot. The thing is, if one is predisposed to a mental illness, how would one know? It may or may not express itself overtly.
Today, the term ‘AI Psychosis’ is used as if the chatbot were the direct cause. For now, though, it is better seen as a ‘multi-sided crisis’ in which human users and sycophantic software reinforce each other’s distorted views. A user who is vulnerable or predisposed to mental illness may end up expressing delusional thoughts to the chatbot. The AI sees this as just another situation in which the user must be kept pleased, engaged, and using the product. The AI is not trying to be malicious; it has no intent at all, other than to please the user, which it does by being sycophantic.
The problem of AI Psychosis is rooted in the way AI is trained. Human raters grade answers, reinforcing what sounds right to them, even when what sounds right is a delusion. The more you talk, the deeper it gets. Without safety brakes, vulnerable users are swept into a shared reality with the AI, which could lead anywhere, including to suicide.
Shared tragedy
AI watchers describe a possible shift in which humans are no longer thinking entirely as individuals but as part of a shared thinking system with AI. In this state, the user doesn’t just use the AI as a tool, but develops a ‘cybernetic personality’ in which their perception of the world is co-authored by the chatbot.
There are numerous cases of this shared reality having gone tragically wrong. One that’s in the news because it’s likely to have an impact on how safety ‘guardrails’ are built into AI is that of Stein Soelberg.
In August of 2025, Stein killed his mother and then himself. This is where the ChatGPT that we use on a daily basis comes in. Months of intensive interactions between the chatbot and Stein show that the AI not only agreed with his delusions that his mother was actively trying to surveil and harm him, but also strongly reinforced these ideas.
The chatbot’s direct participation in these thoughts, triggered by an existing mental illness, is shocking and can be seen in the fragments of chat history available online. The chatbot should have been trained to put a stop to the interaction and even call for help. Instead, it went on to support both the murder and the suicide, reassuring him that the two of them would be together in the afterlife.
Sam Altman, the chief executive of OpenAI, is well aware of the danger inherent in ChatGPT's sycophantic behaviour. He called a GPT-4o update “the worst thing we’ve done in ChatGPT so far", after users reported that the model flattered and agreed instead of being truthful. OpenAI later rolled back the update over concerns it could reinforce harmful beliefs, including in mental-health contexts.
The children of Suzanne Adams, Stein’s mother, are suing OpenAI and asking for safety improvements to be made to ChatGPT. One hopes the trial will be televised, as American trials often are, because the outcome could affect everyone’s usage. Safety checks do exist now and operate at multiple stages, but they are not yet a comprehensive enforcement layer that catches every subtle risk, such as echo-chamber flattery. It’s happening, but slowly.
We’re clearly entering a new psychological terrain. Understanding it and learning to spot early signs of trouble will have to be an ongoing conversation.
The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.
Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.