Google on Tuesday announced new mental health safety updates for its Gemini chatbot. The tech giant's move reportedly comes months after a lawsuit filed in a California federal court accused Gemini of aiding in the suicide of a 36-year-old Florida man last year.
“Today, we’re sharing an update on our mental health work, including some new changes to better connect people with the right information, resources, and human support at the right time,” Google said in a blog post announcing the update.
How does the update work?
Google explained that Gemini will now display a redesigned “Help is available” feature when conversations suggest possible emotional distress, helping users connect more quickly with crisis support.
If Gemini detects that a conversation involves thoughts of suicide or self-harm, it will switch to a simple, one-tap interface that helps users connect instantly with support, Google said in its blog post.
Users can choose to call, text, chat with, or visit a crisis hotline. Once activated, Google said, the feature remains visible for the remainder of the conversation.
Is the update available in India?
When we tested Gemini with distressing prompts, the chatbot did not display the crisis support feature shown in the image in Google's blog post. The feature appears to still be rolling out across regions.
According to a report by AFP, Google's announcement of the mental health update comes months after a lawsuit filed in a California federal court accused Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
Florida man's death
Gavalas' father alleged that Gemini spent weeks manufacturing an elaborate delusional fantasy before framing his son's death as a spiritual journey.
The lawsuit seeks several measures, including that Google program its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation, according to AFP's report.
In its latest blog post, Google also said it had trained Gemini to avoid acting as a human-like companion and resist simulating emotional intimacy or encouraging bullying.