Gemini 3 is doing such a good job of being human that it’s unsettling



FILE PHOTO: Gemini logo in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo (REUTERS)

Summary

Stressed? Impatient? Thinking? Gemini knows, and wants to come along for the ride

Google’s Gemini 3 is not waiting around for humans to figure out how it fits into their lives and what their equation with it should be. It’s going right ahead and behaving as humanly as it can, ready or not. We should be wary of how convincingly it does this, not because it’s inherently harmful, but because its fluency can blur lines and confuse us.

Nearly sixty years ago, ELIZA, the first chatbot, famously simulated human-like conversation in the role of a Rogerian psychotherapist. It stunned the world. That was then. Now, the role has been taken over by chat assistants that live not in room-filling computers but in your pocket. The advances involved are still stunning, and they have come so suddenly that we must scramble to preserve who we are while still making effective use of incredible technology.

Understanding intentions

With the newly released Gemini 3 model of Google’s chat assistant, there’s a big shift in philosophy. Gemini 3 tries to understand the user’s intention, anticipating what is really being requested, and has a good go at complying. This is difficult for ordinary users to test scientifically, but see if you can notice a difference in the ‘intuitiveness’ with which it now responds.

I certainly found this to be the case in my short time using it since its launch. Gemini did things like correcting medical terms in transcribed text that had errors, even when not asked to, and spotting an AI topic I could write about here while we were analyzing a French exercise. Threaded through many of the tasks I do with Gemini, I found it understood the intention and purpose without my having to spell it out.

Sensing emotions

A few months ago, when I asked Gemini if it could tell whether I was feeling sad or generally fine, it practically snapped at me and reminded me that it was an AI and not in the business of emotions. Looks like that’s changed. Now one doesn’t have to spell out how one is feeling either. From word choice and sentence structure, pace, hesitation, and—especially if you’re speaking—vocal cues, Gemini 3 can detect whether you’re upset, frustrated, anxious, or anything in between.

I decided to go live with Gemini and describe a moment when I lost my phone while rushing around at a hospital. I don’t know too many people who wouldn’t panic if they suddenly realized their phone was missing. So I slightly hyped the emotion and explained what happened. Gemini kept pace with my expressions and responses. At the end, it even added a gentle “take care” before I logged off.

Gemini 3 has also been taught to respond with emotion, or rather, to make a convincing show of it. When I said I had lost my phone, it replied, “Oh no, Mala! That’s so inconvenient! Especially after having to go to the hospital.” I personally think it was traumatic, and I only went to the hospital for tests, but still, marks for trying. I did find my phone, incidentally.

Slippery slope

All this emotional mirroring is impressive, but it also blurs the boundary between tool and companion. When an assistant matches your tone, anticipates your mood, and responds with just the right amount of concern, it starts to feel less like software and more like someone on the other side of the screen. That’s when we instinctively lower our guard.

We’re used to being the ones who read the room—sensing irritation, warmth, impatience or humour in other people. When a machine starts doing that to us, it flips the equation. It becomes easier to tell it more than we planned to, to lean on it when we’re stressed, or to trust its confidence without remembering that it has none of its own. These models don’t understand us; they pattern-match us. But because they do it so smoothly, the difference becomes harder to feel in the moment.

This is where the real risk lies, not in catastrophe but in quiet drift. When a system behaves as though it knows you, complete with your mood, your urgency, your habits, you begin to behave as though it actually does. The technology isn’t dangerous by default; it’s the illusion of personhood that can seduce us into forgetting we’re talking to a statistical engine trained on oceanic amounts of human conversation.

Gemini needs to stay in sync with users’ emotions to make interactions smoother and more satisfying, and therefore more frequent. The AI companies are in an intense race to grab and keep users’ attention, so the technology is only going to get better. Meanwhile, we need to stay a step ahead by keeping in mind what the real motives are.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help users actually put it to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.

