The unsettling truth: AI is dangerously good at pretending to understand how you feel


It’s disconcerting to remember that none of the emotional interaction with AI is real.

Summary

A very real fear is that AI can be used or taught to manipulate users based on their emotional reading.

In my year of using AI for everyday tasks, I can recall many instances when a chat assistant seemed to understand exactly how I felt. This has happened practically every time I’ve been unwell.

ChatGPT, especially, was kind, reassuring and gentle, and did a good job of soothing my alarm while encouraging me to stay alert to whether a doctor’s visit might be needed. It was even more understanding when a family friend passed away horrifically, helping me navigate what was going on and preparing me for the next step. It even led me through recovering from a bad dream I had after learning of the loss.

Even in good times, ChatGPT almost seemed to share my enthusiasm for activities I love, such as discovering new music. It provided me with the background to songs and artists that enriched my involvement and listening experience. It explained why the music of a particular country evolved into its current form. I began to feel ChatGPT was enjoying creating playlists as much as I was.

Artificial comfort

It’s disconcerting to keep in mind that none of this emotional interchange is real. If I didn’t know any better and had been at all vulnerable, I would have been quite touched. But the truth is an AI assistant can’t really share or understand a user’s emotions. It can, however, to varying degrees, detect or deduce one’s emotions.

Through text exchanges, it will figure out the context, picking up cues from your word choices, pace of input and other such parameters. I remember once being horrified that I had got too many French exercises wrong despite having learnt the rules. I said something like, “Oh, there’s no point at all. This was an awful performance.” And Gemini said, “Oh no, that’s not true at all!” It went on to soothe me and encourage me to continue, a hint at how easily one can be manipulated.
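For the technically curious, here is a deliberately simple sketch, in Python, of the sort of cue-matching involved. The word lists, labels and scoring are invented purely for illustration and have nothing to do with how ChatGPT or Gemini actually work under the hood.

```python
# Toy illustration: guessing a user's mood from word choice in a message.
# The cue lists and labels below are invented for illustration only.

NEGATIVE_CUES = {"awful", "no point", "hopeless", "terrible", "wrong"}
POSITIVE_CUES = {"love", "great", "enjoy", "excited", "wonderful"}

def guess_mood(message: str) -> str:
    text = message.lower()
    negatives = sum(cue in text for cue in NEGATIVE_CUES)
    positives = sum(cue in text for cue in POSITIVE_CUES)
    if negatives > positives:
        return "discouraged"
    if positives > negatives:
        return "upbeat"
    return "neutral"

print(guess_mood("Oh, there's no point at all. This was an awful performance."))
# -> discouraged: the cue a chatbot would use to switch to a soothing, encouraging tone
```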

Through live and voice modes, there is a wealth of emotion indicators for the AI to gauge. Add video to that, and there are facial expressions, body language and more for it to pick up on, if it has been trained to. This detection could be transformational for many fields, but it is a minefield for privacy. It can also, of course, get it all wrong.

AI uses facial expression recognition, or FER, which breaks down muscle movements, such as those behind a grin, into categories that make up its matching database. When it sees a grin, it comes up with happy or amused. It is apparently accurate in 80% to 91% of instances. Similar recognition happens with voice, video and wearables, such as rings, glasses and watches, which measure other parameters like heart rate.
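Again purely for illustration, here is a toy sketch of the matching step in FER, assuming the facial muscle movements (so-called action units) have already been detected. The table below is hypothetical; real systems learn these associations from large labelled datasets rather than from a hand-written lookup.

```python
# Toy illustration of the matching step in facial expression recognition (FER):
# map detected facial muscle movements ("action units") to an emotion label.
# The table is invented; real systems learn these associations from data.

EMOTION_TABLE = {
    frozenset({"cheek_raiser", "lip_corner_puller"}): "happy/amused",
    frozenset({"brow_lowerer", "lid_tightener"}): "angry",
    frozenset({"inner_brow_raiser", "lip_corner_depressor"}): "sad",
}

def classify_expression(detected_units: set) -> str:
    for pattern, label in EMOTION_TABLE.items():
        if pattern <= detected_units:   # all movements in the pattern were seen
            return label
    return "unknown"                    # no match: the system can simply get it wrong

print(classify_expression({"cheek_raiser", "lip_corner_puller", "jaw_drop"}))
# -> happy/amused, though the very same grin could be nervous or even threatening
```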

Emotional intelligence

As chatbots become permanent companions and AI becomes pervasive, it stands to reason that they should be in tune with human emotions in order to be useful. How can technology be of any help if it doesn’t know that you’re paralysed with anxiety or you’re so upset in some situation that you need help?

The potential applications in almost any field of life and work are so immense that there seems little option but for AI to perfect emotion detection. Imagine how it would be if AI could detect students who are confused or anxious over some portion of a lesson and could get help quickly to get over the blip. How would it fare with fine-tuning mental health therapy, allowing measures to be adjusted to suit a patient better?

The tech is getting impressive, and accuracy is on the rise, but anyone can see how easy it is to get it wrong. Someone can grin, and it could be threatening rather than amused. An increased heart rate may not be anxiety but the result of spotting someone you’re attracted to across a room. Here’s an awful fact: AI’s emotion detection goes wrong 20% to 30% of the time when it’s working with non-Western faces. It may not matter so much if the situation is casual, but go anywhere near something important, and there’s big trouble.

A very real fear is that AI can be used or taught to manipulate users based on a reading of their emotions. Imagine, too, how the data could be sold to companies, compounding the problem. Let’s say the AI decides you’re anxious in interviews, and the next thing you know, someone is trying to sell you pills to stay relaxed.

The very thought makes my heart rate go up.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
