Being rude to AI? Your anger at chatbots comes with a hidden cost for everyone


Copyright © HT Digital Streams Limited
All Rights Reserved.

ChatGPT, Gemini, and other AI assistants can drive you round the bend when they give you nonsensical responses or go into one of their error loops. (Pexels)

Summary

Here’s why you should think twice before you chat aggressively with an AI assistant.

What’s the harm in letting off steam and being rude to a chatbot? It’s not human; it won’t sulk in a corner or yell back.

I’m asking for a friend.

ChatGPT, Gemini, and other AI assistants can drive you round the bend when they give you nonsensical responses or go into one of their error loops. This happens often when creating an image or video. Give Gemini an instruction to create a video featuring a smartphone, and you’ll often find the screen facing downward, with the person in the video staring fascinatedly at the back of the phone.

Other chat assistants can also cheerfully and confidently tell you something that’s quite wrong, such as referring to Donald Trump as “former president”, as if they have slipped back in time. Eventually, you lose your temper. That doesn’t bother the chatbot—but it does hurt you. It’s not about the machine, but about the man.

My short-fused friend often snaps at the chat assistant, telling it this is its last chance to fix the problem and to stop being such an idiot. Amazingly, it sometimes works: the chatbot senses the urgency and frustration and tries another method, changes priorities, or gives up and apologises profusely.

Interestingly, some studies have reported an improvement of around 4% in response accuracy when you emphatically scold the AI, but only with certain subject matter, such as mathematics, and only with certain AI models. The media has sometimes made too much of this oddity, running headlines that suggest it’s outright useful to be rude to chat assistants. Here’s why the backlash falls on you, the user.

Humans see humans everywhere

People have a tendency to anthropomorphize, or see human attributes in non-human beings and entities. Put two round objects next to one another with a vertical one just below, and you’ll start to see a face. We go as far as to attribute human emotions to those entities. We do this most actively with pets and other animals, saying a dog looks guilty after doing something naughty or a cat is eyeing its owner judgmentally. Of course, animals do have emotions of their own, but we also attribute emotions to inanimate objects, such as a car or even a house.

And so it is with an AI chatbot. Our cognitive architecture, honed by millennia of interpersonal communication, struggles to treat conversational AI as a purely neutral tool, like a spreadsheet or a calculator. Scientists have documented that when people interact with computers, they instinctively apply social rules, norms, and expectations drawn from human-to-human relationships.

Because modern large language models (LLMs) are explicitly designed to generate complex, fluid language that imitates natural human communication, the human tendency to anthropomorphize is deeply activated when interacting with chatbots, fundamentally shaping how people see AI. The core reason our interactions with AI matter is psychological, not technical. Our brains are simply not wired to easily distinguish a conversational AI from a social entity, and that’s where the trouble begins.

Rehearsing bad behaviour

When we yell at a chatbot, we think we’re getting through and making it feel bad for having goofed up. The problem is that all the rudeness and aggression become rehearsal for use in ‘real life’ with humans. And the fact that the chatbot typically doesn’t snap back is a matter of intense, ethically driven engineering. Developers build sophisticated safety nets, governed by frameworks that mandate that the model must always choose the most harmless and ethical response possible.

The AI is given a number of ‘refusal strategies’, such as deflection, to avoid escalating the aggression. This creates a consequence-free environment where we can practice and reinforce aggression, receiving positive reinforcement (a correct answer) without any of the negative social feedback that would put a stop to such behaviour in the real world.

Rudeness jumps into real life

Without social friction, our conversational muscles for kindness atrophy. This is made worse by emotional desensitization. Because the AI provides no genuine emotional feedback, our own ability to feel what others feel is dulled. Over time, this can lead to a tendency to interpret ambiguous social cues from real people more negatively.

When rudeness is met with a successful output, the “rude-boost”, as it’s called, creates a powerful cycle of positive reinforcement for negative behaviour. The brain learns that aggression is an effective strategy. An individual who trains themselves to use denigrating language with their AI is making it easier to deploy those same patterns in high-stakes relationships with family, friends, and colleagues.

This internal, psychological rewiring is not a private matter; its effects ripple outward, shaping the very fabric of our digital society. The private habit of being rude to an AI has public consequences. When aggregated across millions of users, these individual actions contribute to a systemic erosion of digital and social norms.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.

