
Summary
AI has brought emotion coaching to the workplace. While this may have its uses, it risks robbing us of authenticity.
You've probably been through that embarrassing scenario in which you give a child a gift and the brat grabs it from your hands and makes off with it in delight. Charming. But then a watchful parent quickly steps in and commands the kid to say “thank you” to the nice aunty.
It's a cringe-inducing moment from many points of view, including the kid's, I suppose. And yet, it’s exactly the basis for a 2026 project just deployed at 500 Burger King outlets in the US. You’ve probably heard about Patty. She’s the much-written-about AI assistant who lives in headsets worn by Burger King employees as part of a pilot project.
Miss Patty is powered by OpenAI and is responsible for giving instructions on meal prepping. Fine. She also analyses friendliness and the use of “thank you”, “please”, or “welcome”. Not so fine. This opens a Pandora’s box of psychological consequences and should alert us to where to draw the line on surveilling human emotions. Just because we can, should we?
On the watch
In a sense, employee surveillance long predates Patty. When the task was handed over to technology, we had CCTV cameras keeping an eye on workers, and then software that could track their work. With AI entering the picture, behaviour surveillance can be taken to another level. Patty may live in headsets, but camera-fitted smart glasses can ensure that nothing an employee does goes unnoticed.
Also, until now, surveillance was passive. With AI, the organization can interact with the employee at any and every point, in real time. Companies can use such tools to micro-manage, observe, evaluate, and score in a nightmarish parallel of China’s social credit system. The feeling of being constantly watched can be deeply anxiety-inducing and demoralizing.
The synthetic smile
Burger King is apparently using the AI-in-headset only to make its employees friendlier and warmer towards customers. It looks for keywords that supposedly indicate the desired behaviour. But the trend is being explored in other jobs and industries as well.
In healthcare, clinicians’ interactions with patients are being coached by an AI solution from a company called Laguna, with some deployment underway in South Africa and the US. Language, tone, and expressions of empathy are observed, and feedback is given in real time. In some research hospitals, a system called CommSense runs on smartwatches to listen to interactions with patients, after which insights and analyses are generated.
Pretty soon, the worker isn’t focusing on being warm and friendly at all, but on taking great care to perform for the technology. The real boss becomes the AI prompting them in the ear, and the smile turns synthetic. Warmth often comes from a shared joke, a subtle smile, a small hand squeeze. Coaching will restrict the interaction to whatever the AI is looking for. Different cultures have their own ways of expressing friendliness, and the idea that it can be standardized is truly chilling and bizarre.
A setup in which humour is replaced with a script, care with a checklist, and advice with a sales pitch is bound to rob the interaction of humanity. In fact, one might argue that you may as well replace everyone with robots, which only have to be told once how to behave and what to say. Turning what is most human about humans into data points is bound to carry high psychological costs, including outright mental illness.
The world’s response to this emotional intrusion is rapidly splitting into two distinct philosophies. In the European Union, the AI Act (to be enforced by August 2026) bans AI systems that infer emotions in the workplace, categorizing sentiment surveillance and emotion inference as an unacceptable risk to human dignity.
Meanwhile, India is at a transition point. Our Digital Personal Data Protection (DPDP) Act emphasizes "purpose limitation", which should, in theory, prevent a burger chain from harvesting our facial micro-expressions or vocal tremors without explicit consent. However, the temptation to label these as productivity tools rather than biometric profiling remains a loophole.
As we chase our "IndiaAI" mission, we must decide if we want to be the world’s back-office laboratory for invasive experiments or a leader in "Sovereign AI" that respects the sanctity of the human spirit.
The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.
Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
