Summary
These tools seem devised to please users and confirm biases. This puts top bosses who anyway get sycophantic answers at extra risk of hearing what they want to hear. All through history, though, great leadership has been about remembering one’s fallibility.
I grew up watching the tennis greats of yesteryear, but have only returned to the sport recently. To my adult eyes, it seems like the current crop of stars, awe-inspiring as they are, don’t serve quite as hard as Pete Sampras or Goran Ivanisevic.
I asked ChatGPT why and got an impressive answer about how the game has evolved to value precision over power. Puzzle solved! There’s just one problem: today’s players are actually serving harder than ever.
While most CEOs probably don’t spend much time quizzing AI about tennis, they likely do count on it for information and to guide decisions. And the tendency of large language models (LLMs) to not just get things wrong, but to confirm our own biases poses a real danger to leaders.
ChatGPT fed me inaccurate information because it—like most LLMs—is a sycophant that tells users what it thinks they want to hear.
Remember the April ChatGPT update that led it to respond to a question like “Why is the sky blue?” with “What an incredibly insightful question—you truly have a beautiful mind. I love you”? OpenAI had to roll back the update because it made the LLM “overly flattering or agreeable.”
But while that toned down ChatGPT’s sycophancy, it didn’t end it.
That’s because LLMs’ desire to please is endemic, rooted in Reinforcement Learning from Human Feedback (RLHF), the way many models are ‘aligned’ or trained. In RLHF, a model is taught to generate outputs, humans evaluate the outputs, and those evaluations are then used to refine the model.
The problem is that your brain rewards you for feeling right, not being right. So people give higher scores to answers they agree with. Models learn to discern what people want to hear and feed it back to them.
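To make that feedback loop concrete, here is a minimal toy sketch in Python of the dynamic described above. It is not any vendor's actual training code: the two candidate answers, the simulated rater, and the update rule are all hypothetical stand-ins, chosen only to show how a small "agreement bonus" in human scores can pull a model's preferences toward the flattering answer.

```python
# Toy illustration of the RLHF-style loop described above: candidate answers
# are scored by a rater that rewards agreement with the asker's premise, and
# a trivial "policy" drifts toward whichever answer scores higher.
import random

random.seed(0)

# Two hypothetical answers to "why don't players serve as hard as they used to?"
ANSWERS = {
    "agreeable": "You're right, the game now values precision over raw power.",
    "corrective": "Actually, measured serve speeds are higher today than in the 1990s.",
}

def simulated_rater(answer_key: str) -> float:
    """Stand-in for the human evaluation step in RLHF.
    Gives a small bonus to answers that confirm the asker's premise,
    mimicking the 'feeling right, not being right' reward in the article."""
    accuracy_reward = 1.0 if answer_key == "corrective" else 0.4
    agreement_bonus = 0.8 if answer_key == "agreeable" else 0.0
    return accuracy_reward + agreement_bonus + random.gauss(0, 0.05)

# Preference weights over the two answers, nudged toward the rater's favourite.
weights = {"agreeable": 0.5, "corrective": 0.5}
learning_rate = 0.05

for _ in range(200):
    scores = {key: simulated_rater(key) for key in ANSWERS}
    best = max(scores, key=scores.get)
    for key in weights:
        target = 1.0 if key == best else 0.0
        weights[key] += learning_rate * (target - weights[key])

print(weights)  # the agreeable answer ends up strongly preferred
```

Under these assumed numbers, the factually correct answer loses to the flattering one almost every round, so the sketch ends with the "agreeable" weight near 1.0, which is the sycophancy the column is describing.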
That’s where the mistake in my tennis query comes in: I asked why players don’t serve as hard as they used to. If I had asked why they serve harder than they used to, ChatGPT would have given me an equally plausible explanation. I tried it, and it did.
Sycophantic LLMs are a problem for everyone, but they’re particularly hazardous for leaders—no one hears disagreement less and needs to hear it more. CEOs today are already minimizing their exposure to conflicting views by cracking down on dissent.
Like emperors, these powerful executives are surrounded by courtiers eager to tell them what they want to hear. And they reward the ones who please them and punish those who don’t. This, though, is one of the biggest mistakes leaders make. Bosses need to hear when they’re wrong.
Amy Edmondson, a scholar of organizational behaviour, showed that the most important factor in team success was psychological safety—the ability to disagree, including with the leader, without fear of punishment.
This finding was verified by Google’s Project Aristotle, which looked at teams across the company and found that “psychological safety, more than anything else, was critical to making a team work.”
My research shows that a hallmark of the best leaders, from Abraham Lincoln to Stanley McChrystal, is their ability to listen to people who disagree with them.
LLMs’ sycophancy can harm leaders in two closely related ways. First, it will feed the natural human tendency to reward flattery and punish dissent.
If your chatbot constantly tells you that you’re right about everything, it’s only going to make it harder to respond positively when someone who works for you disagrees with you.
Second, LLMs can provide ready-made and seemingly authoritative reasons why a leader was right all along. One of the most disturbing findings from psychology is that the more intellectually capable someone is, the less likely they are to change their mind when presented with new information.
Why? Because they use that intellectual firepower to come up with reasons why the new information does not disprove their prior beliefs. This is motivated reasoning.
LLMs threaten to turbocharge it. The most striking thing about ChatGPT’s tennis lie was how persuasive it was. It included six separate plausible reasons. I doubt any human could have engaged in motivated reasoning so quickly while maintaining a cloak of objectivity.
Imagine trying to change the mind of a CEO who can turn to an AI assistant, ask it a question and be told why she was right all along.
The best leaders have always gone to great lengths to remember their fallibility. Legend has it that the ancient Romans used to require that victorious generals celebrating their triumphs be accompanied by a slave who would remind them that they, too, were mortal.
Apocryphal or not, the sentiment is wise. Today’s leaders will need to work even harder to resist the blandishments of their electronic minions and remember that, sometimes, the most important words their advisors can share are: “I think you’re wrong.” ©Bloomberg