Copyright © HT Digital Streams Limited
All Rights Reserved.
Summary
As AI-generated content floods our screens, telling truth apart from fiction has become nearly impossible. India's new draft rules propose labelling AI content, but will this work? Here's a market-friendly way to help us separate fact from deception.
With deepfakes and other AI-generated content flooding the virtual world, it's all but impossible to make out what's authentic and what's not. The government is trying to rein in the phenomenon.
On Wednesday, it proposed draft rules that require AI content to be labelled by its creators and social media platforms; the latter would also need to scrutinize such content for takedowns.
The menace of fakes is real and intervention might help, but policing every byte of content that goes online may simply not be feasible.
A more pragmatic approach may be to enlist market incentives for the task. We could devise a system of provenance certification under which authentic content is tested and labelled as such.
Those who want to convey the truth have an incentive to stand out for it, while those aiming to deceive us would prefer to operate below the fake-spotting radar.
This would lead to a self-sorting market, with discerning consumers of content looking out for authenticity tags. Whatever is left uncertified would lose relative credibility. So long as these tags can’t be gamed, this may offer a better solution to our problem of whether to believe what we see online or not. The idea is worth a shot.