
Summary
While Anthropic, OpenAI, Google, xAI and Meta charge ahead with powerful AI models, India’s Economic Survey proposes an AI Safety Institute to track and tackle AI risks. But any such body must include all the major players in the race, China among them.
Dario Amodei, Sam Altman, Demis Hassabis, Mark Zuckerberg, Elon Musk—a recent Economist article describes them as the small, powerful group of five men who will determine the path along which artificial intelligence (AI) evolves.
Their fellow AI pioneer Geoffrey Hinton, whose foundational work on neural networks underpins the way today’s AI models learn, quit Google in 2023 to alert the world that while further development of AI could lead us to a utopian future, it could equally lead us to a dystopian one. Since then, a debate has raged over which way AI is taking us.
This is an open question. But the fact that a dystopian future is even a possible scenario requires the world to establish guardrails to protect itself against the ambitions of these horsemen of an AI apocalypse.
That points us to another question: whether competition among AI innovators will drive us towards an apocalypse, and if so, what can be done to pre-empt it. That question has to be asked in the context of global geopolitics, in particular the intense competition for hegemony between the US and China.
Seen through such a geopolitical lens, the AI race is best compared with the history of the race for arms—from guns, tanks and submarines to warplanes, missiles, drones and now cyber-weapons and the development of increasingly lethal nuclear bombs. AI models have already been weaponized as parts of virtually autonomous weapon systems.
The recent hostilities in Gaza, Iran and Lebanon have demonstrated the gruesome collateral cost in lives and livelihoods when frontline weapons like guns, missiles and drones are linked to back-end AI models via satellites.
AI models such as Claude, developed by Amodei’s firm Anthropic, were reportedly deployed by Israel and the US for surveillance: hacking into enemy communication networks and extracting information about the characteristics, behaviour patterns and locations of potential targets.
The information generated by these surveillance models is then integrated with other models in the AI stack to build classified databases. A third set of models draws on these databases to select targets, whether locations or individuals, leading finally to the execution of a strike.
Since targets are not directly identified but selected on the basis of pattern recognition and behavioural profiling, the probability of error is significant; and because the weapons are so lethal, even a few errors can result in thousands of deaths.
Mythos, recently released by Anthropic, and similar AI models launched by others have spread alarm, since they can reportedly hack into virtually any communication system.
Tomorrow, they could be deployed to disrupt power grids, water supply systems, rail networks or airline schedules in an enemy country, bringing it to its knees. It is possible that China and Russia may have already pilot-tested their own versions of such AI models.
In our search for global guardrails against an AI-driven apocalypse, perhaps the best analogy is the nuclear test ban treaty. The original treaty, banning all except underground nuclear tests, came into force in 1963, soon after the 1962 Cuban missile crisis demonstrated the dangers of nuclear confrontation.
Originally signed by the US, Soviet Union and UK, it eventually had more than 100 countries as signatories. The significant exceptions were France, China, India and Pakistan, which considered it an unequal treaty. It took another 33 years for the Comprehensive Nuclear-Test-Ban Treaty (CTBT), negotiated at the United Nations Conference on Disarmament, to be adopted in 1996; it cannot enter into force until ratified by all 44 listed states that possess nuclear reactors.
India, Pakistan and North Korea, which have not signed, were again notable exceptions. India and Pakistan have nonetheless observed moratoria on testing since 1998, although North Korea conducted a series of tests between 2006 and 2017.
Since the US dropped atomic bombs on Hiroshima and Nagasaki at the end of World War II in 1945, no country has used a nuclear weapon against another. When countries, even enemies, face the prospect of mutually assured destruction from the deployment of such lethal weapons, they come together to establish global guardrails to pre-empt such a disaster.
In its discussion of a roadmap for India’s AI future, the Economic Survey for 2025-26 refers to the findings of AI Lab Watch, an independent organization dedicated to AI safety, on how big technology firms obfuscate or misreport the internal safety evaluations of their models. It argues the case for international collaboration to establish an AI Safety Institute proposed by the IT ministry’s Governance Guidelines.
The AI Safety Institute would, among other functions, analyse emerging AI risks, identify regulatory gaps, coordinate among concerned parties on AI safety issues and undertake safety training and advocacy.
The Economic Survey recommends collaboration with the AI Security Institute in the UK and National Institute of Standards and Technology in the US for the establishment of a global AI Safety Institute.
Although this proposal is most welcome, it does not go far enough. China, one of the main players in the AI race underway, must have skin in this game too. Effective global guardrails against an AI apocalypse would require close cooperation among all the main players, especially China, the US, UK, EU, Russia, Japan, Korea and India, preferably under UN auspices.
These are the author’s personal views.
The author is chairman, Centre for Development Studies.
