Strategic autonomy: Why India should call off the LLM debate and develop its very own AI models

India’s ambitions may need to take a quantum leap if its goal is to join the big league of large language models. (istockphoto)

Summary

The argument that developing large language models (LLMs) locally would waste Indian resources has weakened. Put to strategic use, AI could have implications for cyber and national security. Clearly, India must develop its own frontier AI models.

In the field of artificial intelligence (AI), should India create its own large language models (LLMs) that can work on a trillion-plus parameters?

Scale-wise, such models would be in contention with LLMs created by US players OpenAI, Google, Anthropic and Meta, and their Chinese rivals Alibaba, DeepSeek and Moonshot, with ByteDance, Tencent and Zhipu AI not far behind. Or should India focus on creating AI tools and agents based on available models?

Eminent leaders of India’s IT-services success have argued in favour of the latter option. Scarce resources need not go into building frontier models from scratch, they say, when we could gain more by using what already exists to go further. This might seem to make economic sense; why reinvent the AI wheel?

The rising use of AI for strategic purposes, however, has shifted the calculus. Today, AI is not just a force multiplier on the battlefield; it can also be wielded for blackmail. In wartime, advanced AI models can identify targets for attack; in peacetime, cyber-security models can exploit gaps to wreak havoc on digital systems.

Globally, a loud alarm has been rung by the restricted release of Anthropic’s Mythos, a model that spots cyber vulnerabilities and is claimed to be so powerful that institutional systems could come apart if it fell into rogue hands.

To ensure that this high-end scanner is only used for fixing vulnerabilities, Anthropic has reportedly granted access to just over 40 trusted entities, most of which are based in the US. If the hype around Mythos is true, then India finds itself locked out of an important cyber-safety league.

That is just one example of tech deprivation. There are others too. In all, India’s strategic autonomy demands that homegrown LLMs be developed even if it means deploying significant catch-up funds.

Without the defence that cyber-security AI tools offer, for example, we would be beholden to companies based in countries whose interests may or may not align with ours. The urgency would be far less if AI were just another digital technology. Its versatility, however, combined with recent developments, has made it clear that those in possession of cutting-edge models may want to extract a price for sharing what they have.

Moreover, the Cold War logic of ‘mutually assured destruction,’ under which peace held because neither side could have survived an exchange of nukes, might be valid in the context of AI warfare as well. To deter an attack by, say, an AI-guided swarm of armed drones, we must have the equivalent capacity to strike back.

That said, can India actually develop frontier models? We surely have the tech talent for it. Funding should not be a constraint either. Sarvam AI, a Bengaluru-based startup, has built a 105-billion-parameter model using a special approach that minimizes computing needs.

But India’s ambitions may need to take a quantum leap, so to speak, if the country’s goal is to join the big league of LLMs.

Weak access to high-end AI chips, the sort that US-based Nvidia is famous for, need not hold us back. China’s DeepSeek has shown how chip denial can be overcome; since then, Huawei’s Ascend chips have proven up to the task, going by the pace at which Chinese AI models have emerged.

With government support, there is no reason why India should not be able to do what China has done. What has been missing is the will to join what is turning out to be a US-China race. But the dangers lurking out there should change that.
