Mint Explainer | AI at war: The guardrails debate—and India’s absence


Summary

As AI tools move from principle to practice in conflict zones, the gap between Big Tech’s voluntary safeguards and real-world use is widening, raising questions about their credibility and India’s absence from the debate.

While artificial intelligence (AI) firms such as OpenAI and Anthropic have created ‘pledges’ and ‘constitutions’ promising that their AI would “do no harm”, recent conflicts in West Asia have shown the limits of such voluntary guardrails.

Even as companies articulate principles, AI tools are finding their way into military and strategic use. The gap between promise and practice is raising a broader question: do these self-imposed rules carry weight, and why have Indian firms largely avoided them? Mint explains.

What are AI constitutions?

AI constitutions are internal, self-regulatory frameworks that outline how a company intends to build and deploy its technology. They go beyond standard terms of service, setting broad principles for safety, governance and societal impact.

In practice, these frameworks reflect a company’s stated philosophy on innovation, defining what it will and will not do. Dario Amodei’s Anthropic, for instance, has published an 84-page constitution detailing its approach to industry, governance and society.

Such constitutions are not mandated by law in most geographies, including India. They also differ from the general terms of service that most companies are required to publish.

Are these binding on companies?

No. AI constitutions and similar policies are voluntary and generally not legally enforceable. Even so, they can shape how companies are perceived by employees, governments and clients, and serve as internal guardrails for product development. Silicon Valley and the European Union have been the biggest proponents of such self-imposed guardrails.

There is also precedent for such principles influencing real-world decisions. In 2016, Apple refused to break encryption on an iPhone for US law enforcement in the San Bernardino shooting case. Chief executive Tim Cook stood by the company's stated privacy policies, underscoring how internal principles can guide strategic choices.

Have such pledges existed before AI?

Yes. The idea of companies signalling ethical intent predates AI.

Google, for instance, long anchored its code of conduct around the motto “Don’t be evil”, before later shifting to “Do the right thing”. While not a formal constitution, such statements served a similar purpose, articulating boundaries for corporate behaviour.

That said, more structured frameworks like AI constitutions remain largely a feature of Western technology firms. Comparable efforts have been limited in India.

Why don’t Indian firms have such constitutions?

Indian companies have historically relied on simpler guiding principles rather than detailed governance frameworks. For example, Bajaj Auto's "Hamara Bajaj" positioned the brand around mass-market accessibility.

However, full-fledged constitutions that explicitly define technological boundaries or ethical red lines are largely absent. Experts attribute this to differences in how innovation is approached. Indian firms have typically focused on frugality, functionality and margins, while Western counterparts have placed greater emphasis on formalizing ethical commitments.

If AI is used in war, what good are these pledges?

Recent developments have intensified scrutiny of voluntary AI guardrails. Policymakers have raised concerns after Anthropic's Claude was reportedly used, through defence contractor Palantir, for strategic targeting in Iran, including incidents that led to the deaths of women and children.

OpenAI, too, has said that its work with the US ‘Department of War’ will include safeguards, though such assurances remain verbal.

Consultants argue that AI use in conflict is likely to expand, making it imperative for companies to rethink their constitutions. The focus, they say, should be on ensuring that even in military contexts, AI systems are deployed for "good" rather than for unchecked destruction.

For now, Indian AI startups have stayed away from publishing formal constitutions, and have no publicly disclosed defence contracts.
