
Summary
Anthropic, OpenAI and others have developed AI tools that can spot hidden gaps in software so that they can be fixed. India’s reliance on open-source software makes some of its public systems vulnerable, in banking, for example. How bad is the risk, and what can be done?
Over the past decade, India’s digital economy has increasingly relied on open-source software to power its core digital infrastructure and governance systems. From banking to government platforms, the code behind payments and public services is shared, modular and visible.
While this approach delivers scale, lower costs and reduced vendor lock-in, new AI systems developed by Anthropic, OpenAI and others are exposing its soft underbelly. Old unpatched gaps are showing up in widely used open-source software.
Anthropic’s Mythos, for instance, reportedly spotted a 27-year-old flaw in OpenBSD, an open-source operating system, that could be exploited to crash machines remotely. It also showed how seemingly minor bugs could be chained for a system-level attack on the Linux kernel, the backbone of most of the world’s servers.
India is a votary of open-source software, the crowd-developed sort that no single business controls, so its vulnerability is clear. Digital infrastructure overseen by its central bank and payment rails run by the National Payments Corporation of India (NPCI) rely on shared layers.
Even where the top layer is proprietary, underlying systems are often shared across institutions. Thus, a gap in one place could expose others. Periodic audits, risk checks and compliance checklists will not suffice, since AI breaks that cadence.
Security-specific models like Mythos and OpenAI’s GPT-5.4-Cyber can keep scanning systems for weak spots. Today’s urgency is to fix flaws as soon as they are found. Most banks, however, are not only burdened with complex tech stacks and legacy code, but also tend to resist downtime for patch-ups.
Those with the privilege of access to AI scanners would find them expensive to run, given the energy they consume. Large operations must keep a sprawl of systems running without letting costs bloat. Hence, for NPCI-linked institutions, selective AI scans might be enough.
But for critical systems that support inter-bank transfers and for internet applications, the risk of hidden gaps may justify the cost of full screenings.
Access to new AI security tools, however, is not assured. In the US, cyber defence clubs have arisen. Anthropic and OpenAI have forged alliances to coordinate efforts among tech players like Amazon Web Services, Microsoft, Nvidia, CrowdStrike and the Linux Foundation, as well as US financial majors like Bank of America and JPMorgan Chase.
Anthropic has restricted Mythos to a select few users, citing its need to recoup investment and contain the risks of a model that may reshape cybersecurity. OpenAI has enlarged its Trusted Access for Cyber programme, but kept GPT-5.4-Cyber available only to carefully vetted teams.
We thus find a private tech denial regime in place, at least for now.
For countries like India, this is a wake-up call. The existence of such AI models has made swift patch-ups necessary, but keeping pace requires access to those very tools.
Can sovereign AI step in? Possibly. For now, India’s government could go for an indigenous validation layer anchored in CERT-In and insist that the use of security tools by foreign firms be kept auditable. A newly set up Technology and Policy Expert Committee is expected to study ‘emerging AI capabilities’ and suggest policy measures.
Cyber risks are likely to be on its agenda. But developers of sovereign AI needn’t wait for its report. India must not only create its own AI models for wide diffusion, but also an AI armoury for its digital stack.
