Central Banks To Tech Giants, All Fear This New AI Model: Can Anthropic's Mythos Hack Any System?


Last Updated: April 17, 2026, 07:00 IST

Claude Mythos has achieved what many experts feared was years away: the ability to identify and exploit software vulnerabilities at superhuman speed.


Anthropic, founded by former OpenAI researchers who prioritised AI safety, now finds itself at the centre of a firestorm. (Image: AFP/File)

Call it alarm bells, panic, or warnings – it won’t change the fact that there is a “superhuman AI threat” lurking in the shadows.

From central banks to tech giants and even governments around the world, all fear this new entity, which threatens to dissolve the traditional boundaries of cybersecurity as we know them.

Claude Mythos, the latest artificial intelligence (AI) model from Anthropic, has achieved what many experts feared was years away: the ability to identify and exploit software vulnerabilities at superhuman speed.

As the world comes to terms with this breakthrough, the global financial system finds itself most vulnerable as the threat looms larger than ever before.

WHY ARE BANKS IN A PANIC?

The emergence of Mythos has not merely caused a ripple in the tech world; it has sparked full-scale alarm among global financial leaders and government regulators.

According to reports, the heads of America’s largest banks recently convened for an emergency meeting with Federal Reserve chairman Jerome Powell and US Treasury Secretary Scott Bessent. The primary agenda of this high-level meeting, held on the sidelines of an event in Washington, was to weigh the security implications of a system that can autonomously scan vast amounts of code to find and “chain together” previously unknown security flaws.

The fear is palpable. Anthropic and its early partners have said Mythos can execute these tasks at a scale no human hacker could ever match.

“What once required elite specialists can now be performed by software agents,” Shlomo Kramer, CEO of Cato Networks, told AFP.

The implications are clear: Mythos could theoretically be used to bring down banks, hospitals, or critical national infrastructure within a matter of hours.

WHAT IS PROJECT GLASSWING?

To manage this terrifying reality, Anthropic has taken the unprecedented step of postponing a full public release of Claude Mythos.

Instead, it has implemented a highly restricted preview known as “Project Glasswing”. This initiative grants access to only 40 major tech players and verified partners – giants like Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, and JPMorgan Chase are included.

The logic behind “Project Glasswing” is a race against time. Anthropic is providing approximately USD 100 million worth of computing resources to these partners so they can use the model’s capabilities to bolster their own defences before malicious actors can weaponise the technology.

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed,” Elia Zaitsev, CTO of CrowdStrike, told AFP. “What once took months now happens in minutes with AI.”

By giving “verified defenders” a head start, the company hopes to patch thousands of “zero-day” vulnerabilities – flaws previously unknown to developers – that Mythos has already identified across every major operating system and web browser.

SO, IS THIS REAL OR JUST HYPE?

Not everyone, however, is convinced that Anthropic’s restrictive approach is purely altruistic. The tech industry is divided between those who see a genuine threat and those who suspect “strategic gatekeeping” or marketing hype.

Critics, including David Sacks – an entrepreneur and adviser to US President Donald Trump – have accused Anthropic of using “doomsday warnings” to promote its own allure, especially amid rumours of the company going public. Sacks has pointed to a history of “scare tactics” used by the firm to engineer “regulatory capture” – a strategy where rules are crafted to benefit the incumbent company while hampering its rivals.

Alex Stamos of the AI safety startup Corridor even quipped about Anthropic’s presentation of the danger. “They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won’t even let people use them… It’s like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon,” he was quoted as saying by AFP.

Despite the cynicism, many cybersecurity experts believe the threat of an “agent-to-agent war” is very real. Stamos said that as AI models become superhuman at writing code, it becomes impossible for humans to find the bugs within that code. He predicted a future in which human supervisors sit on the sidelines while AI agents defend networks against the AI agents used by hackers.

Adam Meyers of CrowdStrike added to this grim outlook, saying the ultimate weapon will be “malware that has no pre-programming”, capable of adapting its attacks on the fly.

IS THERE AN ALARM OUT THERE FOR EVERYONE?

The potential for “massive cyber risks” has reached the highest levels of international governance. Kristalina Georgieva, managing director of the International Monetary Fund (IMF), recently warned that the global monetary system is not prepared for the escalating risks posed by models like Mythos.

“We are very keen to see more attention to the guardrails that are necessary to protect financial stability in a world of AI,” Georgieva said, calling for urgent international cooperation.

The European Union is already in active discussions with Anthropic. European Commission spokesman Thomas Regnier said officials have met with the company to gain more information about the risks inherent in the model.

A particular point of concern for international regulators is that “Project Glasswing” has excluded foreign entities, raising questions about whether the rest of the world is being left vulnerable while US tech giants fortify their systems.

HOW CAN THIS IMPACT INDIAN SYSTEMS?

Beyond the immediate security threats, Mythos is expected to send shockwaves through the global economy, particularly the IT services sector.

A report by Kotak Institutional Equities said the model’s proficiency in software engineering tasks poses a significant disruption risk to the industry, especially in regions like India. It highlighted a “step-jump” in performance that could lead to massive efficiency gains across all IT segments.

But these gains come with a heavy price. The report said the increased automation in coding is expected to put pressure on valuation multiples for IT companies and compound “near-term deflation risks” for services.

It noted that its estimate of a 3 to 3.5 percent annual growth headwind for the industry has shifted from “prudent to practical” due to the rapid advancements seen in Mythos. While some segments like legacy system modernisation may see an acceleration of opportunities, application development services could see a widening productivity gap whose gains may not be evenly distributed.

WHAT HAPPENS NEXT?

Anthropic, founded by former OpenAI researchers who prioritised AI safety, now finds itself at the centre of a firestorm.

Whether Claude Mythos is the herald of a catastrophic “tsunami” of cyberattacks or a masterfully marketed tool for regulatory dominance remains a subject of fierce debate. What is certain is that its “agentic” capabilities have changed the rules of engagement. The world is no longer just watching the evolution of AI; it is racing to build the walls before the machine learns how to tear them down.

(With agency inputs)
