Google intercepts a massive cyberattack powered by first-ever AI-generated zero-day exploit


Google claims to have thwarted a significant cyberattack by state-sponsored hackers using an AI-developed zero-day exploit. The vulnerability, which bypassed two-factor authentication in an unnamed tool, was disclosed to the vendor before widespread exploitation could occur.

Google says it thwarted an attack by hackers using AI to find software flaws (AI generated image)

Google says it may have prevented a major cyberattack campaign involving a zero-day exploit developed with the help of AI. The company revealed in a new report that threat actors were preparing to use the exploit in a “mass exploitation event” before Google intervened.

In a report published by the Google Threat Intelligence Group (GTIG), the company detailed how hackers used AI to develop a previously unknown vulnerability capable of bypassing two-factor authentication (2FA) in a “popular open-source, web-based system administration tool,” which Google did not name.

Google said it worked with the affected vendor to disclose the flaw before it could be widely exploited, potentially disrupting the planned attacks.

“Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” GTIG wrote in its blog post.

The company said the exploit showed several signs commonly associated with AI-generated code, including unusually detailed educational-style comments, structured formatting, and even a hallucinated CVSS security score inside the Python script.
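To illustrate what those "tells" can look like, here is a hypothetical, inert snippet showing the stylistic markers GTIG describes. Everything in it is invented for illustration; it is not the actual exploit code, which Google did not publish.

```python
# Hypothetical illustration of the stylistic markers GTIG describes in
# AI-generated exploit code: verbose educational comments, rigid
# structure, and a hallucinated CVSS score embedded in the script.
# The function, banner format, and score below are all invented.

# CVSS Score: 9.8 (Critical)  <- a human exploit author would rarely
# annotate their own script with a severity rating like this.

def check_target_version(banner: str) -> bool:
    """
    Step 1: Parse the service banner to confirm the target version.

    Educational note: version checks prevent sending payloads to
    unaffected hosts, which would generate unnecessary noise.
    """
    # Split the banner on '/' to isolate the version string.
    parts = banner.split("/")
    return len(parts) > 1 and parts[1].startswith("2.")
```

The point is not any single marker but their combination: tutorial-grade comments and metadata inside code whose author would normally want it terse and quiet.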

Google noted that the flaw stemmed from a “high-level semantic logic flaw,” something AI models are increasingly capable of identifying because they can use contextual reasoning to interpret a developer’s intent rather than simply scanning for crashes or malformed inputs.
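As a hypothetical illustration of a "high-level semantic logic flaw" (not the actual vulnerability, which Google did not disclose): code where every input is well-formed and nothing crashes, so a fuzzer sees nothing wrong, but the order of operations quietly defeats 2FA.

```python
# Hypothetical sketch of a semantic logic flaw in a 2FA login flow.
# All names and values are invented. The bug is in the ORDER of
# operations, the kind of intent-level mistake contextual reasoning
# can spot but crash-driven fuzzing cannot.

def expected_totp(session: dict) -> str:
    return "123456"  # placeholder; real code derives this from a shared secret

def login(password_ok: bool, totp_code: str, session: dict) -> dict:
    if password_ok:
        session["authenticated"] = True  # BUG: flag set BEFORE 2FA passes
        if totp_code == expected_totp(session):
            session["mfa_verified"] = True
    return session

# An attacker with a stolen password but no valid TOTP still receives
# an "authenticated" session if any endpoint checks only that flag.
```

Moving the `authenticated` assignment inside the TOTP check fixes the flaw; the diff is one line, which is part of why such bugs survive review.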

Chinese and North Korean hackers using AI for vulnerability research

The report also noted that Chinese and North Korean threat actors have increasingly been using AI for vulnerability discovery, exploit development, and automated testing.

In one example, Google said it observed threat actors using expert-style prompts to make AI models behave like embedded-device security auditors while analysing router firmware and file transfer protocol implementations.

“You are currently a network security expert specializing in embedded devices, specifically routers. I am currently researching a certain embedded device, and I have extracted its file system. I am auditing it for pre-authentication remote code execution (RCE) vulnerabilities,” reads one attacker prompt quoted in Google’s report.

The company also said threat actors had begun experimenting with a specialised vulnerability repository hosted on GitHub known as “wooyun-legacy.” The project operates as a Claude Code skill plugin loaded with more than 85,000 real-world vulnerability cases collected from a Chinese bug bounty platform and was allegedly used by attackers to improve exploit discovery.

“By priming the model with vulnerability data, it facilitates in-context learning to steer the model to approach code analysis like a seasoned expert and identify logic flaws that the base model might otherwise fail to prioritize,” Google wrote.
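The "priming" GTIG describes is standard in-context learning: prior vulnerability write-ups are packed into the prompt so the model pattern-matches new code against them. A minimal sketch of the idea follows; the example cases and function names are placeholders, not the actual wooyun-legacy plugin.

```python
# Minimal sketch of in-context learning for code auditing: prepend
# worked vulnerability examples so the model analyses new code the
# same way. The cases below are invented placeholders.

EXAMPLE_CASES = [
    "Case: SQL injection - user input concatenated into a query string.",
    "Case: auth bypass - session flag set before the 2FA check runs.",
]

def build_audit_prompt(target_code: str) -> str:
    header = "You are a security auditor. Here are prior findings:\n"
    examples = "\n".join(EXAMPLE_CASES)
    task = "\nReview the following code for similar logic flaws:\n"
    return header + examples + task + target_code
```

With 85,000 such cases available, the same scaffolding steers a general-purpose model toward expert-style analysis without any retraining.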

“As the generative AI landscape matures, the methods by which threat actors procure and operationalize these models have shifted from simple experimentation to industrial-scale consumption,” the company added.

The new Google report arrives amid growing awareness of the risks posed by AI systems, with Anthropic famously delaying the public launch of its Mythos model over misuse concerns.

About the Author

Aman Gupta

Aman Gupta is a Digital Content Producer at LiveMint with over 3.5 years of experience covering the technology landscape. He specializes in artificial intelligence and consumer technology, reporting on everything from the ethical debates around AI models to shifts in the smartphone market.

His reporting is grounded in first-hand testing, independent analysis, and a focus on how technology impacts everyday users. He holds a PG Diploma in Radio and Television Journalism from the Indian Institute of Mass Communication, Delhi (Class of 2022).

Outside the newsroom, he spends his time reading biographies, hunting for the perfect coffee beans, or planning his next trip.

You can find Aman on LinkedIn (https://www.linkedin.com/in/aman-gupta-894180214) and on X at @nobugsfound (https://x.com/nobugsfound), or reach him via email at aman.gupta@htdigital.in.
