Technology

Google stops zero-day exploit tied to AI

Google says it disrupted a zero-day exploit it believes used AI cues, highlighting how attackers increasingly blend AI with 2FA bypass attempts.

Google says it has disrupted a zero-day exploit that, according to its researchers, shows signs of being developed with AI.. The company’s account points to a growing reality for security teams: even when a vulnerability is unknown to the public. attackers may now be accelerating the process of turning code into an actionable attack.

In a report from its Threat Intelligence Group. Google Threat Intelligence Group (GTIG). the company said it spotted activity linked to “prominent cyber crime threat actors” planning what it described as a “mass exploitation event.” The target of that planned campaign was an unnamed “open-source. web-based system administration tool. ” and the intended impact was severe: the vulnerability would have enabled attackers to bypass two-factor authentication.

GTIG’s researchers said they found clues in the Python script associated with the exploit.. Among the signals they highlighted were an “hallucinated CVSS score” and formatting that they described as “structured” and “textbook” in a way consistent with how large language models are trained.. In other words. the script contained artifacts that looked less like the typical evolution of exploit code over time and more like language-model output shaped into something that could be executed.

At the heart of the vulnerability is described as a high-level semantic logic flaw.. Google said the developer hardcoded a trust assumption in the platform’s 2FA system.. That kind of weakness is especially dangerous in authentication paths because it can undermine the security intent of two-factor protections. not just introduce a minor bug.

The report also situates the finding within a broader shift that Google says has been unfolding over weeks of debate about AI capabilities in cybersecurity.. It references recent concerns over what specialized security-focused AI models can do. and it also points to another disclosed Linux vulnerability that was discovered with AI assistance—reinforcing the idea that AI is increasingly present at multiple stages of the vulnerability lifecycle.

While Google said this is the first time it has found evidence that AI was involved in an attack of this type. the company added an important limitation: its researchers “do not believe Gemini was used.” That distinction matters. because it signals Google is not claiming its own model was the tool behind the exploit—only that AI-like characteristics showed up in the exploit materials.

GTIG said it was able to “disrupt” the exploit tied to this activity.. Even so, the company also warned that the direction of travel is concerning.. Hackers. it said. are increasingly using AI to identify and exploit security vulnerabilities. which may compress timelines from discovery to weaponization.

Beyond the exploit itself, Google’s report discussed AI as both target and tool.. It said GTIG has observed adversaries increasingly target the “integrated components” that give AI systems their utility—such as autonomous skills and third-party data connectors.. The implication is that even if an AI model is not the main weapon. attackers may seek to undermine the surrounding features that allow it to act. retrieve information. or follow instructions.

The report also described a technique it called “persona-driven jailbreaking.” In practice. the idea is to craft prompts that push an AI into behaving as if it has a specific role—such as “pretend it’s a security expert”—with the goal of coaxing the system into providing vulnerability-finding help it otherwise might refuse.. Google’s framing suggests that attackers are refining how they interact with AI. turning conversational control into a method for extracting actionable security information.

In another tactic. hackers reportedly feed AI models entire repositories of vulnerability data so the systems can search. summarize. or connect patterns across large sets of known issues.. Google also referenced OpenClaw in ways it said suggest “an interest in refining AI-generated payloads within controlled settings” to improve exploit reliability before any deployment.

This matters because it reframes “AI in cybersecurity” from a single story about discovery into a full pipeline problem.. If attackers can accelerate the early reconnaissance and also iterate on payloads in more controlled conditions. then defenders may face not just more vulnerabilities. but faster attempts at turning those vulnerabilities into working attacks.

For organizations relying on two-factor authentication. the most unsettling part of Google’s account is the described mechanism: a trust assumption built into a 2FA system.. Even robust authentication methods can fail if the logic that decides when trust is granted can be manipulated. and the use of a zero-day pathway raises the stakes because there is no public patch or prior warning.

Meanwhile, the mention of adversaries targeting AI integrations suggests a parallel threat to the systems that businesses are increasingly adopting.. As AI tools gain permissions. connectors. and automation hooks. the most valuable targets may shift from the model itself to the ways those models are deployed and integrated with other tools and data sources.

Google’s report leaves defenders with a clear message: securing endpoints and apps remains essential, but threat modeling now has to account for AI-enhanced adversaries, AI-targeted systems, and AI-assisted workflows that can shorten the gap between vulnerability and exploit.

Google Threat Intelligence Group zero-day exploit AI cybersecurity two-factor authentication OpenClaw persona-driven jailbreaking

Leave a Reply

Your email address will not be published. Required fields are marked *

Are you human? Please solve:Captcha


Secret Link