Google says it has disrupted an attempt by a criminal group to use artificial intelligence (AI) to exploit a previously unknown software vulnerability, highlighting growing concerns over the use of AI in cybercrime.

The technology company disclosed the incident on Monday, saying the hackers had planned to exploit a “zero-day” vulnerability in an unnamed company’s systems before the attack was intercepted.
A zero-day vulnerability is a software flaw unknown to the vendor or its security teams, so called because defenders have had zero days to develop a protective fix before it can be exploited.
John Hultquist, Chief Analyst at Google’s threat intelligence arm, described the incident as a significant shift in cybersecurity threats.
“It’s here. The era of AI-driven vulnerability and exploitation is already here,” Hultquist said.
According to Google, the attackers intended to exploit a security flaw that would have allowed them to bypass two-factor authentication and gain access to a widely used online system administration tool.
The company said it had notified the affected firm and relevant law enforcement agencies, successfully disrupting the operation before any damage occurred.
Google, however, declined to identify the targeted company, the threat actors involved, or the AI model used in the attempted breach.
The firm said preliminary analysis suggests the hackers used a large language model, similar to those powering popular AI chatbots, to identify the vulnerability.
Google noted there was no evidence linking the attempted attack to any government-backed operation, although groups connected to China and North Korea have reportedly explored similar AI-enabled cyber techniques.
The disclosure comes amid growing debate over the regulation of increasingly advanced AI systems capable of coding, vulnerability discovery, and cybersecurity analysis.
Recent advancements in AI hacking tools, including Anthropic’s newly announced cybersecurity-focused model, Mythos, have intensified concerns among governments and technology firms worldwide.
The U.S. government has also stepped up oversight of advanced AI systems, with the Commerce Department recently announcing agreements with major technology firms, including Google, Microsoft, and xAI, to evaluate powerful AI models before public release.
Industry analysts warn that while AI could strengthen cybersecurity by helping organisations detect and patch software flaws faster, it could also accelerate cyber threats by enabling criminals to discover and weaponise vulnerabilities at unprecedented speed.
Dean Ball, Senior Fellow at the Foundation for American Innovation, said governments may need to consider stronger oversight of advanced AI tools.
“I don’t like regulation. I would prefer for things not to be regulated. But I think we need to in this case,” Ball said.
Experts say the growing use of AI in cybersecurity could create a transitional period of heightened digital risk before stronger defences are developed globally.