Google Thwarts First AI-Generated Zero-Day Exploit


Google Threat Intelligence Group (GTIG) said Monday (May 11) that it identified and may have prevented the use of the first zero-day exploit developed with artificial intelligence.


In the latest GTIG AI Threat Tracker, released Monday (May 11), GTIG said a criminal threat actor planned to use the zero-day exploit in a mass exploitation event, but GTIG may have prevented it through proactive counter-discovery.

After identifying the zero-day vulnerability in a Python script that enables a user to bypass two-factor authentication on an open-source, web-based system administration tool, GTIG worked with the impacted vendor to responsibly disclose the vulnerability and disrupt the threat activity, according to the report.

GTIG said it has “high confidence” that the threat actor used an AI model to discover and weaponize the vulnerability.

“As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development, including for zero-day vulnerabilities,” GTIG said in the report. “While these tools empower defensive research, they also lower the barrier for adversaries to reverse-engineer applications and develop sophisticated, AI-generated exploits.”

Other AI-related threat activity highlighted by GTIG in the report includes AI-augmented development for defense evasion, autonomous malware operations in which models dynamically generate commands, and AI-augmented research and information operations campaigns that generate synthetic media and deepfake content at scale.


The report also spotlighted obfuscated LLM access, in which threat actors pursue anonymized access to models to illicitly bypass usage limits, and supply chain attacks in which adversaries target AI environments and software dependencies as an initial access vector.

“Attackers rarely shy away from experimentation and innovation, but neither do we,” GTIG said in the report. “In addition to sharing our findings and mitigations with the larger security and AI community, Google employs proactive measures to stay ahead of these constantly changing threats.”

In earlier editions of the GTIG AI Threat Tracker, the organization noted a new form of intellectual property theft called “model extraction attacks” or “distillation attacks,” as well as threat actors’ use of AI not only for productivity gains but also for “novel AI-enabled operations.”

The International Monetary Fund (IMF) said in a Thursday (May 7) blog post that, at a time of rapidly accelerating cyber risk driven by AI, cybersecurity is a core financial stability issue and should be treated as such by policymakers.