Google Reports AI-Powered Bug Hunter Detected 20 Security Vulnerabilities

Google’s AI-driven bug-finding system has reported its first batch of security vulnerabilities.

Heather Adkins, Google’s vice president of security, announced that Big Sleep, their LLM-based vulnerability analysis tool, identified and reported 20 security flaws within popular open-source software.

According to Adkins, Big Sleep — developed jointly by Google’s AI division DeepMind and its elite security research team Project Zero — found its first vulnerabilities mostly in widely used open-source software, including the audio and video library FFmpeg and the image-editing suite ImageMagick.

Details about the vulnerabilities’ impact and severity are not yet available, as Google is withholding them until the issues are fixed, which is standard practice. Still, the fact that Big Sleep found these flaws at all is noteworthy, signaling real progress for AI-powered discovery tools, even though humans remained in the loop.

“To maintain high-quality and actionable reports, we have a human expert review before reporting, but each vulnerability was discovered and reproduced by the AI agent without human help,” stated Google’s spokesperson Kimberly Samra to TechCrunch.

Royal Hansen, Google’s vice president of engineering, commented that the discoveries highlight “a new frontier in automated vulnerability discovery.”

LLM-powered vulnerability-detection tools are becoming increasingly common. Beyond Big Sleep, the field includes tools such as RunSybil and XBOW, among others.

XBOW gained recognition for topping a U.S. leaderboard on the bug bounty platform HackerOne. In most cases, as with Big Sleep, a human is involved at some stage to confirm that the vulnerabilities the AI finds are legitimate.

Vlad Ionescu, co-founder and CTO at RunSybil, noted that Big Sleep is a “legit” endeavor due to its strong design and experienced team, emphasizing Project Zero’s expertise and DeepMind’s capabilities.

While these tools show great promise, they have drawbacks. Some software maintainers have voiced frustration over bug reports that turn out to be hallucinated, likening them to AI-generated noise.

“That’s the issue people face; it looks promising but often ends up worthless,” Ionescu previously stated to TechCrunch.
