
AI's Hacking Abilities Are Approaching an "Inflection Point" | AI models are getting so good at finding vulnerabilities that some experts say the tech industry may need to rethink how software is built.
https://www.wired.com/story/ai-models-hacking-inflection-point/
"Dawn Song, a computer scientist at UC Berkeley who specializes in both AI and security, says recent advances in AI have produced models that are better at finding flaws. Simulated reasoning, which involves splitting problems into constituent pieces, and agentic AI, which can take actions like searching the web or installing and running software tools, have amped up models’ cyber abilities.
“The cyber security capabilities of frontier models have increased drastically in the last few months,” she says. “This is an inflection point.”
Last year, Song cocreated a benchmark called CyberGym to determine how well large language models find vulnerabilities in large open-source software projects. CyberGym includes 1,507 known vulnerabilities found in 188 projects.
In July 2025, Anthropic’s Claude Sonnet 4 was able to find about 20 percent of the vulnerabilities in the benchmark. By October 2025, a new model, Claude Sonnet 4.5, was able to identify 30 percent. “AI agents are able to find zero-days, and at very low cost,” Song says."
If AI models are getting so good at finding vulnerabilities, that’s great.
Instead of freaking out that criminals and rogue states may use it for cybercrime, use AI for defense.
I guess this is the one place where AI truly shines: breaking software and finding bugs.
This has come up before, different study with a different AI, but the caveat is the same:
These AI tools aren’t doing anything new. They’re not coming up with new hacking solutions, they’re just applying old, publicly available solutions really quickly and cheaply. They’re not making existing security measures obsolete, which is the (incorrect) takeaway I’ve seen from people who only read the headlines.