6 comments
[Anthropic’s](https://archive.is/o/w1TAx/https://www.axios.com/2026/01/30/ai-anthropic-enterprise-claude) latest AI model has found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting, the company shared first with Axios.
**Why it matters:** The advancement signals an [inflection point](https://archive.is/o/w1TAx/https://www.axios.com/2025/12/16/ai-models-hacking-stanford-openai-warnings) for how AI tools can help cyber defenders, even as AI is also making attacks more dangerous.
**Driving the news:** Anthropic debuted [Claude](https://archive.is/o/w1TAx/https://www.axios.com/2025/09/17/ai-anthropic-amodei-claude) Opus 4.6, the latest version of its largest AI model, on Thursday.
* Before its debut, Anthropic’s frontier red team tested Opus 4.6 in a sandboxed environment to see how well it could find bugs in open-source code.
* The team gave the Claude model everything it needed to do the job — access to Python and vulnerability analysis tools, including classic debuggers and fuzzers — but no specific instructions or specialized knowledge.
* Claude found more than 500 previously unknown zero-day vulnerabilities in open-source code using just its "out-of-the-box" capabilities, and each one was validated by either a member of Anthropic’s team or an outside security researcher.
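The setup described above (a model given debuggers and fuzzers but no specific instructions) is easier to picture with a toy example. The following is a minimal sketch of what a fuzzer does, not Anthropic’s actual tooling: it throws random inputs at a deliberately buggy parser and records the inputs that crash it. The parser and alphabet here are invented for illustration.

```python
import random
import string

def parse_header(data: str) -> dict:
    # Deliberately buggy toy parser: crashes on any line without a colon.
    fields = {}
    for line in data.split("\n"):
        key, value = line.split(":", 1)  # raises ValueError if ":" is missing
        fields[key.strip()] = value.strip()
    return fields

def fuzz(target, iterations=1000, seed=0):
    """Feed random strings to `target`, collecting inputs that raise."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + ":\n "
    crashes = []
    for _ in range(iterations):
        sample = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 40)))
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes

crashing_inputs = fuzz(parse_header)
```

Real fuzzers (AFL, libFuzzer, and the like) add coverage feedback and input mutation on top of this basic loop, which is what makes them effective at surfacing the kind of flaws the article describes.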
Hmm, but I thought AI was only bad, could never be better than a human, and had no positive uses? /s
Finally starting to see the results I’ve been expecting. Just the tip of the iceberg that’ll force everyone to realize.
Yeah, because people vibe coded the libraries, probably with Anthropic’s tools.
This isn’t impressive. It’s terrifying that these AI agents are just churning out security vulnerabilities, and no one is picking them up.
I could write a script to loop through NPM packages and do

`npm audit --audit-level=high`

too.
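For what it’s worth, the loop that commenter describes could be sketched like this. The package directories are hypothetical, and the `runner` hook exists only so the npm call can be stubbed out; it assumes `npm audit` exits non-zero when vulnerabilities at or above the given `--audit-level` are found, which is npm’s documented behavior.

```python
import subprocess

def audit_command(package_dir: str, level: str = "high") -> list:
    """Build an `npm audit` invocation for one package directory."""
    return ["npm", "audit", f"--audit-level={level}", "--prefix", package_dir]

def audit_all(package_dirs, runner=subprocess.run):
    """Audit each directory; return those where npm reported issues.

    `runner` is injectable so the actual npm call can be replaced in tests.
    """
    flagged = []
    for d in package_dirs:
        # npm audit exits non-zero when issues at/above the level are found.
        result = runner(audit_command(d), capture_output=True)
        if result.returncode != 0:
            flagged.append(d)
    return flagged
```

Of course, `npm audit` only matches known advisories against a dependency tree; finding previously unknown zero-days, as the article claims, is a different problem.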
I know the maintainers of a medium-popularity piece of open source. They’ve decided to shut down their public bounty program because people keep claiming that they’ve used AI to find security vulns. But when you scratch the surface, the claims don’t hold up at all.
But are these real security flaws, or the sort of ["security flaws" curl is bombarded with](https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/)? Did a human check that they were actually real? Or did the AI even write the blog post too?