How the Internet Broke Everyone's Bullshit Detectors | From AI-generated images to restricted satellite data, the systems for verifying what's real online are struggling to keep up

https://www.wired.com/story/how-the-internet-broke-everyones-bullshit-detectors/

9 Comments

  1. Issues of note:

    >A zero digital footprint used to signal authenticity. Now, it can signal the opposite. The absence of a trail no longer means something is original—it may mean it was never captured by a lens at all. The signal has inverted. Truth lags; engagement leads.
    >
    >Automated traffic now commands an estimated 51 percent of internet activity, scaling eight times faster than human traffic according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don’t just distribute content, they prioritize low-quality virality, ensuring the synthetic record travels while verification is still catching up.
    >
    >Open source investigators are still holding the line, but they are fighting a volume war. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) now has to navigate.
    >
    >“We’re perpetually catching up to someone pressing repost without a second thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm prioritizes that reflex, and our information is always going to be one step behind.”
    >
    >…
    >
    >Generative AI platforms have been learning from their mistakes. Henk van Ess, an investigative trainer and verification specialist, says many of the classic tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in the latest generation of models. Tools like Imagen 3, Midjourney, and Dall·E have improved in prompt understanding, photorealism, and text-in-image rendering.
    >
    >But the harder problem is what van Ess calls the hybrid.
    >
    >In these cases, 95 percent of an image is a real photograph: real metadata, real sensor noise, real lighting physics. The manipulation sits in a single detail—a patch added to a uniform, a weapon placed into a hand, a face subtly swapped in. Pixel-level detectors often clear it because they are scanning what is, in most respects, a genuine image. The fake can be one square inch.
    >
    >“Every old method assumed the image was a record of something,” says van Ess. “Generative media breaks that assumption at the root.”
    >
    >Henry Ajder, a deepfake researcher and AI adviser who has tracked synthetic media since 2018, goes further. AI is no longer obvious, he says, it is embedded. The volume of high-quality synthetic content now circulating online means the era of visible errors is ending. What replaces it is content that looks entirely credible.
    >
    >The tools built to detect it have their own limits. Detection systems are not truth engines, Ajder says. Even the strongest tools fail often enough to matter, and most return a confidence score without explaining how that score was reached. “Detection tools should never be used as a sole signal to determine action,” Ajder says.
    >
    >…
    >
    >Ajder, who’s advised companies including Adobe and Synthesia, argues that the long-term solution is not better detection alone, but provenance—systems that can verify origin rather than endlessly chasing what is fake. Until that infrastructure exists at scale, the burden doesn’t disappear—it shifts.
    >
    >In a system where synthetic content moves faster than it can be verified, the only real defense may be behavioral: hesitation. A pause before the repost. A few minutes of scrutiny in a system designed to reward none.

    One of the major challenges for people trying to operate in this kind of low-truth, low-trust environment is that some will react by disbelieving everything. The consequences of that disengagement are easy to see in the political sphere, where it ends up significantly affecting public engagement with critical issues and, ultimately, voter turnout.
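    The provenance idea raised in the excerpt above can be made concrete with a cryptographic hash: if a digest of the image bytes is recorded and signed at capture time, even the "one square inch" hybrid edit becomes detectable, because any change to the file produces a completely different hash. A minimal sketch, using hypothetical byte strings as stand-ins for image files:

    ```python
    import hashlib

    # Hypothetical stand-ins for image files: a "real photo" and a hybrid
    # that differs in a single byte (the "one square inch" manipulation).
    original = bytes(1000)          # 1000 zero bytes
    hybrid = bytearray(original)
    hybrid[500] = 1                 # change exactly one byte

    h_original = hashlib.sha256(original).hexdigest()
    h_hybrid = hashlib.sha256(bytes(hybrid)).hexdigest()

    print(h_original == h_hybrid)   # False: the tiniest edit breaks the hash
    ```

    A pixel-level detector has to judge content; a hash comparison only has to judge bytes. That is why provenance schemes anchor on the original file rather than on what the image depicts.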

  2. Honestly, I go to so few websites anymore it's ridiculous. I hardly even use Google anymore unless it's an image search, only because I can't ask ChatGPT for that. That's not to say I'm leaning on ChatGPT for my answers in life, but Google is so overrun with sponsored links and curated results that, many times, to get a straightforward answer I've turned ChatGPT into a search engine: I search, then verify.

    But as the fabric of the remaining social places degrades, I find myself asking a lot why I bother. It's not fun or engaging, and as mentioned I'm jaded and cynical enough to default to disbelieving most things unless I have preexisting knowledge of the matter. And if you're in that kind of mindset, what value am I really getting by being online?

    I see the return of smaller web communities that are curated as semi-walled gardens. Discord kinda kicked this off, though it's not exactly *unique* or special, and certainly not the first platform of its kind; it just made the model accessible and popular. But as Discord sprints through its enshittification phase, it most likely won't be the one to usher in a new era. Realistically you would say: what if Discord weren't centralized through a single company, but you still used a single application to explore the individual communities? And you've just described using a browser to explore individually run websites, like we had 30 years ago. We don't need new technology, we just need a new non-corporate internet.

  3. Dudes, just avoid the "for you" tabs. Those tabs became way more scary and fake-ish with AI.

    Make your own lists, see only those.

  4. This is junk; human bullshit detectors have always been horrifically bad.

    You haven’t been able to verify what is real online ever.

  5. There’s a quote attributed to Mark Twain: “By the time lies travel halfway around the world, the truth is still putting on its shoes.”

  6. LordMuffin1

    You just have to accept it. The internet is fucked; social media is propaganda, disinformation, and AI slop. The only thing the internet is good for is visiting the few sites you trust, like Wikipedia and a few others.

  7. Showy_Boneyard

    There have been ways to "digitally sign" media files for decades. It won't prove that something isn't AI, but it can absolutely prove that a piece of media actually comes from the source it claims to be from. I.e., you can be sure that a photograph claiming to be from AP is actually theirs, and not just someone who slapped an AP logo on some AI slop and is trying to pass it off as the real thing. It still requires that you trust some institutions, since they could choose to sign something fake for nefarious purposes. But if a company/agency/etc. were ever caught doing that, it'd basically throw their reputation into the trash forever, so I don't think many would risk it. Although in this age I wouldn't trust the White House to be one of those trustworthy sources.

    But anyway, why don't we do this already? It's something that should've been implemented years ago. We already transitioned from HTTP to HTTPS for everything, and this really wouldn't be that different.
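    The sign-then-verify flow described above can be sketched in a few lines. Real media signing (e.g., C2PA-style provenance) uses asymmetric key pairs so anyone holding the publisher's public key can verify; the HMAC below is a symmetric stand-in chosen to keep the sketch standard-library-only, and the key and file bytes are hypothetical:

    ```python
    import hmac
    import hashlib

    # Hypothetical signing key; a real scheme would use an asymmetric key
    # pair (the publisher signs with a private key, readers verify with
    # the public key), so verification doesn't require sharing a secret.
    KEY = b"newsroom-signing-key"

    def sign(media: bytes) -> str:
        """Produce a signature over the raw media bytes."""
        return hmac.new(KEY, media, hashlib.sha256).hexdigest()

    def verify(media: bytes, signature: str) -> bool:
        """Check the media against the signature (constant-time compare)."""
        return hmac.compare_digest(sign(media), signature)

    photo = b"...jpeg bytes..."        # stand-in for an actual image file
    sig = sign(photo)
    print(verify(photo, sig))          # True: untouched file checks out
    print(verify(photo + b"x", sig))   # False: any modification fails
    ```

    This is conceptually the same machinery HTTPS already relies on, which is the commenter's point: the cryptographic pieces exist, and the missing part is deployment and key distribution at the scale of news media.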

  8. Ironically, it may be the thing that brings back our collective critical thinking, something we lost to the crazies a decade or so ago.
