● A NewsGuard study reveals that leading AI chatbots struggle to identify AI-generated videos: xAI’s Grok failed to recognize 95 percent of tested Sora videos as artificial, ChatGPT had a 92.5 percent error rate, and Google’s Gemini missed 78 percent.
● OpenAI faces a credibility problem: the company sells Sora, a tool for creating deceptively realistic videos, while its own chatbot ChatGPT cannot detect these fakes. Watermarks meant to identify AI content can easily be removed with free online tools.
● Rather than acknowledging their limitations, the chatbots confidently spread misinformation and in some cases even fabricated news sources as supposed evidence for fake events.
PuzzleheadedLimit994:
AI generated images from most major platforms have fingerprinting that looks invisible but is clear as day when you zoom in… I’m assuming that generated videos will as well?
JeelyPiece:
Can you unscramble an egg?
seanpbnj:
Well yeah, but are they still fooled by Cars / Buses / Signs? Cuz if they are, we’re safe!
8349932:
Doesn’t look like anything to me
MailSynth:
The slopfest is going to make the internet unusable and we’re all going to have to interact in real life again
justmitzie:
Why are we asking one AI to spot another AI? I don’t use AI so I literally don’t know.
Middleage_dad:
I just deleted my OpenAI account.
They went from cool to problematic to flat out evil in record time.
LTC-trader:
I didn’t know that you can upload videos
Simple-Fault-9255:
Well, duh. It’s not for that. It can’t even detect hate speech reliably, if you didn’t know. I did a relatively informal proof of that at a hackathon.
abcpdo:
Because they’re not training it with the latest and greatest AI generated videos yet? Lowkey maybe because they think unreal content is just going to make it hallucinate more.
arrgobon32:
A language model fails to detect AI-generated videos… well, duh? It’s not trained to do that. That’s like asking an image classifier to generate music.
lood9phee2Ri:
… why would it? They’re statistical token barfers, not intelligent, and even human intelligences are fairly bad at it.
People need to cryptographically sign the genuine ones, like signing e-mails – and even then you can only trust them insofar as you trust the signer … and the signer’s competence to apply public-key signature schemes at all, which is where the real problem is. If after decades we can’t get most “ordinary” people to just gpg-sign an e-mail, despite easy-to-use (for us techies) signing in the likes of Thunderbird etc., how likely is it that we can get them to SEAL-sign video files any time soon? Needs very idiot-proof and clear browser and app support.
https://www.hackerfactor.com/blog/index.php?/archives/1082-Airtight-SEAL.html