“…hundreds of such videos, bearing my face and synthesising my voice, have proliferated across YouTube and social media. Even this weekend, there has been another crop, depicting a deepfaked me saying fictitious things about the coup in Venezuela. They lecture, they say things I might have said, sometimes intermingled with things I would never say. They rage, they pontificate. Some are crude, others unsettlingly persuasive. Supporters send them to me, asking: “Yanis, did you really say that?” Opponents circulate them as proof of my idiocy. Far worse, some argue that my doppelgangers are more articulate and cogent than me. And so I find myself in the bizarre position of being a spectator to my own digital puppetry, a phantom in a technofeudal machine I have long argued is not merely broken, but engineered to disempower.
My initial reaction was to write to Google, Meta and the rest to demand that they take down these videos. Several forms were filled in anger before, a week or more later, some of these channels and videos were taken down, only to reappear instantly under different guises. Within days I had given up: whatever I did, however many hours I spent every day trying my luck at having big tech take down my AI doppelgangers, many more would grow back, Hydra-like.”
MarketCrache
It would be the simplest move for AI companies to add a visible watermark to their photos and videos but they won’t because it might impact their business model. We’re in the hands of oligarchs.
OneOnOne6211
What I don’t get is… why don’t AI companies incorporate signals that something is AI into the actual video? I don’t mean a huge watermark or whatever. But, for example, say that every 30 frames a small collection of pixels always turns specific colours in a specific order, regardless of what the video shows. A normal person might not pick up on that, but machines could. YouTube’s algorithms could, and they could mark the video as AI. Companies can also force it into the metadata, but that’s slightly less secure, because metadata can usually still be edited. Incorporating it into the video itself is much harder to strip. Although it would have to use some kind of “changing code” to avoid easy removal by machines.
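The commenter’s idea can be sketched in a few lines: stamp a fixed pixel pattern into every 30th frame, and have a detector check for it. This is a toy illustration only (NumPy arrays standing in for decoded frames, and a hypothetical static colour sequence, not any real watermarking scheme); a real system would need the “changing code” the comment mentions, plus robustness to compression.

```python
# Toy sketch of a periodic in-frame marker: every 30th frame carries a
# fixed colour sequence in its top-left corner that a detector can check.
import numpy as np

# Hypothetical 4-pixel marker sequence (red, green, blue, yellow).
MARK = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0]],
                dtype=np.uint8)

def embed_mark(frames, period=30):
    """Stamp MARK into every `period`-th frame.
    `frames` is a list of HxWx3 uint8 arrays (decoded video frames)."""
    out = []
    for i, frame in enumerate(frames):
        frame = frame.copy()
        if i % period == 0:
            frame[0, :len(MARK)] = MARK  # overwrite top-left pixels
        out.append(frame)
    return out

def looks_marked(frames, period=30):
    """Return True if every `period`-th frame carries the marker."""
    stamped = frames[::period]
    return bool(stamped) and all(
        np.array_equal(f[0, :len(MARK)], MARK) for f in stamped)

# Demo: 90 blank frames, marked and then verified.
video = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(90)]
marked = embed_mark(video)
print(looks_marked(marked))  # True
print(looks_marked(video))   # False
```

As the comment notes, a static pattern like this is trivial to find and erase, and lossy re-encoding would destroy exact pixel values anyway, which is why practical proposals lean on spread-out, key-dependent signals rather than a fixed corner stamp.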
YouTube is too occupied with banning normal YouTubers and protecting big corporations’ interests with their AI slop.
Cryptic_Goat459
Would doing this with politicians, perhaps the party in power, get the gears of change moving faster?
chris14020
The obvious solution is to make sharing a deepfake with ANY attempt to misrepresent it as genuine (for malicious, defamatory, or personal-gain reasons) a fraud charge. Require any deepfake to carry obvious indicators (like a watermark), and hold those who knowingly create and share them accountable.
Well, at least the [proof-of-concept warning](https://youtube.com/watch?v=9WfZuNceFDM) about this we had a few years ago was funny.