15 comments
This is just gonna be the AI version of the meme where Peter Griffin is checked against the 'safe' colours card, isn't it?
We can't even get cars to safely drive themselves, and now you want to use AI to work out if someone is autistic or a sexual predator? I dunno, man.
Maybe we just employ more security guards and place them in areas women find intimidating in their local communities, and then hold a national dialogue about how we address this problem in the long term.
Oh yay, now a computer gets to decide I’m a rapist because I act slightly unusually.
How does it know what 'moving in an unexpected way' means?
This sounds like a great crowd-control tool for events at stadiums and for managing customer/passenger traffic etc., but I am not sold on how it is supposed to judge malicious intentions.
Close enough, welcome back Five Nights at Freddy’s 2.
>On a dark winter evening, a woman waits for a train on a deserted platform. A man arrives and sits right beside her, making her feel uncomfortable and unsafe.
Thank God the latest technology is here to ensure we clamp down on criminal malcontents intent on menacing society by *checks notes* sitting on a public bench
It’s tricky, but if you look at the treeline you can usually see a little bit of visual distortion.
Also if you manage to make it bleed, it’s a vivid green color.
That’s how I spot Predator anyway.
But what happens if I say in front of a networked microphone something like “The Metropolitan Police urgently needs reform”? This AI is going to label me a predator twice.
This is like something from Brass Eye, let alone Black Mirror.
Also, train stations are a stupid example given how many are unmanned these days. Who's going to be responding to the automated alerts?
Of course it would be great if this technology helps protect women, but we also need to be concerned about false positives. We've already seen how police AI facial-recognition tools falsely target people of colour. Would this new technology also falsely accuse innocents while missing actual perpetrators?
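The false-positive worry above can be made concrete with a quick base-rate calculation. All the numbers here are purely illustrative assumptions, not figures from the article: even a system that is right about offenders 99% of the time produces mostly false alarms when genuine threats are rare.

```python
def flagging_precision(prevalence, sensitivity, false_positive_rate):
    """Probability that a flagged person is an actual offender (Bayes' rule).

    prevalence: fraction of passengers who actually pose a threat
    sensitivity: fraction of real threats the system flags
    false_positive_rate: fraction of innocent people the system flags
    """
    true_alerts = sensitivity * prevalence
    false_alerts = false_positive_rate * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# Illustrative assumptions: 1 in 10,000 passengers poses a threat,
# the system catches 99% of them and wrongly flags 1% of everyone else.
precision = flagging_precision(prevalence=1e-4,
                               sensitivity=0.99,
                               false_positive_rate=0.01)
print(f"{precision:.1%}")  # about 1% of alerts point at an actual offender
```

Under these assumed rates, roughly 99 out of every 100 alerts would land on an innocent person, which is the core of the concern about staff (or police) acting on automated flags.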
This feels very dodgy to me and will almost certainly lead to lots of people being arrested because the 'computer' decided they were a wrong'un.
This reminds me of those 'life 360' type apps: they do nothing to keep you safe; rather they exploit and escalate people's fears and normalise us being surveilled 24/7.
Oh yes, because AI is renowned for being accurate.
How long till it makes comically bad errors, or profiles people on something that looks a bit too much like a trait we're not allowed to discriminate on?
As nice an idea as this tech is, I highly doubt it’s going to work. You just know this AI is somehow going to miss people with actual criminal records for sexual assault and just target some innocent guy in a tracksuit because he looks moody.
I'm sure the non-existent security staff at the station are going to jump onto the CCTV and attend in person because a notification says one person sat next to another on a bench.
Seems like interesting if overbearing technology, but they really could have thought up a better example.
"Cumbria firm sued for mistakenly identifying neurodivergent man as potential sexual predator."