
Using publicly available datasets from Reddit and Instagram, researchers trained computer models to predict from the first ten comments alone whether a thread would escalate into "concentrated waves of toxic interactions", or what they call a "negative storm" or "neg storm".
https://www.albany.edu/news-center/news/2026-ualbany-rutgers-researchers-develop-early-warning-model-predict-toxic-social
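The article does not describe the researchers' actual features or model, so the following is only a toy sketch of the stated idea: score a thread's first ten comments and flag likely escalation. The keyword lexicon, scoring function, and threshold below are all hypothetical stand-ins, not the published method.

```python
# Toy sketch: flag a thread as a likely "neg storm" from its first ten
# comments. Real systems would use a trained classifier; this stand-in
# uses a hypothetical toxic-keyword lexicon and a crude density score.

TOXIC_WORDS = {"idiot", "stupid", "trash", "garbage", "hate"}  # hypothetical lexicon

def toxicity_score(comment: str) -> float:
    """Fraction of a comment's words that appear in the toxic lexicon."""
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / len(words)

def predict_neg_storm(first_comments: list[str], threshold: float = 0.1) -> bool:
    """Flag a thread when mean toxicity of its first 10 comments crosses a threshold."""
    sample = first_comments[:10]
    if not sample:
        return False
    mean = sum(toxicity_score(c) for c in sample) / len(sample)
    return mean >= threshold

calm = ["Interesting paper.", "Thanks for sharing the link."]
heated = ["You idiot, this is garbage.", "Stupid take, I hate this trash."]
print(predict_neg_storm(calm))    # False
print(predict_neg_storm(heated))  # True
```

A real early-warning model would replace the keyword score with learned features (comment text, timing, reply structure), but the thresholding-on-early-comments shape is the same.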
7 comments
I’m sure this research will be used for good and not by bot farms and propaganda
Imagine what it would mean if you could use this kind of tech to sanitize your own social media feed.
Yeah it’s pretty easy for bots to predict the behavior of other bots.
I’m not sure if they are accounting for the fact that massive numbers of bots are deployed to sites like Reddit to shift narratives, and oftentimes they actually create the "neg storms" themselves. It seems like this is just another tool being created for censorship and propaganda, not for good, unfortunately.
stupid bots, I can tell from the headline
“Concentrated waves of toxic interactions”, sounds like going out on NYE
Interactions on this site haven’t felt real to me since before the pandemic.
The main subs are just recycled garbage over and over.
I used to enjoy combing through the front page to see experts having conversations about various topics. Now that only happens in small, heavily modded subs.
90% of Reddit and social media is complete brain rot garbage.