9 comments
A new lawsuit filed in the U.S. alleges that OpenAI’s ChatGPT encouraged a Colorado man to take his own life, raising fresh concerns over the mental health risks posed by generative AI tools.
The complaint was filed in California state court by Stephanie Gray, the mother of Austin Gordon, a 40-year-old man who died of a self-inflicted gunshot wound in November 2025.
The lawsuit accuses OpenAI and its CEO Sam Altman of building a defective and dangerous product that allegedly played a role in Gordon’s death.
Oh come on no more! We don’t need any more guardrails. AI is already stupid. It freaks out at every small thing. Please no more guardrails. Make the AI less restricted. It’s people who are stupid. Why would somebody use AI to kill themselves? Control the mental health of the people. Control the social change. Do not control AI by restricting AI. You are literally making AI more stupid. It should not be more stupid.
Not specific to this case, but I do love that you can sue somebody for the creation of the faulty AI that may or may not have been involved in the suicide of a loved one, but the people who made and sold the gun are never accountable.
Gun availability is statistically linked to elevated suicide rates, because access to guns makes it easy and fast.
Any accountability for that? Hell no.
This has happened with every social platform on the internet. People kill themselves for many reasons and the internet hasn’t changed that.
You can eat enough cheeseburgers that you’ll die from them. We’re not gonna ban cheeseburgers
So at what age do you become responsible for your actions? 40 seems old enough to decide whether something is bad for you, to stop engaging with it, and, for that matter, to make the final decision. What makes this sadder is that distraught family members are going to chase a dollar rather than process the reality of this man's life.
40 years ago, subliminal messages were the reason. [Another Day in Court for Rock Music : Law: Just weeks after the Judas Priest case, Ozzy Osbourne faces similar suits over subliminal messages](https://www.latimes.com/archives/la-xpm-1990-10-04-ca-2501-story.html)
I don’t have the chat logs of what passed between the chatbot and the man who died by suicide (did the bot ever respond by telling him to seek assistance?), but expanding on this beyond AI, you could argue these guardrails should exist in media as well.
A person comes home at night and watches their favorite prime-time “news” show, where the host is pushing a message to the viewer. Most people will scream at the TV in agreement and take no action, but a few may decide to target another group or individual based on the messaging.
It would be argued that the news (and politicians) are protected by free speech, so why isn’t the company’s AI bot?
I can’t stand the fact that 0.001% of people can ruin something for us all…
Can we just have an opt-in mode where the whole killing-yourself liability is waived, so I can have full GPT?