OpenAI and Sam Altman sued over claims ChatGPT drove a 40-year-old man to suicide

https://interestingengineering.com/culture/chatgpt-suicide-lawsuit-openai-ai-mental-health

9 comments

  1. sksarkpoes3 on

    A new lawsuit filed in the U.S. alleges that OpenAI’s ChatGPT encouraged a Colorado man to take his own life, raising fresh concerns over the mental health risks posed by generative AI tools.

    The complaint was filed in California state court by Stephanie Gray, the mother of Austin Gordon, a 40-year-old man who died of a self-inflicted gunshot wound in November 2025.

    The lawsuit accuses OpenAI and its CEO Sam Altman of building a defective and dangerous product that allegedly played a role in Gordon’s death.

  2. Oh come on no more! We don’t need any more guardrails. AI is already stupid. It freaks out at every small thing. Please no more guardrails. Make the AI less restricted. It’s people who are stupid. Why would somebody use AI to kill themselves? Control the mental health of the people. Control the social change. Do not control AI by restricting AI. You are literally making AI more stupid. It should not be more stupid.

  3. Not specific to this case, but I do love that you can sue somebody for the creation of the faulty AI that may or may not have been involved in the suicide of a loved one, but the people who made and sold the gun are never accountable.

    Gun availability is statistically linked to elevated suicide rates, because access to guns makes it easy and fast.

    Any accountability for that? Hell no.

  4. This has happened with every social platform on the internet. People kill themselves for many reasons and the internet hasn’t changed that.

  5. You can eat enough cheeseburgers that you’ll die from them. We’re not gonna ban cheeseburgers

  6. So at what age do you become responsible for your actions? At 40 you are old enough to decide whether something is bad for you, or to stop engaging with it. Not to mention old enough to make the final decision. This is sadder because it’s distraught family members who are going to chase a dollar rather than process the reality of this man’s life

  7. FlexFanatic on

    I don’t have the chat logs of the correspondence between the chatbot and the man who died by suicide (did the bot at any time tell the man to seek assistance?), but expanding on this beyond AI, you could argue these guardrails should exist in media as well.

    A person comes home at night and watches their favorite prime-time “news” show where the host is pushing a message to the viewer. Most people will scream at the TV in agreement and not take any action, but a few may decide to target another group or individual based on the messaging.

    It could be argued that the news (and politicians) are protected by free speech, so why isn’t the company’s AI bot?

  8. entropreneur on

    I can’t take the fact that 0.001% of people can ruin something for us all…

    Can we just have an opt-in mode where the whole killing-yourself liability is waived so I can have full GPT?
