OpenAI backs bill that would limit liability for AI-enabled mass deaths or financial disasters

https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

21 comments

  1. Here’s an excerpt:

    > OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

    > The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

    > **The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website**. It defines frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta.

    > …

    > Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.

    I’ve seen some bad AI bills before, but this one might just take the cake. Complying with federal standards and not acting recklessly does *not* shield companies from liability under normal circumstances—drugs, cars, consumer products, none of them get exemptions like this.

    I sincerely hope that lawmakers are sane enough to not let this pass.

  2. thegooddoktorjones on

    A law that does absolutely nothing for the vast majority of citizens. Pure corrupt graft.

  3. The issue with AI models is that you can’t hold them accountable, and companies don’t want to be liable for their product.

  4. Similar to the Price-Anderson Act (PAA), enacted in 1957, which capped nuclear power plant operators’ liability.

  5. Error_404_403 on

    AI is a tool with no agency. The person wielding it carries the responsibility.

  6. DistinctSpirit5801 on

    If an AI model makes mistakes, which has already happened multiple times, who’s supposed to be responsible?

    You can’t incarcerate an AI model

  7. RandomUwUFace on

    AI is becoming “too big to fail.” How does one fight back against this?

  8. Spez_is-a-nazi on

    Remember kids, corporations are all about privatizing gains and socializing losses. We are all on the hook for the environmental damage caused, the increased energy bills, the noise, the impact of the disinformation campaigns, all the different types of harms they cause. But those subscription revenues? They belong just to Sammy. 

  9. Significant_You_2735 on

    This is absolutely part of why some corporations want to use AI in the first place – escaping accountability for destructive and dangerous decisions in the pursuit of wealth at any cost. “We didn’t do that, IT did.”

  10. bluestreakxp on

    Ah, I didn’t know Skynet wanted indemnity and hold-harmless arrangements

  11. This makes me think AI is likely to enable mass deaths or financial disasters. Can we stop that before the liability part?

  12. Practical_Rip_953 on

    I’m so glad to see the government heard the people’s concerns about AI and jumped in to address the real issues with AI /s

  13. Capable-Student-413 on

    So tired of Americans’ false surprise about this type of shit. It’s not news. Your country sucks and the world knows it.  
    Decades of school shootings every week and a pedophile President. Cops shooting children on camera, alcoholic Supreme Court justices….

    But this injustice is the surprise?

  14. plan_with_stan on

    Soooo, an AI company decides to release a model that, among other things, can create bioweapons for a terrorist organization that would not normally have this capability. The terrorist org uses it, kills a lot of people, takes down power grids, and sets off mass-casualty and chaos events… and the AI company can go “well… we didn’t do that, the terrorists did” and it will all be fine and dandy??

    That’s just bullshit. There needs to be oversight and liability so they make sure their models don’t fuck around.

    Imagine Airbus decided to go the SpaceX route and just… test their airplanes live, with passengers. A new wing design we don’t know works? Yeah, put it on the plane from Amsterdam to Auckland… let’s see if it works.
