21 comments
What?
Here’s an excerpt:
> OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
> The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
> **The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website**. It defines frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta.
> …
> Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.
I’ve seen some bad AI bills before, but this one might just take the cake. Complying with federal standards and not acting recklessly does *not* shield companies from liability under normal circumstances—drugs, cars, consumer products, none of them get exemptions like this.
I sincerely hope that lawmakers are sane enough to not let this pass.
A law that does absolutely nothing for the vast majority of citizens. Pure corrupt graft.
The issue with AI models is that you can’t hold them accountable, and companies don’t want to be liable for their product.
Similar to the Price-Anderson Act (PAA), enacted in 1957, which capped nuclear power plant operators’ liability.
AI is a tool with no agency. The person wielding it carries the responsibility.
If an AI model makes mistakes, which has already happened multiple times, who’s supposed to be responsible?
You can’t incarcerate an AI model
I’m sure they do
AI is becoming “too big to fail.” How does one fight back against this?
Remember kids, corporations are all about privatizing gains and socializing losses. We are all on the hook for the environmental damage caused, the increased energy bills, the noise, the impact of the disinformation campaigns, all the different types of harms they cause. But those subscription revenues? They belong just to Sammy.
In the US, this will pass with no problem.
This is absolutely part of why some corporations want to use AI in the first place – escaping accountability for destructive and dangerous decisions in the pursuit of wealth at any cost. “We didn’t do that, IT did.”
Ah, I didn’t know Skynet wanted indemnity and hold-harmless arrangements.
This makes me think AI is likely to enable mass deaths or financial disasters. Can we stop that before the liability part?
I’m so glad to see the government heard the people’s concerns about AI and jumped in to address the real issues with AI /s
So tired of Americans’ false surprise about this type of shit. It’s not news. Your country sucks and the world knows it.
Decades of school shootings every week and a pedophile President. Cops shooting children on camera, alcoholic supreme court justices….
But this injustice is the surprise?
Things you do before you kill many people, for 100, Alex?
NO WAY!! UNBELIEVABLE!
soooo, an AI company decides to release a model that, among other things, can create bioweapons for a terrorist organization that would not normally have this capability. The terrorist org uses it, kills a lot of people, takes down power grids, and sets off mass-casualty and chaos events … and the AI company can go “well… we didn’t do that, the terrorists did” and it will all be fine and dandy??
that’s just bullshit – there needs to be oversight and liability so they make sure their models don’t fuck around.
Imagine Airbus decided to go the SpaceX route and just… test their airplanes live, with passengers. A new wing design we don’t know works? Yeah, put it on the plane from Amsterdam to Auckland… let’s see if it works.
Everyday I hate AI more.
I can’t wait for Ted Cruz to defend this one