Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

https://arstechnica.com/tech-policy/2024/08/feds-to-get-early-access-to-openai-anthropic-ai-to-test-for-doomsday-scenarios/


  1. MetaKnowing:

    “OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

    This will ensure that public safety won’t depend exclusively on how the companies “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” NIST said, but also on collaborative research with the US government.

    The announcement comes as [California is poised to pass one of the country’s first AI safety bills](https://archive.is/o/0vQ1g/https://arstechnica.com/ai/2024/08/as-contentious-california-ai-safety-bill-passes-critics-push-governor-for-veto/), which will regulate how AI is developed and deployed in the state.

    Among the most controversial aspects of the bill is a requirement that AI companies build in a [“kill switch”](https://archive.is/o/0vQ1g/https://arstechnica.com/ai/2024/06/outcry-from-big-ai-firms-over-california-ai-kill-switch-bill/) to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

  2. Pretty sure they’re just going to hire OpenAI employees/ex-employees as consultants, everything passes with flying colors, while billions of dollars are spent on “safety”. Everyone involved becomes richer. Yay.

  3. lol, like I would trust any government to be able to actually do this…
