Former OpenAI policy chief founds nonprofit institute, calls for independent safety audits of frontier AI models

https://fortune.com/2026/01/15/former-openai-policy-chief-creates-nonprofit-institute-calls-for-independent-safety-audits-of-frontier-ai-models/

2 comments

  1. Trusting corporations to police themselves is just silly.

    We’ve tried this a million times before and it never works.

    You’ve got to have somebody independent making sure they’re not harming society.

  2. Power saws are dangerous. Everyone knows they are dangerous. We wouldn’t let a kid or an unstable person play with a power saw every day. That doesn’t mean we don’t need power saws.

    ChatGPT, Gemini, etc. are basically the “public” models, and since they have chosen to be public with no login required, I think they should face real scrutiny and regulation. They should be safe for everyone.

    At the same time, as an emotionally stable adult who has never committed a crime, I should be able to access one without many of those same restrictions that make it safe for everyone.
