Business Insider: Key Alignment Researchers Leave OpenAI

https://www.businessinsider.com/openai-safety-researchers-quit-superalignment-sam-altman-chatgpt-2024-5

  1. **Submission Statement:**

    This recent development signals warning signs at OpenAI: two key safety and governance researchers, Daniel Kokotajlo and William Saunders, have resigned.

    These departures underscore the existential concerns surrounding the rapid advancement toward Artificial General Intelligence (AGI).

    Here is why Kokotajlo said he left:

    *“Kokotajlo* [*wrote*](https://www.lesswrong.com/users/daniel-kokotajlo) *on his profile page on the online forum LessWrong that he quit “due to losing confidence that it would behave responsibly around the time of AGI.”*

    *In a separate post on the platform in April, he partially explained one of the reasons behind his decision to leave. He also weighed in on a discussion about pausing AGI development.*

    *“I think most people pushing for a pause are trying to push against a ‘selective pause’ and for an actual pause that would apply to the big labs who are at the forefront of progress,” Kokotajlo wrote.”*

    And here is what Saunders worked on:

    *“Saunders was also a manager of the interpretability team, which researches how to make AGI safe and examines how and why models behave the way they do. He has* [*co-authored*](https://openai.com/index/critiques) *several papers on AI models.*

    *Saunders said in a* [*comment*](https://www.lesswrong.com/posts/GFc2G5Lda6TpzwsLr/william_s-s-shortform?commentId=DwSTNg24oaGsPSAu8) *on his LessWrong profile* [*page*](https://www.lesswrong.com/users/william_s) *that he resigned that month after three years at the ChatGPT maker.”*

    These departures raise the question: **Is OpenAI listening to its employees’ calls for safety measures?**
