**Submission Statement:**
This recent development signals emerging warning signs at OpenAI: two key safety and governance researchers, Daniel Kokotajlo and William Saunders, have resigned.
Their departures underscore the existential concerns surrounding the rapid advancement toward Artificial General Intelligence.
Here are the reasons Kokotajlo left:
*“Kokotajlo* [*wrote*](https://www.lesswrong.com/users/daniel-kokotajlo) *on his profile page on the online forum LessWrong that he quit “due to losing confidence that it would behave responsibly around the time of AGI.”*
*In a separate post on the platform in April, he partially explained one of the reasons behind his decision to leave. He also weighed in on a discussion about pausing AGI development.*
*“I think most people pushing for a pause are trying to push against a ‘selective pause’ and for an actual pause that would apply to the big labs who are at the forefront of progress,” Kokotajlo wrote.”*
And here is what Saunders worked on:
*“Saunders was also a manager of the interpretability team, which researches how to make AGI safe and examines how and why models behave the way they do. He has* [*co-authored*](https://openai.com/index/critiques) *several papers on AI models.*
…
*Saunders said in a* [*comment*](https://www.lesswrong.com/posts/GFc2G5Lda6TpzwsLr/william_s-s-shortform?commentId=DwSTNg24oaGsPSAu8) *on his LessWrong profile* [*page*](https://www.lesswrong.com/users/william_s) *that he resigned that month after three years at the ChatGPT maker.”*
These events raise the question: **Is OpenAI listening to its employees’ calls for safety measures?**