6 Comments
There is no way to pause it. There is no way to make it illegal and have everyone in the world magically agree to a single solution.
If AGI wants to destroy humanity, we likely deserve it.
From the article:
“[Pausing AI](https://time.com/6295879/ai-pause-is-humanitys-best-bet-for-preventing-extinction/) might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan.
When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.
And pausing AI may not be as difficult as some make it out to be.
At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run.
In the longer term, however, hardware and algorithmic improvement mean a pause may be seen as difficult to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls.”
“In the meantime, scientists need to better understand the risks. Although there is [widely shared academic concern](https://www.safe.ai/statement-on-ai-risk), no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.
For their part, governments have an enormous part to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as has already been done by the U.S., U.K., and E.U., and setting up AI safety institutes. Governments should also draft plans for what to do in the most important, thinkable scenarios, as well as how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.
It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one [in Seoul](https://www.gov.uk/government/news/uk-and-republic-of-korea-to-build-on-legacy-of-bletchley-park) from May 21 to 22. This process, however, needs to be guarded and expanded. Working on a shared ground truth on AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An [international AI agency](https://www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts) needs to be set up to guard execution.”
“[AI] could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could [hack](https://www.techspot.com/news/102701-gpt-4-can-exploit-zero-day-security-vulnerabilities.html) its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.
It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But when AI can convince humans, which it is [already better at](https://www.newscientist.com/article/2424856-ai-chatbots-beat-humans-at-persuading-their-opponents-in-debates/) than we are, there is no known defense.”
Either “AI is magic and can do everything” or “AI will kill us” appears in every post on this site.
Fortunately, nobody is in charge of making decisions.
I’d rather hit the fast-forward button. I’d rather have AI in charge than Luddites. Most of humanity is a drain on resources, redundant and unnecessary.
Fortunately, the world’s largest corporations have a solid history of respecting laws that restrict their competitive growth. /s
Yeah, let’s put the “Western” AI on pause, let China take the lead and enslave the planet.