Tags
Aktuelle Nachrichten
America
Aus Aller Welt
Breaking News
Canada
DE
Deutsch
Deutschsprechenden
Europa
Europe
Global News
Internationale Nachrichten aus aller Welt
Japan
Japan News
Kanada
Konflikt
Korea
Krieg in der Ukraine
Latest news
Maps
Nachrichten
News
News Japan
Polen
Russischer Überfall auf die Ukraine seit 2022
Science
South Korea
Ukraine
UkraineWarVideoReport
Ukraine War Video Report
Ukrainian Conflict
United Kingdom
United States
United States of America
US
USA
USA Politics
Vereinigtes Königreich Großbritannien und Nordirland
Vereinigtes Königreich
Welt
Welt-Nachrichten
Weltnachrichten
Wissenschaft
World
World News
2 Comments
From the article
>They also put fear of unemployment into many people who have previously avoided the threat of automation – journalists included.
>However, the legitimate question remains: does AI pose an existential threat? After over half a century of false alarms, are we finally going to be under the thumb of a modern-day Colossus or HAL 9000? Are we going to be plugged into the Matrix?
>According to researchers from the University of Bath and the Technical University of Darmstadt, the answer is no.
>In a [study](https://aclanthology.org/2024.acl-long.279/) published as part of the 62nd Annual Meeting of the [Association for Computational Linguistics](https://2024.aclweb.org/) (ACL 2024), AIs, and specifically LLMs, are, in their words, inherently controllable, predictable and safe.
>“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath.
>“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” added Dr. Tayyar Madabushi. “This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”
Perhaps the foremost danger of AI tools is that we might grow so reliant upon them that we are obliged to accept all of their negative externalities, no matter what they are or how large they grow.
The internet, for example, is something humanity can no longer do without. If someone had the power to shut it down for everyone, the consequence would be the largest humanitarian disaster and loss of human life in history.
AI now is not so indispensable, but it is growing more so all the time. It needn’t take control of the nukes or dispatch robot soldiers to wipe us out. It need only be very useful, let us rely upon it, let more of our bloodstream and oxygen supply flow through it.