
    14 Comments

    1. If companies are legally liable for everything their AIs do, you will see a swift decrease in the illegality of the AIs.

    2. Seems like he kind of gets it, although it’s obvious that the potential for harm goes well beyond people losing jobs or becoming emotionally dependent on chatbots. These technologies can infect language and ideology itself, and given that they are controlled by companies with less than noble goals for humanity, certain forms of AI should be treated as weapons of mass destruction and regulated as such. Few people seem to grasp how easily AI can be weaponized to control the masses. 

    3. If the headline sounds like word salad, it’s because they omitted a word from the actual quote.

      >Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.

      >“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

    4. LolAtAllOfThis

      How are people into AI? There are horror/sci-fi movies written about it. And I’m not being overly dramatic. It’s advancing so quickly.

    5. Unique-Coffee5087


      Artificial Intelligence development must be assessed for risk and regulated to ensure that it is only beneficial.

      “As AI wipes jobs, Google CEO Sundar Pichai says it’s up to everyday people to adapt accordingly: ‘We will have to work through societal disruption’” [https://fortune.com/2025/12/02/ai-wipes-jobs-google-ceo-sundar-pichai-everyday-people-to-adapt-accordingly-we-have-to-work-through-societal-disruption/]

      NO. There doesn’t have to be a disruption. We are in control of the adoption of AI. It is not some force of nature. We can simply stop and reassess the benefits and costs, and then decide how to proceed.

      When someone deliberately sets a forest afire, they are imprisoned for arson. If a corporation does this through negligence, it is fined to recoup the cost of putting out the fire and of restoring property and order. Google and other entities are developing AI while knowing that it will be disruptive and even destructive.

      There’s an opportunity to stop them and to make them submit to guidance and oversight so that the new technology benefits society without excessive disruption. I am reminded of the Asilomar Conference on Recombinant DNA, in which leaders in the new field of molecular biology tried to project the impact of the new science and its applications, anticipate risks, dangers, and pitfalls, and develop guidelines for regulating research and development of the technology. These responsible acts were organized and performed by the scientific community at a time when the technology was virtually unknown outside of close academic and medical circles.

      The Asilomar Conference should be a model for assessing potentially dangerous new knowledge and technology, including Artificial Intelligence. Instead, there is a feeling of inevitability, as though the development and growth of AI is something that is happening to us. This is wrong.

    6. Effective-Toe-8108

      Half of the public can’t even properly define what AI is. This is just fearmongering.

    7. Because it barely fucking works and we’re instituting massive investment and layoffs because of it?
