If companies are legally liable for everything their AIs do, you will see a swift decrease in the illegality of the AIs.
Bakedads
Seems like he kind of gets it, although it’s obvious that the potential for harm goes well beyond people losing jobs or becoming emotionally dependent on chatbots. These technologies can infect language and ideology itself, and given that they are controlled by companies with less than noble goals for humanity, certain forms of AI should be treated as weapons of mass destruction and regulated as such. Few people seem to grasp how easily AI can be weaponized to control the masses.
elihu
If the headline sounds like word salad, it’s because they omitted a word from the actual quote.
>Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.
>“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”
LolAtAllOfThis
How are people into AI? There are horror/sci-fi movies written about it. And I’m not being overly dramatic. It’s advancing so quickly.
phantom_metallic
Idk, we did nuke another country at one point.
DadOfPete
All hail our robot overlords!
RickyTrailerLivin
Guns were.
fatuousfatwa
So write a bill, Bernie. You haven’t written a serious one yet.
Unique-Coffee5087
Letter_AI-Asilomar
Artificial Intelligence development must be assessed for risk and regulated to ensure that it is only beneficial.
"As AI wipes jobs, Google CEO Sundar Pichai says it's up to everyday people to adapt accordingly: 'We will have to work through societal disruption'" [https://fortune.com/2025/12/02/ai-wipes-jobs-google-ceo-sundar-pichai-everyday-people-to-adapt-accordingly-we-have-to-work-through-societal-disruption/]
NO. There doesn’t have to be a disruption. We are in control of the adoption of AI. It is not some force of nature. We can simply stop and reassess the benefits and costs, and then decide how to proceed.
When someone deliberately sets a forest afire, they are put in prison for the arson. If it’s a corporation that does this because of negligence, it is fined to recoup the cost of putting out the fire and for the restoration of property and order. Google and other entities are developing AI while knowing that it will be disruptive and even destructive.
There’s an opportunity to stop them and to make them submit to guidance and oversight so that the new technology benefits society without excessive disruption. I am reminded of the Asilomar Conference on Recombinant DNA, in which leaders in the new field of molecular biology tried to project the impact of the new science and its applications, anticipate risks, dangers, and pitfalls, and develop guidelines for regulating research and development of the technology. These responsible acts were organized and performed by the scientific community at a time when the technology was virtually unknown outside of close academic and medical circles.
The Asilomar Conference should be a model for assessing potentially dangerous new knowledge and technology, including Artificial Intelligence. Instead, there is a feeling of inevitability, as though the development and growth of AI is something that is happening to us. This is wrong.
Diligent-Ranger7087
No, it’s not. Folks are losing their minds.
NimusNix
No, that was electricity, you old man.
Effective-Toe-8108
Half of the public can’t even properly define what AI is. This is just fearmongering.
iAMguppy
Because it barely fucking works and we’re instituting massive investment and layoffs because of it?
scottycurious
That’s an observation, not a criticism.
14 Comments