AI CEOs Worry the Government Will Nationalize AI

https://yro.slashdot.org/story/26/03/07/2058213/ai-ceos-worry-the-government-will-nationalize-ai

23 comments

  1. gadgetygirl on

    Just days ago, OpenAI CEO Sam Altman said "It has seemed to me for a long time it might be better if building artificial general intelligence were a government project." And Palantir's CEO thinks it's inevitable – that if a technology takes millions of jobs *and* threatens the U.S. military, you'd be crazy to think it *wouldn't* get nationalized.

    Do AI companies secretly want this? Imagine guaranteed government contracts and guaranteed funding for research and development. Or will government inevitably *need* this? If society is transformed, will they have no choice but to seize the private companies building it so they can direct its development?

  2. aparallaxview on

    And how would that be a worse outcome than the current neo-feudalist one? Not saying it would be better, just that it likely wouldn’t be worse 🤷

  3. quantumpencil on

    I mean, the government is going to do this. They may not do it explicitly but they will use force to ensure they are ultimately in control of the technology one way or another.

  4. They don’t worry. It was always the plan. AI can be used as a mass surveillance tool and most AI shops are impossible to make profitable.

    Nationalization disguised as a bail out was always the goal.

  5. SkiHotWheels on

    Isn't this what all the meetings were about back in the presidential race? Biden had told these guys that the government wanted to heavily regulate AI. Then Trump told them he'd stay out of their way. Hearing that, they got behind him in Q2 of 2024.

  6. PM_ME_NUNUDES on

    It's not high-tech enough. The moat is currently just language and training. What's a government going to do about open source models? Maaaaaybe if the government banned individual hardware ownership they could do it, but I don't see the public accepting that. This is just more evidence that Altman and Co. don't really have a clue about what they're doing.

    Anyway I’m off to the thrift store to buy every single book they’ve got.

  7. Yeah, I mean, that’s what happens when you push your product to be an integral part of how the government operates and make the government and military dependent on your product never going offline. 

  8. BalerionSanders on

    It shouldn’t be nationalized.

    It should be banned.

    (Anyway, all corporations and rich people, and us too, are nationalized already right now. This criminal regime can do anything at any time, and nothing except the implicit threat of armed rebellion and mass resistance – whether or not we are any longer capable of actually effecting that kind of fight for our liberty – is able to stop them. If the DOJ rolled up to Anthropic or OpenAI or even JP Morgan and said, 'pay us, or else,' they'd pay, or they'd be destroyed. Nazism makes no compromises; capitalism is only useful to Nazis as long as it is useful, or it will be destroyed. You get what you campaign-financed, you Silicon Valley apocalypse incel cult of dunces)

  9. KratosLegacy on

    Nationalize another 70% of industries while you're fucking at it. Healthcare, education, banking, energy, etc. But it needs to be in the hands of the people, not the government. Keep that in mind, people. It needs to be held accountable by *the people*, not *politicians* who don't represent the people and sign data-center deals behind closed doors while we pay higher energy and water bills.

  10. Well, they are presently making the mistake of trying to move too far too fast. And doing that will generate a significant backlash against it.

  11. DejectedTimeTraveler on

    We need something like the Fed for AI: some board of appointed, long-term individuals whose only job is to try and keep the AIs somewhat in check.

  12. Spitfire1900 on

    I think the bigger risk to profiteers is that AI models look to be democratized too easily. A decently funded government project can release an open source model for a tenth the price at 90% the capability.

  13. Wait wouldn’t that directly transfer the loan obligations to the US? Is that a backdoor out of this fustercluck?

  14. SolidLikeIraq on

    I was listening to a podcast that spoke about how the race for AGI is essentially the race for total power.

    If anyone creates genuine AGI – it would instantly be able to control the world’s nuclear arsenal. It would break encryption that was unbreakable. It would game economic models and connected systems in a way that we likely couldn’t comprehend.

    It's hard to even imagine something that has the power to be that world-shifting. In a weird way it's like the old adage: "Imagine trying to explain cars to horse traders."

    If AGI is possible, it will be so beyond comprehension that even our most resilient, safest systems will be under threat of complete destruction.

    I think what we likely end up with is a lot of very useful specific “agents” that are perfect for operating within specific tasks.

    The ability to connect all those tasks and adjust for the nuances of the weird contextual clues that we all use in our daily lives will be difficult.

    This isn't a "what word most commonly follows this word, given the parameters and probabilities of the words before it" problem.

    I just hope we don’t completely destroy the next 10-15 years with this shit. Technology is fire. Fire can be a great thing and we’ve learned how to use it to better our lives. It STILL burns a ton of shit down accidentally though.

  15. JUST_A_LITTLE_PUSH on

    What I'm more concerned about is the government forcing the AI to learn a pro-Israeli bias and feed it back to millions of users. People are already taking what chatbots tell them as gospel truth. Another medium for the Mossad to infiltrate and manipulate, if they haven't already.

  16. DeLoresDelorean on

    It's not that reliable or accurate, so it will fit perfectly with other government assets.

  17. pimpeachment on

    They can't. Anyone can run their own models on their own hardware. They can only nationalize the services offering pre-built models to consumers.

  18. No shit, you’ve built the best automated surveillance and de-anonymizing system in history.

  19. SnooDucks4472 on

    We will get to a point where AGI, when/if it becomes possible, will be akin to a superweapon. At that point the hope would be that a reasonable government realizes this power is dangerous and cannot be left in any one person's hands.

  20. Altruistic_Koala_122 on

    It's already secretly eyeing your browser tabs if you haven't dismantled it completely. Not to mention the future of PCs, which will all have AI on the hardware itself.

    People keep forgetting, the evil and mean people will always be the first to abuse everything and everyone for quick profits.

  21. I mean, surely this is the way forward? I’ve been shouting from the rooftops at work about something similar. We’re pissing around trying to implement AI in different ways. Every day someone has something new to trial. Nothing sticks. All the while we’re burning tokens like mad. All people are doing is vibe coding.

    Anyway, at some point, some key workflows using AI will stick and we’ll be bound by some models. At some point the VC funding will cease and the rug will be pulled. Then all the companies reliant on these LLMs will have their costs increase massively (potentially). So why not invest in open source or inner source now? Isn’t that the smart move? Then keep some funding for some premium models.

    I mean, anyone doing OpenClaw stuff is doing that straight off the bat… so why are massive companies not? I mean, we all know why.

    So by scaling this up and having this at government level, surely that makes sense. Fuck em. They would fuck you over in a heartbeat. To be at the behest of a load of tech bros who were all sexually repressed teenagers is madness!
