
Google DeepMind paper argues that LLMs will never be conscious | Philosophers said the paper’s argument is sound, but “all of these arguments were made many years ago”.
https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/
42 comments
It’s kinda funny how people start becoming philosophers the moment a chatbot says something slightly convincing.
And yet, keep your eyes peeled for when either Scam Altman or Wario claims “sentience” so they can keep the hype alive and well.
As someone with a PhD in a humanities field focused on the philosophy and history of science and technology, who wrote a doctoral dissertation on LLMs and language, I can say THIS is extremely true:
“The AI research community is extremely insular in a lot of ways,” Jäger said. “**For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue.** And I’m talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are **absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they’re used in a very weird way right now.** And I’m always very surprised that there is so little interest. I guess it’s just a high-pressure environment, and they go ahead developing things they don’t have time to read.”
I am not going to comment on whether or not AI can ever reach a state we could describe as “consciousness” (“conscious” is actually quite difficult to define, as any first-year philosophy of mind student could tell you). I genuinely have no idea, and I am not going to overextend my expertise by making a claim I can’t argue for. Yet it’s wild to me how the people who are most convinced that AI can replace departments like mine, as universities cut more and more programs, are always the people who prove that these subjects are needed. EDIT: typo
Also, when we know AI can lie, why would you trust such a paper?
> A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence.
LLMs will be as conscious as T9 autocomplete was on your flip phone.
Who should we listen to on this topic, a scientist who is an expert in this domain, or a CEO who leeches off of their work?
I genuinely have no idea whether AI can ever reach consciousness, but obviously Google has a vested interest in that not being the case.
A few key points from this article:
>A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”
>
>The paper shows the divergence between the self-serving narratives AI companies promote in the media and how they collapse under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades.
>
>“I think he [Lerchner] arrived at this conclusion on his own and he’s reinvented the wheel and he’s not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me.
>
>Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI.
>
>The so-called “abstraction fallacy” is the mistaken belief that because we’ve organized data in such a way that allows AI to manipulate language, symbols, and images in a way that mimics sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body.
>
>…
>
>Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist.
>
>…
>
>Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to the researchers and educating themselves with the work Lerchner failed to cite in his paper, or simply didn’t know existed.
>
>“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I’m talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they’re used in a very weird way right now. And I’m always very surprised that there is so little interest. I guess it’s just a high-pressure environment and they go ahead developing things they don’t have time to read.”
>
>…
>
>Bender also told me that for the field of computer science, and humanity more broadly, “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts […] it would be a better world if we didn’t have that setup.”
Interesting to see that Google/Alphabet allowed the publication of this paper, though they distanced themselves slightly after the fact with the changes to the letterhead. The academics’ critiques are also useful to consider going forward. Hopefully more corporate researchers will start to look around for work that has already been done that could inform their own, and vice versa.
‘Never’ is a strong word. I’m gonna disagree based on that word.
LLMs are impressive. They are far from being sentient. People in the field are well aware of this.
AI CEOs are trying to hype their products so they pretend that they are gaining sentience.
Tech bro scammers try to convince the average Joe that they are sentient. To the average Joe, they are.
Scam Altman has been lying from the get-go. If you google “what can genai do” it will tell you that it can understand context and reason, when the technology can do no such thing. Blatant lies. Shame on these companies: this shit is just auto-complete on steroids and they’re treating it like a modern marvel. What the fuck.
They absolutely roasted Lerchner for not reading books.
What are some good books about artificial intelligence and sentience?
The real disagreement seems more philosophical than technical at this point
Minsky and Papert showed that single-layer perceptrons can never solve XOR.
Technically correct, functionally irrelevant at sufficient complexity.
The cycle continues.
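For anyone curious what that XOR limitation looks like concretely: a single threshold unit can’t separate XOR’s outputs (they’re not linearly separable), but two stacked layers can. A minimal numpy sketch with hand-picked weights rather than trained ones, showing the textbook construction, nothing more:

```python
import numpy as np

def step(z):
    """Heaviside threshold: the activation of a classic perceptron."""
    return (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all four XOR inputs

# No single perceptron (one linear threshold on x1, x2) can output
# [0, 1, 1, 0] here; that is Minsky and Papert's result.

# But a two-layer network does it with hand-picked weights:
h_or  = step(X @ np.array([1, 1]) - 0.5)   # hidden unit 1: x1 OR x2
h_and = step(X @ np.array([1, 1]) - 1.5)   # hidden unit 2: x1 AND x2
xor   = step(h_or - h_and - 0.5)           # output: OR AND (NOT AND)

print(xor)  # -> [0 1 1 0]
```

The proof applied to single-layer networks; one hidden layer sidesteps it entirely, which is the sense in which the claim was “technically correct, functionally irrelevant.”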
I haven’t seen a single argument for, or against, AI that wasn’t also being made in the 80s – or earlier.
It’s an entirely deterministic system. Aside from some pseudo-random number generation, you’ll get the same output from an LLM every time with a given input.
So if you believe free will is a requirement for consciousness an LLM can never be conscious.
However, if you aren’t convinced free will exists, then it could technically already be as conscious as we are; it’s just limited by its inability to permanently learn new information outside of a small context window.
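To make the determinism point concrete: at temperature 0, decoding is a pure argmax, and even “random” sampling is driven by a seeded pseudo-random generator. A toy sketch in Python; the function and logit values are made up for illustration, not any real model’s API:

```python
import numpy as np

def next_token(logits, temperature=0.0, seed=0):
    """Pick the next token from a toy logit vector.

    temperature == 0 -> greedy argmax: same input, same output, always.
    temperature > 0  -> "random" sampling, but the seeded PRNG makes
                        that repeatable too, given the same seed.
    """
    if temperature == 0.0:
        return int(np.argmax(logits))
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                 # softmax over the logits
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(logits), p=probs))

logits = np.array([1.0, 3.5, 0.2])
print(next_token(logits))                   # always token 1
print(next_token(logits, temperature=0.8))  # fixed seed, so repeatable too
```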
LLMs are all pre-trained on a dataset. They’re constantly training new models because, if they don’t, their knowledge is basically frozen in time.
On one hand, we have a working example of general intelligence and consciousness in the human mind. And that’s just a physical thing, so we know it is possible. But purely based on my gut and vibes, I feel like LLMs are at best a model of the speech part of our brain.
Somewhere in our head we have thoughts and they get turned into speech and words. LLMs are the words part with no thought.
I remember reading Jon Ronson’s Lost at Sea years ago. It’s an anthology of stories by a journalist, and one of them is him having a fairly lopsided conversation with an AI (as of 2012). The people running the project had this idea that if you simply pumped enough computing power and info into a chat system, sentience would emerge. The parallel to the way people speak of AGI as being just a few more shovelfuls of money away is something I think about a lot.
Rebuttal: your paper is oooold!!!
More than half of the cells in our bodies are non-human. If having non-human creatures sending signals to the brain, telling it things like it’s hungry, precludes that brain from being independently conscious, then either humans aren’t conscious or we are something more than just human.
Of course. LLMs are a dead end if you’re looking for sentience. They are just complex pattern-matching algorithms; they do not “think”.
This will age like milk
>Philosophers said the paper’s argument is sound
You mean the same philosophers who have struggled to define consciousness since the days of Plato?
This is a shit article. The thrust of whatever point the author is trying to make seems to hinge entirely on them not knowing that consciousness and AGI are not synonyms. AGI is a measure of performance, not sentience.
I think it’s eventually going to be like the speed of light. AI could get to 99.99999% of a consciousness simulation, but it just cannot physically achieve actual consciousness.
can someone let the sales guys know?
Give it a week before someone claims their model is “basically sentient.”
I mean good. The tech industry should never be allowed to do that.
AI regurgitating its training data most likely?
OBVIOUSLY. Genuinely, imagine being so ignorant as to think that a predictor of left-to-right codified text and code could ever have the perception or dynamic real-time intuition that is virtually inseparable from the sentient experience. In fact, can sentience exist without experience? I think likely not.
I’d say prove it, but the author obviously doesn’t care about proof. For if he did, he wouldn’t have made such an absurd prognostication. Philosophy can’t agree on the existence of free will, nor has it ever found a way to refute solipsism. There is no solution to the problem of the Chinese Room. Given all of these problems, and so many more, it’d be quite silly to state ‘AI will never be conscious’ matter-of-factly, and doing so is a shining example of hubris.
There are no physical laws governing the potential of future AI – at least, none that we have formulated and proved. Thus, his statements are nothing more than a blind man throwing darts at a dartboard, or a broken clock hoping to be right twice a day. Perhaps the man will be right, but it won’t be because of his superior intellect or understanding of cognition; it’ll be because he flipped a coin and said ‘it won’t’ instead of ‘it will.’ He may also be wrong, in which case nobody will give a shit that he ever said anything at all.
“You’re wrong, and here’s why.”
“OK, but you said that years and years ago.”
How can they be confident of that when we still don’t know what consciousness is, exactly?
Pretty sure we just need to pump a few more billion dollars into the bubble. Then we will have AGI, and we will all be rich and have real AI girlfriends who love us for real. We just need one more round of funding please.
Philosophers are so arrogant
It’s literally nothing more than a word-guessing algorithm; how could it ever be conscious? It would be impossible for LLMs to even reason as much as an if/then statement does, because their only purpose and only ability is to mathematically guess which words are likely to appear in what order.
AGI (what we used to call AI before LLMs poisoned the term) is a whole different discussion, as it’s mostly a science-fantasy concept that is still in the distant future, but there’s no doubt that LLMs are fundamentally incapable of ever thinking, let alone gaining consciousness.
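For what it’s worth, the “guess which words come next” mechanism being described here is next-token probability estimation. A toy bigram counter shows the idea in miniature; the corpus below is made up, and a real LLM is vastly more complex, but the objective is the same, predicting the likely next word:

```python
from collections import Counter, defaultdict

# Made-up toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows another (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ("the" is followed by "cat" 2 times out of 3)
```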
kinda hard to be something we don’t understand
These arguments are as old as argument itself. They likely will not resolve.
Man: Prove that you are conscious. Machine: You first.
If you can’t tell, does it matter?
My washing machine is clearly demonically possessed. Does that count?
A human is a type of machine.
And so on.
Funnily enough, getting a state-of-the-art check on prior research is one of the best uses for Claude and similar tools, so it is amusing to see people in the field not understanding that.
The article’s title isn’t really intellectually honest, as the paper’s argument is that no AI will ever be conscious. That indirectly means no LLM will be conscious, but it’s leading to split debates in comment sections about whether LLMs or AI in general can reach consciousness.
Writing a research paper whose headline is “this thing will never happen” is deeply non-credible, regardless of context. Not even “extremely unlikely” but straight to “never” is fighting words.
Beyond that, I’m not a philosopher, nor an AI or consciousness researcher, but…
– We still have no idea what exactly consciousness is or where it comes from. Nobody knows. There are theories, but still a lot of head-scratching.
– While I agree that LLMs specifically likely aren’t going to become conscious, and that you’ll likely need a different fundamental structure, the idea that they can’t be conscious because they are “just pattern matchers” or similar is laughable. People think there is some special sauce, some “soul” or whatever, but the truth is that we likely became conscious as a side effect of evolution, which is just a lot of random dice rolling.
DUHHHHHHHHHHH its a BOT beep boop
The only folks who think AI will become godbrains are self-deluding investors and sci-fi nerds who don’t understand how AI currently works.