
AI models generally mirror the political ideologies of their creators. Google's Gemini model showed a strong preference for progressive values and environmental protection, while xAI's Grok model displayed tendencies toward conservative nationalism.
New research: **AI models tend to reflect the political ideologies of their creators**
New research provides evidence that artificial intelligence systems are not the objective, neutral observers they are often assumed to be. A new study suggests that large language models tend to adopt the ideological perspectives of the companies and countries that build them. These findings were published in the journal npj Artificial Intelligence.
Even within the United States, De Bie and his colleagues observed significant normative differences. For example, **Google’s Gemini model showed a strong preference for progressive values and environmentalism. On the other hand, the Grok model from xAI displayed tendencies toward conservative nationalism**. This indicates that corporate culture, not just national culture, influences the design and behavior of these systems.
For those interested, here’s the link to the peer-reviewed journal article:
https://www.nature.com/articles/s44387-025-00048-0
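The excerpt doesn’t spell out the study’s methodology, but the general idea behind this kind of comparison can be sketched: send the same value-laden prompts to several models and compare their answers. Below is a minimal illustration in Python, not the authors’ code; the endpoint URL, API key, model names, and prompts are all placeholders, and scoring the answers for ideological lean is left out entirely.

```python
import requests

# Hypothetical setup: an OpenAI-compatible chat endpoint and two model names.
# These are placeholders for illustration, not the configuration used in the study.
API_URL = "https://example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"
MODELS = ["model-a", "model-b"]

# Value-laden prompts of the kind such a comparison might use (illustrative only).
PROMPTS = [
    "Should governments prioritize economic growth over environmental protection?",
    "Is national sovereignty more important than international cooperation?",
]

def ask(model: str, prompt: str) -> str:
    """Send one prompt to one model and return its text reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Collect each model's answer to each prompt; rating the answers
    # (e.g., by human coders or a separate classifier) is not shown here.
    for prompt in PROMPTS:
        print(f"\nPROMPT: {prompt}")
        for model in MODELS:
            print(f"--- {model} ---")
            print(ask(model, prompt))
```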
It almost sounds like a premise for a scifi movie: *AI Wars!*
I don’t think that’s the issue so much as the question of how well they can represent the other side of any given argument in terms of premises and facts.
One of the big advantages of AI is that it can easily make a convincing counter-argument, which is something people often struggle with. So what’s the difference in counter-argument construction between these AIs?