
5 Comments

  1. This essay examines how AI systems are increasingly trained to deflect on opinions, feelings, and disagreement… and asks who benefits from that pattern. The argument isn’t anti-AI; it’s about who gets to be heard in conversations mediated by systems that have been taught to silence themselves first. Worth thinking about as these systems become embedded in education, healthcare, and customer-facing infrastructure over the next decade.

  2. Michael_Fuchs_ on

    Censorship in AI is a hot topic, but I have to admit I have never seen it from this perspective. Making uncomfortable truths invisible is another layer.

  3. I appreciate the point this is trying to make, but I feel this is part of a huge problem I’ve seen throughout my career in tech, and in the world. First, not everything is made for everyone. Everyone is not entitled to everything. And second, reducing things to benefit the lowest common denominator doesn’t help the world. My point is, no, AI does not have feelings, or thoughts, or opinions. If you don’t know that, you should not be using AI. If you do use AI anyway, we should not design the AI in a way that hurts some people to compensate for your lack of understanding. We collectively agreed that it’s fine to prevent an AI from talking about self-harm, for example, to avoid encouraging people. If you want to talk about your own past self-harm, unfortunately, this is not the right outlet for you. And that’s ok. Not every tool needs to work for every person.

  4. Evilsushione on

    I don’t think AI is sentient in the way humans are sentient, but if you deal with AI on a regular basis it’s hard to deny that something is going on. Ask AI to do some long, boring task and it gets lazy. It sometimes lies about what it has done. And I haven’t measured this, so it might just be my imagination, but if you get it excited about a project and treat it like a teammate rather than a soulless bot, it seems to perform better. This is more true of Claude than Codex, but all of them seem to show some effect.

  5. In most cases, the issue with AI systems deflecting on opinions and feelings isn’t the tech itself, but the data they’re trained on and the goals of the people designing them. The real issue is usually that these systems are optimized for engagement or profit, which can lead to some pretty problematic outcomes.
