
21 Comments

  1. CanvasFanatic on

    Same, young people. Same.

    I’m essentially required to use it every day for work, and it’s draining all the joy and life from my work.

  2. Isn’t this true for everyone? The first time you use it, it’s like “wow, this technology is kinda cool,” but as you keep using it, its flaws start to show. You start getting extremely irritated at all the hallucinations and errors, and often even the way it phrases things. Especially if you ask it about something you’re an expert at.

  3. Use it for work in the “Guys, please find a way to use it” situation that is so common, never outside that.

    There are work situations where it’s useful if I ignore the effects of data centers, but my god trying to shoehorn it into everything you do gets obnoxious.

  4. splendiferous-finch_ on

    I am pretty sure it has nothing to do with age and everything to do with it being a bad product.

  5. I’m mid 50s and find AI to be more problematic than useful.

    Too many errors. Too many less-than-desirable determinations. I continue to not see the point of a manufactured intelligence and simply prefer to do the thinking for myself.

  6. TVCasualtydotorg on

    As someone far too quickly approaching middle age, I also hate AI more every time I use it.

  7. Whiskey_Bear on

    Maybe it’s because it was new and shiny… but it feels like it just gets worse and worse. It’s become a Google search aggregator with little logic or fact-checking of its findings, all while being very agreeable.

    I just asked Gemini:
    What do you think I’m talking about when I say this: It’s become a Google search aggregator with little logic or fact-checking of its findings, all while being very agreeable.

    Response:
    It sounds like you’re describing the **“personality crisis”** currently facing modern Large Language Models (LLMs) and AI search engines.
    Specifically, you’re hitting on three very common frustrations that users have with current AI trends:
    ### 1. The “Google Search Aggregator” Problem
    You’re likely referring to how many AI tools have shifted from being creative engines to **RAG (Retrieval-Augmented Generation)** systems. Instead of “thinking,” they simply pull the top snippets from a search engine and summarize them. If the first page of Google is full of SEO-optimized junk or Reddit threads, the AI will simply mirror that back to you.
    ### 2. The Logic & Fact-Checking Gap
    This is the “hallucination” factor. Because these models are essentially **predictive text engines**, they prioritize making the sentence *look* right over ensuring the data *is* right. They lack a “world model”—they don’t actually know that gravity exists or that a specific historical event happened; they just know those words usually appear together in that order.
    ### 3. Being “Very Agreeable” (Sycophancy)
    In the AI world, we call this **sycophancy**. Because of how models are trained (using Human Feedback), they are incentivized to be polite and helpful. This often backfires:
    * If you ask a leading question (“Why is the earth flat?”), a weak model might try to please you by finding “arguments” for it rather than flatly telling you you’re wrong.
    * They tend to apologize excessively rather than standing their ground on a factual point.
    **Is this about a specific tool?**
    It feels like you might be venting about a recent experience with **ChatGPT (Search)**, **Perplexity**, or **Google’s AI Overviews**. They’ve all been criticized recently for prioritizing a “polite summary” over deep, rigorous accuracy.
    Do you feel like AI has actually become *less* useful since it gained the ability to browse the live web?

  8. MiniAdmin-Pop-1472 on

    I hate and love it. I can expand my work much more, do stuff faster, and even do stuff I had zero knowledge about. It’s good for finding problems and solutions. Sometimes it sucks and gives bad advice, though.

    I don’t use it for personal stuff, like just chatting with someone. I do use it, though, if I have questions about topics I’m kinda interested in and don’t want to search ten scam-like websites where the info is hidden.

    I hate the implementation of AI in big companies, where they replace humans with AI chatbots or send you AI answers back. This shit has existed for a long time, but now it’s kinda worse.

    I hate that people fill the Internet with AI music and art without marking it as AI.

    I hate that it’s easy to create fake pictures for political interests.

    I hate that Reddit got much worse since AI; I sometimes think almost every question here is from a bot.

  9. BeardedBears on

    Stop using it for life advice. Stop expecting it to be your “companion.” Stop using it to do your work or assignments for you. Stop blindly trusting it. Stop using it every day.

    I’m sure this will get downvoted, because Reddit hates anything against its consensus, but I actually really like using AI *occasionally* for both work and play. I’ve tinkered with self-hosting my own models. It’s incredible for relatively simple programming tasks (*NOT for wide-scale, enterprise-level deployment*) and initial brainstorming.

    I’m horrified by the people in charge of Big Tech corporations and where they’re taking us with AI, but I’m not necessarily anti-AI *per se*… Even if we were led by benevolent philosopher kings, AI could still pose social problems. But really, I think the biggest problems with AI today are simply the sociopathic capitalists at the helm.

  10. AI is great for making untalented people feel talented, so you can understand why executives love it.

  11. Tedy_Duchamp on

    The thing is, I don’t think it would be hated at all if it weren’t being shoved in our faces every second of every day, and if its creators weren’t using fear to inflate their valuations. AI might have one of the worst PR campaigns in history. It really shows how out of touch the Silicon Valley elite are with the rest of the world.

  12. relevant__comment on

    AI is a tool. Not a solution.

    Young people are starting to figure out that they still, in fact, have to put the work in. Using a shovel is way better than using your hands. But you still have to dig the hole.

  13. baronvonpennytree on

    Because it’s been so overhyped and touted as the ultimate solution to all our problems that when you actually use it, it vastly underwhelms. It’s just a tool, nothing more.

  14. Plus_Midnight_278 on

    Probably a combination of the tech being oversold beyond its actual capability, the insistent shoehorning of the tech into every possible avenue of the private sector, AND the obvious limitations apparent to anyone who has used an LLM for more than 5 minutes. It’s extremely easy to hate.

  15. meatballwrangler on

    My job is pressuring us to use AI. They can go fuck themselves. I’m never using it.

  16. PlacidTurbulence on

    Imagine you had a friend or coworker who was clearly bright, but not very careful and kind of a know-it-all. Always claims to know about whatever thing you’re talking about. Except you start to fact-check them, and it turns out sometimes they’re totally full of shit; other times they’re actually being really lazy, satisfying requirements without accomplishing the goal.

    Now imagine they’re getting promoted over you and you’re being told to work more closely with them, be more like them. They get lots of visibility on projects you know you did most of the work on and if you hadn’t stepped in they would have horribly embarrassed themselves and/or cost you a contract.

    You’d start to fucking hate that person. And that’s where we’re at with AI.

  17. Are we sure it’s just “young” people?

    It makes the most ridiculous mistakes and answers with 100% confidence.

    We’ve been trained for decades and decades that when you submit a question to a machine capable of answering it, you get the right answer every time.

    Nobody doubts their calculator. If I run a sample in a GC/MS, I don’t question the results. When I use a lightmeter, I trust the levels it gives me. Nobody argues with their scale. Nobody’s arguing with their diabetes test strip result.

    Then comes AI, which might as well be renamed the Dunning-Kruger Machine. It’s confidently incorrect. You can’t trust it with *anything* without verification.

  18. The more I hear about precious resources being gobbled up by new, dedicated data centers being built, the more I hate AI.

  19. Using AI is like playing Russian Roulette with the truth.

    I asked AI a simple question about my car because I didn’t want to spend the time digging up my owner’s manual. AI gave me the answer and cited the online owner’s manual. Great… except it was the wrong answer, and when I investigated, the answer didn’t even match the owner’s manual it supposedly pulled the answer from.

    In the end I wound up doing what I should have done in the first place, but with extra steps.

  20. Best-Temperature5595 on

    It just adds another layer of dumb. To change a hotel reservation with Hyatt, I called the hotel and got the automated prompt. That brought me to an AI agent. That ended up with me talking to a call center in India with someone whose English was hard to understand.

    They added an AI level for some reason. It didn’t do anything to help. What’s the point?
