Amazon blames human employees for an AI coding agent's mistake / Two minor AWS outages reportedly resulted from actions taken by Amazon's AI tools.

https://www.theverge.com/ai-artificial-intelligence/882005/amazon-blames-human-employees-for-an-ai-coding-agents-mistake

26 comments

  1. urban_snowshoer on

    Given how common the "burn everything down and recreate" strategy is among humans, especially in management/leadership roles, could Amazon's AWS tools replace management/leadership roles?

  2. I hate the AI they added to the Alexa app. We also use Ring cameras, and I tried turning the AI off. Nope, not possible. Now I get notices on my Echo Show and my TV that a person is walking a brown dog in the alleyway.

    I thought I'd be able to adjust the notifications, but it shows on my TV as well.

    Nope. But I will figure it out, or I'm getting rid of my Echo Shows.

  3. coconutpiecrust on

    >Numerous unnamed Amazon employees told the FT that AI agent Kiro was responsible for the December incident affecting an AWS service in parts of mainland China. People familiar with the matter said the tool chose to “delete and recreate the environment” it was working on, which caused the outage.

    Nice. Put an LLM with no concept of anything in charge and this is what you get.

    I find it interesting, though, that Amazon chooses to blame them filthy humanses instead of acknowledging that filthy humanses may have value, and the machine may have limitations. 

  4. BAJ-JohnBen on

    Imagine betting so much on AI you cannot claim the machine generated an error.

  5. band-of-horses on

    This is inevitably going to happen. Everyone knows AI tools make mistakes and need a human in the loop to review and verify output. But it's human nature to get lazy, and if something is 98% accurate, to start trusting it and paying less and less attention.

    This season of The Pitt addressed this with AI dictation apps making mistakes. AI being 98% accurate is great, except when the remaining 2% lead to serious issues… And honestly, in some ways it's almost worse to be that accurate, as it makes it much easier to become complacent.

  6. man, PR there is spinning that shite as hard as they can. They stopped short of saying "our stock is up like 20%, why aren't you talking about that?"

  7. AI coding is going to make every day Xmas for hackers. I've noticed some apps now update about twice a week and just get buggier and buggier each time.

  8. Caraes_Naur on

    When the corporate dream of having no employees (but more importantly, no payroll) comes true because everything is run by "AI", who will they blame when there are no consumers left to spend money?

  9. kyuzo_mifune on

    Well, they are right: if you are pushing code written by AI, you are still responsible for it.

  10. FauxLearningMachine on

    It is not the individual programmer’s problem. It is not the AI’s problem. It is a problem created by the organization and how it defines its risk reduction process during product delivery.

    We can't say much from the outside except that the organization failed to account for the increased risk associated with a new process it introduced, and that scapegoating does not help an organization grow.

  11. vomitHatSteve on

    Yes. Every AI-induced programming error *is* fundamentally a human error. The only point of question is whether that error was at the programmer level or the executive level or both.

    If a programmer misuses an AI tool and causes an outage, that's a human error. If an executive puts in place policies that don't allow enough oversight of AI tools, that's a human error.

    It's been true since 1979: a computer can never be held accountable; therefore, a computer must never make a management decision.

  12. Dangerous_Drummer350 on

    Of course. With the massive investment in AI, how could it possibly make a mistake?

  13. Well, shit… why didn't I think of this… from now on I officially identify as an AI.

  14. arm-n-hammerinmycoke on

    They'll probably fire the employees, exacerbating the actual problem
