
Late one August night in 2025, a 19-year-old Missouri State University student allegedly slipped into a freshman parking lot and went on a rampage: 17 cars smashed, windows shattered, mirrors ripped off.
Minutes later, he did something millions of us do every day:
He opened ChatGPT.
According to court documents, he typed messages asking the bot whether he would go to jail. Police later recovered that conversation from his phone, and Missouri prosecutors now say his ChatGPT confession is part of the evidence being used to justify his arrest and the property-damage charges.
https://vector-space-ai.ghost.io/is-chatgpt-private-how-a-college-kids-arrest-exposed-the-truth-about-your-ai-chats/
This story raises a big question about the future of AI and privacy. If people use tools like ChatGPT during stressful moments or to think through personal decisions, what expectations of privacy should they actually have? As AI becomes more common, how should laws and platforms handle user data, and what rights should people have over their own chat history?
People are geniuses for expecting privacy online these days. Everyone is keeping logs, and those logs get pulled into court hearings. Cops pulling up search history isn't new.
I am more surprised that people are actually surprised by this. Most governments have entire agencies whose sole purpose is to spy on everything their citizens do. What did people think was going to happen? That the government would suddenly respect your privacy because a chatbot is involved?
Of course it isn't private. I cannot fathom how grown people believe that anything they do online behind a traceable login can't and won't be used against them by the surveillance state the second they step out of line, least of all on a service so beholden to federal and private capital funding.
If it’s not your server, there is no reasonable expectation of privacy, chapter eleventy-six.
I think the bigger question is…
Was he caught and then this conversation was discovered?
Or was he caught *because* of this conversation?
"Police later recovered that conversation from his phone"
So they opened the app and looked at it. That’s on the user.
Even if you theoretically keep the server side stuff secure and compartmentalized, you can still just open the app on the phone and read your previous prompt in plain text.
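This is easy to see for yourself: ChatGPT's account data export hands you a conversations.json with your full chat history, and a few lines of Python will dump every prompt from it as plain text. The field names below (mapping, message, content, parts) are assumptions about how that export is structured and may not match the current format exactly; treat this as a rough sketch, not a reference parser.

```python
import json
from pathlib import Path

# Assumption: conversations.json from an unzipped ChatGPT data export
# sits next to this script.
EXPORT_FILE = Path("conversations.json")

def dump_prompts(path: Path) -> None:
    """Print every text message found in the export, in plain text."""
    conversations = json.loads(path.read_text(encoding="utf-8"))
    for convo in conversations:
        print(f"=== {convo.get('title', 'untitled')} ===")
        # Assumption: each conversation keeps its messages as a graph of
        # nodes under a "mapping" key.
        for node in convo.get("mapping", {}).values():
            message = (node or {}).get("message") or {}
            content = message.get("content") or {}
            for part in content.get("parts") or []:
                if isinstance(part, str) and part.strip():
                    role = (message.get("author") or {}).get("role", "?")
                    print(f"[{role}] {part}")

if __name__ == "__main__":
    dump_prompts(EXPORT_FILE)
```

Point being: once someone has your unlocked phone or your account, "encrypted in transit" doesn't help; the history is sitting there readable.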
Another mark in the win column for Deep Seek. China isn’t going to give your data to the police.
Also, take the bio lock off your phone and replace it with a password.
If you delete messages, are they actually deleted? Or do they stay server-side?