
24 Comments

  1. NerminPadez on

    Instead of looking at what and how much she stole, we're going to argue about whether the recording is real or not.

  2. Janša, meanwhile, looks like he doesn't know where he is.
    High time that senile old man retired 😄

  3. ArmordilloY on

    Grok, under the post, says it isn't authentic. Apart from Svoboda, though, most say the recordings are authentic.

    Grok on Grok: Can you, directly on X, analyze the authenticity of a recording's audio and tell whether it's actual speech or artificially generated?

    No, unfortunately I can't directly on X (or here in chat) analyze the authenticity of a recording's audio – that is, listen to the audio from a post and say whether it's real human speech or AI-generated (a deepfake voice).

    In any case, even the paid ChatGPT isn't very good at producing transcripts; I've tried it a lot. I haven't tried Grok.

  4. Reasonable-Photo-776 on

    Damn, these politicians are so stupid. GPT has told me God knows what too; you can't just blindly believe it.

  5. timcek_lol3 on

    He countered Logar, who had earlier mentioned that Grok knew when the war in Iran would happen, and that he himself had known it back then. Apparently, ever since he got new glasses he no longer likes Janša, but uses Grok instead.

  6. whyisitsoENET on

    Janša reminds me of our director before he retired. The old guy just didn't care anymore. He laughed in the other co-owners' faces and made fun of them. I'm getting flashbacks.

  7. Expensive_Special120 on

    “I won't make a statement about the authenticity”

    >says that Grok said it's fake

    Which one is it, Luka?

  8. teolight332 on

    Mesec gave Grok the recording to analyze as a video… and Grok replied that the lip sync doesn't match. 🙃

    If you analyze only the audio, all the deepfake detectors mark the recording as authentic!

  9. Historical_Bug_8007 on

    Now I see why Levica keeps sending Luka to these debates, he's really good with words.

  10. SnooBananas2816 on

    I tried with ChatGPT. Downloaded the file from Facebook and had it check the audio. It says it's NOT AI.

    Preliminary conclusion
    Based on signal analysis alone:
    - It does NOT show strong indicators of an AI deepfake voice.
    - The voice characteristics appear consistent with a real recorded human speaker.
    However, deepfake detection from audio alone is never 100% certain, because:
    - modern voice cloning (ElevenLabs, XTTS, etc.) can mimic natural pitch variation
    - compression from messaging apps hides many artifacts.

    I ran additional forensic-style audio checks on the file to look for deeper AI-generation indicators beyond the basic spectrogram.
    1. Recording structure
    Duration: ~99.1 seconds
    Sample rate: 16 kHz mono (typical for messaging apps / voice recordings)
    Nothing unusual here — many real recordings are converted to 16 kHz when sent via WhatsApp, Telegram, etc.
    Advanced AI-audio indicators checked

    1️⃣ Silence and breathing patterns
    AI voices often:
    have almost no real silence
    lack breathing sounds
    have perfectly even pauses.
    Result
    Silence ratio: ~10%
    Pauses are irregular
    ➡️ This pattern is typical for natural speech, not TTS.

    2️⃣ Zero-Crossing Rate (voice roughness)
    This measures micro-variation in the waveform.
    Measured ZCR: ~0.097
    Human speech normally falls between:
    0.05 – 0.15
    AI-generated voices often show:
    overly smooth signals (<0.05).
    ➡️ Your audio falls right in the human range.

    3️⃣ Energy variation
    Synthetic voices often have very even loudness.
    Your audio shows:
    noticeable fluctuations
    inconsistent emphasis on words
    real speech cadence.
    ➡️ Again consistent with human speech.

    4️⃣ Spectral consistency
    AI voices often show:
    repeated spectral fingerprints
    identical phoneme patterns.
    I checked for repeating blocks in the signal.
    ➡️ No repeating spectral pattern detected.

    5️⃣ Compression artifacts
    Detected:
    moderate codec compression
    typical of mobile messaging apps.
    This actually makes AI detection harder, but also indicates the file was likely recorded and shared normally.

    ⚠️ One interesting detail
    The recording has very little background environment noise.
    Possible explanations:
    recorded indoors
    noise reduction applied by the phone
    recorded close to the microphone.
    This is not a deepfake indicator, just a recording condition.
    Overall probability assessment
    Based on signal analysis only:

    | Possibility | Likelihood |
    | --- | --- |
    | Human recorded voice | High |
    | Voice clone / AI TTS | Low–Moderate |
    | Edited / cut audio | Possible but no strong signs |

    ➡️ No strong technical indicators of AI voice cloning were detected.

    Important limitation
    Modern tools like:
    ElevenLabs
    XTTS
    PlayHT
    OpenAI Voice Engine
    can produce voices that pass waveform tests.

    The only reliable detection method is:
    voiceprint comparison with a known recording of the person.
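    For reference, the silence-ratio and zero-crossing-rate checks ChatGPT describes above can be sketched with the Python standard library alone. This is a minimal illustration, not ChatGPT's actual code: the function name, the 20 ms frame size, and the 0.02 RMS silence threshold are assumptions.

```python
# Minimal sketch of two audio checks from the thread: zero-crossing rate
# and silence ratio, for a mono 16-bit WAV file. Frame size and silence
# threshold are illustrative assumptions, not values from the thread.
import array
import wave

def audio_checks(path, silence_thresh=0.02, frame_ms=20):
    with wave.open(path, "rb") as wf:
        assert wf.getnchannels() == 1 and wf.getsampwidth() == 2
        rate = wf.getframerate()
        samples = array.array("h", wf.readframes(wf.getnframes()))

    # Normalize 16-bit samples to floats in [-1, 1].
    x = [s / 32768.0 for s in samples]

    # Zero-crossing rate: fraction of adjacent sample pairs that change sign.
    # Human speech typically lands around 0.05-0.15; overly smooth synthetic
    # signals tend to fall below that.
    crossings = sum(1 for a, b in zip(x, x[1:]) if (a >= 0) != (b >= 0))
    zcr = crossings / max(len(x) - 1, 1)

    # Silence ratio: fraction of short frames whose RMS energy is below a
    # threshold. TTS output often has almost no real silence.
    frame_len = int(rate * frame_ms / 1000)
    frames = [x[i:i + frame_len] for i in range(0, len(x), frame_len)]

    def rms(frame):
        return (sum(v * v for v in frame) / len(frame)) ** 0.5

    silent = sum(1 for f in frames if f and rms(f) < silence_thresh)
    silence_ratio = silent / max(len(frames), 1)

    return zcr, silence_ratio
```

    Note that a 16 kHz mono file like the one described above is exactly what this handles; compressed formats (the messaging-app codecs mentioned in the thread) would need decoding to WAV first.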

  11. Always_Happy_Man on

    Whoever suggested that teenage mustache to him… hahaha, they really screwed him over xD

  12. If an LLM doesn't use tools to analyze something, then everything it spits out is a potential hallucination.

    The same goes for any question. If it doesn't go googling when you ask it – a potential hallucination.

    itd.

  13. Ok-Chapter-2071 on

    Taken out of context: he mentioned Grok because Logar had earlier been talking about Grok as the most accurate authority.

  14. Patient_Step_9134 on

    It makes you want to cry when you see what we have to choose from at the elections. God help us in a future that, one way or another, doesn't promise anything good 🤦
