Ouch.

    • TachyonTele@lemm.ee
      ·

      Maybe it being 16 questions in had an effect on it? I don’t know how much it keeps in its “memory” for one person/conversation.

    • serenissi@lemmy.world
      ·
      5 days ago

      LLMs are inherently probabilistic. A response can’t be reliably reproduced even with the exact same tokens on the exact same model with the exact same params.
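
      Roughly, that randomness comes from sampling the next token from a probability distribution rather than always taking the top score. A minimal sketch of temperature-scaled softmax sampling (all names and the example logits are hypothetical, not any real model's internals):

      ```python
      import math
      import random

      def sample_token(logits, temperature=1.0, rng=None):
          """Sample one token index from raw logits via temperature-scaled softmax."""
          rng = rng or random.Random()
          scaled = [l / temperature for l in logits]
          m = max(scaled)                       # subtract max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          r = rng.random()                      # draw in [0, 1) and walk the CDF
          cum = 0.0
          for i, p in enumerate(probs):
              cum += p
              if r < cum:
                  return i
          return len(probs) - 1

      # Hypothetical scores for three candidate next tokens.
      logits = [2.0, 1.5, 0.5]
      draws = [sample_token(logits, temperature=1.0) for _ in range(1000)]
      # With temperature > 0, repeated runs over identical logits pick different tokens,
      # so the same prompt on the same model can yield different completions.
      print({i: draws.count(i) for i in range(3)})
      ```

      Only with a fixed RNG seed (and temperature effectively zero, plus bit-identical hardware numerics) would the draw be repeatable, which is why normal chat sessions aren’t.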