• Jimmyeatsausage@lemmy.world · 6 months ago

    LLMs are not general AI. They are not intelligent. They aren’t sentient. They don’t even really understand what they’re spitting out. They can’t even reliably do the one thing computers are typically very good at (computational math), because they are just assembling sequences of characters (meaningless to them) in whatever order their training data makes most likely.
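
    To make the “most likely order” point concrete, here’s a toy sketch. The probability table is invented purely for illustration; a real LLM learns a distribution over tens of thousands of tokens, but the mechanism is the same: pick likely characters, do no arithmetic.

    ```python
    # Invented toy table standing in for a real model's learned distribution.
    toy_next_token_probs = {
        "2 + 2 =": {" 4": 0.92, " 5": 0.03, " 22": 0.02},
        "317 * 422 =": {" 134": 0.20, " 133": 0.18, " 990": 0.05},
    }

    def greedy_next(prompt):
        """Pick the single most likely next token -- no math happens here."""
        dist = toy_next_token_probs[prompt]
        return max(dist, key=dist.get)

    print(greedy_next("2 + 2 ="))      #  4    (common in training data, looks smart)
    print(greedy_next("317 * 422 ="))  #  134  (plausible digits, but 317 * 422 = 133774)
    ```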

    When LLMs feel sentient or intelligent, that’s your brain playing a trick on you. We’re hard-wired to look for patterns and group things together based on those patterns. LLMs are human-speech prediction engines, so it’s tempting and natural to group them with the thing they’re emulating.

    • Technological_Elite@lemmy.one (OP) · 6 months ago

      Yup, 100%. These “AIs” struggle to filter out misinformation and find trusted sources, and they’re vulnerable to other forms of manipulation. Jokes and memes probably have an impact too: because it’s not human, it’s not going to think the way we do and realize something is a joke, or just stupid people posting “3 + 4 × 8 = 56” B.S.
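
      For the record, a quick check in Python (which applies standard operator precedence) shows where the meme answer comes from:

      ```python
      print(3 + 4 * 8)    # 35: multiplication binds tighter than addition
      print((3 + 4) * 8)  # 56: the left-to-right misreading the meme relies on
      ```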

      And please for the love of god, don’t start the stupid math debates again, thank you.

      • maniclucky@lemmy.world · 6 months ago

        To pile on: they don’t filter anything, or search anything. They are clever parrots built out of huge stacks of linear algebra. They have no understanding of anything, and no interest in doing more than generating sentences that look right given a prompt. Even saying they have “no understanding” or “interest” gives them too much credit, since it implies intelligence or decision-making capability. It’s just ridiculously vast math.
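
        For a sense of what that “vast math” looks like, here’s a toy single-head self-attention step in NumPy. The matrices are random noise standing in for trained weights; this is purely illustrative, not any real model.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        d = 8                                 # tiny embedding size, for illustration
        tokens = rng.normal(size=(5, d))      # 5 stand-in "token" vectors

        # In a real model these matrices are learned; here they're random.
        W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
        Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v

        scores = Q @ K.T / np.sqrt(d)                    # pairwise "attention" scores
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1

        output = weights @ V                  # each token becomes a weighted mix of the others
        print(output.shape)                   # (5, 8): new vectors, no understanding required
        ```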

    • brbposting@sh.itjust.works · 6 months ago

      > When LLMs feel sentient or intelligent, that’s your brain playing a trick on you.

      Sentient = prob a trick

      Intelligent? Maybe a broken clock is right twice a day?

      You write a sentence that doesn’t sound quite right. You pretty much know how an author you respect would write it, but you can’t remember the exact syntax & word choice. You ask a model for a dozen revisions of the sentence in disparate styles. One of them clicks: “ooh! That’s what I mean!”
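
      That workflow is roughly this sketch (assuming the OpenAI Python client; the model name and sentence are placeholders I made up, not anything from this thread):

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      sentence = "Our results were good and show the method works fine."  # placeholder
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{
              "role": "user",
              "content": "Rewrite this sentence in a dozen distinct styles, "
                         "one per line: " + sentence,
          }],
      )
      print(resp.choices[0].message.content)  # scan the dozen for the one that clicks
      ```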

      Am I being pedantic to say the LLM can feel intelligent when it nails the exact word choice you were looking for, better than half your social circle could’ve written it? Half your friends aren’t dumb, but the LLM can sometimes sound better than they do, so you think: “yeah, sounds intelligent!”

      Of course…

      Later it totally misunderstands some context, needs unbelievable hand-holding and still doesn’t get it, confabulates moronically… and it’s back to stupid! Mmmm glue pizza