I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that especially the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • Toes♀ · 6 months ago

    Some options:

    It’s just a better Siri, still just as soulless.

    The Chinese room thought experiment, if you think they would understand it.

    Imagine the computer playing Mad Libs with itself and picking the least funny answers to present (see the sketch below this list).

    Imagine tearing every page out of every book in the library (about the topic you mentioned), shuffling them, and then handing out whichever page mostly makes sense as a follow-up to the last page you handed out. Now imagine doing that with individual letters instead of pages.

    A demonstration of its capacity to make mistakes, especially continuity errors.
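
    If the person is comfortable with a little code, a toy word-chain generator can make the Mad Libs point concrete. This is only a sketch of the idea: a real LLM predicts tokens with a neural network trained on a huge corpus, not a lookup table, and the tiny example corpus here is made up. But the principle is the same: pick a statistically plausible continuation, with no understanding attached.

    ```python
    import random
    from collections import defaultdict

    # Toy "language model": record which word tends to follow which,
    # then generate text by repeatedly picking a plausible next word.
    # It has no idea what any of the words mean.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Map each word to the words that have followed it in the corpus.
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)

    def generate(start="the", length=12):
        word = start
        output = [word]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # a statistically plausible continuation
            output.append(word)
        return " ".join(output)

    print(generate())
    # e.g. "the dog sat on the mat . the cat chased the dog"
    # Grammatical-looking output, but nothing behind it "knows" what a cat is.
    ```

    The output often reads fine, and scaling the same trick up (more context, more data, a learned model instead of counts) is what makes LLM text so convincing without adding any intent.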