James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

  • Dr. Dabbles@lemmy.world · +29 / −5 · 1 year ago

    And we were warned about Perceptron in the 1950s. Fact of the matter is, this shit is still just a parlor trick and doesn’t count as “intelligence” in any classical sense whatsoever. Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing. Call me when any of these systems actually comprehend the prompts they’re given.
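
    For anyone who hasn’t seen it spelled out, here’s a toy sketch of what “guessing the next word” means. The corpus is invented, and real LLMs use neural networks over billions of tokens rather than bigram counts, but the “emit the most likely continuation” framing is the same:

    ```python
    # Toy next-word "predictor": count which word follows which,
    # then always emit the statistically most common follower.
    # Corpus is made up for illustration; no comprehension involved.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def next_word(word: str) -> str:
        return followers[word].most_common(1)[0][0]

    print(next_word("the"))  # -> "cat": pure frequency, zero understanding
    ```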

    • ricecooker@lemmy.world · +10 / −1 · 1 year ago

      EXACTLY THIS. It’s a really good parrot, and anybody who thinks they can fire all their human staff and replace them with ChatGPT is in for a world of hurt.

      • Meowoem@sh.itjust.works · +2 / −1 · 1 year ago

        Not if most of their staff were pretty shitty parrots and the job is essentially just parroting…

        • Dr. Dabbles@lemmy.world · +1 · 1 year ago

          At first blush, this is one of those things most people assume is true. But one of the problems here is that a human can comprehend what is being asked in, say, a support ticket. So while an LLM might find a useful prompt and then spit out a reply that may or may not be correct, a human can actually, deeply understand what’s being asked, then select an auto-reply from a drop-down menu.

          Making things worse for the LLM side, that person doesn’t consume absolutely insane amounts of power to be trained to reply. Neither do most of the traditional “chatbot” systems that have been around for 20 years or so. Which raises the question: why use an LLM that is as likely to get something wrong as right, when existing systems have been honed over decades to get it right almost all of the time?
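
          Roughly the shape of those traditional systems, sketched with invented keywords and canned replies (not any real product’s rules):

          ```python
          # Deterministic keyword-matching support bot: cheap to run,
          # auditable, and tuned by hand over years rather than trained.
          CANNED_REPLIES = [
              ({"password", "reset", "login"},
               "To reset your password, use the 'Forgot password' link."),
              ({"refund", "charge", "billing"},
               "Billing questions: contact billing@example.com."),
          ]

          def auto_reply(ticket: str) -> str:
              words = set(ticket.lower().split())
              for keywords, reply in CANNED_REPLIES:
                  if words & keywords:  # any keyword present -> canned reply
                      return reply
              return "Your ticket has been escalated to a human agent."

          print(auto_reply("I need a password reset"))  # matches the first rule
          ```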

          If the work being undertaken is translating text from one language to another, LLMs do an incredible job, because guessing the next word based on hundreds of millions of samples is a uniquely good way to guess at translations. And that’s good enough almost all of the time. But asking it to write marketing copy for your newest Widget from WidgetCo? That’s going to take extremely skilled prompt writers, and equally skilled reviewers. So in that case the only thing you’re really saving is the wall-clock time it takes a human to type something. Not really a dramatic savings, TBH.

    • rusfairfax@lemmy.world · +3 · 1 year ago

      Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing.

      The best and most concise explanation (and critique) of LLMs in the known universe.