• BleatingZombie@lemmy.world
    28 days ago

    Why isn’t anyone saying that AI and machine learning are (currently) the same thing? There’s no such thing as “Artificial Intelligence” (yet)

    • finitebanjo@lemmy.world
      27 days ago

      It's more that intelligence is very poorly defined, so a less controversial statement is that Artificial General Intelligence doesn't exist.

      Also, generative AI such as LLMs is very, very far from it, and machine learning in general hasn't yielded much in the pursuit of sophonce and sapience.

      Although they can technically pass a Turing test, as long as the test has a very short time limit and the testers are chosen at random.

    • KingRandomGuy@lemmy.world
      28 days ago

      I work in an ML-adjacent field (computer vision), so I thought I'd add that AI and ML aren't quite the same thing. You can have non-learning-based methods that fall under the field of AI - for instance, tree search methods can be pretty effective at defining an agent for relatively simple games like checkers, and they don't require any learning whatsoever.
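      To make that concrete, here's a minimal sketch of a non-learning game agent: minimax tree search on tic-tac-toe (chosen over checkers purely to keep the example tiny). Nothing here is trained; the "intelligence" is just exhaustive search over the game tree.

      ```python
      # Minimax tree search for tic-tac-toe: a non-learning "AI" agent.
      # The board is a list of 9 cells, each "X", "O", or " ".

      def winner(board):
          """Return 'X' or 'O' if someone has three in a row, else None."""
          lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                   (0, 3, 6), (1, 4, 7), (2, 5, 8),
                   (0, 4, 8), (2, 4, 6)]
          for a, b, c in lines:
              if board[a] != " " and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      def minimax(board, player):
          """Return (score, best_move); 'X' maximizes, 'O' minimizes."""
          w = winner(board)
          if w:
              return (1 if w == "X" else -1), None
          moves = [i for i, cell in enumerate(board) if cell == " "]
          if not moves:
              return 0, None  # draw
          best_score, best_move = None, None
          for m in moves:
              board[m] = player                     # try the move
              score, _ = minimax(board, "O" if player == "X" else "X")
              board[m] = " "                        # undo it
              if (best_move is None
                      or (player == "X" and score > best_score)
                      or (player == "O" and score < best_score)):
                  best_score, best_move = score, m
          return best_score, best_move
      ```

      Run `minimax([" "] * 9, "X")` and you get a score of 0: with perfect play from both sides, tic-tac-toe is a draw. No weights, no data, no learning.
      
      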

      Normally, we say Deep Learning (the subfield of ML that relates to deep neural networks, including LLMs) is a subset of Machine Learning, which in turn is a subset of AI.

      Like others have mentioned, AI is unfortunately just a poorly defined term, largely because intelligence isn't a well defined term either. In my undergrad we defined an AI system as a programmed system that has the capacity to do tasks considered to require intelligence. Obviously, this definition gets shaky, since not everyone agrees on which tasks require intelligence. It also has the problem that whenever the field solves a problem, people (including those in the field) tend to think "well, if we could solve it, surely it couldn't have really required intelligence" and then move the goalposts. We've already seen that with games like chess and Go, as well as CV tasks like image recognition and object detection at superhuman accuracy.

    • nialv7@lemmy.world
      28 days ago

      That heavily depends on how you define "intelligence". If you insist on "thinks, reasons, and behaves like a human", then no, we don't have "Artificial Intelligence" yet (although plenty of people would argue that we do). On the other hand, if you count the ability to play chess or Go as intelligence, the answer is different.

      • minyakcurry@monyet.cc
        28 days ago

        Honestly, I would consider BFS/DFS artificial intelligence (and I think most introductory AI courses agree). But yes, it's a definition game, and I don't think most people define intelligence as purely human-centric. Simple tasks like pattern recognition already count as a facet of intelligence.
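        For anyone who hasn't taken one of those courses, this is the kind of "uninformed search" they mean; a short sketch of breadth-first search finding a shortest path (the graph here is made up for illustration):

        ```python
        from collections import deque

        def bfs_path(graph, start, goal):
            """Return a shortest path from start to goal, or None if unreachable."""
            frontier = deque([start])        # FIFO queue -> breadth-first order
            came_from = {start: None}        # also doubles as the visited set
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    # Walk the came_from chain backwards to rebuild the path.
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                for neighbor in graph.get(node, []):
                    if neighbor not in came_from:
                        came_from[neighbor] = node
                        frontier.append(neighbor)
            return None

        graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
        ```

        `bfs_path(graph, "A", "E")` returns `["A", "B", "D", "E"]`, a shortest route. Whether that counts as "intelligence" is exactly the definition game above.
        
        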

        • Adalast@lemmy.world
          28 days ago

          I forget the exact quote or who said it, but the gist is that a species cannot be considered sapient (intelligent) on an interplanetary/interstellar stage until it has discovered calculus. I prefer to use that as my bar for the sapience of those around me as well.

    • TriflingToad@lemmy.world
      28 days ago

      It very much depends on what you consider AI, or even what you consider intelligence. I personally consider LLMs AI because they're artificial.