• noxfriend@beehaw.org · 5 months ago

    Anything a human can be trained to do, a neural network can be trained to do.

Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting one to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.

      • noxfriend@beehaw.org · 5 months ago

        I wouldn’t say 74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net, though; there is probably a fair bit of classical control at work.

    • jarfil@beehaw.org · 5 months ago

      Try getting them to even open a door

      For now there is “AI vs. Stairs”; you may need to wait for a future video for “AI vs. Doors” 🤷

      BTW, that is a rudimentary neural network.

      • noxfriend@beehaw.org · 5 months ago

        I’ve seen a million such demos, but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time yet.

        • jarfil@beehaw.org · 5 months ago

          Well, that particular demo is more of a cockroach than a toddler; the neural network used seems not to have even a million weights.
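To put that weight count in perspective, here is a quick sketch of how few parameters a small control policy network actually has. The layer widths below are hypothetical illustrative choices, not taken from the demo in question:

```python
# Parameter count of a fully connected network: each layer contributes
# (inputs x outputs) weights plus one bias per output.
def mlp_param_count(sizes):
    """Total weights + biases for a dense net with the given layer widths."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

# e.g. a hypothetical 64-input policy with two 256-unit hidden layers
# and 12 outputs:
print(mlp_param_count([64, 256, 256, 12]))  # 85516 -- well under a million
```

Even a network several layers deep stays far below a million weights until the hidden layers get into the thousands of units.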

          Moravec’s paradox holds true on two fronts:

          1. The computing resources required
          2. The lack of a formal description of the behaviour

          But keep in mind that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training a NN, we’re brute-forcing away the lack of a formal description of the algorithm.

          We’re now looking towards neuromorphic hardware on the trillion-“core” scale, so computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler… until you copy the first trained NN to an identical body and re-training costs drop to O(0), which is much less than training even a million toddlers at once.