• GenderNeutralBro@lemmy.sdf.org
    3 months ago

    Weird that they act like the 1.7B model is too big for a laptop, in contrast to a…4060 with the same amount of memory as that laptop. A 1.7B model is well within range of what you can run on a MacBook Air.

    I don’t think a 170M model is even useful for the same class of applications. Could be good for real-time applications though.

    Looking forward to testing these, if they are ever made publicly available.

    • AggressivelyPassive@feddit.de
      3 months ago

      They wrote “cheap m1 MacBook”. That’s 8 GB of RAM in total. You can’t reasonably compare a GPU with 8 GB of dedicated VRAM to 8 GB of shared main memory.

      • FaceDeer@fedia.io
        3 months ago

        No, the goalposts of “AI is evil and should be fought until <insert new criteria here> are resolved.”

        The question of whether training an AI even violates copyright in the first place is still unanswered, BTW; the various court cases addressing it are still in progress. The current target is “ethics”, which is vague enough that anyone can claim it’s being violated without having to go to the trouble of proving it.

        • JackGreenEarth@lemm.ee
          3 months ago

          AGI is scary because it can’t be aligned with human values, seemingly leading to a Universal Paperclips/Nick Bostrom style scenario. The AI we have today is a tool that seems, on the whole, beneficial.

        • A_Very_Big_Fan@lemmy.world
          3 months ago

          The only argument against it that I think is valid is that it shouldn’t be outputting copyrighted content, which seems like a pretty easy problem to solve, but I’m no expert. If YouTube can do it with the volume of data they work with, it can’t be that hard to do with text and pictures.

          But the idea that an AI model just looking at a picture is somehow stealing is just absurd to me.