• andallthat@lemmy.world

    I only have a limited and basic understanding of machine learning, but doesn’t training a model basically work like: “you, machine, spit out several versions of stuff and I, the programmer, give you a way of evaluating how ‘good’ they are, so over time you ‘learn’ to generate better stuff”? Theoretically, giving a newer model the output of a previous one should improve the result, if the new model has a way of evaluating “improved”.

    If I feed an ML model pictures of eldritch beings and tell it “this is what a human face looks like”, I don’t think it’s surprising that the quality deteriorates. What am I missing?

    • Trailblazing Braille Taser@lemmy.dbzer0.com

      In this case, the models are given part of the text from the training data and asked to predict the next word. This appears to work decently well on the pre-2023 internet as it brought us ChatGPT and friends.

      This paper is claiming that when you train LLMs on output from other LLMs, it produces garbage. The problem is that the evaluation of the quality of the guess is based on the training data, not some external, intelligent judge.
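      A rough toy sketch of what “scored against the training data” means (my own illustration in Python, not the paper’s setup): the model’s idea of a “good” next word is just whatever the corpus itself says comes next, with no outside judge involved.

```python
# Toy bigram "language model": training is just counting which word follows
# which, and a guess is only as "good" as its frequency in the training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn next-word statistics from the training data.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict(word):
    """Guess the continuation most often seen in training."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- judged "good" only because the corpus says so
```

      If the corpus itself starts filling up with another model’s output, that is exactly what the counts (or, in a real LLM, the weights) will faithfully reproduce.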

      • andallthat@lemmy.world

        Ah, I get what you’re saying, thanks! “Good” means that what the machine outputs should be statistically similar to the provided training data (as captured across billions of parameters), so if the training data gradually gains more examples of, e.g., noses attached to the wrong side of the head, the model also grows more likely to generate similar output.
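        A tiny illustration of that feedback loop (my own toy Python example, using a one-dimensional Gaussian as a stand-in for “the data”): fit a distribution to some data, sample from the fit, then use those samples as the next generation’s training data. The sampling errors compound, so the fitted distribution drifts away from the original and tends to narrow over generations.

```python
# Toy model-collapse loop: each generation is trained only on the previous
# generation's output, so fitting errors accumulate instead of averaging out.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(25)]  # generation 0: "human" data

for generation in range(15):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
    # Next generation's "training data" is sampled from the current fit.
    data = [random.gauss(mu, sigma) for _ in range(25)]
```

        Swap the Gaussian for “the distribution of text on the internet” and that’s the gist: each generation faithfully learns the previous generation’s quirks and loses some of the original variety.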

    • TheHarpyEagle@lemmy.world

      Part of the problem is that we have relatively little insight into or control over what the machine has actually “learned”. Once it has learned itself into a dead end with bad data, you can’t correct it, only work around it. Your only real shot at a better model is to start over.

      When the first models were created, we had a whole internet of “pure” training data made by humans, and developers could basically blindly firehose all that content into a model. Additional tuning could be done by seeing which responses humans tended to reject or accept, and what language they used to refine their results. The latter still works, and better heuristics (the criteria that grade the quality of AI output) can be developed, but with how much AI content is out there, they will never have a better training set than what they started with. The whole of the internet now contains the result of every dead end AI has worked itself into, with no way to determine what is AI-generated at scale.

    • Sir_Kevin@lemmy.dbzer0.com

      It takes a massive number of intelligent humans, who expect to be paid fairly, to train the models. Most companies jumping on the AI bandwagon are doing it for quick profits and are dropping the ball on that part.