• Alex@lemmy.ml · 5 months ago

    How well do the OpenLlama models perform against Llama2? AIUI the training data used for OpenLlama is the same?

    • h3ndrik@feddit.de · edited · 5 months ago

      The training data for OpenLlama is called RedPajama, if I’m not mistaken. It’s a reproduction of the datasets Meta used to train the first LLaMA. Back then they listed the datasets in the scientific paper; nowadays they and their competitors don’t do that anymore.

      OpenLlama performs about as well as (slightly worse than) the first official LLaMA, and both perform worse than Llama2. It’s not night and day, but I think a noticeable improvement. And Llama2 has twice the context length (4096 vs. 2048 tokens), which is a huge improvement for some use cases.

      If you’re looking for models with a different license, there are more options: Mistral is Apache 2.0, and several others also have permissive licenses.

      If you’re looking for info on what datasets the big players use, forget it (my opinion). The companies are all involved in legal battles over copyright and have stopped publishing what they use. Many (except for Meta) kept it a trade secret from the beginning and never shared that information. It’s unscientific because it doesn’t allow for reproducibility. But AI is expensive, and everyone is currently trying to get obscenely rich with it or strives for world domination.

      But datasets are available, like the RedPajama one, plus several other collections for various purposes. There are lots of datasets for fine-tuning and a whole community around that. It’s just for the base/foundation models that we don’t have access to a current state-of-the-art dataset.