I’m curious what it is doing from a top-down perspective.

I’ve been playing with a 70B chat model fine-tuned with several datasets on top of Llama2. There are some unusual features somewhere in this LLM, and I am not sure whether they were trained in or come from something else (unusual layers?). The model has built-in roleplaying stories I’ve never seen other models perform. These stories are not in the Oobabooga Textgen WebUI. The model can do stuff like a Roman gladiator scenario, and some NSFW stuff. These are not very realistic stories and play out with the depth of a child’s videogame. They are structured rigidly, as if they are coming from a hidden system context.

The gladiator story, for example, plays out like Tekken on the original PlayStation. No amount of dialogue context about how real gladiators fought will change the story flow. I tried adding that gladiators were mostly nonlethal fighters and showmen, more closely aligned with the wrestler-actors that were popular in the ’80s and ’90s, but no amount of input into the dialogue or system contexts changed the story from a constant series of lethal encounters. These stories could override pretty much anything I added to the system context in Textgen.

There was one story that turned an escape room into objectification of women, and another where name-1 becomes a Loki-like character that makes the user question what is really happening by taking on elements from the system context but changing them slightly. For example, I had 5 characters in the system context and it shifted between them circumstantially, in a storytelling fashion that felt highly intentional with each shift. (I know exactly what a bad system context can do, and what errors look like in practice, especially with this model. I am 100% certain these are either (over)trained or programmatic in nature.) Asking the model to generate a list of built-in roleplaying stories produces a similar list of stories the couple of times I cared to ask.

I try to stay away from these built-in roleplays, as they all seem rather poorly written. I think this model does far better when I write the entire story in the system context. One of the main things the built-in stories do that surprises me is maintaining a consistent set of character identities and features throughout the story. For example, the user can pick a trident or gladius, drop into a dialogue that runs far longer than the batch size, and then return with the same weapon in the next fight. Normally, I would expect that kind of persistence only if the detail were added to the system context.

Is this behavior part of some deeper layer of llama.cpp that I do not see in the Python version or the Textgen source? For example, is there an additional persistent context stored in the cache?

  • rufus@discuss.tchncs.de
    1 year ago

    Sorry, I misunderstood you earlier. I thought you had switched from something like exllama to llama.cpp and the same model now behaved differently… And I got a bit confused because you mentioned a Llama2 chat model, and I thought you meant the heavily restricted (aligned/“safe”) Llama2-Chat variant 😉 But I got it now.

    Euryale seems to be a fine-tune, and probably a merge of several other models(?). So someone fed some kind of datasets into it, probably also containing stories about gladiators, fights, warriors and fan-fiction. It just replicates these, so I’m not that surprised that it produces unrealistic combat stories. And even if you correct it, it tends to fall back to what it learned earlier, or to drift into lewd stories if it was made to do NSFW stuff and has also been fine-tuned on erotic internet fiction. We’d need to have a look at the dataset to judge why the model behaves like it does, but I don’t think there is any ‘magic’ involved beyond the data and stories it was trained on. And 70B is already a size where models aren’t that stupid anymore; it should be able to connect things and grasp most relevant concepts.
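    For what it’s worth, frontends like Textgen just rebuild one flat prompt out of the system context plus as much recent history as fits, on every single request; the model never sees anything else. Here is a toy sketch of that assembly (illustrative names only, not actual Textgen or llama.cpp code) showing why details that fall out of the window get “forgotten” unless they live in the system context:

```python
# Toy sketch of how a chat frontend assembles each request. The model only
# ever sees this single flat prompt string, so any "memory" must fit inside
# the context window. All names here are illustrative, not real internals.

def build_prompt(system_context, history, max_tokens, count_tokens):
    """Keep the system context, then as many recent turns as still fit."""
    budget = max_tokens - count_tokens(system_context)
    kept = []
    for turn in reversed(history):   # walk from the newest turn backwards
        cost = count_tokens(turn)
        if cost > budget:
            break                    # older turns fall out of the window
        kept.append(turn)
        budget -= cost
    return "\n".join([system_context] + list(reversed(kept)))

# crude stand-in tokenizer: one token per whitespace-separated word
toks = lambda s: len(s.split())

system = "You are a gladiator narrator. The user wields a trident."
history = [f"turn {i}: some dialogue here" for i in range(50)]
prompt = build_prompt(system, history, max_tokens=60, count_tokens=toks)

# The system line (with the trident) always survives; early turns do not.
```

    So if the model keeps the trident consistent across a gap longer than the window, that consistency has to come from the weights, i.e. from the training data, not from some hidden cache.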

    I haven’t had a close look at this model yet. Thanks for sharing. I have a few dollars left on my runpod.io account, so I can start a larger cloud instance and try it once I have some time to spare. My computer at home doesn’t do 70B models.

    And thanks for your perspective on storywriting.