I’m interested in automatically generating lengthy, coherent stories of 100,000+ words from a single prompt using an open source local large language model (LLM). I came across the “Awesome-Story-Generation” repository, which lists relevant papers describing promising methods like “Re3: Generating Longer Stories With Recursive Reprompting and Revision”, announced in this Twitter thread from October 2022, and “DOC: Improving Long Story Coherence With Detailed Outline Control”, announced in this Twitter thread from December 2022. However, these papers used GPT-3, and I was hoping to find similar techniques implemented with open source tools that I could run locally. If anyone has experience or knows of resources that could help me achieve long, coherent story generation with an open source LLM, I would greatly appreciate any advice or guidance.
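Not an answer to the tooling question itself, but for anyone curious what the Re3/DOC-style approach looks like at the control-flow level, here is a minimal sketch: plan an outline, draft scene by scene against a rolling summary, and re-summarize after each scene so the context stays bounded. The `generate` function is a stub standing in for any local model call (e.g. via llama-cpp-python); the prompts, helper names, and chapter/scene counts are my own illustration, not the papers' exact method.

```python
def generate(prompt: str) -> str:
    # Placeholder: swap in a real local-LLM call here, e.g. a
    # llama-cpp-python completion. Stubbed so the control flow runs as-is.
    return f"[text for: {prompt[:40]}...]"

def write_story(premise: str, n_chapters: int = 3, scenes_per_chapter: int = 2) -> str:
    # 1. Plan: ask the model for a per-chapter outline from the premise
    #    (DOC-style detailed outline control).
    outline = [generate(f"Chapter {i + 1} outline for premise: {premise}")
               for i in range(n_chapters)]
    story_parts = []
    summary = premise  # rolling summary carries long-range coherence
    for chapter_plan in outline:
        for s in range(scenes_per_chapter):
            # 2. Draft each scene conditioned on the plan plus the rolling
            #    summary, not the full story, so the prompt stays short.
            scene = generate(
                f"Summary so far: {summary}\n"
                f"Chapter plan: {chapter_plan}\n"
                f"Write scene {s + 1}."
            )
            story_parts.append(scene)
            # 3. Recursively re-summarize (Re3-style reprompting) so later
            #    prompts see a compact state instead of all prior text.
            summary = generate(f"Update summary: {summary}\nNew scene: {scene}")
    return "\n\n".join(story_parts)

story = write_story("A lighthouse keeper finds a door in the sea.")
print(len(story.split("\n\n")))  # one chunk per scene
```

The real versions also add a revision pass (rewriting drafted scenes against the outline), which I've left out here for brevity.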

  • hisao
    3 days ago

    This is a cool way to put it, but I think even errors and randomness in the reproduction of source ideas can sometimes count as original ideas. That said, I also think it doesn’t fully capture the range of mechanisms by which humans come up with original ideas.

    • Deestan@lemmy.world
      3 days ago

      Randomness can give novel combinations, sure, but we shouldn’t call that an original idea.

      As for the various ways humans come up with original ideas, they are based on a level of reflection, reasoning and thought processing. We know that’s not possible for an LLM: while they are complex in their details, the way they work is very well defined. They imitate.

      • hisao
        3 days ago

        I agree with this in terms of process, but not necessarily in terms of result. If you enumerate the state space of the target domain, you might realize that every construction in it can be reached by randomly introducing errors or modifications into a finite set of predefined constructions. From what I know, most AI models don’t really work like this (they don’t deliberately randomize inference or introduce errors), otherwise they could probably evade model collapse. But I don’t see why they couldn’t work like this. Humans often do: a lot of new genres and styles appear when people do something inspired by something else, fail to reproduce it accurately, realize on evaluation that they like how it turned out, and keep doing it, so it evolves further by slight mutations. I’m not saying I want AI to do this, or that I like AI or anything; I’m just saying I think it’s a real possibility.

        • hendrik@palaver.p3x.de
          3 days ago

          I think so, too. I mean, we also had a human author end up at a random camping site somewhere in Europe in the ’70s and come up with the random idea of writing “The Hitchhiker’s Guide to the Galaxy”. Either we allow randomness to inspire a novel, or we’d need to say a lot of old novels aren’t original ideas either.