• Chozo@fedia.io
    link
    fedilink
    arrow-up
    12
    arrow-down
    1
    ·
    3 months ago

    I don’t understand why it’s so hard to sandbox an LLM’s configuration data from it’s training data.

    • MoondropLight@thelemmy.club
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 months ago

      Because its all one thing. The promise of AI is that you can basically throw anything at it, and you don’t need to understand exactly how/why it makes the connections it does; you just adjust the weights until it kinda looks alright.

      There are many structural hacks used to give it better results (and in this case some form of reasoning) but ultimately they’re mostly relying on connecting multiple nets together and retrying queries and such. There’s no human understandable settings. Neural networks are basically one input and one output (unless you’re training it).