GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally
Ollama is pretty sweet. I'm self-hosting it with 3B models on an old X79 server, and I built a neat terminal AI client, called "Jeeves Assistant", that makes requests to it over the local network.
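For anyone curious what a client like that looks like: Ollama exposes an HTTP API on port 11434 by default, and a terminal client can just POST a prompt to `/api/generate`. Here's a minimal sketch (the "Jeeves Assistant" name and the model tag are just placeholders; the actual client may work differently):

```python
import json
import urllib.request

# Default Ollama endpoint; swap localhost for the server's LAN address.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def ask(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full completion in "response".
        return json.loads(resp.read())["response"]


# Usage (assumes a 3B model such as orca-mini:3b is pulled on the server):
# print(ask("orca-mini:3b", "Say hello, Jeeves."))
```

With `"stream": False` the server returns one JSON object instead of a stream of chunks, which keeps a simple terminal client to a few lines; a fancier client would stream and print tokens as they arrive.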