Stamets@lemmy.world to People Twitter@sh.itjust.works · 1 year ago
The dream (image post on lemmy.world, 26 comments)
candle_lighter@lemmy.ml · 1 year ago
I want said AI to be open source and run locally on my computer
CeeBee@lemmy.world · 1 year ago
It's getting there. In the next few years, as hardware gets better and models get more efficient, we'll be able to run these systems entirely locally. I'm already doing it, but I have some higher-end hardware.
Xanaus@lemmy.ml · 1 year ago
Could you please share your process for us mortals?
CeeBee@lemmy.world · 1 year ago
Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.
Ollama with Ollama-webui for an LLM. I like the Solar:7b model. It's lightweight, fast, and gives really good results.
I have some beefy hardware that I run it on, but it's not necessary to have.
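For anyone who wants to script against a setup like the one described above, here is a minimal sketch (not from the thread) of sending a prompt to a locally running Ollama server over its REST API. The port (11434) and endpoint are Ollama's defaults, and the model tag `solar` is an assumption based on the Solar:7b model mentioned in the comment; adjust it to whatever `ollama list` reports on your machine.

```python
# Minimal sketch: query a locally running Ollama server over its default REST API.
# Assumes the model has already been pulled (e.g. `ollama pull solar`) and that
# Ollama is listening on its default port, 11434.
import requests

def ask_local_llm(prompt: str, model: str = "solar") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With streaming disabled, the full completion comes back in one JSON object.
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, why run an LLM locally?"))
```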
TalesFromTheKitchen@lemmy.ml · 1 year ago (edited)
I can run a pretty alright text generation model and the Stable Diffusion models on my 2016 laptop with two GTX 1080M cards. You can try with these tools:
Oobabooga text-generation-webui
Automatic1111 image generation
They might not be the most performant applications, but they are very easy to use.
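In the same spirit, here is a rough sketch (also not from the thread) of generating an image through the AUTOMATIC1111 web UI's HTTP API. It assumes the UI was launched with the `--api` flag and is listening on its default port 7860; the prompt, step count, and output filename are placeholders.

```python
# Minimal sketch: request an image from a locally running AUTOMATIC1111 web UI.
# Assumes the UI was started with the --api flag on its default port, 7860.
import base64
import requests

def txt2img(prompt: str, steps: int = 20) -> bytes:
    resp = requests.post(
        "http://127.0.0.1:7860/sdapi/v1/txt2img",
        json={"prompt": prompt, "steps": steps},
        timeout=300,
    )
    resp.raise_for_status()
    # The API returns generated images as base64-encoded PNG strings.
    return base64.b64decode(resp.json()["images"][0])

if __name__ == "__main__":
    with open("out.png", "wb") as f:
        f.write(txt2img("a cozy cabin in the woods, watercolor"))
```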
lad@programming.dev · 1 year ago
You seem to have missed the point a bit
intensely_human@lemm.ee · 1 year ago
"I wish I had X"
"Here's X"
What point was missed here?
lad@programming.dev · 1 year ago
The post: "I wish X instead of Y"
The comment: "And run it [X] locally"
The next comment: "You can run Y locally"
Also, the person I replied to literally admitted that I was right, and you're still arguing.
lad@programming.dev · 1 year ago
Funny how these comments appeared only today on my instance; I guess there are some federation issues still
[deleted by creator]
new phone who dis?