Even_Adder@lemmy.dbzer0.com to Stable Diffusion@lemmy.dbzer0.com · English · 9 months ago

rupeshs/FastSD CPU Release v1.0.0 Beta 26 (github.com)

3 comments
turkishdelight@lemmy.ml · 8 months ago

You can't shrink a model to 1/8 the size and expect it to run at the same quality. Quantization lets me move from a cloud GPU to my laptop's crappy CPU/iGPU, so I'm okay with that tradeoff.
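For anyone wondering where a ~1/8 figure like that comes from: dropping weights from 32-bit floats to 4-bit integers is an 8x reduction in storage, and the rounding error is the quality hit being traded away. Here's a rough sketch of generic symmetric weight quantization to illustrate the arithmetic; it's just an example scheme, not necessarily what FastSD CPU actually does under the hood:

```python
# Minimal sketch of symmetric per-tensor quantization (fp32 -> 4-bit).
# Generic illustration only; real pipelines quantize per-channel/per-group
# and pack the 4-bit values, which numpy can't represent natively.
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map fp32 weights to 4-bit integers plus one fp32 scale per tensor."""
    scale = np.abs(w).max() / 7.0                      # int4 symmetric range is [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate fp32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)    # stand-in for a weight matrix
q, scale = quantize_int4(w)

# fp32 is 32 bits per weight, int4 is 4 bits per weight once packed -> ~1/8 the size.
packed_bytes = q.size * 4 / 8
print("size ratio:", packed_bytes / w.nbytes)          # ~0.125
# The rounding error below is the quality cost the tradeoff is about.
print("mean abs error:", np.abs(w - dequantize(q, scale)).mean())
```

Whether that error is visible in the generated images depends on the model and how aggressively it's quantized, which is why people usually treat it as a size/quality dial rather than a free lunch.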