  • sunstoned@lemmus.org to Coffee@lemmy.world · AeroPress Premium (edited, 3 days ago)
    Comparing Switch Immersion vs Aeropress?

    That’s a good question. It’s been a while since I’ve actually immersion brewed with it (usually I just do pour over and use the stopper for preheating with less water). I’ll make a couple of cups and get back to you.

    Edit:

    Cup making done! The immersion brew is super clean and easy. I do think some type of lid on the Switch would help keep the heat in for longer brew times, though.

    I think both the Aeropress and Switch immersion brewing lend themselves well to darker (chocolatey/nuttier) extractions. After this little test I will probably switch over to immersion brewing for my afternoon decaf for the foreseeable future!



  • Believe what you will. I’m not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self-host Ollama and SearXNG (a metasearch engine, to be clear, not a first-party search engine), so I have some anecdotal inclinations.

    Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model hardly even shows up in htop unless the model is gigantic, and even the gigantic models only spike the CPU for a few seconds (until the query completes). SearXNG, again anecdotally, spikes my PC about the same as Mistral in Ollama.

    I would encourage you to look at more explanations like the one below. I’m not just blowing smoke, and I’m not dismissing the very real problem of massive training costs (in money, energy, and water) that you’re pointing out.

    https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption


  • I don’t disagree, but it is useful to point out that there are two truths in what you wrote.

    The energy use of one person running an already trained model on their own hardware is trivial.

    Even the energy use of many many people using already trained models (ChatGPT, etc) is still not the problem at hand (probably on the order of the energy usage from a typical search engine).

    The energy use in training these models (the appendage measuring contest between tech giants pretending they’re on the cusp of AGI) is where the cost really ramps up.
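    The scale gap between one training run and one query can be made concrete with a back-of-envelope calculation. The kWh figures below are illustrative assumptions for the sake of the arithmetic, not measurements:

    ```python
    # Back-of-envelope comparison of training vs. inference energy.
    # Both figures are illustrative assumptions, not measurements.
    TRAIN_KWH = 1_000_000      # assume ~1 GWh for one large training run
    QUERY_KWH = 0.003          # assume ~3 Wh of energy per inference query

    queries_equal = TRAIN_KWH / QUERY_KWH
    print(f"One training run costs roughly as much as {queries_equal:,.0f} queries")
    ```

    Under those assumptions a single training run equals hundreds of millions of queries, which is why the per-user inference cost barely registers next to the training bill.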


  • Is there a reason you’re not considering running this in a VM?

    I could see a case where you go for a native install in a virtual machine, attach a virtual disk to isolate your library from the rest of the filesystem, and then move that disk around (or just straight up mount that directory into the container) as needed.

    That way you can back up your library separately from your JF server implementation and go hog wild.
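    The bind-mount variant could look something like the sketch below. The image name, port, and container mount points match Jellyfin’s documented defaults; the host paths are assumptions for illustration:

    ```yaml
    # Hypothetical docker-compose sketch: Jellyfin with the media library
    # bind-mounted from the host, so it can be backed up independently.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"                    # default Jellyfin web UI port
        volumes:
          - /srv/jellyfin/config:/config   # server config, lives with the server
          - /mnt/media:/media:ro           # the library, backed up on its own schedule
    ```

    Keeping the library read-only (`ro`) in the container also means a misbehaving server can’t touch your media files.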