I’ve been a huge HA fan since dropping FHEM a few years ago, and I’m more than delighted with the overall progress in HA…especially the Year of the Voice…but I can’t quite wrap my head around one stack I want to build. Maybe one of you has a basic idea how?

I use HA with Whisper/Piper etc., and right now I’m stuck on making information from plain text available to Piper. Say I have a wiki or a txt file or anything else that would work, and that document has loads of text…dunno…a recipe…or some history articles…and now I need to extract a specific piece of information from said text.

Wakeword, can you please tell me how much salt is used?

or

Jasper, in which year did I buy that console?

I am able to run all my intents, but it doesn’t make sense to write a million of them to achieve this. I also doubt the ChatGPT API is a good solution for something trivial like summarizing a text (Blinkist-style) or finding a specific piece of info, and it’s not exactly great for privacy either.

Any suggestions? GPT4All and some API? Lists of intents? I mean, it would be nice if I could just add a Nextcloud instance and some add-on would scrape the info for me from there.
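
To make it concrete, the rough flow I’m imagining looks something like this (just a sketch using the GPT4All Python bindings, no idea if it’s the sensible way to do it; the model file name and recipe.txt path are placeholders):

```python
from gpt4all import GPT4All

# Placeholder model file; gpt4all downloads it on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

def answer_from_document(question: str, path: str) -> str:
    """Feed a whole text document plus the spoken question to a local model
    and return a short answer that Piper could read back."""
    with open(path, encoding="utf-8") as f:
        document = f.read()
    prompt = (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document}\n\nQuestion: {question}\nAnswer:"
    )
    with model.chat_session():
        return model.generate(prompt, max_tokens=150, temp=0.2)

print(answer_from_document("How much salt is used?", "recipe.txt"))
```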

  • David From Space@orbiting.observer · 8 months ago

    Have you played around with hosting your own LLM? I’ve just started running oobabooga; it lets you download various LLMs and host them. I’ve been working on setting it up so the AI can provide text for Piper and take input from Whisper. Ideally it wants an Nvidia card, but it will also work on AMD or CPU only. That would let you use the API to get text for Piper to read, and it’s a lot more privacy-oriented than sending your queries off to ChatGPT. The larger models do take more CPU/RAM/VRAM to run, but perhaps a smaller tuned model would suit your needs.
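
    Calling it from a script could look roughly like this (a minimal sketch assuming text-generation-webui is started with its OpenAI-compatible API enabled via --api; the host, port and recipe.txt file are placeholders for your setup):

    ```python
    import requests

    # Placeholder endpoint: adjust host/port to wherever your
    # text-generation-webui instance exposes its OpenAI-compatible API.
    API_URL = "http://localhost:5000/v1/chat/completions"

    def ask_document(question: str, document_text: str) -> str:
        """Send the document plus the spoken question to the local LLM and
        return its answer, ready to hand to Piper for speech output."""
        payload = {
            "messages": [
                {"role": "system",
                 "content": "Answer the question using only this document:\n\n"
                            + document_text},
                {"role": "user", "content": question},
            ],
            "max_tokens": 200,
            "temperature": 0.2,
        }
        r = requests.post(API_URL, json=payload, timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        with open("recipe.txt", encoding="utf-8") as f:  # placeholder document
            print(ask_document("How much salt is used?", f.read()))
    ```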