I have been using Mistral 7B Instruct for text summarization and some light "assistant"-type chatting for the last several months. I have been pleased with how accurate it is for my needs, especially given its size.
I recall a lot of trial and error to find models that were compatible with the version of llama-cpp-python that oobabooga uses (at any given time). GGUF should have made the model format (and therefore model selection) simpler, but I imagine there are still nuances that make it harder than it should be for a newcomer to find a working model.
Best of luck, and let us know how it goes.
I am interested in reading more about what Tezka means. Please do share.
I think I can relate to your goals, and I am personally focused on similar work in an effort to make my own life a little more bearable. My efforts are aimed more at executive function and how to integrate this into my life seamlessly, rather than at LLMs/conversational AI. I have been playing around with conversational AI, but I currently lack the psychological understanding needed to do this right. I look forward to hearing more from you.
My immediate (OK, I have been working on this all day) thoughts: