For fun, I ran an experiment to see how well a local chatbot could plan an upcoming bicycle trip. I used a local copy of "LM Studio" on my PC and picked the Mistral 7B Instruct model. It's less powerful than many of the large language models out there, but it's something I can run locally and portably on my PC.
Interesting. I'm considering Mistral and Llama 2 for jobs I currently run through OpenAI. Was there a reason behind your choice, or was it just a matter of opportunity?
Have you compared it with what Gemini/ChatGPT/Claude have to say? Any meaningful differences?