 ------------------------------------
< Ollama >
 ------------------------------------
        \   ^__^
         \  (OO)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Been playing around with local LLMs recently. I like being able to run models offline and experiment without relying on cloud services. Ollama is fantastic, and I was surprised to find my laptop handles the models fine, albeit with a bit of lag. The one I've been using most is dolphin-llama3:8b. I've had fun writing Modelfiles to "impose" different personalities on the base model; the base model stays available alongside them, and my Skippy LLM is a lot of fun.
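For anyone curious, a Modelfile is just a short text file. A minimal sketch of a personality override might look like this (the system prompt wording here is my own invention, not the actual Skippy prompt):

```
# Build on the base model pulled from the Ollama library
FROM dolphin-llama3:8b

# Hypothetical system prompt giving the model a sarcastic persona
SYSTEM """You are Skippy, a supremely sarcastic AI who reluctantly helps humans."""

# Optional: a slightly higher temperature for more personality
PARAMETER temperature 0.9
```

Then `ollama create skippy -f ./Modelfile` builds the variant and `ollama run skippy` starts a chat with it, while the base model remains untouched.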
Running the LLMs in the terminal is totally fine, but I've also been using Docker and Open WebUI for the front-end, and I plan to experiment a bit with n8n next. Open WebUI lets me upload files for the LLMs to access. n8n looks very powerful, but finding a use case for my simple self is a bit of a challenge...
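Getting Open WebUI up was straightforward: roughly the one-liner from its README, which maps the UI to localhost and lets the container talk to the Ollama server running on the host (port and volume name are the defaults; adjust to taste):

```
# Run Open WebUI in Docker, serving the UI at http://localhost:3000
# --add-host lets the container reach Ollama on the host machine
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

The named volume keeps chats and uploaded files across container restarts.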