Customize, run and save LLMs using Ollama and the Modelfile
In this article, I’ll show you how to use the Modelfile in Ollama to change how an existing LLM (Llama2) behaves when you interact with it. I’ll also show you how to save your newly customized model to your personal namespace on the Ollama server.
I know it can get a bit confusing with all the different “llamas” flying around. Just remember: Ollama is the tool (and the company behind it) that lets you download and run many different LLMs locally, whereas Llama2 is a particular LLM created by Meta, the parent company of Facebook. Beyond the fact that Ollama can run Llama2, the two are not connected in any other way.
If you’ve never heard of Ollama before, I recommend checking out my article below, where I go into depth on what Ollama is and how to install it on your system.
What is a Modelfile?
In Ollama, a Modelfile is a configuration file that serves as the blueprint for creating and sharing models with Ollama.
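To make this concrete, here is a minimal sketch of what a Modelfile might look like. The base model (llama2), the temperature value, and the system prompt are just illustrative choices that you would adapt to your own needs:

```
# Modelfile — customizes Llama2 with a new default behavior
FROM llama2

# Sampling temperature: lower values give more focused, less creative answers
PARAMETER temperature 0.7

# System prompt that shapes how the model responds in every conversation
SYSTEM You are a concise assistant that answers in no more than three sentences.
```

You would then build and chat with the customized model from the command line, for example with `ollama create my-model -f ./Modelfile` followed by `ollama run my-model` (where `my-model` is simply a placeholder name).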