Setting up a Local LLM

I wanted to experiment a bit with an LLM and with training one, so I decided to try a few things. I looked at a few tutorials (see the references below) and eventually got this working. This post distills what worked for me.

Getting Started

I like containers, and rather than installing something directly on my machine, I decided to use the Docker image for Ollama. I ran this to get started:

docker pull ollama/ollama
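
If you want to confirm the image came down, listing it by repository name should work (this is plain Docker, nothing Ollama-specific):

docker images ollama/ollama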

I ran the Docker container with this command:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
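
Before pulling any models, it's worth a quick sanity check that the container is up and that the Ollama API is answering on the mapped port. This assumes curl is available on the host; /api/version is part of Ollama's REST API:

docker ps --filter name=ollama          # is the container running?
curl http://localhost:11434/api/version # does the API answer?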

From there, I pulled a model into the container with this command:

docker exec -it ollama ollama pull mistral
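
Other models work the same way; swap in whatever name you want from the Ollama library (llama3 below is just an example), and you can see what's been downloaded with list:

docker exec -it ollama ollama pull llama3   # example of pulling a second model
docker exec -it ollama ollama list          # show what's downloaded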

I then ran this to start the model and interact with it:

docker exec -it ollama ollama run mistral

Here are the first few things I typed in to test the model:

[Screenshot: my first prompts to the mistral model and its responses]
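
To get out of the interactive session, typing this at the model's prompt should do it (Ctrl+D works as well):

/bye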

From here, I exited and then ran this to start the model in detached mode:

docker exec -d ollama ollama run mistral
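
With the model running detached, one way to interact with it is through Ollama's REST API rather than the console. A minimal example with curl, assuming the default port mapping from the run command above:

# ask the detached model a question over the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'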

From there, I ran more experiments, but that's for another article.

References

Here are some places and tutorials I looked at:
