Running Local LLMs

Why

Running LLMs locally can be useful for a number of reasons: your prompts and data stay on your own machine, it works offline, and there are no API costs or rate limits.

Why not

Running LLMs locally can be resource intensive, especially for larger models; without a sufficiently powerful GPU, inference can be slow.

Installation

Download from https://ollama.com/download/

On Linux, install it with:

curl -fsSL https://ollama.com/install.sh | sh
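Once the script finishes, you can sanity-check the install; on most distributions the script also sets up a systemd service, so the following should show both the binary and a running service:

ollama --version
systemctl status ollama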

Models

You can get your models from the Ollama library at https://ollama.com/library

Pull a model with:

ollama pull <name>
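As a concrete example, pulling a specific model looks like this (llama3.2 is used only as an illustration; substitute any model from the library):

ollama pull llama3.2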

I recommend:

Note that you’d need a beefier system for the larger models.

Usage

(Pull and) run a model with:

ollama run <name> [prompt]

For more options:

ollama --help
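As an illustration, assuming you have pulled llama3.2, a one-off prompt and an interactive session look like this (omitting the prompt drops you into a chat REPL, which you can leave with /bye):

ollama run llama3.2 "Explain in one paragraph why the sky is blue"
ollama run llama3.2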

API

If you want the Ollama API to be accessible to other systems on your network, you need to add this to the [Service] section of the config file /etc/systemd/system/ollama.service:

Environment="OLLAMA_HOST=0.0.0.0:11434"
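After editing the unit file, reload systemd and restart the service for the change to take effect:

sudo systemctl daemon-reload
sudo systemctl restart ollama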

You can check out the API documentation here.
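As a quick sketch, assuming llama3.2 is already pulled and Ollama is listening on its default port 11434, a single completion request looks like this:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'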

UIs

You can use one of the available UIs for a more user-friendly experience.
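For example, Open WebUI can talk to a local Ollama instance; assuming Docker is installed and Ollama is running on the host, an invocation along the lines of the Open WebUI docs looks like:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

The interface is then available at http://localhost:3000.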

Uninstall

sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm $(which ollama)
sudo rm -r /usr/share/ollama 
sudo userdel ollama
sudo groupdel ollama