ollama is a golang LLM server built for ease of use.
https://github.com/jmorganca/ollama
To update or switch versions, run webi ollama@stable (or @v0.1.5, etc).
ollama is an LLM serving platform written in golang. It makes LLMs built on Llama standards easy to run with an API.
Getting started with the open source LLM Mistral-7b, for example, takes just two commands.
ollama server:
OLLAMA_ORIGINS='*' OLLAMA_HOST=localhost:11434 ollama serve
ollama CLI (using the Mistral-7b model):
ollama pull mistral
ollama run mistral
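The server also speaks a simple HTTP API. As a quick sketch (assuming the serve command above is running on localhost:11434 and mistral has been pulled), you can request a completion with curl:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream": false returns a single JSON response rather than a stream of partial tokens.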
These are the files / directories that are created and/or modified with this install:
~/.config/envman/PATH.env
~/.local/bin/ollama
~/.ollama/models/
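To confirm the install worked, a quick sanity check (assuming envman has added ~/.local/bin to your PATH):

source ~/.config/envman/PATH.env
ollama --version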
There are many Ollama UIs to choose from, but ollama-webui is easy to start with (and can be built as a static page):
node:
webi node@lts
source ~/.config/envman/PATH.env
ollama-webui repo:
git clone https://github.com/ollama-webui/ollama-webui.git ./ollama-webui/
pushd ./ollama-webui/
cp -RPp ./example.env ./.env
npm clean-install
npm run dev
Note: Be sure to run ollama with CORS enabled:
OLLAMA_ORIGINS='*' OLLAMA_HOST=localhost:11434 ollama serve
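To spot-check that CORS is actually enabled, look for an Access-Control-Allow-Origin header in the response (a sketch; the Origin value here is just an example, use whatever origin your UI runs on):

curl -i -H 'Origin: http://localhost:3000' http://localhost:11434/api/tags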
You'll need a fairly modern computer. An Apple M1 Air works great.
See the list at https://ollama.ai/library. For example, we could try sqlcoder, or orca-mini (because it's small):
ollama pull sqlcoder
ollama run sqlcoder
ollama pull orca-mini
ollama run orca-mini
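Models can be several gigabytes each. To see what you've downloaded and remove what you no longer need (using ollama's built-in subcommands):

ollama list
ollama rm sqlcoder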
If you'd like ollama to be accessible beyond localhost (127.0.0.1), bind it to 0.0.0.0, which makes it accessible to ALL networks:

# fully open to all
OLLAMA_ORIGINS='*' OLLAMA_HOST=0.0.0.0:11435 ollama serve
# restrict browsers (not APIs) to requests from https://example.com
OLLAMA_ORIGINS='https://example.com' OLLAMA_HOST=0.0.0.0:11435 ollama serve
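From another device on the same network, you can verify the server is reachable (a sketch; replace <your-lan-ip> with this machine's LAN address):

curl http://<your-lan-ip>:11435/api/tags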
See also: