# Ollama GUI: Web Interface for chatting with your local LLMs

Ollama GUI is a web interface for [ollama.ai](https://ollama.ai), a tool that enables running Large Language Models (LLMs) on your local machine.
## 🛠 Installation

### Prerequisites

- Download and install the [ollama CLI](https://ollama.ai).
- Download and install [yarn](https://yarnpkg.com) and [node](https://nodejs.org).

```shell
ollama pull <model-name>
ollama serve
```
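The GUI talks to the Ollama server over its local REST API (port 11434 by default). As a minimal sketch of that API, assuming the documented `/api/generate` endpoint, you could query a pulled model directly from Python:

```python
# Sketch of calling the local Ollama REST API directly — the same API the
# GUI uses. Assumes the default port 11434 and the /api/generate endpoint.
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"


def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build a non-streaming generate request without sending it."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def generate(model: str, prompt: str) -> str:
    """Send the request and return the model's full response text."""
    with request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# Example (requires `ollama serve` running and the model already pulled):
# print(generate("mistral", "Why is the sky blue?"))
```

Setting `"stream": False` returns the whole response in one JSON object; the default streaming mode instead sends one JSON line per token.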
### Getting Started

- Clone the repository and start the development server:

```shell
git clone https://github.com/HelgeSverre/ollama-gui.git
cd ollama-gui
yarn install
yarn dev
```
Or use the hosted web version, by running ollama with the following `OLLAMA_ORIGINS` setting (docs):

```shell
OLLAMA_ORIGINS=https://ollama-gui.vercel.app ollama serve
```
## Models

For convenience and copy-pastability, here is a table of interesting models you might want to try out. For a complete list of models Ollama supports, go to [ollama.ai/library](https://ollama.ai/library).
| Model                | Parameters | Size  | Download                          |
|----------------------|------------|-------|-----------------------------------|
| Mixtral-8x7B Large   | 8x7B       | 26GB  | `ollama pull mixtral`             |
| Phi                  | 2.7B       | 1.6GB | `ollama pull phi`                 |
| Solar                | 10.7B      | 6.1GB | `ollama pull solar`               |
| Dolphin Mixtral      | 7B         | 4.1GB | `ollama pull dolphin-mixtral`     |
| Mistral              | 7B         | 4.1GB | `ollama pull mistral`             |
| Mistral (instruct)   | 7B         | 4.1GB | `ollama pull mistral:7b-instruct` |
| Llama 2              | 7B         | 3.8GB | `ollama pull llama2`              |
| Code Llama           | 7B         | 3.8GB | `ollama pull codellama`           |
| Llama 2 Uncensored   | 7B         | 3.8GB | `ollama pull llama2-uncensored`   |
| Orca Mini            | 3B         | 1.9GB | `ollama pull orca-mini`           |
| Falcon               | 7B         | 3.8GB | `ollama pull falcon`              |
| Vicuna               | 7B         | 3.8GB | `ollama pull vicuna`              |
| Vicuna (16K context) | 7B         | 3.8GB | `ollama pull vicuna:7b-16k`       |
| Vicuna (16K context) | 13B        | 7.4GB | `ollama pull vicuna:13b-16k`      |
| NexusRaven           | 13B        | 7.4GB | `ollama pull nexusraven`          |
| StarCoder            | 7B         | 4.3GB | `ollama pull starcoder:7b`        |
| WizardLM Uncensored  | 13B        | 7.4GB | `ollama pull wizardlm-uncensored` |
## 📋 To-Do List

## 🛠 Built With

## 📝 License
Licensed under the MIT License. See the [LICENSE.md](LICENSE.md) file for details.