
Ollama Installation Guide

This guide will help you set up Ollama for Devika. Ollama is a tool that allows you to run open-source large language models (LLMs) locally on your machine. It supports a variety of models, such as Llama 2, Mistral, Code Llama, and many more.

Installation

  1. Go to the Ollama website.
  2. Download the latest version of Ollama.
  3. After installing Ollama, download the model you want to use from the Models page.
  4. Select the model you want and copy its command, for example `ollama run llama2`. This will download the model and start the server.
  5. `ollama list` will show the list of models you have downloaded.
  6. If the server isn't running, you can start it manually with `ollama serve`. The default address for the server is http://localhost:11434.
  7. To change the port and other configurations, follow the FAQ here.
  8. For more information, `ollama [command] --help` will show the help menu. For example, `ollama run --help` shows the help for the run command.
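The steps above can be sketched as a quick check of your local setup. This is a minimal sketch, assuming Ollama is installed and that you have not changed the default port; adjust `OLLAMA_URL` if you have:

```shell
# Check whether the Ollama server is reachable at its default address.
# (Assumes Ollama is installed; the llama2 model name is only an example.)
OLLAMA_URL="http://localhost:11434"

if curl -s --max-time 2 "$OLLAMA_URL" >/dev/null 2>&1; then
    echo "Ollama server is running at $OLLAMA_URL"
    ollama list                       # show the models you have downloaded
else
    echo "Server not reachable; start it with: ollama serve"
fi
```

If the server is not running, `ollama serve` starts it in the foreground, while `ollama run <model>` both pulls the model (on first use) and opens an interactive session.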

Devika Configuration

  • If you serve Ollama on a different address, you can change the port in the config.toml file or change it via the UI.
  • If you are using the default address, Devika will automatically detect the server and fetch the list of models.
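For the non-default case, the change lives in config.toml. A minimal sketch; the section and key names below are assumptions and may differ in your copy of the file, so check your config.toml for the exact names:

```toml
# Hypothetical excerpt from Devika's config.toml -- verify the real key names in your file
[API_ENDPOINTS]
OLLAMA = "http://localhost:11434"   # change host/port if you serve Ollama elsewhere
```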