# CoGPT

**Repository Path**: guanty18/CoGPT

## Basic Information

- **Project Name**: CoGPT
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MPL-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 7
- **Forks**: 9
- **Created**: 2024-02-19
- **Last Updated**: 2024-10-25

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# CoGPT

For a Chinese tutorial, see [my blog](https://blog.geniucker.top/2024/01/26/%E9%80%9A%E8%BF%87-GitHub-Copilot-%E5%85%8D%E8%B4%B9%E4%BD%BF%E7%94%A8-gpt-4/).

Only for network programming learning purposes.

The previous Python version is archived in the [py](https://github.com/Geniucker/CoGPT/tree/py) branch; it is not stable and is difficult to maintain (there are many problems with async in Python).

## Features

Provides an API nearly identical to the OpenAI API (with gpt-3.5-turbo and gpt-4). The only difference is that you use a Copilot application token instead of an OpenAI token.

## Usage

### Get token

Download the latest release for your platform, unzip it, and run `cogpt-get-apptoken` (or `cogpt-get-apptoken.exe` on Windows). Run `./cogpt-get-apptoken -h` to see how to use a proxy.

### API

- `GET /` - returns `Hi, I'm CoGPT.`
- `GET /health` - returns `{"status":"OK"}`
- `GET /v1/models` - returns the available models
- `POST /v1/chat/completions` - chat API
- `POST /v1/embeddings` - embeddings API

Note that this API is **not fully compatible** with the OpenAI API. For the `input` field of the embeddings endpoint, the OpenAI API accepts the following types:

- `string`: a string that will be turned into an embedding
- `array` of strings: strings that will be turned into embeddings
- `array` of integers: a token sequence that will be turned into an embedding
- `array` of arrays of integers: token sequences that will be turned into embeddings

This service only accepts the first two types, plus arrays of arrays of strings.
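Because the endpoints mirror OpenAI's, an existing OpenAI client should work once it is pointed at this service and given the Copilot app token. As a minimal illustration (the host, port, and token below are placeholders, not values from this project), here is how a `/v1/chat/completions` request could be built with only the Python standard library:

```python
import json
import urllib.request

# Assumptions: a local deployment on the default port, and a placeholder token.
BASE_URL = "http://localhost:8080"
APP_TOKEN = "ghu_xxxxxxxx"  # Copilot application token from cogpt-get-apptoken

# Same request body shape as the OpenAI chat completions API.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer " + APP_TOKEN,  # Copilot token, not an OpenAI key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here since it
# needs a running CoGPT instance.
```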
### Deploy

#### Warning

This service is not designed to be deployed on the public network. The best way to use it is to deploy it on your own computer or local network. You can deploy it on the public network, but only for yourself. **DO NOT** share your token with others. If a token is used from many different IPs, it will be banned, and if too many tokens are requested from one IP, something bad may happen. So again: **ONLY** for yourself.

#### Best approach

1. Deploy locally on your own computer
2. Deploy on a local network for personal use, or to share within a small group
3. Deploy on your own server for personal use

#### Bad approach

1. Providing a public interface for everyone to use
   This way, many tokens will be requested from one IP, which will cause problems.
2. Providing public integrated (web) apps (such as ChatGPT-Next-Web)
   Making too many requests with one token will cause problems.
3. Deploying with serverless services (such as Vercel)
   Serverless services change IP frequently and have short lifetimes.
4. Any abuse of the service

**DO NOT** try any of these approaches.

#### Deploy in Docker

```bash
mkdir CoGPT && cd CoGPT
```

Then create a `docker-compose.yml` file with the following content:

```yaml
version: '3'
services:
  cogpt-api:
    image: geniucker/cogpt:latest
    environment:
      - HOST=0.0.0.0
    ports:
      - 8080:8080
    volumes:
      - ./db:/app/db
      - ./log:/app/log
    restart: unless-stopped
    container_name: cogpt-api
```

If you want to use the development version, replace `geniucker/cogpt:latest` with `geniucker/cogpt:dev`.

By default, the service listens on port 8080. To change the port, edit the `ports` section of `docker-compose.yml`; for example, to listen on port 80, change `8080:8080` to `80:8080`. Other config options can be changed in the `environment` section, or more conveniently in a `.env` file (copy `.env.example` to `.env` and edit it).
**Note that** the paths for `db` and `log` are configured in the `volumes` section of `docker-compose.yml`. All config options are listed in [Config](#config).

Then run `docker compose up -d` to start the service.

#### Deploy without Docker

Download the latest release for your platform, unzip it, and run `cogpt-api` (or `cogpt-api.exe` on Windows). By default, the service listens on `localhost:8080`. For configuration, see [Config](#config).

#### Run as a service

##### Linux

On systemd-based Linux, follow these steps. First, download the latest release for Linux, unzip it, move `cogpt-api` to `/opt/cogpt/`, and grant it executable permission. Then copy the content of [cogpt-api.service](examples/cogpt-api.service) to `/etc/systemd/system/cogpt-api.service`. If you need to change the config, edit the `/opt/cogpt-api/.env` file. Finally, run the following commands to enable and start the service:

```bash
sudo systemctl enable cogpt-api
sudo systemctl start cogpt-api
```

Run `sudo systemctl stop cogpt-api` to stop the service, and `sudo systemctl disable cogpt-api` to disable it.

##### MacOS

On MacOS, services are managed by `launchd`. First, download the latest release for MacOS, unzip it, move `cogpt-api` to `/opt/cogpt/`, and grant it executable permission. Then copy the content of [com.cogpt-api.plist](examples/com.cogpt-api.plist) to `/Library/LaunchDaemons/com.cogpt-api.plist`. If you need to change the config, edit the `/opt/cogpt-api/.env` file. Finally, run `sudo launchctl load /Library/LaunchDaemons/com.cogpt-api.plist` to start the service, and `sudo launchctl unload /Library/LaunchDaemons/com.cogpt-api.plist` to stop it.

##### Windows

On Windows, you can use scheduled tasks. Follow these steps. First, download the latest release for Windows and unzip it to a directory, say `C:\CoGPT\`.
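Whichever deployment route you choose, the `/health` endpoint is a convenient way to confirm the service came up. A small Python sketch (the base URL is an assumption matching the default `localhost:8080`):

```python
import json
import urllib.request

def health_ok(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if GET <base_url>/health answers {"status": "OK"}."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return json.load(resp).get("status") == "OK"
    except (OSError, ValueError):
        # Connection refused, timeout, or a non-JSON body all count as unhealthy.
        return False
```

For example, `health_ok("http://localhost:8080")` should return `True` once the service is running.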
Then create a file `cogpt-api-service.ps1` in `C:\CoGPT\` and copy the content of [cogpt-api-service.ps1](examples/cogpt-api-service.ps1) into it. Start a PowerShell **with administrator permission** and run the following commands:

```powershell
cd C:\CoGPT\
./cogpt-api-service.ps1 enable
```

Here are all the available commands. All of them should be run in PowerShell **with administrator permission**:

```powershell
./cogpt-api-service.ps1 enable   # enable and start the service
./cogpt-api-service.ps1 disable  # stop and disable the service
./cogpt-api-service.ps1 start    # start the service
./cogpt-api-service.ps1 stop     # stop the service
./cogpt-api-service.ps1 restart  # restart the service
./cogpt-api-service.ps1 status   # check the status of the service
```

#### Share Token

If you want to share this service with your friends, it is not safe to share your GitHub app token directly. This feature is designed for that situation: you can create a map from so-called share tokens to real GitHub app tokens.

The first way is to set an environment variable, or to edit the `.env` file. Set `SHARE_TOKEN` to a string like `share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2`. The format is `share-token:real-token,share-token:real-token`; you can add as many pairs as you want.

The other way is to use a command line argument: run `./cogpt-api -share-token share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2` to start the service. Again, you can add as many pairs as you want.

With this set up, when you make a request with a token that starts with `share-`, the service will use the real token mapped to that share token. If you make a request with a token that starts with `ghu_`, the service will use the token directly. **Note that** share tokens must start with `share-`; mappings whose share token does not start with `share-` are ignored.

To generate a random share token, download the latest release for your platform, unzip it, and run `./gen-share-token`.
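To make the mapping rules concrete, here is a sketch in Python (an illustration, not the project's own code) of how a `SHARE_TOKEN` string can be interpreted, including the rule that entries whose key does not start with `share-` are ignored:

```python
def parse_share_tokens(spec: str) -> dict:
    """Parse 'share-a:ghu_a,share-b:ghu_b' into {share_token: real_token}."""
    mapping = {}
    for pair in filter(None, spec.split(",")):
        share, _, real = pair.partition(":")
        # Entries not starting with 'share-' are ignored, per the docs above.
        if share.startswith("share-") and real:
            mapping[share] = real
    return mapping

def resolve(token: str, mapping: dict) -> str:
    """Tokens starting with 'share-' are swapped for the mapped real token;
    anything else (e.g. a ghu_ token) is used directly."""
    if token.startswith("share-"):
        # How the service treats an *unknown* share token is not documented;
        # passing it through unchanged here is an assumption.
        return mapping.get(token, token)
    return token
```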
### Config

Edit `.env`, or set **environment variables** or **command line arguments**. Here are the config options and their default values (**.env or environment variables**):

keys | default | description
--- | --- | ---
`HOST` | `localhost` | Host to listen on
`PORT` | `8080` | Port to listen on
`CACHE` | `true` | Whether to cache tokens in a sqlite database. If false, tokens are cached in memory
`CACHE_PATH` | `db/cache.sqlite3` | Path to the sqlite database. Only used if `CACHE` is `true`
`DEBUG` | `false` | Whether to enable debug mode. If true, the service prints debug info
`LOG_LEVEL` | `info` | Log level
`SHARE_TOKEN` | `""` | Map of share tokens to real tokens. For example, `SHARE_TOKEN=share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2`

For **command line arguments**, run `./cogpt-api -h` to see the help message.

Precedence: **command line arguments** > **environment variables** > **.env**.

### Proxy

**Environment variables for proxy** are also supported: `ALL_PROXY`, `HTTPS_PROXY`, and `HTTP_PROXY`. You can also set the proxy with **command line arguments**; run `./cogpt-api -h` to see the help message.

Precedence: **command line arguments** > **environment variables** (`ALL_PROXY` > `HTTPS_PROXY` > `HTTP_PROXY`).

## Credits

- [copilot-gpt4-service](https://github.com/aaamoon/copilot-gpt4-service)
- [CopilotChat.nvim](https://github.com/jellydn/CopilotChat.nvim)

## License

[MPL-2.0](LICENSE)

## Star History

(Star History chart)