# TalkWithLLM
**Repository Path**: lurengu/talk-with-llm
## Basic Information
- **Project Name**: TalkWithLLM
- **Description**: Intelligent assistant with speech generation. Uses bigdl-llm to quantize the LLM, reducing model size and improving inference speed; speech is generated with GPT-SoVITS.
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: https://gitee.com/lurengu
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 1
- **Created**: 2024-03-14
- **Last Updated**: 2024-04-15
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
## Introduction
This project is based on [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS)
and [TTS-GPT-SoVITS](https://github.com/X-T-E-R/TTS-for-GPT-soVITS), and uses LangChain to implement an intelligent assistant with speech generation.
[bigdl-llm](https://github.com/intel-analytics/bigdl) is used to quantize the LLM, reducing model size and improving inference speed.
A vector database is built with Chroma from the following document collections: [1](https://github.com/HIT-Alibaba/interview.git), [2](https://github.com/Snailclimb/JavaGuide.git), [3](https://github.com/febobo/web-interview.git), [4](https://github.com/taizilongxu/interview_python.git)
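The ingestion side of such a vector database usually splits the markdown documents into overlapping chunks before embedding them into Chroma. The helper below is an illustrative sketch, not the project's actual code; the function name and chunk sizes are assumptions.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks for embedding.

    Sliding-window split; chunk_size and overlap values are illustrative.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and written to the Chroma store.
```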
## Preview
## Requirements
1. Python 3.9
2. Intel CPU or GPU
3. Redis
## Usage
### Windows
1. **Set up the Python environment**, for example:
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
```
2. **Download the required files**:
    1. Install Redis and set its port to 6333. If changing the port is inconvenient, edit `Chat\chat.py` and replace 6333 with your own port number.
    2. Download [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe)
       and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) and place them in the project root.
    3. Download the pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS)
       and place them in `GPT_SoVITS\pretrained_models`.
    4. Optionally, download the character voice models from the releases, extract them, and place them in the `trained`
       folder in the project root. These models were not trained by the author; they come from the TTS-GPT-SoVITS project's [shared models](https://www.yuque.com/xter/zibxlp/gsximn7ditzgispg).
    5. Download [Baichuan2-7B](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) and place it in the `Chat` directory.
    6. Download [paraformer-zh](https://www.modelscope.cn/models/iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files)
       and place it in the `Chat` directory.
    7. Optionally, download the vector database files from the releases, extract them, and place them in the `Chroma`
       folder under `Chat`. These files were not created by the author; they come from the TTS-GPT-SoVITS project's [shared files](https://www.yuque.com/xter/zibxlp/gsximn7ditzgispg).
3. **Launch**:
    1. Run `Chat/trans_to_4bit.py` to quantize Baichuan2-7B.
    2. Run `Chat/backend.py`. Launch it from a terminal in the project root; in PyCharm, open `Edit Configurations`
       and set the `Working directory` to the project root.
    3. Run `Chat/webui.py`, again from the project root.
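The quantization step performed by `Chat/trans_to_4bit.py` presumably resembles the following sketch using bigdl-llm's 4-bit loading. The paths and options here are assumptions for illustration; check the script itself for the actual code.

```python
# Sketch of 4-bit quantization with bigdl-llm; paths are illustrative.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "Chat/Baichuan2-7B-Chat"  # assumed location of the downloaded model

# Quantize the weights to int4 while loading.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
)
# Persist the quantized weights so later runs skip re-quantization.
model.save_low_bit("Chat/Baichuan2-7B-Chat-4bit")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
tokenizer.save_pretrained("Chat/Baichuan2-7B-Chat-4bit")
```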
The documentation below is from the [original project](https://github.com/RVC-Boss/GPT-SoVITS).
---
**GPT-SoVITS-WebUI**
A powerful few-shot voice conversion and text-to-speech WebUI.
[GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS) | [Open in Colab](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb) | [License](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) | [Models on Hugging Face](https://huggingface.co/lj1995/GPT-SoVITS/tree/main)
[**English**](./README.md) | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md)
---
## Features:
1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English,
Japanese, and Chinese.
4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation,
Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**
Unseen speakers few-shot fine-tuning demo:
https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb
[教程中文版](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) [User guide (EN)](https://rentry.co/GPT-SoVITS-guide#/)
## Installation
For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official)
to use AutoDL Cloud Docker to experience the full functionality online.
### Tested Environments
- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)
_Note: numba==0.56.4 requires py<3.11_
### Windows
If you are a Windows user (tested with win>=10), you can directly download
the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true)
and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
### Linux
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
```
### macOS
Only Macs that meet the following conditions can train models:
- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`
**All Macs can do inference with CPU, which has been demonstrated to outperform GPU inference.**
First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install by
using the following commands:
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```
_Note: Training models will only work if you've installed PyTorch Nightly._
### Install Manually
#### Install Dependences
```bash
pip install -r requirements.txt
```
#### Install FFmpeg
##### Conda Users
```bash
conda install ffmpeg
```
##### Ubuntu/Debian Users
```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```
##### Windows Users
Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe)
and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
### Using Docker
#### docker-compose.yaml configuration
0. Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images,
please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the currently packaged latest images
and select as per your situation, or alternatively, build locally using a Dockerfile according to your own needs.
1. Environment Variables:
- is_half: Controls half-precision/double-precision. This is typically the cause if the content under the directories
4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your
actual situation.
2. Volumes configuration: The application's root directory inside the container is set to `/workspace`. The default
`docker-compose.yaml` lists some practical examples for uploading/downloading content.
3. shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal
operations. Adjust according to your own situation.
4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual
circumstances.
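As an illustration (not the shipped file — check the repository's own `docker-compose.yaml`), the points above map to a fragment like:

```yaml
services:
  gpt-sovits:
    image: breakstring/gpt-sovits:latest   # pick a tag from Docker Hub
    environment:
      - is_half=False          # half/full precision toggle described above
    volumes:
      - ./output:/workspace/output
      - ./logs:/workspace/logs
    shm_size: "16G"            # raise if Docker Desktop's default is too small
    ports:
      - "9880:9880"
```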
#### Running with docker compose
```
docker compose -f "docker-compose.yaml" up -d
```
#### Running with docker command
As above, modify the corresponding parameters based on your actual situation, then run the following command:
```
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```
## Pretrained Models
Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them
in `GPT_SoVITS/pretrained_models`.
For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models
from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them
in `tools/uvr5/uvr5_weights`.
Users in China region can download these two models by entering the links below and clicking "Download a copy"
- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)
- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)
For Chinese ASR (additionally), download models
from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files),
and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and
place them in `tools/damo_asr/models`.
## Dataset Format
The TTS annotation .list file format:
```
vocal_path|speaker_name|language|text
```
Language dictionary:
- 'zh': Chinese
- 'ja': Japanese
- 'en': English
Example:
```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
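A minimal parser for this annotation format could look like the following sketch (the function name and validation are illustrative, not part of the project):

```python
VALID_LANGS = {"zh", "ja", "en"}  # language codes from the dictionary above

def parse_list_line(line: str) -> dict:
    """Parse one line of a TTS annotation .list file:
    vocal_path|speaker_name|language|text
    """
    # Split on the first three pipes so the text field may itself contain '|'.
    vocal_path, speaker, lang, text = line.rstrip("\n").split("|", 3)
    if lang not in VALID_LANGS:
        raise ValueError(f"unknown language code: {lang}")
    return {"vocal_path": vocal_path, "speaker": speaker,
            "language": lang, "text": text}
```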
## Todo List
- [ ] **High Priority:**
- [x] Localization in Japanese and English.
- [x] User guide.
- [x] Japanese and English dataset fine tune training.
- [ ] **Features:**
- [ ] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
- [ ] TTS speaking speed control.
- [ ] Enhanced TTS emotion control.
- [ ] Experiment with changing SoVITS token inputs to probability distribution of vocabs.
- [ ] Improve English and Japanese text frontend.
- [ ] Develop tiny and larger-sized TTS models.
- [x] Colab scripts.
    - [ ] Try expanding the training dataset (2k hours -> 10k hours).
    - [ ] Better SoVITS base model (enhanced audio quality).
    - [ ] Model mixing.
## (Optional) Command-Line Usage
Use the command line to open the WebUI for UVR5:
```
python tools/uvr5/webui.py "