GPT-SoVITS-WebUI

A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.


English | 中文简体 | 日本語 | 한국어 | Türkçe


Features:

  1. Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.

  2. Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.

  3. Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese.

  4. WebUI Tools: Integrated tools include vocal/accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

Check out our demo video here!

Few-shot fine-tuning demo for unseen speakers:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

User guide: 简体中文 | English

Installation

Users in China can click here to use AutoDL Cloud Docker and experience the full functionality online.

Tested Environments

| Python Version | PyTorch Version | Device |
|----------------|------------------|--------|
| Python 3.9     | PyTorch 2.0.1    | CUDA 11.8 |
| Python 3.10.13 | PyTorch 2.1.2    | CUDA 12.3 |
| Python 3.10.17 | PyTorch 2.5.1    | CUDA 12.4 |
| Python 3.9     | PyTorch 2.5.1    | Apple silicon |
| Python 3.11    | PyTorch 2.6.0    | Apple silicon |
| Python 3.9     | PyTorch 2.2.2    | CPU |
| Python 3.9     | PyTorch 2.8.0dev | CUDA 12.8 (for NVIDIA 50-series GPUs) |
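
A minimal sketch (assuming PyTorch is already installed) to check which row of the table above your local setup matches:

import platform
import torch

# Python and PyTorch versions, to compare against the tested environments above.
print(f"Python  : {platform.python_version()}")
print(f"PyTorch : {torch.__version__}")

# Report which device backend is actually available on this machine.
if torch.cuda.is_available():
    print(f"CUDA    : {torch.version.cuda} ({torch.cuda.get_device_name(0)})")
elif torch.backends.mps.is_available():
    print("Device  : Apple silicon (MPS)")
else:
    print("Device  : CPU only")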

Windows

If you are a Windows user (tested on Windows 10 and above), you can download the integrated package and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Users in China can download the package here.

Linux

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]

macOS

Note: Models trained on Mac GPUs are of significantly lower quality than those trained on other devices, so CPU training is used on macOS for now.

  1. Install Xcode command-line tools by running xcode-select --install.
  2. Install the program by running the following commands:
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]

Install Manually

Install FFmpeg

Conda Users
conda install ffmpeg
Ubuntu/Debian Users
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
Windows Users

Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.

Install Visual Studio 2017 (Korean TTS Only)

macOS Users
brew install ffmpeg

Install Dependencies

pip install -r extra-req.txt --no-deps
pip install -r requirements.txt

Using Docker

docker-compose.yaml configuration

  1. Regarding image tags: due to rapid codebase updates and the slow process of packaging and testing images, please check Docker Hub (outdated) for the latest packaged images and pick one that suits your situation, or build locally from the Dockerfile to meet your own needs.
  2. Environment Variables:
    • is_half: Controls half precision (fp16) versus full precision (fp32). This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Set it to True or False based on your actual situation.
  3. Volumes configuration: the application's root directory inside the container is /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
  4. shm_size: the default shared memory available to Docker Desktop on Windows is too small, which can cause abnormal operation. Adjust it according to your situation.
  5. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.

Running with docker compose

docker compose -f "docker-compose.yaml" up -d

Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx

Pretrained Models

If install.sh runs successfully, you may skip steps 1, 2, and 3.

Users in China can download all these models here.

  1. Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models.

  2. Download G2PW models from G2PWModel.zip (HF) | G2PWModel.zip (ModelScope), unzip and rename to G2PWModel, and then place it in GPT_SoVITS/text. (Chinese TTS only)

  3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.

    • If you want to use bs_roformer or mel_band_roformer models for UVR5, manually download the model and its corresponding configuration file and put them in tools/uvr5/uvr5_weights. Rename them so that the model file and the configuration file share the same name apart from the extension; in addition, both names must contain roformer for the model to be recognized as a roformer-class model (see the sketch after this list).

    • It is suggested to specify the model type directly in the model and configuration file names, such as mel_band_roformer or bs_roformer. If not specified, the type is inferred by comparing features in the configuration file. For example, the model bs_roformer_ep_368_sdr_12.9628.ckpt and its configuration file bs_roformer_ep_368_sdr_12.9628.yaml are a pair, and kim_mel_band_roformer.ckpt and kim_mel_band_roformer.yaml are also a pair.

  4. For Chinese ASR (additionally), download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model and place them in tools/asr/models.

  5. For English or Japanese ASR (additionally), download models from Faster Whisper Large V3 and place them in tools/asr/models. Other models may achieve a similar effect with a smaller disk footprint.
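
A minimal sketch (not part of the official tooling; the path and checks simply follow the naming convention described in step 3) to verify that each roformer checkpoint in tools/uvr5/uvr5_weights has a matching configuration file:

from pathlib import Path

# Assumed location of the UVR5 weights, as described above.
weights_dir = Path("tools/uvr5/uvr5_weights")

for ckpt in weights_dir.glob("*.ckpt"):
    # Only roformer-class models need a paired .yaml configuration.
    if "roformer" not in ckpt.stem.lower():
        continue
    config = ckpt.with_suffix(".yaml")
    status = "OK" if config.exists() else "MISSING config"
    print(f"{ckpt.name} -> {config.name}: {status}")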

Dataset Format

The TTS annotation .list file format:

vocal_path|speaker_name|language|text

Language dictionary:

  • 'zh': Chinese
  • 'ja': Japanese
  • 'en': English
  • 'ko': Korean
  • 'yue': Cantonese

Example:

D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
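
A minimal sketch (the file name train.list is hypothetical) showing how each annotation line maps to the four fields above:

# Read a TTS annotation .list file and split each line into
# vocal_path|speaker_name|language|text; maxsplit keeps any '|' inside the text.
with open("train.list", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        vocal_path, speaker_name, language, text = line.split("|", 3)
        print(vocal_path, speaker_name, language, text)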

Finetune and inference

Open WebUI

Integrated Package Users

Double-click go-webui.bat or use go-webui.ps1. If you want to switch to V1, double-click go-webui-v1.bat or use go-webui-v1.ps1.

Others

python webui.py <language(optional)>

If you want to switch to V1, then run:

python webui.py v1 <language(optional)>

Or manually switch versions in the WebUI.

Finetune

Path Auto-filling is now supported

1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofread the ASR transcriptions
6. Go to the next tab, then fine-tune the model

Open Inference WebUI

Integrated Package Users

Double-click go-webui-v2.bat or use go-webui-v2.ps1, then open the inference WebUI at 1-GPT-SoVITS-TTS/1C-inference.

Others

python GPT_SoVITS/inference_webui.py <language(optional)>

OR

python webui.py

Then open the inference WebUI at 1-GPT-SoVITS-TTS/1C-inference.

V2 Release Notes

New Features:

  1. Support for Korean and Cantonese

  2. An optimized text frontend

  3. Pre-trained model's training data extended from 2k hours to 5k hours

  4. Improved synthesis quality for low-quality reference audio

    more details

Use v2 from v1 environment:

  1. pip install -r requirements.txt to update some packages

  2. Clone the latest code from GitHub.

  3. Download v2 pretrained models from huggingface and put them into GPT_SoVITS\pretrained_models\gsv-v2final-pretrained.

    Chinese v2 additional: G2PWModel.zip (HF) | G2PWModel.zip (ModelScope) (download the G2PW models, unzip and rename to G2PWModel, then place it in GPT_SoVITS/text).

V3 Release Notes

New Features:

  1. The timbre similarity is higher, requiring less training data to approximate the target speaker (the timbre similarity is significantly improved using the base model directly without fine-tuning).

  2. GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression.

    more details

Use v3 from v2 environment:

  1. pip install -r requirements.txt to update some packages

  2. Clone the latest code from GitHub.

  3. Download v3 pretrained models (s1v3.ckpt, s2Gv3.pth and models--nvidia--bigvgan_v2_24khz_100band_256x folder) from huggingface and put them into GPT_SoVITS\pretrained_models.

    Additional: for the Audio Super Resolution model, you can read how to download it.

V4 Release Notes

New Features:

  1. Version 4 fixes the issue of metallic artifacts in Version 3 caused by non-integer multiple upsampling, and natively outputs 48k audio to prevent muffled sound (whereas Version 3 only natively outputs 24k audio). The author considers Version 4 a direct replacement for Version 3, though further testing is still needed. more details

Use v4 from v1/v2/v3 environment:

  1. pip install -r requirements.txt to update some packages

  2. Clone the latest code from GitHub.

  3. Download v4 pretrained models (gsv-v4-pretrained/s2v4.ckpt, and gsv-v4-pretrained/vocoder.pth) from huggingface and put them into GPT_SoVITS\pretrained_models.

Todo List

  • High Priority:

    • Localization in Japanese and English.
    • User guide.
    • Fine-tuning on Japanese and English datasets.
  • Features:

    • Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
    • TTS speaking speed control.
    • Enhanced TTS emotion control. Maybe use pretrained finetuned preset GPT models for better emotion.
    • Experiment with changing SoVITS token inputs to probability distribution of GPT vocabs (transformer latent).
    • Improve English and Japanese text frontend.
    • Develop tiny and larger-sized TTS models.
    • Colab scripts.
    • Try expanding the training dataset (2k hours -> 10k hours).
    • Better SoVITS base model (enhanced audio quality).
    • Model mixing.

(Additional) Method for running from the command line

Use the command line to open the WebUI for UVR5

python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>

This is how audio segmentation of the dataset is done using the command line:

python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>

This is how dataset ASR processing is done using the command line (Chinese only):

python tools/asr/funasr_asr.py -i <input> -o <output>

ASR processing is performed through Faster Whisper (ASR annotation for languages other than Chinese):

(No progress bars; GPU performance may cause delays.)

python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>

A custom .list save path is supported.

Credits

Special thanks to the following projects and contributors:

Theoretical Research

Pretrained Models

Text Frontend for Inference

WebUI Tools

Thankful to @Naozumi520 for providing the Cantonese training set and for the guidance on Cantonese-related knowledge.

Thanks to all contributors for their efforts

MIT License

Copyright (c) 2024 RVC-Boss

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
