Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.
Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese.
WebUI Tools: Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
Check out our demo video here!
Few-shot fine-tuning demo for unseen speakers:
https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb
Users in China can click here to use AutoDL Cloud Docker to experience the full functionality online.
| Python Version | PyTorch Version | Device |
|---|---|---|
| Python 3.9 | PyTorch 2.0.1 | CUDA 11.8 |
| Python 3.10.13 | PyTorch 2.1.2 | CUDA 12.3 |
| Python 3.10.17 | PyTorch 2.5.1 | CUDA 12.4 |
| Python 3.9 | PyTorch 2.5.1 | Apple silicon |
| Python 3.11 | PyTorch 2.6.0 | Apple silicon |
| Python 3.9 | PyTorch 2.2.2 | CPU |
| Python 3.9 | PyTorch 2.8.0dev | CUDA 12.8 (for Nvidia 50x0) |
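To confirm that a local environment matches one of the tested combinations above, a minimal check like the following sketch (assuming PyTorch is already installed) prints the relevant versions:

```python
# Minimal environment check: print the Python, PyTorch, and CUDA versions
# so they can be compared against the tested combinations above.
import sys

import torch

print(f"Python  : {sys.version.split()[0]}")
print(f"PyTorch : {torch.__version__}")
if torch.cuda.is_available():
    print(f"CUDA    : {torch.version.cuda} ({torch.cuda.get_device_name(0)})")
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    print("Device  : Apple silicon (MPS)")
else:
    print("Device  : CPU")
```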
If you are a Windows user (tested on Windows 10 and later), you can download the integrated package and double-click go-webui.bat to start GPT-SoVITS-WebUI.
Users in China can download the package here.
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
Note: Models trained with GPUs on Macs result in significantly lower quality than those trained on other devices, so the CPU is used for training for now.
xcode-select --install
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
conda install ffmpeg
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.
Install Visual Studio 2017 (Korean TTS Only)
brew install ffmpeg
pip install -r extra-req.txt --no-deps
pip install -r requirements.txt
docker compose -f "docker-compose.yaml" up -d
As above, modify the corresponding parameters based on your actual situation, then run the following command:
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
If install.sh runs successfully, you may skip steps 1, 2, and 3 below.
Users in China can download all these models here.
Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models.
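As an alternative to downloading the pretrained models by hand, a sketch like the one below pulls them with huggingface_hub. The repository id lj1995/GPT-SoVITS is an assumption (it is not stated in this README), so verify it before use:

```python
# Hedged sketch: fetch the GPT-SoVITS pretrained models with huggingface_hub.
# The repo id "lj1995/GPT-SoVITS" is an assumption, not taken from this README.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="lj1995/GPT-SoVITS",               # assumed repository id, verify before use
    local_dir="GPT_SoVITS/pretrained_models",  # target folder named in this README
)
```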
Download G2PW models from G2PWModel.zip(HF) | G2PWModel.zip(ModelScope), unzip and rename the folder to G2PWModel, then place it in GPT_SoVITS/text. (Chinese TTS only)
For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, optional), download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.
If you want to use bs_roformer or mel_band_roformer models for UVR5, manually download the model and its corresponding configuration file and put them in tools/uvr5/uvr5_weights. Rename the files so that the model and configuration share the same name apart from the file extension. In addition, both file names must include roformer to be recognized as roformer-class models.
It is recommended to specify the model type directly in the model and configuration file names, e.g. mel_band_roformer or bs_roformer. If the type is not specified, it is inferred from the features in the configuration file. For example, the model bs_roformer_ep_368_sdr_12.9628.ckpt and its configuration file bs_roformer_ep_368_sdr_12.9628.yaml form a pair, as do kim_mel_band_roformer.ckpt and kim_mel_band_roformer.yaml.
For Chinese ASR (optional), download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model and place them in tools/asr/models.
For English or Japanese ASR (optional), download models from Faster Whisper Large V3 and place them in tools/asr/models. Other models may achieve a similar effect with a smaller disk footprint.
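Once the model files are in place, a small sketch like the one below (using the folder paths named in the steps above) can confirm that each expected directory exists and is not empty:

```python
# Sketch: verify that the model folders named above exist and contain files.
from pathlib import Path

folders = [
    "GPT_SoVITS/pretrained_models",  # GPT-SoVITS pretrained models
    "GPT_SoVITS/text/G2PWModel",     # G2PW models (Chinese TTS only)
    "tools/uvr5/uvr5_weights",       # UVR5 weights (optional)
    "tools/asr/models",              # ASR models (optional)
]

for folder in folders:
    path = Path(folder)
    status = "ok" if path.is_dir() and any(path.iterdir()) else "missing or empty"
    print(f"{folder}: {status}")
```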
The TTS annotation .list file format:
vocal_path|speaker_name|language|text
Language dictionary:
'zh': Chinese
'ja': Japanese
'en': English
'ko': Korean
'yue': Cantonese
Example:
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
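Before training, an annotation file can be sanity-checked with a minimal sketch like the one below, which parses the vocal_path|speaker_name|language|text format described above (the file name train.list is just an example):

```python
# Sketch: validate a TTS annotation .list file in the
# vocal_path|speaker_name|language|text format described above.
from pathlib import Path

# Expected language codes; adjust to match the language dictionary above.
VALID_LANGUAGES = {"zh", "ja", "en", "ko", "yue"}

def check_list_file(list_path: str) -> None:
    for line_no, line in enumerate(Path(list_path).read_text(encoding="utf-8").splitlines(), 1):
        fields = line.split("|", 3)
        if len(fields) != 4:
            print(f"line {line_no}: expected 4 fields, got {len(fields)}")
            continue
        vocal_path, speaker_name, language, text = fields
        if language not in VALID_LANGUAGES:
            print(f"line {line_no}: unknown language code '{language}'")
        if not text.strip():
            print(f"line {line_no}: empty text for {vocal_path} ({speaker_name})")

check_list_file("train.list")  # hypothetical file name
```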
Double-click go-webui.bat or use go-webui.ps1.
If you want to switch to V1, then double-click go-webui-v1.bat or use go-webui-v1.ps1.
python webui.py <language(optional)>
If you want to switch to V1, then run:
python webui.py v1 <language(optional)>
Or manually switch the version in the WebUI.
1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofread the ASR transcriptions
6. Go to the next tab, then fine-tune the model
Double-click go-webui-v2.bat or use go-webui-v2.ps1, then open the inference webui at 1-GPT-SoVITS-TTS/1C-inference.
python GPT_SoVITS/inference_webui.py <language(optional)>
OR
python webui.py
then open the inference webui at 1-GPT-SoVITS-TTS/1C-inference
New Features:
Support Korean and Cantonese
An optimized text frontend
Pre-trained model extended from 2k hours to 5k hours
Improved synthesis quality for low-quality reference audio
Use v2 from v1 environment:
Run pip install -r requirements.txt to update some packages.
Clone the latest code from GitHub.
Download v2 pretrained models from huggingface and put them into GPT_SoVITS\pretrained_models\gsv-v2final-pretrained.
Chinese v2 additional: download G2PW models from G2PWModel.zip(HF) | G2PWModel.zip(ModelScope), unzip and rename to G2PWModel, then place it in GPT_SoVITS/text.
New Features:
The timbre similarity is higher, requiring less training data to approximate the target speaker (the timbre similarity is significantly improved using the base model directly without fine-tuning).
GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression.
Use v3 from v2 environment:
Run pip install -r requirements.txt to update some packages.
Clone the latest code from GitHub.
Download v3 pretrained models (s1v3.ckpt, s2Gv3.pth and the models--nvidia--bigvgan_v2_24khz_100band_256x folder) from huggingface and put them into GPT_SoVITS\pretrained_models.
Additional: for the Audio Super Resolution model, you can read how to download it.
New Features:
Use v4 from v1/v2/v3 environment:
Run pip install -r requirements.txt to update some packages.
Clone the latest code from GitHub.
Download v4 pretrained models (gsv-v4-pretrained/s2v4.ckpt and gsv-v4-pretrained/vocoder.pth) from huggingface and put them into GPT_SoVITS\pretrained_models.
High Priority:
Features:
Use the command line to open the WebUI for UVR5
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
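As a concrete example, the invocation could be driven from Python as in the sketch below; the device string, half-precision flag, and port are illustrative values, not documented defaults:

```python
# Sketch: launch the UVR5 WebUI with example arguments.
# "cuda", "True", and "9873" are illustrative values, not documented defaults.
import subprocess

subprocess.run(
    ["python", "tools/uvr5/webui.py", "cuda", "True", "9873"],
    check=True,
)
```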
This is how audio segmentation of the dataset is done using the command line:
python audio_slicer.py \
--input_path "<path_to_original_audio_file_or_directory>" \
--output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
--threshold <volume_threshold> \
--min_length <minimum_duration_of_each_subclip> \
--min_interval <shortest_time_gap_between_adjacent_subclips> \
--hop_size <step_size_for_computing_volume_curve>
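If you prefer to drive the slicer from Python rather than typing the command by hand, a thin wrapper around the same flags could look like the sketch below; the paths and numeric values are illustrative, not recommended defaults:

```python
# Sketch: call the dataset audio slicer with the flags documented above.
# Paths and numeric values are illustrative examples, not recommended defaults.
import subprocess

subprocess.run(
    [
        "python", "audio_slicer.py",
        "--input_path", "raw_audio/",      # original audio file or directory
        "--output_root", "sliced_audio/",  # where the sub-clips are written
        "--threshold", "-40",              # volume threshold
        "--min_length", "5000",            # minimum duration of each sub-clip
        "--min_interval", "300",           # shortest gap between adjacent sub-clips
        "--hop_size", "10",                # step size for the volume curve
    ],
    check=True,
)
```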
This is how dataset ASR processing is done using the command line (Chinese only):
python tools/asr/funasr_asr.py -i <input> -o <output>
ASR processing for languages other than Chinese is performed through Faster Whisper (no progress bars; GPU performance may cause delays):
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
A custom .list save path is enabled.
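A small dispatcher that picks the right ASR script per language, following the two commands above, might look like this sketch (the paths and the default precision are illustrative assumptions):

```python
# Sketch: run the appropriate ASR tool from the commands above,
# using funasr for Chinese and Faster Whisper for other languages.
import subprocess

def run_asr(input_dir: str, output_dir: str, language: str, precision: str = "float16") -> None:
    # The precision value "float16" is an assumed example, not a documented default.
    if language == "zh":
        cmd = ["python", "tools/asr/funasr_asr.py", "-i", input_dir, "-o", output_dir]
    else:
        cmd = [
            "python", "tools/asr/fasterwhisper_asr.py",
            "-i", input_dir, "-o", output_dir,
            "-l", language, "-p", precision,
        ]
    subprocess.run(cmd, check=True)

run_asr("sliced_audio/", "asr_output/", "en")  # illustrative paths
```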
Special thanks to the following projects and contributors:
Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.