MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting Yue Zhang *, Minhao Liu*, Zhaokang Chen, Bin Wu†, Yingjie He, Chao Zhan, Wenjiang Zhou (*Equal Contribution, †Corresponding Author, benbinwu@tencent.com)
github huggingface gradio Project (coming soon) Technical report (coming soon)
We introduce MuseTalk, a real-time, high-quality lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied to input videos, e.g., those generated by MuseV, as a complete virtual human solution.
MuseTalk is a real-time, high-quality, audio-driven lip-syncing model trained in the latent space of ft-mse-vae, which modifies an unseen face according to the input audio within a face region of size 256 x 256.
MuseTalk was trained in the latent space, where the images were encoded by a frozen VAE and the audio was encoded by a frozen whisper-tiny model. The architecture of the generation network is borrowed from the UNet of stable-diffusion-v1-4, in which the audio embeddings are fused with the image embeddings via cross-attention.
Note that although we use an architecture very similar to Stable Diffusion, MuseTalk is distinct in that it is not a diffusion model. Instead, MuseTalk operates by inpainting in the latent space in a single step.
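To make the single-step idea concrete, here is a minimal sketch, not the official implementation: it assumes a diffusers-style frozen VAE and UNet, a hypothetical local UNet checkpoint path, and a simple lower-half mouth mask; the real network may condition and mask differently.

```python
# Illustrative sketch of single-step latent inpainting (not the official code).
# Assumptions: diffusers-style VAE/UNet, hypothetical checkpoint path, simple mask.
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
unet = UNet2DConditionModel.from_pretrained("./models/musetalk")  # hypothetical path

@torch.no_grad()
def lipsync_frame(face: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
    """face: (1, 3, 256, 256) in [-1, 1]; audio_emb: (1, seq_len, dim) whisper features."""
    masked = face.clone()
    masked[:, :, 128:, :] = 0                      # mask the lower half (mouth region)
    latents = vae.encode(masked).latent_dist.mode() * vae.config.scaling_factor
    # A single UNet forward pass predicts the inpainted latents -- no iterative denoising.
    out = unet(latents, timestep=0, encoder_hidden_states=audio_emb).sample
    return vae.decode(out / vae.config.scaling_factor).sample
```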
[Demo tables omitted: (1) input image, MuseV output, and MuseV + MuseTalk output; (2) MuseTalk outputs with links to the original videos; (3) input image and MuseV + MuseTalk output.]
The demo character, Xinying Sun, is a supermodel KOL; you can follow her on douyin.
We provide a detailed tutorial about the installation and the basic usage of MuseTalk for new users:
Thanks to the community for the third-party integrations, which make installation and use more convenient for everyone. Note, however, that we have not verified, maintained, or updated these third-party integrations; please refer to the corresponding projects for specific results.
To prepare the Python environment and install additional packages such as opencv, diffusers, mmcv, etc., please follow the steps below:
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:
# Install the base Python dependencies
pip install -r requirements.txt
# Install the OpenMMLab packages (mmengine, mmcv, mmdet, mmpose) via openmim
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Download the ffmpeg-static build and set the FFMPEG_PATH environment variable:
export FFMPEG_PATH=/path/to/ffmpeg
For example:
export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
You can download the weights manually as follows:
Download our trained weights.
Download the weights of other components:
Finally, these weights should be organized in the models directory as follows:
./models/
├── musetalk
│ └── musetalk.json
│ └── pytorch_model.bin
├── dwpose
│ └── dw-ll_ucoco_384.pth
├── face-parse-bisent
│ ├── 79999_iter.pth
│ └── resnet18-5c106cde.pth
├── sd-vae-ft-mse
│ ├── config.json
│ └── diffusion_pytorch_model.bin
└── whisper
└── tiny.pt
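Two of the public components can also be fetched programmatically. Here is a minimal sketch, assuming the huggingface_hub and openai-whisper packages are installed; MuseTalk's own weights and the remaining components still need to be downloaded manually:

```python
# Illustrative download sketch for the public components only
# (assumption: huggingface_hub and openai-whisper are installed).
from huggingface_hub import snapshot_download
import whisper

# The VAE is the public sd-vae-ft-mse checkpoint on the Hugging Face Hub.
snapshot_download(repo_id="stabilityai/sd-vae-ft-mse", local_dir="./models/sd-vae-ft-mse")

# openai-whisper downloads tiny.pt into download_root.
whisper.load_model("tiny", download_root="./models/whisper")
```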
Here, we provide the inference script.
python -m scripts.inference --inference_config configs/inference/test.yaml
configs/inference/test.yaml is the path to the inference configuration file, which includes video_path and audio_path. The video_path should be either a video file or a directory of images.
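For reference, a minimal configuration of this form might look like the following; the task key and file paths are illustrative placeholders, not shipped assets:

```yaml
# Illustrative example of configs/inference/test.yaml; paths are placeholders.
task_0:
  video_path: "data/video/your_video.mp4"   # a video file or a directory of images
  audio_path: "data/audio/your_audio.wav"
```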
We recommend inputting video at 25fps, the same fps used when training the model. If your video's frame rate is far below 25fps, we recommend applying frame interpolation or directly converting the video to 25fps using ffmpeg.
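For instance, a standard ffmpeg command to resample a video to 25fps (file names are placeholders):

```bash
ffmpeg -i input_video.mp4 -r 25 output_25fps.mp4
```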
We have found that the upper bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the bbox_shift parameter. Positive values (moving towards the lower half) increase mouth openness, while negative values (moving towards the upper half) decrease it.
You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
For example, in the case of Xinying Sun, running the default configuration shows that the adjustable value range is [-9, 9]. Then, to decrease the mouth openness, we set the value to -7.
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7
More technical details can be found in bbox_shift.
As a complete solution to virtual human generation, we suggest first applying MuseV to generate a video (text-to-video, image-to-video, or pose-to-video) by referring to this. Frame interpolation is suggested to increase the frame rate. Then, you can use MuseTalk to generate a lip-synced video by referring to this.
If you want to launch online video chats, we suggest generating videos with MuseV and applying the necessary pre-processing, such as face detection and face parsing, in advance. During online chatting, only the UNet and the VAE decoder are involved, which makes MuseTalk real-time.
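As a rough sketch of that offline/online split (the helper names below, such as face_detector.crop and whisper_encode, are assumptions, not the repo's API), the per-frame online path reduces to an audio encode, one UNet pass, and one VAE decode:

```python
# Illustrative real-time split; helper names are hypothetical.
import torch

@torch.no_grad()
def offline_preprocess(frames, face_detector, vae):
    """Run face detection/parsing once, before the chat, and cache latents."""
    cached = []
    for frame in frames:
        face = face_detector.crop(frame)        # (1, 3, 256, 256), hypothetical helper
        face[:, :, 128:, :] = 0                 # mask the mouth region
        cached.append(vae.encode(face).latent_dist.mode())
    return cached

@torch.no_grad()
def online_step(cached_latent, audio_chunk, whisper_encode, unet, vae):
    """Per-frame online path: audio encode -> one UNet pass -> one VAE decode."""
    audio_emb = whisper_encode(audio_chunk)     # hypothetical whisper-tiny wrapper
    out = unet(cached_latent, timestep=0, encoder_hidden_states=audio_emb).sample
    return vae.decode(out).sample               # the lip-synced frame
```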
Thanks for open-sourcing!
Resolution: Though MuseTalk uses a face region size of 256 x 256, which makes it better than other open-source methods, it has not yet reached the theoretical resolution bound. We will continue to address this problem.
If you need higher resolution, you could apply super-resolution models such as GFPGAN in combination with MuseTalk (see the sketch after this list).
Identity preservation: Some details of the original face are not well preserved, such as the mustache, lip shape, and color.
Jitter: There is some jitter, as the current pipeline adopts single-frame generation.
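As a hedged illustration of the GFPGAN suggestion above, assuming the GFPGANer interface from the official GFPGAN repository (the weights path, upscale factor, and file names are placeholders):

```python
# Illustrative post-processing sketch (assumption: the GFPGANer API from the
# official GFPGAN repo); run it on each frame MuseTalk outputs.
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=2)  # placeholder weights path

frame = cv2.imread("musetalk_output_frame.png")              # a MuseTalk output frame
_, _, restored = restorer.enhance(frame, paste_back=True)
cv2.imwrite("restored_frame.png", restored)
```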
@article{musetalk,
title={MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting},
author={Zhang, Yue and Liu, Minhao and Chen, Zhaokang and Wu, Bin and He, Yingjie and Zhan, Chao and Zhou, Wenjiang},
journal={arXiv},
year={2024}
}
code: The code of MuseTalk is released under the MIT License. There is no limitation for either academic or commercial usage.
model: The trained model is available for any purpose, even commercially.
other open-source models: Other open-source models used must comply with their own licenses, such as whisper, ft-mse-vae, dwpose, S3FD, etc.
AIGC: This project strives to positively impact the domain of AI-driven video generation. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and use it responsibly. The developers do not assume any responsibility for potential misuse by users.