# FaceShot
**Repository Path**: mirrors_open-mmlab/FaceShot
## Basic Information
- **Project Name**: FaceShot
- **Description**: Official repo for FaceShot: Bring Any Character into Life
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-03-01
- **Last Updated**: 2025-12-14
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# [ICLR 2025] FaceShot: Bring Any Character into Life
[**FaceShot: Bring Any Character into Life**](https://arxiv.org/abs/2503.00740)
[Junyao Gao](https://jeoyal.github.io/home/), [Yanan Sun](https://scholar.google.com/citations?hl=zh-CN&user=6TA1oPkAAAAJ)‡ *, [Fei Shen](https://muzishen.github.io/), [Xin Jiang](https://whitejiang.github.io/), [Zhening Xing](https://scholar.google.com/citations?user=sVYO0GYAAAAJ&hl=en), [Kai Chen*](https://chenkai.site/), [Cairong Zhao*](https://vill-lab.github.io/)
(* corresponding authors, ‡ project leader)
Bringing characters like a Teddy Bear into life requires a bit of *magic*. **FaceShot** makes this *magic* a reality: it is a training-free portrait animation framework that can animate any character from any driving video, and it is especially suited to non-human characters such as emojis and toys.
**Your star is our fuel! We're revving up the engines with it!**
## News
- [2025/6/26] 🔥 We release the preprocessing scripts for pre-storing target images and the appearance gallery.
- [2025/1/23] 🔥 FaceShot has been accepted to ICLR 2025!
- [2025/1/23] 🔥 We release the code, the [project page](https://faceshot2024.github.io/faceshot/), and the [paper](https://www.arxiv.org/abs/2503.00740).
## TODO List
- [x] (2025.06.26) Preprocessing script for pre-storing target images and the appearance gallery.
- [x] (2025.06.26) Appearance gallery.
- [ ] Gradio demo.
## Gallery
Bring Any Character into Life!!!
*Gallery panels: Toy Character · 2D Anime Character · 3D Anime Character · Animal Character*

Check the gallery on our [project page](https://faceshot2024.github.io/faceshot/) for more visual results!
## Get Started
### Clone the Repository
```
git clone https://github.com/open-mmlab/FaceShot.git
cd ./FaceShot
```
### Environment Setup
The setup below has been tested with CUDA 12.4.
```
conda create -n faceshot python=3.10
conda activate faceshot
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install "git+https://github.com/XPixelGroup/BasicSR.git"
```
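As a quick sanity check (our suggestion, not part of the official instructions), you can verify that PyTorch sees your GPU and that `pytorch3d` imports cleanly:
```
python -c "import torch, pytorch3d; print(torch.__version__, torch.cuda.is_available())"
```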
### Downloading Checkpoints
1. Download the CMP checkpoint from [MOFA-Video](https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid/resolve/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
2. Download the `ckpts` [folder](https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid/tree/main/ckpts) from the Hugging Face repo, which contains the necessary pretrained checkpoints, and put it under `./ckpts`. You may use `git lfs` to download the **entire** `ckpts` folder, as sketched below.
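One possible command sequence for both steps, a minimal sketch assuming `git lfs` is installed (the paths follow the links above; note that cloning fetches the whole repo, which is large):
```
git lfs install
git clone https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid
mkdir -p ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints
cp MOFA-Video-Hybrid/models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar \
   ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/
cp -r MOFA-Video-Hybrid/ckpts ./ckpts
```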
### Building Appearance Gallery
You can download pre-stored domain features from [here](https://huggingface.co/Gaojunyao/FaceShot/tree/main), or create your own appearance gallery by following these steps:
1. Place character images for a specific domain into `./characters/images/xx/`, where `xx` is the domain index.
2. Run `python annotation.py` to annotate landmarks for the characters. Note that non-human characters require manual annotation. The landmarks will be saved in `./characters/points/xx/`.
3. Run `python process_features.py` to extract CLIP and diffusion features for each domain. The features will be saved in `./target_domains/`. A combined walk-through is sketched after this list.
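Put together, a minimal walk-through of these steps might look like the following (the source image path and the domain index `00` are placeholders, not prescribed names):
```
mkdir -p characters/images/00      # `00` is an example domain index
cp /path/to/my_characters/*.png characters/images/00/
python annotation.py               # landmarks saved to characters/points/00/
python process_features.py         # features saved to target_domains/
```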
### Running Inference Scripts
```
chmod +x inference.sh
./inference.sh
```
## License and Citation
All assets and code are released under the MIT [license](./LICENSE) unless specified otherwise.
If this work is helpful for your research, please consider citing the following BibTeX entry.
```
@article{gao2025faceshot,
  title={FaceShot: Bring Any Character into Life},
  author={Gao, Junyao and Sun, Yanan and Shen, Fei and Jiang, Xin and Xing, Zhening and Chen, Kai and Zhao, Cairong},
  journal={arXiv preprint arXiv:2503.00740},
  year={2025}
}
```
## Acknowledgements
The code is built upon [MOFA-Video](https://github.com/MyNiuuu/MOFA-Video) and [DIFT](https://github.com/Tsingularity/dift).