# BFM_to_FLAME
**Repository Path**: ShunyuYao/BFM_to_FLAME
## Basic Information
- **Project Name**: BFM_to_FLAME
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-12-31
- **Last Updated**: 2021-12-31
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Convert from Basel Face Model (BFM) to FLAME
This repository demonstrates
1) how to create a [FLAME](http://flame.is.tue.mpg.de) texture model from the BFM vertex color space, and
2) how to convert a BFM mesh to a FLAME mesh.
### About FLAME
FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. Public FLAME-related repositories:
* [TF_FLAME: Tensorflow FLAME framework](https://github.com/TimoBolkart/TF_FLAME)
* [flame-fitting: Chumpy-based FLAME fitting](https://github.com/Rubikplayer/flame-fitting)
* [Photometric FLAME Fitting: FLAME image fitting using differentiable rendering](https://github.com/HavenFeng/photometric_optimization)
* [FLAME_PyTorch: FLAME PyTorch layer](https://github.com/soubhiksanyal/FLAME_PyTorch)
* [RingNet: FLAME meshes from single images](https://github.com/soubhiksanyal/RingNet)
* [DECA: Detailed animatable face reconstruction from single images](https://github.com/YadiraF/DECA)
* [VOCA: Voice Operated Character Animation](https://github.com/TimoBolkart/voca)
* [GIF: Generative Interpretable Faces](https://github.com/ParthaEth/GIF)
### Setup
Install pip and virtualenv
```
sudo apt-get install python3-pip python3-venv
```
Clone the git project:
```
git clone https://github.com/TimoBolkart/BFM_to_FLAME.git
```
Set up and activate virtual environment:
```
mkdir ~/.virtualenvs
python3 -m venv ~/.virtualenvs/BFM_to_FLAME
source ~/.virtualenvs/BFM_to_FLAME/bin/activate
```
Make sure your pip version is up-to-date:
```
pip install -U pip
```
Install the required packages:
```
pip install numpy==1.19.4
pip install h5py==3.1.0
pip install chumpy==0.70
pip install opencv-python==4.4.0.46
```
### Create texture model
Download BFM 2017 (i.e. 'model2017-1_bfm_nomouth.h5') from [here](https://faces.dmi.unibas.ch/bfm/bfm2017.html) and place it in the model folder.
Download the inpainting masks from [here](http://files.is.tue.mpg.de/tbolkart/FLAME/mask_inpainting.npz) and place the file in the data folder.
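Before running the script, it can help to sanity-check the downloaded model with h5py. A minimal sketch: the toy file below stands in for 'model2017-1_bfm_nomouth.h5', and the dataset paths (e.g. 'color/model/mean') are assumptions modeled on the BFM 2017 release layout, not guaranteed to match it exactly:

```python
import h5py
import numpy as np

# Build a tiny stand-in file so the snippet runs without the real model;
# the dataset paths mimic (but may not exactly match) the BFM 2017 layout.
with h5py.File('toy_bfm.h5', 'w') as f:
    f.create_dataset('shape/model/mean', data=np.zeros(9))
    f.create_dataset('color/model/mean', data=np.full(9, 0.5))

with h5py.File('toy_bfm.h5', 'r') as f:
    f.visit(print)                          # list every group/dataset path
    color_mean = f['color/model/mean'][:]   # flattened per-vertex mean color
```

Listing the paths this way quickly confirms you downloaded the full 'nomouth' model rather than a truncated file.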
Running
```
python col_to_tex.py
```
outputs a 'FLAME_albedo_from_BFM.npz' in the output folder. This file can be used with several FLAME-based repositories like [TF_FLAME](https://github.com/TimoBolkart/TF_FLAME) or [FLAME photometric optimization](https://github.com/HavenFeng/photometric_optimization).
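The generated file behaves like any linear texture model: a texture is the stored mean plus a weighted combination of principal components. A hedged sketch with dummy arrays standing in for the model (the actual array names inside 'FLAME_albedo_from_BFM.npz' may differ; inspect them via `np.load(...).files`):

```python
import numpy as np

# Dummy stand-ins for the albedo model (8x8 texture, 5 components);
# the real file stores a much larger UV texture mean and PCA basis.
h, w, n_comp = 8, 8, 5
mean = np.full(h * w * 3, 0.5)
rng = np.random.RandomState(0)
basis = rng.randn(h * w * 3, n_comp) * 0.05

# Sample a texture: mean + linear combination of principal components.
coeffs = rng.randn(n_comp)
texture = mean + basis @ coeffs
texture_img = np.clip(texture.reshape(h, w, 3), 0.0, 1.0)
```

Downstream fitting code such as photometric optimization estimates the coefficient vector per subject rather than optimizing texture pixels directly.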
### Convert meshes
Install mesh processing libraries from [MPI-IS/mesh](https://github.com/MPI-IS/mesh) within the virtual environment.
Download FLAME from [here](https://flame.is.tue.mpg.de) and place it in the model folder.
Running
```
python mesh_convert.py
```
outputs a FLAME mesh for a specified BFM mesh. The demo supports meshes in 'BFM 2017', 'BFM 2009', or 'cropped BFM 2009' (i.e. as used by [3DDFA](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm)) topology.
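A conversion like this rests on a fixed correspondence between the two topologies: each target vertex is located on the source surface by a triangle index plus barycentric weights, so converting a mesh reduces to a weighted sum of triangle corners. A toy sketch of that idea (the mesh, triangle indices, and weights below are made up for illustration; the demo script ships its own precomputed correspondences):

```python
import numpy as np

# Toy source mesh: 4 vertices, 2 triangles (hypothetical data).
src_verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
src_faces = np.array([[0, 1, 2], [1, 3, 2]])

# Assumed correspondence: one source triangle and barycentric
# weights per target vertex.
tri_idx = np.array([0, 1])
bary = np.array([[1/3, 1/3, 1/3], [0.5, 0.25, 0.25]])

# Each converted vertex is the barycentric combination of its
# triangle's corner positions.
corners = src_verts[src_faces[tri_idx]]          # (n_target, 3, 3)
target_verts = np.einsum('nk,nkd->nd', bary, corners)
```

Because the correspondence is precomputed once per topology pair, converting any number of meshes is a cheap vectorized operation.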
### Citing
When using this code, the generated texture space, or the converted FLAME meshes in a scientific publication, please cite:
```
@article{FLAME:SiggraphAsia2017,
title = {Learning a model of facial shape and expression from {4D} scans},
author = {Li, Tianye and Bolkart, Timo and Black, Michael. J. and Li, Hao and Romero, Javier},
journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
volume = {36},
number = {6},
year = {2017},
url = {https://doi.org/10.1145/3130800.3130813}
}
```
When using the converted texture space, please further follow the license agreement of the BFM model as specified [here](https://faces.dmi.unibas.ch/bfm/bfm2017.html).
### Acknowledgement
We thank the authors of the BFM 2017 model for making the model publicly available.