# face-parsing.PyTorch
### Contents

- [Training](#training)
- [Demo](#demo)
- [References](#references)

## Training

1. Prepare the training data:

    - download the [CelebAMask-HQ dataset](https://github.com/switchablenorms/CelebAMask-HQ)
    - change the file paths in `prepropess_data.py`, then run:

    ```Shell
    python prepropess_data.py
    ```

2. Train the model on the CelebAMask-HQ dataset by running the training script:

    ```Shell
    $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
    ```

    If you do not wish to train the model yourself, you can download [our pre-trained model](https://drive.google.com/open?id=154JgKpzCPW82qINcVieuPH3fZ2e0P812) and save it in `res/cp`.

## Demo

1. Evaluate the trained model:

    ```Shell
    # evaluate using GPU
    python test.py
    ```

## Face makeup using parsing maps

[**face-makeup.PyTorch**](https://github.com/zllrunning/face-makeup.PyTorch)

| | Hair | Lip |
---|---|---|
| Original Input | ![]() | ![]() |
| Color | ![]() | ![]() |
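The makeup examples above rely on one simple idea: a parsing map assigns a class label to every pixel, so recoloring hair or lips amounts to blending a target color into only the pixels carrying that label. Below is a minimal, dependency-free sketch of that masking-and-blending step; the `HAIR_LABEL` index and the `recolor` helper are illustrative assumptions, not part of this repository's API (the real label mapping follows the CelebAMask-HQ annotation order used at training time).

```python
HAIR_LABEL = 17  # hypothetical index for the "hair" class (assumption)

def recolor(image, parsing, label, color, alpha=0.7):
    """Blend `color` into every pixel whose parsing label equals `label`.

    image:   H x W grid (list of rows) of (r, g, b) tuples
    parsing: H x W grid of integer class labels from the parsing map
    alpha:   blend weight of the new color (0 = untouched, 1 = full replace)
    """
    out = []
    for img_row, lbl_row in zip(image, parsing):
        row = []
        for pixel, lbl in zip(img_row, lbl_row):
            if lbl == label:
                # Weighted blend of the original pixel with the target color.
                pixel = tuple(
                    int((1 - alpha) * p + alpha * c)
                    for p, c in zip(pixel, color)
                )
            row.append(pixel)
        out.append(row)
    return out

# Toy 1x2 "image": left pixel labelled as hair, right pixel as background.
image = [[(128, 128, 128), (128, 128, 128)]]
parsing = [[HAIR_LABEL, 0]]
tinted = recolor(image, parsing, HAIR_LABEL, color=(200, 50, 50))
# Only the hair pixel is tinted; the background pixel is left unchanged.
```

In practice the same masking is done with array operations on the tensor that `test.py` produces, but the per-pixel logic is identical.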