PyTorch implementation of Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation (CVPR 2020) [arXiv][CVF]
If you find our work useful in your research, please consider citing:
@inproceedings{ma2020deep,
title={Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation},
author={Ma, Cheng and Jiang, Zhenyu and Rao, Yongming and Lu, Jiwen and Zhou, Jie},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
pip install numpy opencv-python tqdm imageio pandas matplotlib tensorboardX
The CelebA dataset can be downloaded here. Please download and unzip the img_celeba.7z file.
The Helen dataset can be downloaded here. Please download and unzip the 5 parts of All images.
Testing sets for CelebA and Helen can be downloaded from Google Drive or Baidu Drive (extraction code: 6qhx).
Landmark annotations for CelebA and Helen can be downloaded from the annotations folder on Google Drive or Baidu Drive (extraction code: 6qhx).
The pretrained models can also be downloaded from the models folder in the above links. Then please place them in ./models.
To train a model:
cd code
python train.py -opt options/train/train_(DIC|DICGAN)_(CelebA|Helen).json
The json file will be processed by options/options.py. Please refer to that file for more details.
Before running this code, please modify the option files to your own configurations, including:
- dataroot_HR and dataroot_LR paths for the data loader
- info_path for the landmark annotations
- the path of the pretrained LightCNN model (LightCNN_feature.pth) if training a GAN model

During training, you can use TensorBoard to monitor the losses with
tensorboard --logdir tb_logger/NAME_OF_YOUR_EXPERIMENT
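As a purely illustrative sketch of the fields you need to edit, an option file might contain entries like the following. Only dataroot_HR, dataroot_LR, and info_path are named in this README; the nesting and the pretrained-model key shown here are assumptions, so check the actual JSON files under options/train/ for the real structure:

```json
{
  "datasets": {
    "train": {
      "dataroot_HR": "/path/to/HR/images",
      "dataroot_LR": "/path/to/LR/images",
      "info_path": "/path/to/landmark/annotations"
    }
  },
  "path": {
    "pretrained_LightCNN": "./models/LightCNN_feature.pth"
  }
}
```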
To generate SR images by a model:
cd code
python test.py -opt options/test/test_(DIC|DICGAN)_(CelebA|Helen).json
The SR results will be stored in results/{test_name}/{dataset_name}. The PSNR and SSIM values will be stored in result.json, while the average results will be recorded in average_result.txt.
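For reference, PSNR for 8-bit images is derived from the mean squared error between the SR and HR images. Below is a minimal NumPy sketch of the standard definition, not the repository's own evaluation code (which may, for example, compute PSNR on the luminance channel only):

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of the same shape."""
    sr = sr.astype(np.float64)
    hr = hr.astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a constant offset of 10 gives MSE = 100
a = np.zeros((16, 16), dtype=np.uint8)
b = np.full((16, 16), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # ≈ 28.13
```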
The pretrained models can be downloaded from the models folder on Google Drive or Baidu Drive (extraction code: 6qhx). Then you can modify the directory of the pretrained model and the LR image sets in the option files and run test.py for a quick test.

To evaluate the SR results by landmark detection:
python eval_landmark.py --info_path /path/to/landmark/annotations --data_root /path/to/result/images
Please download HG_68_CelebA.pth from Google Drive or Baidu Drive (extraction code: 6qhx) and put it into the ./models directory. The landmark detection results will be stored in /path/to/result/images/landmark_result.json, and the averaged results will be in landmark_average_result.txt.
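Landmark accuracy is commonly reported as the normalized mean error (NME): the average point-to-point distance between predicted and ground-truth landmarks, divided by a normalizing factor such as the inter-ocular distance. The following is a minimal sketch of that standard metric; the exact metric and normalization used by eval_landmark.py may differ:

```python
import numpy as np

def nme(pred: np.ndarray, gt: np.ndarray, norm: float) -> float:
    """Normalized mean error over (N, 2) landmark coordinate arrays."""
    per_point = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per landmark
    return float(per_point.mean() / norm)

# Example: every one of 68 landmarks is off by (3, 4), i.e. 5 pixels;
# normalized by a (hypothetical) inter-ocular distance of 50 pixels.
gt = np.random.rand(68, 2) * 100
pred = gt + np.array([3.0, 4.0])
print(round(nme(pred, gt, norm=50.0), 3))  # 0.1
```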
The code is based on SRFBN and hourglass-facekeypoints-detection.