This is the implementation of DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration (CVPR 2023).
DR2E is a two-stage blind face restoration framework consisting of the degradation remover DR2 and an enhancement module, which can be any existing blind face restoration model. In the first stage, DR2 uses the input image to control the diffusion sampling process and produces a smooth, clean intermediate result \hat{x}_0. In the second stage, the enhancement module maps \hat{x}_0 to a high-resolution, high-quality output.
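As a rough sketch (the function and method names below, such as remove_degradation, are placeholders rather than this repo's actual API), the two-stage pipeline can be summarized as:

# hypothetical sketch of the DR2E two-stage pipeline; names are illustrative only
def dr2e_restore(y, dr2_model, enhancer, N, tau):
    # stage 1: DR2 uses the degraded input y to guide diffusion sampling and
    # outputs a smooth, degradation-free estimate \hat{x}_0 (256x256)
    x0_hat = dr2_model.remove_degradation(y, N=N, tau=tau)
    # stage 2: any blind face restoration model maps \hat{x}_0 to a
    # high-resolution, high-quality output
    return enhancer(x0_hat)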
Our implementation of the DR2 module is heavily based on ILVR_adm, and the pretrained model is trained on the FFHQ dataset to generate 256×256 face images. You can download the DR2 model used in our paper from Baidu Cloud Drive, or try more pretrained weights at P2-weighting.
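Conceptually, DR2 guides the reverse diffusion process with a low-pass filtered version of the degraded input, in the spirit of ILVR. The sketch below only illustrates that idea; low_pass, q_sample, p_sample, and predict_x0 are assumed helper functions, not this repo's API, and the exact schedule follows the paper and the ILVR_adm code:

# illustrative sketch of low-pass guided sampling (not the repo's actual code)
# low_pass(img, N): downsample by factor N, then upsample back (the filter \Phi_N)
# q_sample(x0, t): forward diffusion, i.e. add noise to reach step t
# p_sample(model, x_t, t): one reverse denoising step of the diffusion model
def dr2_sample(model, y, N, tau, start_step):
    x_t = q_sample(y, start_step)            # diffuse the degraded input to the starting step
    for t in reversed(range(tau, start_step)):
        x_t = p_sample(model, x_t, t + 1)    # denoise one step
        y_t = q_sample(y, t)                 # noisy version of the input at step t
        # keep low frequencies from the input, high frequencies from the sample
        x_t = low_pass(y_t, N) + x_t - low_pass(x_t, N)
    # at step \tau, return the model's prediction of the clean image \hat{x}_0
    return predict_x0(model, x_t, tau)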
Various blind face restoration models can be plugged in as the enhancement module. To use them, you can either train one from scratch with DR2 augmentation or plug in an existing pretrained model directly, as described below.
In our implementation, we choose SPARNetHD and train it from scratch. The training code and loss functions remain unchanged from the original paper, but we construct the training set using DR2 augmentation (introduced in Section 3.4 of the paper) as follows: y = DR2(x) ⊛ k_\sigma, where x is the ground-truth high-quality image, k_\sigma is a blur kernel, and y is the resulting input image. This helps the enhancement module adapt faster to DR2 outputs. Other than this, no additional degradation model is required. You can download SPARNetHD weights from Baidu Cloud Drive.
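For illustration only (remove_degradation, the blur step, and the sampled parameter ranges below are assumptions, not the repo's actual training code), constructing a training pair with DR2 augmentation could look like:

# hypothetical sketch of DR2 augmentation for building (input, ground-truth) pairs
import random

def build_training_pair(x_hq, dr2_model, gaussian_blur):
    # sample controlling parameters so the enhancement module sees varied DR2 outputs
    N = random.choice([4, 8, 16])
    tau = random.randint(20, 40)
    y = dr2_model.remove_degradation(x_hq, N=N, tau=tau)  # smooth DR2 output
    y = gaussian_blur(y)     # blur kernel k_\sigma from the formula above
    return y, x_hq           # train the enhancement module to map y back to x_hq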
Alternatively, you can directly use pretrained blind face restoration methods like VQFR, CodeFormer, or GPEN without further finetuning, because these methods are trained on complex degradation models and work well on DR2 outputs.
First, clone the repo and install the dependent packages:
# build dependency
pip install -r requirements.txt
Download the pretrained weights for DR2 and SPARNetHD, put them in "./weights", and run the demo:
python demo.py
This will enhance the test images in "./test_images/input". Note that for each subdirectory we choose different controlling parameters (N, \tau) (please refer to our paper). Feel free to change the controlling parameters and see how they affect the output; an example of adding your own entry is shown after the snippet below. Note that N \in \{1, 2, 4, 8, 16, ...\} and \tau \in \{0, 1, 2, ..., T\}. Since we run inference every 10 steps for speed by default, T is 100 rather than 1000.
# in demo.py, line 11-16
dir_para = [
    # [data_dir, (N, \tau)]
    ["./test_images/input/01/", (4, 22)],
    ["./test_images/input/02/", (8, 35)],
    ["./test_images/input/03/", (16, 35)],
]
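For example, to run the demo on your own images (the folder name below is just a placeholder), add another entry with your chosen controlling parameters:

# add your own folder and (N, \tau) to dir_para in demo.py
["./test_images/input/my_images/", (8, 30)],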
The results of the two stages are stored in "./test_images/output". If you have installed other blind face restoration methods, you can run them on the "./test_images/output/dr2" folder.
These are the results reported in our paper; you can see that our method is robust against heavy degradation and that our framework enables VQFR to perform normally again.
Because of the stochastic nature of the diffusion sampling process, you may get different results on a single image (unless you are using very small N and \tau). However, we found this does not affect the average quantitative scores (such as PSNR, SSIM, FID, and LPIPS) when tested on a large test set containing hundreds or thousands of images.
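If you want repeatable results on a single image, one option (our suggestion, not something demo.py does by default) is to fix the random seeds before sampling:

# fixing the seeds makes diffusion sampling deterministic across runs
import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)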
If you have any questions, please contact dedsec_z@sjtu.edu.cn