Xinqi Lin1,*, Jingwen He2,3,*, Ziyan Chen1, Zhaoyang Lyu2, Bo Dai2, Fanghua Yu1, Wanli Ouyang2, Yu Qiao2, Chao Dong1,2
1Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
2Shanghai AI Laboratory
3The Chinese University of Hong Kong
I often think of Bag End. I miss my books and my arm chair, and my garden. See, that's where I belong. That's home. --- Bilbo Baggins
# clone this repo
git clone https://github.com/XPixelGroup/DiffBIR.git
cd DiffBIR
# create environment
conda create -n diffbir python=3.10
conda activate diffbir
pip install -r requirements.txt
Our new code is based on PyTorch 2.2.2 for its built-in support of memory-efficient attention. If your GPU is not compatible with the latest PyTorch, downgrade to PyTorch 1.13.1+cu116 and install xformers 0.0.16 as an alternative.
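A minimal sketch of that fallback installation, assuming a CUDA 11.6 environment (the torchvision version is our assumption for a build that pairs with torch 1.13.1; check compatibility with your driver):

# fallback for GPUs that cannot run the latest PyTorch
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
# xformers provides memory-efficient attention for the older PyTorch
pip install xformers==0.0.16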
Run the following command to interact with the Gradio web interface.
# For low-VRAM users, set captioner to ram or none
python run_gradio.py --captioner llava
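If VRAM is tight, the lighter captioners mentioned in the comment above can be used instead of LLaVA, for example:

# lighter captioner for low-VRAM setups
python run_gradio.py --captioner ram
# or disable captioning entirely
python run_gradio.py --captioner none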
Here we list the pretrained weights of the stage 2 model (IRControlNet) and our trained SwinIR, which was used for degradation removal during the training of the stage 2 model.
Model Name | Description | HuggingFace | BaiduNetdisk | OpenXLab |
---|---|---|---|---|
v2.1.pt | IRControlNet trained on filtered unsplash | download | N/A | N/A |
v2.pth | IRControlNet trained on filtered laion2b-en | download | download (pwd: xiu3) | download |
v1_general.pth | IRControlNet trained on ImageNet-1k | download | download (pwd: 79n9) | download |
v1_face.pth | IRControlNet trained on FFHQ | download | download (pwd: n7dx) | download |
codeformer_swinir.ckpt | SwinIR trained on ImageNet-1k with CodeFormer degradation | download | download (pwd: vfif) | download |
realesrgan_s4_swinir_100k.pth | SwinIR trained on ImageNet-1k with Real-ESRGAN degradation | download | N/A | N/A |
During inference, we use off-the-shelf models from other papers as the stage 1 model: BSRNet for BSR, the SwinIR-Face model used in DifFace for BFR, and SCUNet-PSNR for BID, while the trained IRControlNet remains unchanged for all tasks. Please check the code for more details. Thanks for their work!
We provide some examples for inference; check inference.py for more arguments. Pretrained weights will be downloaded automatically. For users with limited VRAM, please run the following scripts with tiled sampling (see the tiled-sampling arguments below).
# DiffBIR v2 (ECCV paper version)
python -u inference.py \
--task sr \
--upscale 4 \
--version v2 \
--sampler spaced \
--steps 50 \
--captioner none \
--pos_prompt '' \
--neg_prompt 'low quality, blurry, low-resolution, noisy, unsharp, weird textures' \
--cfg_scale 4 \
--input inputs/demo/bsr \
--output results/v2_demo_bsr \
--device cuda --precision fp32
# DiffBIR v2.1
python -u inference.py \
--task sr \
--upscale 4 \
--version v2.1 \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--input inputs/demo/bsr \
--output results/v2.1_demo_bsr
# DiffBIR v2 (ECCV paper version)
python -u inference.py \
--task face \
--upscale 1 \
--version v2 \
--sampler spaced \
--steps 50 \
--captioner none \
--pos_prompt '' \
--neg_prompt 'low quality, blurry, low-resolution, noisy, unsharp, weird textures' \
--cfg_scale 4.0 \
--input inputs/demo/bfr/aligned \
--output results/v2_demo_bfr_aligned \
--device cuda --precision fp32
# DiffBIR v2.1
python -u inference.py \
--task face \
--upscale 1 \
--version v2.1 \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--input inputs/demo/bfr/aligned \
--output results/v2.1_demo_bfr_aligned
# DiffBIR v2 (ECCV paper version)
python -u inference.py \
--task face_background \
--upscale 2 \
--version v2 \
--sampler spaced \
--steps 50 \
--captioner none \
--pos_prompt '' \
--neg_prompt 'low quality, blurry, low-resolution, noisy, unsharp, weird textures' \
--cfg_scale 4.0 \
--input inputs/demo/bfr/whole_img \
--output results/v2_demo_bfr_unaligned \
--device cuda --precision fp32
# DiffBIR v2.1
python -u inference.py \
--task face_background \
--upscale 2 \
--version v2.1 \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--input inputs/demo/bfr/whole_img \
--output results/v2.1_demo_bfr_unaligned
# DiffBIR v2 (ECCV paper version)
python -u inference.py \
--task denoise \
--upscale 1 \
--version v2 \
--sampler spaced \
--steps 50 \
--captioner none \
--pos_prompt '' \
--neg_prompt 'low quality, blurry, low-resolution, noisy, unsharp, weird textures' \
--cfg_scale 4.0 \
--input inputs/demo/bid \
--output results/v2_demo_bid \
--device cuda --precision fp32
# DiffBIR v2.1
python -u inference.py \
--task denoise \
--upscale 1 \
--version v2.1 \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--input inputs/demo/bid \
--output results/v2.1_demo_bid
python -u inference.py \
--upscale 4 \
--version custom \
--train_cfg [path/to/training/config] \
--ckpt [path/to/saved/checkpoint] \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--input inputs/demo/bsr \
--output results/custom_demo_bsr
Add the following arguments to enable tiled sampling:
[command...] \
# tiled inference for stage-1 model
--cleaner_tiled \
--cleaner_tile_size 256 \
--cleaner_tile_stride 128 \
# tiled inference for VAE encoding
--vae_encoder_tiled \
--vae_encoder_tile_size 256 \
# tiled inference for VAE decoding
--vae_decoder_tiled \
--vae_decoder_tile_size 256 \
# tiled inference for diffusion process
--cldm_tiled \
--cldm_tile_size 512 \
--cldm_tile_stride 256
Tiled sampling makes super-resolution with a large scale factor feasible on low-VRAM graphics cards. Our tiled sampling is built upon mixture-of-diffusers and Tiled-VAE. Thanks for their work!
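As a concrete illustration, the DiffBIR v2.1 super-resolution command from above with all tiled-sampling flags enabled would look as follows (tile sizes are the values listed above; the output directory name is just an example):

# DiffBIR v2.1 with tiled sampling enabled for every stage
python -u inference.py \
--task sr \
--upscale 4 \
--version v2.1 \
--captioner llava \
--cfg_scale 8 \
--noise_aug 0 \
--cleaner_tiled \
--cleaner_tile_size 256 \
--cleaner_tile_stride 128 \
--vae_encoder_tiled \
--vae_encoder_tile_size 256 \
--vae_decoder_tiled \
--vae_decoder_tile_size 256 \
--cldm_tiled \
--cldm_tile_size 512 \
--cldm_tile_stride 256 \
--input inputs/demo/bsr \
--output results/v2.1_demo_bsr_tiled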
This option only works with DiffBIR v1 and v2. As proposed in SeeSR, the LR embedding (LRE) strategy provides a more faithful starting point for sampling and consequently suppresses artifacts in flat regions:
[command...] --start_point_type cond
For our model, we use the diffused condition as the starting point. This option makes the results more stable and ensures that the outputs of ODE samplers such as DDIM and DPM-Solver are well-behaved. However, it may lead to a decrease in sample quality.
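For example, appending the flag to the DiffBIR v2 super-resolution command shown earlier (prompt arguments omitted for brevity; the output directory name is just an example):

# DiffBIR v2 with the LR embedding starting point
python -u inference.py \
--task sr \
--upscale 4 \
--version v2 \
--sampler spaced \
--steps 50 \
--captioner none \
--cfg_scale 4 \
--start_point_type cond \
--input inputs/demo/bsr \
--output results/v2_demo_bsr_lre \
--device cuda --precision fp32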
First, we train a SwinIR, which will be used for degradation removal during the training of stage 2.
Generate file lists for the training set and the validation set. A file list looks like:
/path/to/image_1
/path/to/image_2
/path/to/image_3
...
You can write a simple Python script or use shell commands directly to produce the file lists. Here is an example:
# collect all image files in img_dir
find [img_dir] -type f > files.list
# shuffle collected files
shuf files.list > files_shuf.list
# take the first train_size files as the training set
head -n [train_size] files_shuf.list > files_shuf_train.list
# use the remaining files as the validation set
tail -n +[train_size + 1] files_shuf.list > files_shuf_val.list
Fill in the training configuration file with appropriate values.
Start training!
accelerate launch train_stage1.py --config configs/train/train_stage1.yaml
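Training is launched through accelerate, so a multi-GPU run can be set up interactively with accelerate config, or by passing launch flags explicitly; a sketch assuming a single node with 4 GPUs (the process count is illustrative):

# example: single node, 4 GPUs
accelerate launch --multi_gpu --num_processes 4 train_stage1.py --config configs/train/train_stage1.yaml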
Download pretrained Stable Diffusion v2.1 to provide generative capabilities.
wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
Generate a file list as mentioned above. Currently, the training script of stage 2 doesn't support a validation set, so you only need to create the training file list.
Fill in the training configuration file with appropriate values.
Start training!
accelerate launch train_stage2.py --config configs/train/train_stage2.yaml
Please cite us if our work is useful for your research.
@misc{lin2024diffbir,
title={DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior},
author={Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Bo Dai and Fanghua Yu and Wanli Ouyang and Yu Qiao and Chao Dong},
year={2024},
eprint={2308.15070},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This project is released under the Apache 2.0 license.
This project is based on ControlNet and BasicSR. Thanks for their awesome work.
If you have any questions, please feel free to contact me at linxinqi23@mails.ucas.ac.cn.