# AMRSR

**Repository Path**: wen--he/AMRSR

## Basic Information

- **Project Name**: AMRSR
- **Description**: PyTorch code for the paper "Attention-based Multi-Reference Learning for Image Super-Resolution" (ICCV 2021)
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2021-10-01
- **Last Updated**: 2022-01-09

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Attention-based-Multi-Reference-Learning-for-Image-Super-Resolution

PyTorch implementation of the paper [Attention-based Multi-Reference Learning for Image Super-Resolution](https://arxiv.org/abs/2108.13697), accepted at ICCV 2021. The code is built on [SRNTT](https://github.com/S-aiueo32/srntt-pytorch).

[Project Page](https://marcopesavento.github.io/AMRSR/)

## Contents

- [Requirements and dependencies](#requirements-and-dependencies)
- [Datasets](#datasets)
- [Train](#train)
- [Test](#test)
- [Citation](#citation)
- [Contact](#contact)
- [Acknowledgments](#acknowledgments)

## Requirements and dependencies

* python 3.7

**packages**
* pytorch >= 1.3.1
* torchvision >= 0.4.0
* pillow >= 6.2.1
* tqdm >= 4.38.0
* future >= 0.18.2
* tensorboard >= 2.0.1
* scikit-learn >= 0.21.3
* kornia >= 0.1.4.post2
* numpy >= 1.17.4
* pandas >= 0.25.3
* googledrivedownloader >= 0.4
* opencv-python >= 4.1.1.26

**dev-packages**
* ipython >= 7.9.0
* pylint >= 2.4.4
* autopep8 >= 1.4.4
* flake8 >= 3.7.9
* jupyterlab >= 1.2.3

## Datasets

(Coming soon) Download the desired datasets:

[CU4REF]

[Sun80] (only for evaluation)

[GEMAP]

[HUMAP]

For GEMAP and HUMAP, preprocess the data as follows:

1. Create the patches by cropping the texture maps (256x256 size) with the script (set the folder path in the script):
```shell
python ./utils/extract_subimages.py
```
2.
In each reference folder, copy the reference image as many times as there are patches of the corresponding input.
3. The final output folder should look like this for HUMAP (1 reference):
```
-input:
    - 1_s1.png
    - 1_s2.png
    .
    .
    .
    - 1_s64.png
-references:
    -ref1:
        - 1_s1.png
        - 1_s2.png
        .
        .
        .
        - 1_s64.png
```
where 1_s1.png, 1_s2.png, ..., 1_s64.png in the reference folder are all copies of the same image.

## Train

### Similarity map creation

To speed up the training of the super-resolution network, the hierarchical attention-based similarity is computed offline with the following command:

```shell
$ python create_simmap.py --dataroot (path to input images folder) --patch_inp (number of patches to divide the input image into) --patch_ref (number of patches to divide the reference images into) --dataroot_ref (path to reference folder) --num_ref (number of reference images) --dataroot_save (path to save the similarity maps)
```

The similarity maps are created in the selected folder and used during the training of AMRSR. These maps represent the reference-image features that are most similar to the input-image features.

### Training AMRSR

Train AMRSR using the similarity maps created in the previous step. The weights are saved in the same directory that contains the folder of the similarity maps. To start from the pre-trained weights, add the `--use_weights` option (optional but recommended).

```shell
$ python train.py --dataroot (path to the input images folder) --dataroot_save (path to directory containing the folder of the similarity maps) --use_weights
```

To train only with the reconstruction loss:

```shell
$ python train_psnr.py --dataroot (path to the input images folder) --dataroot_save (path to directory containing the folder of the similarity maps) --use_weights
```

## Test

### Quick Test

Pre-trained weights for CU4REF and Sun80 are made available to test AMRSR.
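The `--patch_inp` and `--patch_ref` options in the test commands below control how many patches the input and the reference images are divided into. As a rough illustration of that even-grid splitting, here is a minimal sketch; it assumes the patch count is a perfect square, and `split_into_patches` is a hypothetical helper, not a function from this repository:

```python
import numpy as np

def split_into_patches(img, n_patches):
    """Split an image into an even grid of n_patches patches.

    n_patches is assumed to be a perfect square (e.g. 1, 4, 16), mirroring
    the --patch_inp / --patch_ref values; this helper is illustrative only.
    """
    per_side = int(round(n_patches ** 0.5))
    assert per_side * per_side == n_patches, "n_patches must be a perfect square"
    h, w = img.shape[:2]
    ph, pw = h // per_side, w // per_side
    # Row-major grid of non-overlapping crops.
    return [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(per_side) for c in range(per_side)]

# A 64x64 dummy image split as with --patch_ref 16 yields 16 patches of 16x16.
dummy = np.zeros((64, 64, 3), dtype=np.uint8)
patches = split_into_patches(dummy, 16)
print(len(patches), patches[0].shape)  # 16 (16, 16, 3)
```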
Download the CU4REF test dataset and run the following command:

```shell
python test.py --input (path to input image) --dataroot_ref (path to reference folder) --patch_inp 1 --patch_ref 1 --num_ref 4 --save (path to the directory to save the output) --weight ./weights/netG.pth --use_weights
```

Download the Sun80 test dataset and run the following command:

```shell
python test.py --input (path to input image) --dataroot_ref (path to reference folder) --patch_inp 1 --patch_ref 16 --num_ref 4 --save (path to the directory to save the output) --weight ./weights/netG_Sun80.pth --use_weights
```

### Test trained models

Run the following command:

```shell
python test.py --input (path to input image) --dataroot_ref (path to reference folder) --patch_inp (number of patches that the input is divided into) --patch_ref (number of patches that the references are divided into) --num_ref (number of references used) --save (path to the directory to save the output) --weight (path to the directory where the weights obtained from the training are saved) --use_weights (optional, add only if it was also used for training)
```

The specified directory will contain the SR images and a directory `new_HR` with the ground-truth images. The MATLAB script `./utils/Evaluate_method.m` computes the PSNR and SSIM values for the output images.

## Citation

```
@misc{pesavento2021attentionbased,
      title={Attention-based Multi-Reference Learning for Image Super-Resolution},
      author={Marco Pesavento and Marco Volino and Adrian Hilton},
      year={2021},
      eprint={2108.13697},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Contacts

If you encounter any problems, please describe them in the issues or contact:
* Marco Pesavento: m.pesavento@surrey.ac.uk

## Acknowledgments

This code is built on [SRNTT](https://zzutk.github.io/SRNTT-Project-Page/) (PyTorch). We thank the authors for sharing the code of the [SRNTT](https://github.com/S-aiueo32/srntt-pytorch) PyTorch version.
This research was supported by UKRI EPSRC Platform Grant EP/P022529/1.
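For readers without MATLAB, the PSNR metric reported by `./utils/Evaluate_method.m` can be approximated with a short Python sketch. This is illustrative only; the official numbers come from the MATLAB script, whose exact conventions (e.g. color space or border cropping) are not documented here:

```python
import numpy as np

def psnr(ref, out, data_range=255.0):
    """Peak signal-to-noise ratio between two uint8 images, in dB."""
    ref = ref.astype(np.float64)
    out = out.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Tiny synthetic check: identical images give infinite PSNR,
# a slightly perturbed image gives a large finite value.
a = np.zeros((8, 8), dtype=np.uint8)
b = a.copy()
b[0, 0] = 16
print(psnr(a, a))  # inf
print(psnr(a, b))
```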