# SRCNN

**Repository Path**: zhang-juntao69/srcnn

## Basic Information

- **Project Name**: SRCNN
- **Description**: SRCNN code implementation
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-12-23
- **Last Updated**: 2023-12-23

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# SRCNN

This repository is an implementation of ["Image Super-Resolution Using Deep Convolutional Networks"](https://arxiv.org/abs/1501.00092) (SRCNN) in PyTorch.

![](assets/markdown-img-paste-20190716211816899.png)

## Requirements

The versions listed below were used originally; the versions in parentheses also work.

- PyTorch 1.0.0 (1.13.1)
- Numpy 1.15.4 (1.21.5)
- Pillow 5.4.1 (9.3.0)
- h5py 2.8.0 (3.7.0)
- tqdm 4.30.0 (4.64.1)
- scikit-image 0.19.3

## Train

`train_img` and `eval_img` are both the 91-image dataset converted to HDF5, and can be used directly as the train file and eval file. Otherwise, you can use `prepare.py` to create a custom dataset.

```bash
python train.py --train-file "path_to_train_file" \
                --eval-file "path_to_eval_file" \
                --outputs-dir "path_to_outputs_file" \
                --scale 3 \
                --lr 1e-4 \
                --batch-size 16 \
                --num-epochs 400 \
                --num-workers 0 \
                --seed 123
```

## Test

Passing `--lr` means you supply a low-resolution image; without `--lr`, you supply a high-resolution image. The program will automatically crop it, generate a low-resolution image (if necessary), and use that low-resolution image to produce a high-resolution output, so you can compare the two.

```bash
python test.py --weights-file "path_to_pth_file" \
               --image-file "path_to_test_image_file" \
               --scale 3 \
               --lr
```

## Results

We used the network settings from the original paper for the experiments. PSNR was calculated on the Y channel.
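
As an illustration of how Y-channel PSNR is typically computed in super-resolution work, here is a minimal sketch. It is not this repository's own evaluation code; the function names and example file names are hypothetical, and the BT.601 luminance conversion and border cropping are common conventions assumed here, not confirmed from this repo.

```python
# Illustrative sketch: PSNR between two images on the luminance (Y) channel only.
import numpy as np
from PIL import Image


def rgb_to_y(img):
    """Convert an RGB array with values in [0, 255] to the Y channel (ITU-R BT.601)."""
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0


def psnr_y(hr_path, sr_path, scale=3):
    """PSNR on the Y channel, cropping a `scale`-pixel border as is common in SR evaluation."""
    hr = np.array(Image.open(hr_path).convert('RGB'))
    sr = np.array(Image.open(sr_path).convert('RGB'))
    hr_y = rgb_to_y(hr)[scale:-scale, scale:-scale]
    sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
    mse = np.mean((hr_y - sr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)


if __name__ == '__main__':
    # Hypothetical file names, for illustration only.
    print(psnr_y('butterfly_gt.bmp', 'butterfly_srcnn_x3.bmp'))
```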