# AdcSR
**Interactive visual comparisons** (imgsli): [1](https://imgsli.com/MzU2MjU1) · [2](https://imgsli.com/MzU2MjU2) · [3](https://imgsli.com/MzU2MjU3) · [4](https://imgsli.com/MzU2NTg4) · [5](https://imgsli.com/MzU2NTkw) · [6](https://imgsli.com/MzU2NTk1) · [7](https://imgsli.com/MzU2OTE0) · [8](https://imgsli.com/MzU2OTE1)

**Demo video:**

https://github.com/user-attachments/assets/1211cefa-8704-47f5-82cd-ec4ef084b9ec
## ⚙ Installation
```shell
git clone https://github.com/Guaishou74851/AdcSR.git
cd AdcSR
conda create -n AdcSR python=3.10
conda activate AdcSR
pip install --upgrade pip
pip install -r requirements.txt
chmod +x train.sh train_debug.sh test_debug.sh evaluate_debug.sh
```
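To confirm the environment is usable before downloading any data, a quick sanity check can help (a minimal sketch, assuming `requirements.txt` installs a CUDA-enabled PyTorch build; CPU-only setups will print `False`):

```bash
# Print the Python version, PyTorch version, and CUDA availability in the AdcSR env.
# Assumes requirements.txt installed torch; CPU-only builds report False on the last line.
python - <<'EOF'
import sys
import torch
print("python", sys.version.split()[0])
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
EOF
```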
## ⚡ Test
1. **Download test datasets** (`DIV2K-Val.zip`, `DRealSR.zip`, `RealSR.zip`) from [Hugging Face](https://huggingface.co/Guaishou74851/AdcSR) or [PKU Disk](https://disk.pku.edu.cn/link/AAD499197CBF054392BC4061F904CC4026).
2. **Unzip** them into `./testset/`, ensuring the following structure:
```
./testset/DIV2K-Val/LR/xxx.png
./testset/DIV2K-Val/HR/xxx.png
./testset/DRealSR/LR/xxx.png
./testset/DRealSR/HR/xxx.png
./testset/RealSR/LR/xxx.png
./testset/RealSR/HR/xxx.png
```
3. **Download model weights** (`net_params_200.pkl`) from the same link and place it in `./weight/`.
4. **Run the test script** (or modify and execute `./test_debug.sh` for convenience):
```bash
python test.py --LR_dir=path_to_LR_images --SR_dir=path_to_SR_images
```
The results will be saved in `path_to_SR_images`.
5. **Test Your Own Images**:
- Place your **low-resolution (LR)** images into `./testset/xxx/`.
   - Run the command with `--LR_dir=./testset/xxx/ --SR_dir=./yyy/`, and the model will perform **x4 super-resolution** (see the concrete example below).
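For example, assuming the `DIV2K-Val` layout from step 2, a concrete run looks like this (`./results/DIV2K-Val/` is an illustrative output path, not one mandated by the repo):

```bash
# Super-resolve the DIV2K-Val LR images; the output directory name is arbitrary.
mkdir -p ./results/DIV2K-Val
python test.py --LR_dir=./testset/DIV2K-Val/LR --SR_dir=./results/DIV2K-Val
```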
## 🍭 Evaluation
Run the evaluation script (or modify and execute `./evaluate_debug.sh` for convenience):
```bash
python evaluate.py --HR_dir=path_to_HR_images --SR_dir=path_to_SR_images
```
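For instance, to score the DIV2K-Val outputs produced by the test run above (both paths are illustrative and should match your own run):

```bash
# Compare SR outputs against the ground-truth HR images of DIV2K-Val.
python evaluate.py --HR_dir=./testset/DIV2K-Val/HR --SR_dir=./results/DIV2K-Val
```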
## 🔥 Train
This repo provides code for **Stage 2** training (**adversarial distillation**). For **Stage 1** (pretraining the channel-pruned VAE decoder), refer to our paper and use the code from the [Latent Diffusion Models](https://github.com/CompVis/latent-diffusion) repo.
1. **Download pretrained model weights** (`DAPE.pth`, `halfDecoder.ckpt`, `osediff.pkl`, `ram_swin_large_14m.pth`) from [Hugging Face](https://huggingface.co/Guaishou74851/AdcSR) or [PKU Disk](https://disk.pku.edu.cn/link/AAD499197CBF054392BC4061F904CC4026), and place them in `./weight/pretrained/`.
2. **Download the [LSDIR](https://huggingface.co/ofsoundof/LSDIR) dataset** and store it in your preferred location.
3. **Modify the dataset path** in `config.yml`:
```yaml
dataroot_gt: path_to_HR_images_of_LSDIR
```
4. **Run the training script** (or modify and execute `./train.sh` or `./train_debug.sh`):
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.run --nproc_per_node=8 --master_port=23333 train.py
```
The trained model will be saved in `./weight/`; a single-GPU debug variant is sketched below.
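Before committing to the full 8-GPU run, a single-GPU smoke test can use the same entry point (a sketch under the assumption that `train.py` derives its world size from `torch.distributed.run`; the effective batch size shrinks accordingly, so results will not match the full schedule):

```bash
# Single-process debug launch; lower the batch-size settings in config.yml
# if one GPU runs out of memory.
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run --nproc_per_node=1 --master_port=23333 train.py
```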
## 🥰 Acknowledgement
This project is built upon the code of [Latent Diffusion Models](https://github.com/CompVis/latent-diffusion), [Diffusers](https://github.com/huggingface/diffusers), [BasicSR](https://github.com/XPixelGroup/BasicSR), and [OSEDiff](https://github.com/cswry/OSEDiff). We sincerely thank the authors of these repos for their significant contributions.
## 🎓 Citation
If you find our work helpful, please consider citing:
```latex
@inproceedings{chen2025adversarial,
  title={Adversarial Diffusion Compression for Real-World Image Super-Resolution},
  author={Chen, Bin and Li, Gehui and Wu, Rongyuan and Zhang, Xindong and Chen, Jie and Zhang, Jian and Zhang, Lei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```