# DMID

**Repository Path**: furh18/DMID

## Basic Information

- **Project Name**: DMID
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-05-29
- **Last Updated**: 2025-05-29

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Stimulating Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling

[arXiv](https://arxiv.org/abs/2307.03992) | [IEEE Xplore](https://ieeexplore.ieee.org/document/10607932) | [Zhihu blog 1](https://zhuanlan.zhihu.com/p/1898420817429262557) | [Zhihu blog 2](https://zhuanlan.zhihu.com/p/639911080)

Papers with Code leaderboards:

- McMaster: [σ=15](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma15?p=stimulating-the-diffusion-model-for-image) · [σ=25](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma25?p=stimulating-the-diffusion-model-for-image) · [σ=50](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma50?p=stimulating-the-diffusion-model-for-image) · [σ=100](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma100?p=stimulating-the-diffusion-model-for-image) · [σ=150](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma150?p=stimulating-the-diffusion-model-for-image) · [σ=200](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma200?p=stimulating-the-diffusion-model-for-image) · [σ=250](https://paperswithcode.com/sota/color-image-denoising-on-mcmaster-sigma250?p=stimulating-the-diffusion-model-for-image)
- Kodak24: [σ=15](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma15?p=stimulating-the-diffusion-model-for-image) · [σ=25](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma25?p=stimulating-the-diffusion-model-for-image) · [σ=50](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma50?p=stimulating-the-diffusion-model-for-image) · [σ=100](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma100?p=stimulating-the-diffusion-model-for-image) · [σ=150](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma150?p=stimulating-the-diffusion-model-for-image) · [σ=200](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma200?p=stimulating-the-diffusion-model-for-image) · [σ=250](https://paperswithcode.com/sota/color-image-denoising-on-kodak24-sigma250?p=stimulating-the-diffusion-model-for-image)
- CBSD68: [σ=15](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma15?p=stimulating-the-diffusion-model-for-image) · [σ=25](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma25?p=stimulating-the-diffusion-model-for-image) · [σ=50](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma50?p=stimulating-the-diffusion-model-for-image) · [σ=100](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma100?p=stimulating-the-diffusion-model-for-image) · [σ=150](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma150?p=stimulating-the-diffusion-model-for-image) · [σ=200](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma200?p=stimulating-the-diffusion-model-for-image) · [σ=250](https://paperswithcode.com/sota/color-image-denoising-on-cbsd68-sigma250?p=stimulating-the-diffusion-model-for-image)
- Urban100: [σ=15](https://paperswithcode.com/sota/color-image-denoising-on-urban100-sigma15-1?p=stimulating-the-diffusion-model-for-image) · [σ=25](https://paperswithcode.com/sota/color-image-denoising-on-urban100-sigma25?p=stimulating-the-diffusion-model-for-image) · [σ=50](https://paperswithcode.com/sota/color-image-denoising-on-urban100-sigma50?p=stimulating-the-diffusion-model-for-image)
- ImageNet: [σ=50](https://paperswithcode.com/sota/color-image-denoising-on-imagenet-sigma50?p=stimulating-the-diffusion-model-for-image) · [σ=100](https://paperswithcode.com/sota/color-image-denoising-on-imagenet-sigma100?p=stimulating-the-diffusion-model-for-image) · [σ=150](https://paperswithcode.com/sota/color-image-denoising-on-imagenet-sigma150?p=stimulating-the-diffusion-model-for-image) · [σ=200](https://paperswithcode.com/sota/color-image-denoising-on-imagenet-sigma200?p=stimulating-the-diffusion-model-for-image) · [σ=250](https://paperswithcode.com/sota/color-image-denoising-on-imagenet-sigma250?p=stimulating-the-diffusion-model-for-image)
## Quick Start
- Download the pre-trained unconditional diffusion [model](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt) (from [OpenAI](https://github.com/openai/guided-diffusion)) and place it in `./pre-trained/`.
- To get started quickly, just run:
```
python main_for_gaussian.py
```
## Evaluation
- All the [visual results](https://github.com/Li-Tong-621/DMID/releases/tag/v1.0) are available.
- Download the [testsets](https://github.com/Li-Tong-621/DMID/releases/tag/v1.0) (CBSD68, Kodak24, McMaster, Urban100, ImageNet) and place them in `./data/`, e.g. `./data/CBSD68`.
- Download the [testsets](https://github.com/Li-Tong-621/DMID/releases/tag/v1.0) after noise transformation (CC, PolyU, FMDD) and replace the folder named `./pre-trained` with the downloaded testsets.
- Download the pre-trained unconditional diffusion [model](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt) (from [OpenAI](https://github.com/openai/guided-diffusion)) and place it in `./pre-trained/`.
- To quickly reproduce the reported results, run
```
sh evaluate.sh
```
## Tools
- 🔨 All the setting details for the different tables can be found in the paper.
- 🔨 To calculate the timestep N for a given noise level, we provide two versions of the code; both can be found in [DMID/utils](https://github.com/Li-Tong-621/DMID/tree/main/utils):
```
python utils_cal_N.py
```
```
python utils_cal_N_2.py
```
- 🔨 To apply our improved noise transformation method yourself, or to denoise any given noisy image, first perform the noise transformation and then denoise the intermediate image:
```
python main_for_real_NT.py
python main_for_real.py
```
- The detailed tips are as follows:
  1. Note that in `main_for_real_NT.py` you need to set a few parameters, such as the parameter `sigma` related to the noise level of the images (line 430: `sigma=data_dct['sigma']`) and the parameter `eps` that determines the stopping condition (line 445: `eps=data_dct['eps']`).
  2. Running `main_for_real_NT.py` successfully generates a `.pt` file, after which you can run `main_for_real.py`. When running `main_for_real.py`, choose an appropriate timestep, as in line 155: `diffusion_times = find_N(sigma=2 * noises[str(index)]['noise_level'] / 255)`.
  3. In general, well-chosen parameters yield very good results. For easier operation or comparison, feel free to make changes, for example using a fixed stopping condition, such as a fixed 4500 or 5000 iterations per image.
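Tip 3 amounts to choosing between an `eps`-based stopping rule and a fixed iteration budget. A minimal sketch of that control flow, where `step_fn`, `eps`, and `max_iters` are illustrative stand-ins for the parameters discussed above, not the repository's actual API:

```python
import numpy as np

def run_noise_transformation(step_fn, x, eps=None, max_iters=5000):
    """Iterate a noise-transformation update until convergence.

    Stops early when the mean absolute change between iterations falls
    below `eps`; when `eps` is None, simply runs a fixed budget of
    `max_iters` iterations (e.g. 4500 or 5000, as suggested above).
    Returns the final image and the number of iterations performed.
    """
    for i in range(max_iters):
        x_new = step_fn(x)
        if eps is not None and np.abs(x_new - x).mean() < eps:
            return x_new, i + 1
        x = x_new
    return x, max_iters
```

A fixed budget makes runs directly comparable across images, at the cost of sometimes iterating longer than necessary.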
- 🔨 We provide new code for real-world image denoising (`main_for_real.py`), because the original real-world denoising code (`main_for_real_o.py`) contains some errors that we have not yet tracked down.
- 🔨 What if the images you want to denoise are not natural images?
  1. You can first try denoising directly with the current model. Our paper also uses non-natural image datasets; for example, the FMDD dataset actually consists of fluorescence cell images, which we treated as single-channel grayscale natural images (see the code for the specific handling).
  2. You can also train a diffusion model on your own dataset. We used a model trained by OpenAI on ImageNet, and you can use OpenAI's code to train a generative model for, e.g., polarized images. Potential issues you might encounter are discussed in [issue #9](https://github.com/Li-Tong-621/DMID/issues/9) and [issue #10](https://github.com/Li-Tong-621/DMID/issues/10).
## Results