# ddpm-mindspore

**Repository Path**: JeffDingAI/ddpm-mindspore

## Basic Information

- **Project Name**: ddpm-mindspore
- **Description**: Implementation of the Denoising Diffusion Probabilistic Model in MindSpore. The implementation is based on lucidrains's denoising-diffusion-pytorch.
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## README

# Denoising Diffusion Probabilistic Model, in MindSpore

Implementation of the [Denoising Diffusion Probabilistic Model](https://arxiv.org/abs/2006.11239) in MindSpore. The implementation is based on lucidrains's [denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).

## Modifications

This project was adapted from the MindSpore code of the author below, together with zyf-ai's code.

Original repository: https://github.com/lvyufeng/denoising-diffusion-mindspore

It adds hyperparameters that adapt training jobs to both the OpenI (启智) environment and non-OpenI environments.

## Usage

```python
from ddm import Unet, GaussianDiffusion, value_and_grad
from ddm.ops import randn

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 128,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
)

training_images = randn((1, 3, 128, 128)) # stand-in random tensor; real images should be normalized to [0, 1]
grad_fn = value_and_grad(diffusion, None, diffusion.trainable_params())
loss, grads = grad_fn(training_images)

# after a lot of training

sampled_images = diffusion.sample(batch_size = 1)
print(sampled_images.shape) # (1, 3, 128, 128)
```

A hedged sketch of a complete training loop with an optimizer appears at the end of this README.

Or, if you simply want to pass in a folder name and the desired image dimensions, you can use the `Trainer` class to easily train a model.

```python
from download import download
from ddm import Unet, GaussianDiffusion, Trainer

url = 'https://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz'
path = download(url, './102flowers', 'tar.gz')

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    timesteps = 10,           # number of steps
    sampling_timesteps = 5,   # number of sampling timesteps (uses DDIM for faster inference; see the DDIM paper)
    loss_type = 'l1'          # L1 or L2
)

trainer = Trainer(
    diffusion,
    path,
    train_batch_size = 1,
    train_lr = 8e-5,
    train_num_steps = 1000,          # total training steps
    gradient_accumulate_every = 2,   # gradient accumulation steps
    ema_decay = 0.995,               # exponential moving average decay
    amp_level = 'O1',                # turn on mixed precision
)

trainer.train()
```

> The `amp_level` of `Trainer` is automatically set to `O1` on Ascend.

## Citations

```bibtex
@inproceedings{NEURIPS2020_4c5bcfec,
 author = {Ho, Jonathan and Jain, Ajay and Abbeel, Pieter},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
 pages = {6840--6851},
 publisher = {Curran Associates, Inc.},
 title = {Denoising Diffusion Probabilistic Models},
 url = {https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf},
 volume = {33},
 year = {2020}
}
```
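## Training loop sketch

The first usage example computes `loss` and `grads` but glosses over the parameter update ("after a lot of training"). Below is a minimal, hedged sketch of what a full loop might look like. It assumes `value_and_grad` and `randn` behave as shown above and that MindSpore's standard `nn.Adam` optimizer can apply the returned gradients; the random batch is a placeholder, not a real data pipeline.

```python
import mindspore.nn as nn
from ddm import Unet, GaussianDiffusion, value_and_grad
from ddm.ops import randn

model = Unet(dim=64, dim_mults=(1, 2, 4, 8))
diffusion = GaussianDiffusion(model, image_size=128, timesteps=1000, loss_type='l1')

# standard MindSpore optimizer over the diffusion model's parameters
optimizer = nn.Adam(diffusion.trainable_params(), learning_rate=8e-5)
grad_fn = value_and_grad(diffusion, None, diffusion.trainable_params())

for step in range(1000):
    batch = randn((1, 3, 128, 128))  # stand-in batch; use real images normalized to [0, 1]
    loss, grads = grad_fn(batch)     # forward pass + gradient computation
    optimizer(grads)                 # apply the gradient update in place
```

Calling the optimizer cell directly with the gradient tuple is the idiomatic MindSpore way to apply an update; swap in `Trainer` (second example above) if you want batching, EMA, and mixed precision handled for you.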