
This is an easy-to-understand, simplified, broken-down implementation of Diffusion Models written in PyTorch. The architecture is borrowed from the paper "Denoising Diffusion Probabilistic Models".


Diffusion Models

DDPM was the first paper to demonstrate that diffusion models can generate high-quality images. The authors showed that a certain parameterization of diffusion models is equivalent to denoising score matching over multiple noise levels during training, and to annealed Langevin dynamics during sampling, and that this parameterization yields the best sample quality.

Access the Colab notebook at: DDPM.ipynb

Diffusion Models are generative models, meaning that they are used to generate data similar to the data on which they are trained. Fundamentally, Diffusion Models work by destroying training data through the successive addition of Gaussian noise, and then learning to recover the data by reversing this noising process. After training, we can use the Diffusion Model to generate data by simply passing randomly sampled noise through the learned denoising process.

Advantages

Beyond cutting-edge image quality, Diffusion Models come with a host of other benefits, including not requiring adversarial training. The difficulties of adversarial training are well-documented, and in cases where non-adversarial alternatives offer comparable performance and training efficiency, it is usually best to use them. On the topic of training efficiency, Diffusion Models also have the added benefits of scalability and parallelizability.

Forward Diffusion

A Diffusion Model consists of a forward process (or diffusion process), in which a datum (generally an image) is progressively noised.
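Concretely, the forward process fixes a variance schedule β₁, …, β_T and adds Gaussian noise at each step. With α_t = 1 − β_t and ᾱ_t = ∏ₛ₌₁ᵗ αₛ, a convenient property is that any noised step can be sampled directly from the clean image: q(x_t | x₀) = N(√ᾱ_t x₀, (1 − ᾱ_t) I). Below is a minimal PyTorch sketch of this closed-form noising, using the paper's linear schedule; it illustrates the idea and is not this repository's exact code.

```python
# Minimal sketch of the closed-form forward (noising) process from the DDPM
# paper; illustrative, not this repository's exact code.
import torch

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear variance schedule (paper's values)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # alpha_bar_t = prod_{s<=t} alpha_s

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(ab_t) x_0, (1 - ab_t) I) in one shot."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)   # t is a batch of timestep indices
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
```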

Reverse Process

As mentioned previously, the "magic" of diffusion models comes in the reverse process. During training, the model learns to reverse this diffusion process in order to generate new data.
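As a hedged sketch of what one reverse step looks like in code (Algorithm 2 of the paper): given x_t and a trained noise-prediction network ε_θ (here an assumed `model(x, t)` interface, such as the U-Net in this repo, not shown), we compute the posterior mean and add noise scaled by σ_t:

```python
@torch.no_grad()
def p_sample(model, x_t, t):
    """One reverse step: sample x_{t-1} ~ p_theta(x_{t-1} | x_t) at integer timestep t.
    Assumes the schedule tensors above live on the same device as x_t."""
    beta_t, alpha_t, ab_t = betas[t], alphas[t], alpha_bars[t]
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    eps = model(x_t, t_batch)                       # predicted noise eps_theta(x_t, t)
    # Posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
    mean = (x_t - beta_t / (1.0 - ab_t).sqrt() * eps) / alpha_t.sqrt()
    if t == 0:
        return mean                                 # no noise is added at the last step
    return mean + beta_t.sqrt() * torch.randn_like(x_t)  # sigma_t^2 = beta_t choice
```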

Training and Sampling Algorithm
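The paper's Algorithm 1 (training) and Algorithm 2 (sampling) each reduce to a few lines. The sketch below reuses `q_sample` and `p_sample` from above and assumes an external `model` and `optimizer`; it shows the structure of the algorithms rather than this repo's exact training loop.

```python
import torch.nn.functional as F

def train_step(model, optimizer, x0):
    """Algorithm 1: pick random timesteps, noise the images, regress the noise.
    Assumes the schedule tensors and data are on the same device."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)                                # eps ~ N(0, I)
    loss = F.mse_loss(model(q_sample(x0, t, noise), t), noise)  # L_simple
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(model, shape):
    """Algorithm 2: start from pure Gaussian noise and run the full reverse chain."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        x = p_sample(model, x, t)
    return x
```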
