From c69ed50648e9f020064a2b8559bb743262c2902c Mon Sep 17 00:00:00 2001
From: Walter Hugo Lopez Pinaya
Date: Mon, 20 Mar 2023 22:31:04 +0000
Subject: [PATCH 1/5] Add README.md

Signed-off-by: Walter Hugo Lopez Pinaya
---
 tutorials/README.md | 125 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 125 insertions(+)
 create mode 100644 tutorials/README.md

diff --git a/tutorials/README.md b/tutorials/README.md
new file mode 100644
index 00000000..c7787871
--- /dev/null
+++ b/tutorials/README.md
@@ -0,0 +1,125 @@

# MONAI Generative Models Tutorials

This directory hosts the MONAI Generative Models tutorials.

## Requirements

To run the tutorials, you will need to install the Generative Models package.
Besides that, most of the examples and tutorials require
[matplotlib](https://matplotlib.org/) and [Jupyter Notebook](https://jupyter.org/).

These can be installed with the following:

```bash
python -m pip install -U pip
python -m pip install -U matplotlib
python -m pip install -U notebook
```

Some of the examples may require optional dependencies. In case of any optional import errors,
please install the relevant packages according to MONAI's [installation guide](https://docs.monai.io/en/latest/installation.html).
Alternatively, install all optional requirements with the following:

```bash
pip install -r requirements-dev.txt
```

## List of notebooks and examples

### Table of Contents

1. [Diffusion Models](#1.-Diffusion-Models)
   - [Image synthesis](#Image-synthesis-with-Diffusion-Models)
   - [Anomaly Detection](#Anomaly-Detection-with-Diffusion-Models)
2. [Latent Diffusion Models](#2.-Latent-Diffusion-Models)
   - [Image synthesis](#Image-synthesis-with-Latent-Diffusion-Models)
   - [Super Resolution](#Super-resolution-with-Latent-Diffusion-Models)
3. [VQ-VAE + Transformers](#3.-VQ-VAE-+-Transformers)
   - [Image synthesis](#Image-synthesis-with-VQ-VAE-+-Transformers)
   - [Anomaly Detection](#Anomaly-Detection-with-VQ-VAE-+-Transformers)

### 1. Diffusion Models

#### Image synthesis with Diffusion Models

* [Training a 3D Denoising Diffusion Probabilistic Model](./3d_ddpm/3d_ddpm_tutorial.ipynb): This tutorial shows how to easily
train a DDPM on 3D medical data. In this example, we use a downsampled version of the BraTS dataset. We show how to
make use of the UNet model and the Noise Scheduler necessary to train a diffusion model. Besides that, we show how to
use the DiffusionInferer class to simplify the training and sampling processes. Finally, after training the model, we
show how to use a Noise Scheduler with fewer timesteps to sample synthetic images.

* [Training a 2D Denoising Diffusion Probabilistic Model](./2d_ddpm/2d_ddpm_tutorial.ipynb): This tutorial shows how to easily
train a DDPM on 2D medical data. In this example, we use the MedNIST dataset, which makes it very suitable for beginners.

* [Comparing different noise schedulers](./2d_ddpm/2d_ddpm_compare_schedulers.ipynb): In this tutorial, we compare the
performance of different noise schedulers. We show how to sample a diffusion model using the DDPM, DDIM, and PNDM
schedulers and how different numbers of timesteps affect the quality of the samples.
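To give a flavour of what the scheduler comparison covers, here is a minimal sampling sketch. The network is untrained and all hyperparameters, shapes, and step counts are illustrative placeholders rather than the notebook's exact configuration:

```python
import torch

from generative.inferers import DiffusionInferer
from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDIMScheduler, DDPMScheduler

# Toy 2D network standing in for a trained model; all sizes are illustrative.
model = DiffusionModelUNet(
    spatial_dims=2, in_channels=1, out_channels=1,
    num_channels=(64, 128, 256), attention_levels=(False, False, True),
    num_res_blocks=1, num_head_channels=256,
).eval()

noise = torch.randn(1, 1, 64, 64)  # starting point of the reverse process

# Ancestral DDPM sampling over the full 1000 training timesteps...
ddpm = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(ddpm)
with torch.no_grad():
    ddpm_sample = inferer.sample(input_noise=noise, diffusion_model=model)

# ...versus 50 DDIM steps, trading a little quality for a large speed-up.
ddim = DDIMScheduler(num_train_timesteps=1000)
ddim.set_timesteps(num_inference_steps=50)
with torch.no_grad():
    ddim_sample = inferer.sample(input_noise=noise, diffusion_model=model, scheduler=ddim)
```

The PNDM scheduler can be swapped in the same way, which is exactly the comparison the notebook walks through.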
* [Training a 2D Denoising Diffusion Probabilistic Model with different parameterizations](./2d_ddpm/2d_ddpm_tutorial_v_prediction.ipynb):
In MONAI Generative Models, we support different parameterizations for the diffusion model (epsilon, sample, and
v-prediction). In this tutorial, we show how to train a DDPM using the v-prediction parameterization, which improves the
stability and convergence of the model.

* [Training a 2D DDPM using PyTorch Ignite](./2d_ddpm/2d_ddpm_compare_schedulers.ipynb): Here, we show how to train a DDPM
on medical data using PyTorch Ignite. We show how to use the DiffusionPrepareBatch to prepare the model inputs and
MONAI's SupervisedTrainer and SupervisedEvaluator to train DDPMs.

* [Using a 2D DDPM to inpaint images](./2d_ddpm/2d_ddpm_inpainting.ipynb): In this tutorial, we show how to use a DDPM to
inpaint 2D images from the MedNIST dataset using the RePaint method.

* [Generating conditional samples with a 2D DDPM using classifier-free guidance](./classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.ipynb):
This tutorial shows how easily we can train a Diffusion Model and generate conditional samples using classifier-free guidance in
the MONAI framework.

* [Training Diffusion models with Distributed Data Parallel](./distributed_training/ddpm_training_ddp.py):
* This example shows how to execute distributed training and evaluation based on PyTorch native DistributedDataParallel
module with torch.distributed.launch.

#### Anomaly Detection with Diffusion Models

* [Weakly Supervised Anomaly Detection with Classifier-free Guidance](./anomaly_detection/2d_classifierfree_guidance_anomalydetection_tutorial.ipynb):
This tutorial shows how to use a DDPM to perform weakly supervised anomaly detection using classifier-free guidance, based on the
method proposed by Sanchez et al. in [What is Healthy? Generative Counterfactual Diffusion for Lesion Localization](https://arxiv.org/abs/2207.12268) (DGM4MICCAI, MICCAI 2022).

### 2. Latent Diffusion Models

#### Image synthesis with Latent Diffusion Models

* [Training a 3D Latent Diffusion Model](./3d_ldm/3d_ldm_tutorial.ipynb): This tutorial shows how to train an LDM on 3D medical
data. In this example, we use the BraTS dataset. We show how to train an AutoencoderKL and connect it to an LDM. We also
comment on the importance of the scaling factor used to bring the latent representation of the AEKL into a suitable
range for the diffusion model. Finally, we show how to use the LatentDiffusionInferer class to simplify training and sampling.

* [Training a 2D Latent Diffusion Model](./2d_ldm/2d_ldm_tutorial.ipynb): This tutorial shows how to train an LDM on 2D medical
data from the MedNIST dataset. We show how to train an AutoencoderKL and connect it to an LDM.

* Training an Autoencoder with KL-regularization: In this section, we focus on training an AutoencoderKL on [2D](./2d_autoencoderkl/2d_autoencoderkl_tutorial.ipynb) and [3D](./3d_autoencoderkl/3d_autoencoderkl_tutorial.ipynb) medical data
that can be used as the compression model in a Latent Diffusion Model.

#### Super-resolution with Latent Diffusion Models

* [Super-resolution using the Stable Diffusion Upscalers method](./2d_super_resolution/2d_stable_diffusion_v2_super_resolution.ipynb):
In this tutorial, we show how to perform super-resolution on 2D images from the MedNIST dataset using the Stable
Diffusion Upscalers method. In this example, we show how to condition a latent diffusion model on a low-resolution image,
as well as how to use the DiffusionModelUNet's class_labels conditioning to condition the model on the level of noise
added to the image (aka "noise conditioning augmentation").
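Conceptually, the upscaler's conditioning works as in the following hypothetical sketch. The channel counts, shapes, and the noise-level range are illustrative assumptions, not the notebook's exact settings:

```python
import torch

from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# Illustrative shapes: a latent being denoised plus a low-resolution
# conditioning image resized to the same spatial size.
latents = torch.randn(1, 3, 16, 16)   # noisy latent being denoised
low_res = torch.rand(1, 1, 16, 16)    # low-resolution conditioning image

# "Noise conditioning augmentation": corrupt the conditioning image with a
# known amount of noise and tell the model how much was added.
noise_level = torch.randint(0, 350, (1,), dtype=torch.long)
noisy_low_res = scheduler.add_noise(
    original_samples=low_res, noise=torch.randn_like(low_res), timesteps=noise_level
)

unet = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=4,                 # 3 latent channels + 1 low-res channel
    out_channels=3,
    num_channels=(64, 128, 256),
    attention_levels=(False, False, True),
    num_res_blocks=1,
    num_head_channels=256,
    num_class_embeds=1000,         # embeds the noise level like a class label
)

timesteps = torch.randint(0, 1000, (1,), dtype=torch.long)
model_input = torch.cat([latents, noisy_low_res], dim=1)  # channel-wise concat
noise_pred = unet(x=model_input, timesteps=timesteps, class_labels=noise_level)
```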
### 3. VQ-VAE + Transformers

#### Image synthesis with VQ-VAE + Transformers

* [Training a 2D VQ-VAE + Autoregressive Transformers](./2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb): This tutorial shows how to train
a Vector-Quantized Variational Autoencoder + Transformer on the MedNIST dataset.

* Training VQ-VAEs and VQ-GANs: In this section, we show how to train Vector-Quantized Variational Autoencoders (on [2D](./2d_vqvae/2d_vqvae_tutorial.ipynb) and [3D](./3d_autoencoderkl/3d_autoencoderkl_tutorial.ipynb) data) and
show how to use the PatchDiscriminator class to train a [VQ-GAN](./2d_vqgan/2d_vqgan_tutorial.ipynb) and improve the quality of the generated images.

#### Anomaly Detection with VQ-VAE + Transformers

* [Anomaly Detection with 2D VQ-VAE + Autoregressive Transformers](./anomaly_detection/anomaly_detection_with_transformers.ipynb): This tutorial shows how to
train a Vector-Quantized Variational Autoencoder + Transformer on the MedNIST dataset and use it to estimate the likelihood
of test images belonging to the in-distribution class (used during training).

From 0c6c4311ac950b4b36d2d839f6c8dc3237db6b17 Mon Sep 17 00:00:00 2001
From: Walter Hugo Lopez Pinaya
Date: Mon, 20 Mar 2023 22:36:13 +0000
Subject: [PATCH 2/5] Add README.md

Signed-off-by: Walter Hugo Lopez Pinaya
---
 tutorials/README.md | 24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/tutorials/README.md b/tutorials/README.md
index c7787871..9752c888 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -25,21 +25,15 @@ pip install -r requirements-dev.txt

## List of notebooks and examples

### Table of Contents
1. Diffusion Models
   - Image Synthesis
   - Anomaly Detection
2. Latent Diffusion Models
   - Image Synthesis
   - Super Resolution
3. VQ-VAE + Transformers
   - Image Synthesis
   - Anomaly Detection

From f2d459f6d34c8ba05b89868e05465b66d8730af0 Mon Sep 17 00:00:00 2001
From: Walter Hugo Lopez Pinaya
Date: Mon, 20 Mar 2023 22:37:55 +0000
Subject: [PATCH 3/5] Add README.md

Signed-off-by: Walter Hugo Lopez Pinaya
---
 tutorials/README.md | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/tutorials/README.md b/tutorials/README.md
index 9752c888..c332dbf6 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -25,15 +25,10 @@ pip install -r requirements-dev.txt

## List of notebooks and examples

### Table of Contents
1. [Diffusion Models](#1-diffusion-models)
2. [Latent Diffusion Models](#2-latent-diffusion-models)
3. [VQ-VAE + Transformers](#3-vq-vae--transformers)
### 1. Diffusion Models

From 17cb8c255dde8dc34cdb15da20d7ac1161bade11 Mon Sep 17 00:00:00 2001
From: Walter Hugo Lopez Pinaya
Date: Mon, 20 Mar 2023 22:39:48 +0000
Subject: [PATCH 4/5] Fix links

Signed-off-by: Walter Hugo Lopez Pinaya
---
 tutorials/README.md | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/tutorials/README.md b/tutorials/README.md
index c332dbf6..530933d2 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -34,41 +34,41 @@

#### Image synthesis with Diffusion Models

* [Training a 3D Denoising Diffusion Probabilistic Model](./generative/3d_ddpm/3d_ddpm_tutorial.ipynb): This tutorial shows how to easily
train a DDPM on 3D medical data. In this example, we use a downsampled version of the BraTS dataset. We show how to
make use of the UNet model and the Noise Scheduler necessary to train a diffusion model. Besides that, we show how to
use the DiffusionInferer class to simplify the training and sampling processes. Finally, after training the model, we
show how to use a Noise Scheduler with fewer timesteps to sample synthetic images.

* [Training a 2D Denoising Diffusion Probabilistic Model](./generative/2d_ddpm/2d_ddpm_tutorial.ipynb): This tutorial shows how to easily
train a DDPM on 2D medical data. In this example, we use the MedNIST dataset, which makes it very suitable for beginners.

* [Comparing different noise schedulers](./generative/2d_ddpm/2d_ddpm_compare_schedulers.ipynb): In this tutorial, we compare the
performance of different noise schedulers. We show how to sample a diffusion model using the DDPM, DDIM, and PNDM
schedulers and how different numbers of timesteps affect the quality of the samples.

* [Training a 2D Denoising Diffusion Probabilistic Model with different parameterizations](./generative/2d_ddpm/2d_ddpm_tutorial_v_prediction.ipynb):
In MONAI Generative Models, we support different parameterizations for the diffusion model (epsilon, sample, and
v-prediction). In this tutorial, we show how to train a DDPM using the v-prediction parameterization, which improves the
stability and convergence of the model.

* [Training a 2D DDPM using PyTorch Ignite](./generative/2d_ddpm/2d_ddpm_compare_schedulers.ipynb): Here, we show how to train a DDPM
on medical data using PyTorch Ignite. We show how to use the DiffusionPrepareBatch to prepare the model inputs and
MONAI's SupervisedTrainer and SupervisedEvaluator to train DDPMs.

* [Using a 2D DDPM to inpaint images](./generative/2d_ddpm/2d_ddpm_inpainting.ipynb): In this tutorial, we show how to use a DDPM to
inpaint 2D images from the MedNIST dataset using the RePaint method.
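The DDPM tutorials above all share essentially the same training step. The following condensed sketch shows the pattern, with a random stand-in for the data loader and illustrative hyperparameters:

```python
import torch
import torch.nn.functional as F

from generative.inferers import DiffusionInferer
from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"

model = DiffusionModelUNet(
    spatial_dims=2, in_channels=1, out_channels=1,
    num_channels=(64, 128, 256), attention_levels=(False, False, True),
    num_res_blocks=1, num_head_channels=256,
).to(device)

scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(scheduler)
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-5)

# Stand-in for a MONAI DataLoader yielding batches of 2D images.
train_loader = [torch.rand(8, 1, 64, 64) for _ in range(2)]

for images in train_loader:
    images = images.to(device)
    noise = torch.randn_like(images)
    timesteps = torch.randint(
        0, scheduler.num_train_timesteps, (images.shape[0],), device=device
    ).long()

    # The inferer noises the images at the sampled timesteps and runs the UNet,
    # returning the model's noise prediction (epsilon parameterization).
    noise_pred = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
    loss = F.mse_loss(noise_pred.float(), noise.float())

    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
```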
* [Generating conditional samples with a 2D DDPM using classifier-free guidance](./generative/classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.ipynb):
This tutorial shows how easily we can train a Diffusion Model and generate conditional samples using classifier-free guidance in
the MONAI framework.

* [Training Diffusion models with Distributed Data Parallel](./generative/distributed_training/ddpm_training_ddp.py):
* This example shows how to execute distributed training and evaluation based on PyTorch native DistributedDataParallel
module with torch.distributed.launch.

#### Anomaly Detection with Diffusion Models

* [Weakly Supervised Anomaly Detection with Classifier-free Guidance](./generative/anomaly_detection/2d_classifierfree_guidance_anomalydetection_tutorial.ipynb):
This tutorial shows how to use a DDPM to perform weakly supervised anomaly detection using classifier-free guidance, based on the
method proposed by Sanchez et al. in [What is Healthy? Generative Counterfactual Diffusion for Lesion Localization](https://arxiv.org/abs/2207.12268) (DGM4MICCAI, MICCAI 2022).

### 2. Latent Diffusion Models

#### Image synthesis with Latent Diffusion Models

* [Training a 3D Latent Diffusion Model](./generative/3d_ldm/3d_ldm_tutorial.ipynb): This tutorial shows how to train an LDM on 3D medical
data. In this example, we use the BraTS dataset. We show how to train an AutoencoderKL and connect it to an LDM. We also
comment on the importance of the scaling factor used to bring the latent representation of the AEKL into a suitable
range for the diffusion model. Finally, we show how to use the LatentDiffusionInferer class to simplify training and sampling.

* [Training a 2D Latent Diffusion Model](./generative/2d_ldm/2d_ldm_tutorial.ipynb): This tutorial shows how to train an LDM on 2D medical
data from the MedNIST dataset. We show how to train an AutoencoderKL and connect it to an LDM.

* Training an Autoencoder with KL-regularization: In this section, we focus on training an AutoencoderKL on [2D](./generative/2d_autoencoderkl/2d_autoencoderkl_tutorial.ipynb) and [3D](./generative/3d_autoencoderkl/3d_autoencoderkl_tutorial.ipynb) medical data
that can be used as the compression model in a Latent Diffusion Model.
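To illustrate the scaling-factor point, here is a rough sketch with untrained toy networks and illustrative shapes. One common recipe, used in the 3D LDM tutorial, sets the factor to 1/std of the latents of a first training batch:

```python
import torch

from generative.inferers import LatentDiffusionInferer
from generative.networks.nets import AutoencoderKL, DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

# Illustrative (untrained) compression model; in practice this is trained first.
autoencoder = AutoencoderKL(
    spatial_dims=2, in_channels=1, out_channels=1,
    num_channels=(64, 128, 128), latent_channels=3,
    num_res_blocks=1, attention_levels=(False, False, False),
)

# The scaling factor keeps the latents in a range the diffusion model handles
# well; here it is estimated from the latents of a first batch of images.
first_batch = torch.rand(8, 1, 64, 64)  # stand-in for real training images
with torch.no_grad():
    z = autoencoder.encode_stage_2_inputs(first_batch)
scale_factor = 1 / torch.std(z)

unet = DiffusionModelUNet(
    spatial_dims=2, in_channels=3, out_channels=3,
    num_channels=(64, 128, 256), attention_levels=(False, False, True),
    num_res_blocks=1, num_head_channels=256,
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

# LatentDiffusionInferer applies the scaling factor around the autoencoder
# for both the training step and sampling.
inferer = LatentDiffusionInferer(scheduler, scale_factor=scale_factor)
noise = torch.randn(8, 3, 16, 16)  # latent-space noise (4x downsampling here)
timesteps = torch.randint(0, 1000, (8,)).long()
noise_pred = inferer(
    inputs=first_batch, autoencoder_model=autoencoder,
    diffusion_model=unet, noise=noise, timesteps=timesteps,
)
```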
#### Super-resolution with Latent Diffusion Models

* [Super-resolution using the Stable Diffusion Upscalers method](./generative/2d_super_resolution/2d_stable_diffusion_v2_super_resolution.ipynb):
In this tutorial, we show how to perform super-resolution on 2D images from the MedNIST dataset using the Stable
Diffusion Upscalers method. In this example, we show how to condition a latent diffusion model on a low-resolution image,
as well as how to use the DiffusionModelUNet's class_labels conditioning to condition the model on the level of noise
added to the image (aka "noise conditioning augmentation").

### 3. VQ-VAE + Transformers

#### Image synthesis with VQ-VAE + Transformers

* [Training a 2D VQ-VAE + Autoregressive Transformers](./generative/2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb): This tutorial shows how to train
a Vector-Quantized Variational Autoencoder + Transformer on the MedNIST dataset.

* Training VQ-VAEs and VQ-GANs: In this section, we show how to train Vector-Quantized Variational Autoencoders (on [2D](./generative/2d_vqvae/2d_vqvae_tutorial.ipynb) and [3D](./generative/3d_autoencoderkl/3d_autoencoderkl_tutorial.ipynb) data) and
show how to use the PatchDiscriminator class to train a [VQ-GAN](./generative/2d_vqgan/2d_vqgan_tutorial.ipynb) and improve the quality of the generated images.

#### Anomaly Detection with VQ-VAE + Transformers

* [Anomaly Detection with 2D VQ-VAE + Autoregressive Transformers](./generative/anomaly_detection/anomaly_detection_with_transformers.ipynb): This tutorial shows how to
train a Vector-Quantized Variational Autoencoder + Transformer on the MedNIST dataset and use it to estimate the likelihood
of test images belonging to the in-distribution class (used during training).
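The likelihood computation at the heart of this anomaly-detection approach can be sketched as follows. The tokens are random stand-ins and the transformer configuration is illustrative; real code would take the token sequence from the trained VQ-VAE's codebook indices:

```python
import torch
import torch.nn.functional as F

from generative.networks.nets import DecoderOnlyTransformer

# Hypothetical setup: `tokens` stand in for a flattened sequence of VQ-VAE
# codebook indices for one image (often codebook size + a begin token).
vocab_size = 256
seq_len = 64       # e.g. an 8x8 grid of latent codes
tokens = torch.randint(0, vocab_size, (1, seq_len))

transformer = DecoderOnlyTransformer(
    num_tokens=vocab_size, max_seq_len=seq_len,
    attn_layers_dim=64, attn_layers_depth=2, attn_layers_heads=4,
)

# Log-likelihood of the sequence under the autoregressive model: the sum of
# the log-probability each token receives given its predecessors (using the
# usual next-token convention, where position i predicts token i + 1).
with torch.no_grad():
    logits = transformer(tokens)                       # (1, seq_len, vocab)
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs[:, :-1].gather(-1, tokens[:, 1:].unsqueeze(-1))
    image_ll = token_ll.sum()  # low values flag likely out-of-distribution images
```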
From 85ecea52ae2fc62cf88b0d203d669c0a42d57827 Mon Sep 17 00:00:00 2001
From: Walter Hugo Lopez Pinaya
Date: Mon, 20 Mar 2023 22:41:20 +0000
Subject: [PATCH 5/5] Fix Typo

Signed-off-by: Walter Hugo Lopez Pinaya
---
 tutorials/README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tutorials/README.md b/tutorials/README.md
index 530933d2..7877345f 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -62,8 +62,7 @@

This tutorial shows how easily we can train a Diffusion Model and generate conditional samples using classifier-free guidance in
the MONAI framework.

* [Training Diffusion models with Distributed Data Parallel](./generative/distributed_training/ddpm_training_ddp.py): This example shows how to execute
distributed training and evaluation based on PyTorch native DistributedDataParallel module with torch.distributed.launch.

#### Anomaly Detection with Diffusion Models
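For reference, the distributed training script mentioned above is meant to be launched through torch.distributed.launch. A typical single-node invocation would look something like the following, where the GPU count and any script-specific flags are illustrative and should be checked against the script's own argument parser:

```bash
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 \
    ./generative/distributed_training/ddpm_training_ddp.py
```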