From 1b3d9085a56bc8454f1947135a3ba6569a315229 Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Wed, 3 May 2023 10:50:20 -0700
Subject: [PATCH 1/3] first draft

---
 docs/source/en/_toctree.yml               |  2 ++
 docs/source/en/training/adapt_a_model.mdx | 40 +++++++++++++++++++++++
 2 files changed, 42 insertions(+)
 create mode 100644 docs/source/en/training/adapt_a_model.mdx

diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index fc101347a6e9..d9fbc8e3293a 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -60,6 +60,8 @@
 - sections:
   - local: training/overview
     title: Overview
+  - local: training/adapt_a_model
+    title: Adapt a model to a new task
   - local: training/unconditional_training
     title: Unconditional image generation
   - local: training/text_inversion
diff --git a/docs/source/en/training/adapt_a_model.mdx b/docs/source/en/training/adapt_a_model.mdx
new file mode 100644
index 000000000000..c7d7e5d55da1
--- /dev/null
+++ b/docs/source/en/training/adapt_a_model.mdx
@@ -0,0 +1,40 @@
+# Adapt a model to a new task
+
+Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.
+
+This guide will show you how to adapt a pretrained text-to-image model for inpainting. You'll initialize a pretrained [`UNet2DConditionModel`] and modify it's architecture to enable training for inpainting.
+
+## Configure UNet2DConditionModel parameters
+
+A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, you can load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
+
+```py
+from diffusers import StableDiffusionPipeline
+
+pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipeline.unet.config["in_channels"]
+4
+```
+
+Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting):
+
+```py
+from diffusers import StableDiffusionPipeline
+
+pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
+pipeline.unet.config["in_channels"]
+9
+```
+
+To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9. Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.
+
+```py
+from diffusers import UNet2DConditionModel
+
+model_id = "runwayml/stable-diffusion-v1-5"
+unet = UNet2DConditionModel.from_pretrained(
+    model_id, subfolder="unet", in_channels=9, low_cpu_mem_usage=False, ignore_mismatched_sizes=True
+)
+```
+
+The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the `unet` model weights are randomly initialized. It is important to finetune the model for inpainting now; otherwise the model returns noise.
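The draft above ends by noting that the other components of the text-to-image checkpoint keep their pretrained weights while the adapted `unet` needs finetuning. As a hedged sketch that goes beyond the patch itself, one way to keep those pretrained components around the modified UNet is to pass it as an override to `StableDiffusionInpaintPipeline.from_pretrained`; the checkpoint, the pipeline class, and the override pattern are assumptions here, not something the patch prescribes.

```py
from diffusers import StableDiffusionInpaintPipeline, UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"

# Adapt the text-to-image UNet to accept 9 input channels, as described in the draft above.
unet = UNet2DConditionModel.from_pretrained(
    model_id, subfolder="unet", in_channels=9, low_cpu_mem_usage=False, ignore_mismatched_sizes=True
)

# Assumed wiring (not from the patch): reuse the pretrained VAE, text encoder, tokenizer,
# and scheduler from the same checkpoint by overriding only the UNet. The re-initialized
# input channels still need finetuning before the pipeline produces anything but noise.
pipeline = StableDiffusionInpaintPipeline.from_pretrained(model_id, unet=unet)
```

Overriding the component at load time avoids saving and re-loading an intermediate UNet checkpoint just to assemble a pipeline for training.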
From 46fcf4870c97ee6564b009367289be84ef4416d5 Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Mon, 8 May 2023 12:05:51 -0700
Subject: [PATCH 2/3] apply feedback

---
 docs/source/en/training/adapt_a_model.mdx | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/source/en/training/adapt_a_model.mdx b/docs/source/en/training/adapt_a_model.mdx
index c7d7e5d55da1..7b802b07bafa 100644
--- a/docs/source/en/training/adapt_a_model.mdx
+++ b/docs/source/en/training/adapt_a_model.mdx
@@ -2,11 +2,11 @@
 
 Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.
 
-This guide will show you how to adapt a pretrained text-to-image model for inpainting. You'll initialize a pretrained [`UNet2DConditionModel`] and modify it's architecture to enable training for inpainting.
+This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [`UNet2DConditionModel`].
 
 ## Configure UNet2DConditionModel parameters
 
-A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, you can load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
+A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
 
 ```py
 from diffusers import StableDiffusionPipeline
@@ -26,7 +26,9 @@ pipeline.unet.config["in_channels"]
 9
 ```
 
-To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9. Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.
+To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.
+
+Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.
 
 ```py
 from diffusers import UNet2DConditionModel
@@ -37,4 +39,4 @@ unet = UNet2DConditionModel.from_pretrained(
 )
 ```
 
-The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the `unet` model weights are randomly initialized. It is important to finetune the model for inpainting now; otherwise the model returns noise.
+The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the `unet` model weights are randomly initialized. It is important to finetune the model for inpainting now because otherwise the model returns noise.
From 34857077e4518c50b7dded713ff09a19a313c84d Mon Sep 17 00:00:00 2001
From: Steven Liu
Date: Wed, 10 May 2023 11:47:56 -0700
Subject: [PATCH 3/3] conv_in.weight thrown away

---
 docs/source/en/training/adapt_a_model.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/training/adapt_a_model.mdx b/docs/source/en/training/adapt_a_model.mdx
index 7b802b07bafa..f1af5fca57a2 100644
--- a/docs/source/en/training/adapt_a_model.mdx
+++ b/docs/source/en/training/adapt_a_model.mdx
@@ -39,4 +39,4 @@ unet = UNet2DConditionModel.from_pretrained(
 )
 ```
 
-The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the `unet` model weights are randomly initialized. It is important to finetune the model for inpainting now because otherwise the model returns noise.
+The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise.
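Beyond the wording change in this patch, a minimal sketch (not part of the PR) can make the `conv_in.weight` point concrete: after loading the adapted UNet, only the input convolution changes shape, and the model still runs end to end on a 9-channel input. The printed shapes assume the stable-diffusion-v1-5 configuration (`block_out_channels[0] == 320`, `cross_attention_dim == 768`).

```py
import torch
from diffusers import UNet2DConditionModel

# Load the adapted UNet exactly as in the guide above.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    in_channels=9,
    low_cpu_mem_usage=False,
    ignore_mismatched_sizes=True,
)

# The input convolution now expects 9 channels, so its weights were re-initialized
# instead of being copied from the 4-channel checkpoint.
print(unet.conv_in.weight.shape)  # torch.Size([320, 9, 3, 3])

# Dummy forward pass with a 9-channel sample (4 noisy latents + 1 mask + 4 masked-image
# latents) to confirm the adapted architecture runs; the text embedding shape matches
# the CLIP encoder used by stable-diffusion-v1-5.
sample = torch.randn(1, 9, 64, 64)
encoder_hidden_states = torch.randn(1, 77, 768)
with torch.no_grad():
    noise_pred = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample
print(noise_pred.shape)  # torch.Size([1, 4, 64, 64])
```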