# Add Stable Diffusion 3 #8483 (Merged)
## Commits (53)

- `e979394` up (DN6)
- `15fb675` add sd3 (DN6)
- `e82f8c4` update (DN6)
- `761e677` update (DN6)
- `02881e2` add tests (DN6)
- `db90e8d` fix copies (DN6)
- `215c2e0` fix docs (DN6)
- `cea25b6` update (DN6)
- `6b49a13` add dreambooth lora (DN6)
- `8eb8e21` add LoRA (DN6)
- `617997d` update (DN6)
- `706f1b9` update (DN6)
- `167fa3b` update (DN6)
- `ed45a09` update (DN6)
- `f7794f9` import fix (sayakpaul)
- `3ea4669` update (DN6)
- `07dafaf` Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_dif… (DN6)
- `0966902` import fix 2 (sayakpaul)
- `a56bd28` update (DN6)
- `d28e5a1` Merge branch 'sd3' of https://github.com/huggingface/diffusers into sd3 (DN6)
- `f7f0d52` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `ab3265b` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `1dd1a2e` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `1e0c205` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `bf53ddd` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `e323dcb` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `d63bd33` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `4a7075b` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `df84635` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `cb0ab6c` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `249fdd4` Update src/diffusers/models/autoencoders/autoencoder_kl.py (DN6)
- `cd1cd0e` update (DN6)
- `aa6227d` update (DN6)
- `06400e7` update (DN6)
- `c356aed` fix ckpt id (sayakpaul)
- `59b321d` fix more ids (sayakpaul)
- `f5e29f8` update (DN6)
- `da68e58` Merge branch 'sd3' of https://github.com/huggingface/diffusers into sd3 (DN6)
- `ba76d29` missing doc (sayakpaul)
- `cbd29d2` Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py (DN6)
- `f22c199` Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py (DN6)
- `61e60db` Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion… (DN6)
- `c921fd2` Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion… (DN6)
- `0297d9e` update' (DN6)
- `877e765` Merge branch 'main' into sd3 (sayakpaul)
- `1a69918` fix (sayakpaul)
- `7f611fb` update (DN6)
- `e2389fe` Merge branch 'sd3' of https://github.com/huggingface/diffusers into sd3 (DN6)
- `752febe` Update src/diffusers/models/autoencoders/autoencoder_kl.py (yiyixuxu)
- `e78462e` Update src/diffusers/models/autoencoders/autoencoder_kl.py (yiyixuxu)
- `b2dba96` note on gated access. (sayakpaul)
- `04e38b3` requirements (sayakpaul)
- `ba17c16` licensing (sayakpaul)
**New file (19 additions)**

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# SD3 Transformer Model

The Transformer model introduced in [Stable Diffusion 3](https://hf.co/papers/2403.03206); its novelty lies in the MMDiT transformer block.

## SD3Transformer2DModel

[[autodoc]] SD3Transformer2DModel
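The MMDiT block can be summarized as joint attention over the concatenated image and text tokens, with each modality projected through its own weights. The following is a deliberately minimal NumPy sketch of that idea; the dimensions and random weights are made up for illustration, and this is not the diffusers implementation:

```python
import numpy as np

# Toy sketch of the MMDiT idea from SD3 (illustration only, NOT the diffusers
# implementation): image and text tokens get separate projection weights, but
# attention runs jointly over the concatenated sequence, so information flows
# in both directions between the modalities.
rng = np.random.default_rng(0)
d = 8
img = rng.standard_normal((4, d))  # 4 image tokens
txt = rng.standard_normal((3, d))  # 3 text tokens

# Separate per-modality Q/K/V weights ("separate weights for the two modalities").
Wq_i, Wk_i, Wv_i = (rng.standard_normal((d, d)) for _ in range(3))
Wq_t, Wk_t, Wv_t = (rng.standard_normal((d, d)) for _ in range(3))

# Project each modality with its own weights, then concatenate into one sequence.
q = np.concatenate([img @ Wq_i, txt @ Wq_t])
k = np.concatenate([img @ Wk_i, txt @ Wk_t])
v = np.concatenate([img @ Wv_i, txt @ Wv_t])

# Joint softmax attention over all 7 tokens (image and text together).
scores = q @ k.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v

img_out, txt_out = out[:4], out[4:]  # split back into the two streams
print(img_out.shape, txt_out.shape)  # (4, 8) (3, 8)
```

The real block adds multi-head attention, adaptive layer norm conditioned on the timestep, and per-modality MLPs; the sketch only shows the joint-attention core.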
**docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md** (230 additions)
# Stable Diffusion 3

Stable Diffusion 3 (SD3) was proposed in [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://arxiv.org/pdf/2403.03206.pdf) by Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Muller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach.

The abstract from the paper is:

*Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a recent generative model formulation that connects data and noise in a straight line. Despite its better theoretical properties and conceptual simplicity, it is not yet decisively established as standard practice. In this work, we improve existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales. Through a large-scale study, we demonstrate the superior performance of this approach compared to established diffusion formulations for high-resolution text-to-image synthesis. Additionally, we present a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens, improving text comprehension, typography, and human preference ratings. We demonstrate that this architecture follows predictable scaling trends and correlates lower validation loss to improved text-to-image synthesis as measured by various metrics and human evaluations.*

## Usage Example

_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form, and accept the gate. Once you are in, you need to log in so that your system knows you've accepted the gate._

Use the command below to log in:

```bash
huggingface-cli login
```

<Tip>

The SD3 pipeline uses three text encoders to generate an image. Model offloading is necessary in order for it to run on most commodity hardware. Please use the `torch.float16` data type for additional memory savings.

</Tip>

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

## Memory Optimizations for SD3

SD3 uses three text encoders, one of which is the very large T5-XXL model. This makes it challenging to run the model on GPUs with less than 24GB of VRAM, even when using `fp16` precision. The following section outlines a few memory optimizations in Diffusers that make it easier to run SD3 on low-resource hardware.

### Running Inference with Model Offloading

The most basic memory optimization available in Diffusers allows you to offload the components of the model to the CPU during inference in order to save memory, at the cost of a slight increase in inference latency. Model offloading only moves a model component onto the GPU when it needs to be executed, while keeping the remaining components on the CPU.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

### Dropping the T5 Text Encoder during Inference

Removing the memory-intensive 4.7B parameter T5-XXL text encoder during inference can significantly decrease the memory requirements for SD3 with only a slight loss in performance.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-no-T5.png")
```

### Using a Quantized Version of the T5 Text Encoder

We can leverage the `bitsandbytes` library to load and quantize the T5-XXL text encoder to 8-bit precision. This allows you to keep using all three text encoders while only slightly impacting performance.

First, install the `bitsandbytes` library.

```shell
pip install bitsandbytes
```

Then load the T5-XXL model using the `BitsAndBytesConfig`.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=quantization_config,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder,
    device_map="balanced",
    torch_dtype=torch.float16
)

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-8bit-T5.png")
```

You can find the end-to-end script [here](https://gist.github.com/sayakpaul/82acb5976509851f2db1a83456e504f1).

## Performance Optimizations for SD3

### Using Torch Compile to Speed Up Inference

Using compiled components in the SD3 pipeline can speed up inference by as much as 4X. The following code snippet demonstrates how to compile the Transformer and VAE components of the SD3 pipeline.

```python
import torch
from diffusers import StableDiffusion3Pipeline

torch.set_float32_matmul_precision("high")

torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16
).to("cuda")
pipe.set_progress_bar_config(disable=True)

pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)

pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

# Warm up the compiled graphs
prompt = "a photo of a cat holding a sign that says hello world"
for _ in range(3):
    _ = pipe(prompt=prompt, generator=torch.manual_seed(1))

# Run inference
image = pipe(prompt=prompt, generator=torch.manual_seed(1)).images[0]
image.save("sd3_hello_world.png")
```

Check out the full script [here](https://gist.github.com/sayakpaul/508d89d7aad4f454900813da5d42ca97).

## Loading the original checkpoints via `from_single_file`

The `SD3Transformer2DModel` and `StableDiffusion3Pipeline` classes support loading the original checkpoints via the `from_single_file` method. This method allows you to load the original single-file checkpoints distributed for the model.

### Loading the original checkpoints for the `SD3Transformer2DModel`

```python
from diffusers import SD3Transformer2DModel

model = SD3Transformer2DModel.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium.safetensors")
```

### Loading the single checkpoint for the `StableDiffusion3Pipeline`

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel

text_encoder_3 = T5EncoderModel.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", subfolder="text_encoder_3", torch_dtype=torch.float16)
pipe = StableDiffusion3Pipeline.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors", torch_dtype=torch.float16, text_encoder_3=text_encoder_3)
```

<Tip>
`from_single_file` support for the `fp8` version of the checkpoints is coming soon. Watch this space.
</Tip>

## StableDiffusion3Pipeline

[[autodoc]] StableDiffusion3Pipeline
  - all
  - __call__
**docs/source/en/api/schedulers/flow_match_euler_discrete.md** (18 additions)
# FlowMatchEulerDiscreteScheduler

`FlowMatchEulerDiscreteScheduler` is based on the flow-matching sampling introduced in [Stable Diffusion 3](https://arxiv.org/abs/2403.03206).

## FlowMatchEulerDiscreteScheduler

[[autodoc]] FlowMatchEulerDiscreteScheduler
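The Euler update that flow-matching sampling performs can be sketched in a few lines. The toy velocity field and dimensions below are made up purely for illustration and are not the diffusers implementation, which handles timestep shifting, schedules, and model calls:

```python
import numpy as np

# Hedged toy sketch (NOT the diffusers scheduler): flow matching frames
# sampling as integrating an ODE dx/dt = v(x, t) from t = 1 (pure noise)
# toward t = 0 (data), where a model predicts the velocity v.
# One Euler step of that integration:
def euler_step(x, t, t_next, velocity_fn):
    return x + (t_next - t) * velocity_fn(x, t)

# A made-up linear velocity field, standing in for the learned model.
def toy_velocity(x, t):
    return -x

x = np.ones(4)                  # stand-in for the initial noise latents
ts = np.linspace(1.0, 0.0, 5)   # 4 Euler steps from t=1 down to t=0
for t, t_next in zip(ts[:-1], ts[1:]):
    x = euler_step(x, t, t_next, toy_velocity)
print(x[0])  # 1.25**4 = 2.44140625
```

With `dt = -0.25` and `v = -x`, each step multiplies `x` by `1.25`, so the loop illustrates only the mechanics of the discretization, not a realistic trajectory.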
**Review discussion**

> Was this meant to be [here](https://huggingface.co/stabilityai/)?

Reply:

> Yeah feel free to shorten it. I wanted to unfurl it.