Describe the bug
pipeline_stable_diffusion.py tries to encode the prompt even when no prompt is given and prompt_embeds and negative_prompt_embeds are provided.
Reproduction
Use --pre_compute_text_embeddings when training with the DreamBooth script, with train_dreambooth_lora.py updated per #6135 by @sayakpaul. Even if prompt_embeds and negative_prompt_embeds are provided to the pipeline, it still tries to find a prompt to encode and returns NoneType. As a result, --pre_compute_text_embeddings can effectively only be used with DeepFloyd IF. A minimal sketch of the call pattern is shown below.
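Not a verbatim excerpt from the training script, just a minimal sketch of the failing call pattern under the assumptions above: prompt embeddings are pre-computed with the CLIP text encoder and passed to a StableDiffusionPipeline built without a text encoder, which is what --pre_compute_text_embeddings does during validation. The checkpoint id, the `embed` helper, and the prompts are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)

# Pre-compute prompt embeddings once, as the training script does
# when --pre_compute_text_embeddings is set (hypothetical helper).
def embed(prompt: str) -> torch.Tensor:
    tokens = tokenizer(
        prompt,
        padding="max_length",
        max_length=tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(device)
    with torch.no_grad():
        return text_encoder(tokens)[0]

prompt_embeds = embed("a photo of sks dog")  # placeholder prompt
negative_prompt_embeds = embed("")

# Pipeline built without text encoder/tokenizer, mirroring the
# pre-computed-embeddings setup in the training script.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, text_encoder=None, tokenizer=None
).to(device)

# Expected: an image generated from the embeddings alone.
# Observed (per this report): the pipeline still tries to encode the
# prompt, which is None, and fails with a NoneType error.
image = pipe(
    prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    num_inference_steps=25,
).images[0]
```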
Logs
No response
System Info
- diffusers version: 0.24.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- PyTorch version (GPU?): 1.11.0+cu115 (True)
- Huggingface_hub version: 0.19.4
- Transformers version: 4.36.1
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Who can help?
@yiyixuxu @DN6 @sayakpaul @patrickvonplaten