Closed
Labels: bug (Something isn't working), stale (Issues that haven't received updates)
Description
Describe the bug
Dear all,
I used the training script and README from https://github.com/huggingface/diffusers/tree/main/examples/dreambooth to train with train_dreambooth_lora.py, and then tried the inference method described there.
Reproduction
This is the code I run after the update of train_dreambooth_lora.py. The failure is probably related to the switch to PEFT.
```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
pipe.unet.load_attn_procs("My Repo")
```
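For reference, a minimal sketch of why `load_attn_procs` raises `KeyError` on a PEFT-trained checkpoint (the key names and shapes below are illustrative assumptions, not the exact checkpoint contents):

```python
# Old-style diffusers LoRA checkpoints key their tensors as
# "lora.down.weight" / "lora.up.weight"; the loader infers the rank
# from the first dimension of the "down" matrix.
old_style = {
    "lora.down.weight": [[0.0] * 4 for _ in range(8)],  # shape (rank=8, 4)
    "lora.up.weight": [[0.0] * 8 for _ in range(4)],
}

# PEFT-serialized LoRA checkpoints use different key names
# (lora_A / lora_B style), so the old lookup finds nothing.
peft_style = {
    "lora_A.weight": [[0.0] * 4 for _ in range(8)],
    "lora_B.weight": [[0.0] * 8 for _ in range(4)],
}

def rank_from_old_format(value_dict):
    # Mirrors diffusers/loaders/unet.py line 264:
    #   rank = value_dict["lora.down.weight"].shape[0]
    return len(value_dict["lora.down.weight"])

print(rank_from_old_format(old_style))   # 8

try:
    rank_from_old_format(peft_style)
except KeyError as exc:
    print("KeyError:", exc)              # KeyError: 'lora.down.weight'
```

If that is indeed the cause, loading the checkpoint with `pipe.load_lora_weights(...)` (the pipeline-level loader that understands PEFT-format weights) instead of `pipe.unet.load_attn_procs(...)` may be worth trying.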
Logs
```
Traceback (most recent call last):
  File "test.py", line 32, in <module>
    pipe.unet.load_attn_procs("My repo")
  File "/anaconda3/envs/bias/lib/python3.9/site-packages/diffusers/loaders/unet.py", line 264, in load_attn_procs
    rank = value_dict["lora.down.weight"].shape[0]
KeyError: 'lora.down.weight'
```
System Info
- diffusers version: 0.24.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- PyTorch version (GPU?): 1.11.0+cu115 (True)
- Huggingface_hub version: 0.19.4
- Transformers version: 4.36.1
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Who can help?