[core / Lora] Fix lora scale on text encoders #5087
younesbelkada wants to merge 1 commit into huggingface:main
Conversation
Actually I think this is expected, since one method is just used for the forward pass (since we can't pass forward to …)
@patrickvonplaten exactly. For fusing the LoRA parameters into the text encoders, I'm not sure this is the best way to tackle it, actually.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
What does this PR do?
Found a potential bug in `adjust_lora_scale_text_encoder`. Under the hood, `xxxPipeline` calls this method to adjust the lora scale on the text encoder: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/lora.py#L28-L39

That method seems to force-assign the `lora_scale` of the LoRA layers to a new value. On the other hand, when calling `fuse_lora` it seems that we multiply by the `lora_scale`: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/lora.py#L130

Maybe the reason this has not been flagged by the CI is that `self.lora_layer.network_alpha / self.lora_layer.rank` was somehow equal to 1.
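To make the mismatch concrete, here is a minimal, self-contained sketch (not the actual diffusers code; all class and method names below are illustrative) contrasting the two update semantics the description points at: one helper assigns a new scale, the other multiplies the existing scale by it, and the two only agree when the pre-existing factor (e.g. `network_alpha / rank`) happens to be 1.

```python
# Illustrative sketch, assuming the "assign vs. multiply" reading of the bug report.
# None of these names are diffusers APIs.


class ToyLoraLinear:
    def __init__(self, network_alpha: float, rank: int):
        # Suppose the effective scale starts out as network_alpha / rank.
        self.network_alpha = network_alpha
        self.rank = rank
        self.lora_scale = network_alpha / rank

    def adjust_scale_by_assignment(self, lora_scale: float) -> None:
        # Mirrors the "force assign" behaviour: whatever scale was there before is lost.
        self.lora_scale = lora_scale

    def adjust_scale_by_multiplication(self, lora_scale: float) -> None:
        # Mirrors the "multiply" behaviour: the existing factor is preserved.
        self.lora_scale = self.lora_scale * lora_scale


# With network_alpha / rank == 1 the two behaviours coincide, which would
# explain why the CI never flagged the difference.
a = ToyLoraLinear(network_alpha=4.0, rank=4)
a.adjust_scale_by_assignment(0.5)

b = ToyLoraLinear(network_alpha=4.0, rank=4)
b.adjust_scale_by_multiplication(0.5)
assert a.lora_scale == b.lora_scale == 0.5

# With network_alpha / rank != 1 they diverge: assignment drops the
# alpha / rank factor, multiplication keeps it.
a = ToyLoraLinear(network_alpha=8.0, rank=4)
a.adjust_scale_by_assignment(0.5)        # -> 0.5

b = ToyLoraLinear(network_alpha=8.0, rank=4)
b.adjust_scale_by_multiplication(0.5)    # -> 1.0
assert a.lora_scale != b.lora_scale
```

Under that reading, the forward path and the fused path end up applying different effective scales whenever `network_alpha / rank != 1`, which would be consistent with the CI only passing because that ratio was equal to 1.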
cc @patrickvonplaten @sayakpaul