[lora] fix zimage lora conversion to support more LoRAs. #13209
Conversation
@christopher5106 does this PR work for you?
    if has_non_diffusers_lora_id:

    def get_alpha_scales(down_weight, alpha_key):

Just moving it out of the if block since it can be generally used.
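For context, a minimal sketch of what a helper like `get_alpha_scales` typically computes. In kohya-style checkpoints, each LoRA module carries a scalar `alpha`, and the effective scale is `alpha / rank`; the function name matches the one quoted above, but this body is an illustrative assumption, not the PR's actual implementation:

```python
import math

def get_alpha_scales(down_weight, alpha):
    """Sketch (assumed implementation) of a kohya-style alpha -> scale split.

    down_weight: LoRA down-projection matrix, shape (rank, in_features).
    alpha: the scalar stored under the checkpoint's ".alpha" key.
    """
    rank = down_weight.shape[0]   # LoRA rank is the down matrix's first dim
    scale = alpha / rank          # effective LoRA scale per the alpha convention
    # Split evenly so scale_down * scale_up == scale; folding sqrt(scale)
    # into each matrix keeps the two matrices' magnitudes comparable.
    scale_down = math.sqrt(scale)
    scale_up = math.sqrt(scale)
    return scale_down, scale_up
```

Because the scale is folded into the weights during conversion, the separate `alpha` keys can then be dropped from the state dict.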
    lora_dot_up_key = ".lora.up.weight"
    has_lora_dot_format = any(lora_dot_down_key in k for k in state_dict)

    if has_lora_dot_format:

Main addition to support this LoRA checkpoint structure.
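To make the key-format point concrete, here is a hedged sketch of the kind of detection-and-remap step being discussed: some trainers export keys as `<module>.lora.down.weight` / `<module>.lora.up.weight` rather than the diffusers `lora_A` / `lora_B` convention. The function name and the exact target key names are assumptions for illustration:

```python
def convert_lora_dot_format(state_dict):
    """Sketch: remap '.lora.down/up.weight' keys to diffusers-style names.

    Hypothetical helper; the target 'lora_A'/'lora_B' suffixes follow the
    PEFT/diffusers convention but are an assumption here, not the PR's code.
    """
    lora_dot_down_key = ".lora.down.weight"
    lora_dot_up_key = ".lora.up.weight"
    has_lora_dot_format = any(lora_dot_down_key in k for k in state_dict)
    if not has_lora_dot_format:
        return state_dict  # already in the expected format
    converted = {}
    for k, v in state_dict.items():
        if lora_dot_down_key in k:
            converted[k.replace(lora_dot_down_key, ".lora_A.weight")] = v
        elif lora_dot_up_key in k:
            converted[k.replace(lora_dot_up_key, ".lora_B.weight")] = v
        else:
            converted[k] = v  # pass through non-LoRA keys unchanged
    return converted
```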
Yes, works for me.
Just because I was curious, I stripped all the redundant keys and the LoRA went down from 761 MB to 261 MB... why do people do this?
Strange, let me ask if someone knows. You managed to divide the size by more than 2 by deduplicating the q/k/v keys, their fused version, and the common down LoRA part, right?
@christopher5106 yeah, if you want I can upload the weights. Also, the LoRA quality is nowhere near the example, but I didn't lower the scale.
I'm fine, thanks, but I'm still wondering where it comes from. In the past, I saw that you managed mixtures of LoRAs for Flux.1, but I believe some users do manual hacks to merge multiple LoRAs into one, without knowing that half of their keys will be dropped; to me, it makes little sense to support these edge cases.
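The deduplication described above (separate q/k/v entries plus a fused copy sharing an identical matrix) can be sketched as a content-fingerprint pass over the state dict. This helper and its name are hypothetical, shown only to illustrate how redundant keys might be found:

```python
from collections import defaultdict

def find_duplicate_keys(state_dict, fingerprint=bytes):
    """Sketch: group state-dict keys whose values are byte-identical.

    `fingerprint` turns a value into a hashable signature; for torch
    tensors you would pass something like
    lambda t: t.cpu().numpy().tobytes() (assumed usage, not library API).
    Returns {kept_key: [redundant duplicate keys]}.
    """
    groups = defaultdict(list)
    for key, value in state_dict.items():
        groups[fingerprint(value)].append(key)
    # Every key past the first in each group points at the same data.
    return {keys[0]: keys[1:] for keys in groups.values() if len(keys) > 1}
```

Dropping every key listed in the returned values (and keeping one canonical copy) is what shrinks a merged checkpoint like the 761 MB one above.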
Failing test is unrelated. |


What does this PR do?
Fixes #13203
Additionally, fixes how "alpha" values are handled in the diffusers-format path.