Fixed multirank + multialpha for sequential LoRAs, added correct support of LoRA-C3Lier conversion #937
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Thanks @kovalexal. Could you please run the style checks?
@BenjaminBossan Hi! Sorry for that, fixed the styling.
pacman100 left a comment:
Thank you @kovalexal for all the work on supporting multirank and multialpha LoRA layers. Left a comment.
@pacman100 It seems strange that this test is failing on Windows. What diffusers version is used for running the tests?
That could be the explanation. We're working on a proper fix; in the meantime we have pinned the diffusers version in #936, so this branch probably needs to merge in that fix. Edit: Btw, the failing test is probably unrelated to Windows; it just happened that Windows was the first CI run to fail, which cancelled all the other runs.
@BenjaminBossan OK, got it. Should I merge that fix into this branch?
If @pacman100 is fine with the PR as is, I'd suggest we merge it and do a full, deep review of his PR once it's finished.
Hello, thank you @kovalexal 🤗! Merging it.
Merged commit 2433abb into huggingface:smangrul/lora-supoort-multirank-multialpha
* support multiple ranks and alphas
* Update lora.py
* Update lora.py
* commit suggestions (Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>)
* address comments (Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>)
* Fixed multirank + multialpha for sequential LoRAs, added correct support of LoRA-C3Lier conversion (#937)
* Fixed multirank multialpha for sequential loras, added tests, fixed docs
* Refactored kohya_ss conversion script for proper support of LoRA-C3Lier
* Fixed styling
* Removed old comment from docstring
* shift `scale_layer`/`unscale_layer` to `LoraLayer` class to support all the child classes
* support multiple active adapters
* add `active_adapters` property (Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>)
* fix bug related to active adapter of `ModulesToSaveWrapper`
* revert the change wrt active_adapter assignment (Co-Authored-By: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>)
* Apply suggestions from code review (Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>)
* Apply suggestions from code review (Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>)
* addressing comments
* address comments
* address comment

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Alexander Kovalchuk <kovalexal@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
@pacman100 Hi!
As discussed in #873, here are the modifications I've made to fix the issue I had mentioned.
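For context, a minimal sketch of the multirank/multialpha configuration this work builds on. The `rank_pattern`/`alpha_pattern` field names follow the released PEFT API; the `gpt2` base model and the `c_attn`/`c_proj` module names are placeholders, not taken from this PR.

```python
# Minimal sketch (not from this PR): per-module rank/alpha overrides in PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

config = LoraConfig(
    r=8,                              # default rank for all targeted modules
    lora_alpha=16,                    # default alpha
    target_modules=["c_attn", "c_proj"],
    rank_pattern={"c_proj": 4},       # override the rank for modules matching this name
    alpha_pattern={"c_proj": 8},      # override the alpha for the same modules
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```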
Also, I would like to add proper support for a special type of LoRA for Stable Diffusion, LoRA-C3Lier: it is just a LoRA that uses a different rank and alpha for the conv2d layers, so I refactored the conversion script.
The new conversion script works better: it automatically detects whether the provided LoRA contains conv2d (1x1, 3x3) layers and supplies custom ranks/alphas for those layers if needed.
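To illustrate the detection idea, here is a simplified sketch, not the actual conversion script from this PR. It assumes the usual kohya_ss key naming (`*.lora_down.weight`, `*.alpha`), and the helper name `collect_ranks_and_alphas` is hypothetical.

```python
# Simplified sketch of the detection idea only -- not the PR's conversion script.
# Scan a kohya_ss-style state dict, read each module's rank from its lora_down
# weight, remember which modules are conv2d (1x1 / 3x3) LoRA layers, and collect
# per-module alphas so custom ranks/alphas can be assigned later.
import torch


def collect_ranks_and_alphas(state_dict: dict) -> tuple[dict, dict, set]:
    ranks, alphas, conv_modules = {}, {}, set()
    for key, tensor in state_dict.items():
        if key.endswith(".lora_down.weight"):
            module = key[: -len(".lora_down.weight")]
            ranks[module] = tensor.shape[0]      # rank = output dim of the "down" projection
            if tensor.ndim == 4:                 # 4D weight => conv2d LoRA (kernel 1x1 or 3x3)
                conv_modules.add(module)
        elif key.endswith(".alpha"):
            module = key[: -len(".alpha")]
            alphas[module] = float(tensor)       # per-module scaling alpha
    return ranks, alphas, conv_modules
```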
This script can easily be modified to properly support SDXL LoRAs, so I was going to add that in a separate PR if you are comfortable with it.