Getting closer #43327
Merged
BenjaminBossan merged 1 commit into huggingface:peft-x-moes on Jan 16, 2026
Conversation
It was necessary to flatten the LoRA weights for the 3d MoE case, as LoRA has always expected 2d weights (it targets nn.Linear).
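A minimal sketch of the shape problem described above, with made-up dimensions (the actual model shapes will differ):

```python
import torch

# LoRA targets nn.Linear, whose weight is 2d, so the low-rank
# update B @ A is also a 2d matrix.
r, in_f, out_f, n_experts = 8, 64, 128, 4
lora_A = torch.randn(r, in_f)
lora_B = torch.randn(out_f, r)
delta = lora_B @ lora_A  # 2d, shape (128, 64): fine for nn.Linear

# MoE expert weights are stacked into a single 3d parameter,
# e.g. (experts, in, out) -- applying the 2d delta directly fails.
moe_weight = torch.randn(n_experts, in_f, out_f)

# Flattening the 3d parameter recovers a 2d shape that LoRA can handle.
flat = moe_weight.reshape(n_experts * in_f, out_f)
print(flat.shape)  # torch.Size([256, 128])
```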
Member
Author
Current error: So there isn't much missing to align the shapes.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Contributor
View the CircleCI Test Summary for this PR: https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=43327&sha=ba6769
Merged commit 6857f5f into huggingface:peft-x-moes. 18 of 25 checks passed.
ArthurZucker added a commit that referenced this pull request on Jan 24, 2026:
* current changes
* finally!
* collection is good
* what kinda works
* nit
* fix name
* small nits
* introduce loading info and config?
* try to remove some duplication
* trying to simplify, it's really not that hard, is it?
* nit
* is this better?
* update
* fix
* better?
* small fix
* force change lora
* push
* up
* replace gate_up_
* push
* Updated Ben (#43319)
* Getting closer (#43327): It was necessary to flatten the LoRA weights for 3d MoE, as LoRA always expected 2d weights (being nn.Linear).
* style
* bring back eval()
* nits
* Revert "bring back eval()" (this reverts commit bcee589)
* fix quantizer
* fix
* fix key mapping not recognized
* fix kwargs shenanigans
* fix more kwargs passing
* up
* fix `use_safetensors=False` call?
* nits?
* properly pass use_safetensors=False
* fix
* style
* default factory
* style
* simplify
* fix custom adapter_state_dict
* small updates
* nit
* style
* Fix Mixtral loading
* rank needed to be set to 2*r for the concatenated gate-up projection parameter so that PEFT allocates 2*r and matches the converted weights (using rank_pattern)
* the weights needed to be transposed to match the counterparts
* MoE in PEFT assumes (experts, in, out) but Mixtral MoE is transposed, so we need to patch this assumption in PEFT for now
* Make style
* Fix error messages
* hardcode checking if .bin works
* fix another test
* fix regex renaming patterns
* nits
* help debug tests
* style
* Patch `update_layer` instead of `_get_in_out_features` (the latter does not exist in released PEFT versions and therefore is not an ideal target for this PR :)
* Handle Qwen2 conversion similarly to Mixtral
* updates, explicit, simplify
* style
* nit
* fix `httpx.LocalProtocolError: Illegal header value b'unknown/None; hf_hub/1.3.2; python/3.13.2; torch/2.9.1; transformers/5.0.0.dev0;`
* some of the last nits

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: nemo <git@ningu.net>
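The commit note above about setting the rank to 2*r for the concatenated gate-up projection can be illustrated numerically. This is only a sketch with made-up dimensions, not the actual PEFT code: stacking two independent rank-r LoRA deltas into one concatenated parameter generically yields a delta of rank 2*r, which is why PEFT must allocate twice the rank for that parameter.

```python
import torch

r, in_f, out_f = 4, 32, 64
# Separate gate and up projections, each with its own rank-r LoRA delta.
delta_gate = torch.randn(out_f, r) @ torch.randn(r, in_f)
delta_up = torch.randn(out_f, r) @ torch.randn(r, in_f)

# When gate and up are stored as one concatenated gate_up parameter,
# the combined delta stacks both updates, so its rank can reach 2*r.
delta_cat = torch.cat([delta_gate, delta_up], dim=0)  # shape (2*out_f, in_f)
print(int(torch.linalg.matrix_rank(delta_cat)))
```

With random Gaussian factors the two row spaces are almost surely independent, so the printed rank is 2*r (here 8) rather than r.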
Therefore, I added new conversion ops: `PermuteDims` and `FlattenDims`. I also added a convenience function, `_block_diag_3d`, which could probably be optimized. Furthermore, I now have separate branches for `down_proj` and `gate_up_proj`, as they are different enough that treating them in one branch is too confusing.
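A guess at what a helper like `_block_diag_3d` might do (this sketch is not the PR's implementation): laying the per-expert 2d slices along a block diagonal flattens a 3d parameter to 2d while keeping each expert's computation independent.

```python
import torch

def block_diag_3d(w: torch.Tensor) -> torch.Tensor:
    # Place each (in, out) expert slice of a (experts, in, out) tensor
    # on the diagonal of a (experts*in, experts*out) 2d matrix; the
    # off-diagonal blocks are zero, so experts do not mix.
    return torch.block_diag(*w.unbind(0))

w = torch.randn(4, 8, 16)  # 4 experts, in=8, out=16
flat = block_diag_3d(w)
print(flat.shape)  # torch.Size([32, 64])
```

The PR also notes this could probably be optimized; `torch.block_diag` materializes the full dense matrix including all the zero blocks.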