Getting closer #43327

Merged
BenjaminBossan merged 1 commit into huggingface:peft-x-moes from BenjaminBossan:peft-x-moes-ben2
Jan 16, 2026
Conversation

@BenjaminBossan
Member

It was necessary to flatten the LoRA weights for 3d MoE, as LoRA always expected 2d weights (being nn.Linear).

Therefore, I added new conversion ops: PermuteDims and FlattenDims. I also added a convenience function, _block_diag_3d; it could probably be optimized.

Furthermore, I now have separate branches for down_proj and gate_up_proj as they are different enough that treating them in one branch is too confusing.
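The PR text doesn't show _block_diag_3d itself, so as a rough illustration only (names and layout assumed, not taken from the diff): a 3d-to-block-diagonal helper would lay each expert's 2d weight along the diagonal of one large 2d matrix, equivalent to `torch.block_diag(*w.unbind(0))` for a 3d tensor `w`. A plain-Python sketch:

```python
def block_diag_3d(blocks):
    """Lay each expert's 2d weight along the diagonal of one big 2d matrix.

    blocks: nested lists of shape (num_experts, rows, cols).
    Returns a (num_experts * rows, num_experts * cols) matrix with each
    expert's block on the diagonal and zeros everywhere else.
    """
    n, rows, cols = len(blocks), len(blocks[0]), len(blocks[0][0])
    out = [[0.0] * (n * cols) for _ in range(n * rows)]
    for e, block in enumerate(blocks):
        for i in range(rows):
            for j in range(cols):
                out[e * rows + i][e * cols + j] = block[i][j]
    return out
```

This makes the 3d expert weights usable by code paths that only understand a single 2d matrix, at the cost of materializing the zero off-diagonal blocks, which is likely what "could probably be optimized" refers to.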

It was necessary to flatten the LoRA weights for 3d MoE, as LoRA always
expected 2d weights (being nn.Linear).
@BenjaminBossan
Member Author

Current error:

MixtralForCausalLM LOAD REPORT from: peft-internal-testing/mixtral-pre-v5-lora
Key                                                              | Status   | Details                                                                                  
-----------------------------------------------------------------+----------+------------------------------------------------------------------------------------------
model.layers.{0, 1}.mlp.experts.lora_A.default.weight            | MISMATCH | Reinit due to size mismatch ckpt: torch.Size([64, 3584]) vs model:torch.Size([64, 1024]) 
model.layers.{0, 1}.mlp.experts.lora_B.default.weight            | MISMATCH | Reinit due to size mismatch ckpt: torch.Size([64, 1024]) vs model:torch.Size([3584, 64]) 
model.layers.{0, 1}.mlp.experts.base_layer.lora_A.default.weight | MISMATCH | Reinit due to size mismatch ckpt: torch.Size([128, 1024]) vs model:torch.Size([64, 7168])
model.layers.{0, 1}.mlp.experts.base_layer.lora_B.default.weight | MISMATCH | Reinit due to size mismatch ckpt: torch.Size([128, 7168]) vs model:torch.Size([1024, 64])

So there isn't much missing to align the shapes.
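To eyeball reports like the one above, a small shape-diffing helper (not part of the PR, just a debugging aid) can make the relation between checkpoint and model shapes explicit:

```python
from fractions import Fraction


def diagnose_shapes(ckpt_shape, model_shape):
    """Describe how a checkpoint tensor's shape differs from the model's."""
    notes = []
    if tuple(reversed(ckpt_shape)) == tuple(model_shape):
        notes.append("checkpoint is the transpose of the model shape")
    for axis, (a, b) in enumerate(zip(ckpt_shape, model_shape)):
        if a != b:
            notes.append(f"axis {axis}: {a} vs {b} (ratio {Fraction(a, b)})")
    return notes
```

For the first lora_A row above, `diagnose_shapes((64, 3584), (64, 1024))` reports a 7/2 ratio on axis 1; a non-integer ratio like that suggests two different grouping factors (e.g. number of experts versus a fused-projection doubling) are still in play between the two layouts.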

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@github-actions
Contributor

View the CircleCI Test Summary for this PR:

https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=43327&sha=ba6769

@BenjaminBossan BenjaminBossan merged commit 6857f5f into huggingface:peft-x-moes Jan 16, 2026
18 of 25 checks passed
@BenjaminBossan BenjaminBossan deleted the peft-x-moes-ben2 branch January 16, 2026 15:58
ArthurZucker added a commit that referenced this pull request Jan 24, 2026
* current changes

* finally!

* collection is good

* what kinda works

* nit

* fix name

* small nits

* introduce loading info and config?

* try to remove some duplication

* trying to simplify, it's really not that hard, is it?

* nit

* is this better?

* update

* fix

* better?

* small fix

* force change lora

* push

* up

* replace gate_up_

* push

* Updated Ben (#43319)

* Getting closer (#43327)

It was necessary to flatten the LoRA weights for 3d MoE, as LoRA always
expected 2d weights (being nn.Linear).

* style

* bring back eval()

* nits

* Revert "bring back eval()"

This reverts commit bcee589.

* fix quantizer

* fix

* fix key mapping not recognized

* fix kwargs shenanigans

* fix more kwargs passing

* up

* fix `use_safetensors=False` call?

* nits?

* properly pass use_safetensors=False

* fix

* style

* default factory

* style

* simplify

* fix custom adapter_state_dict

* small updates

* nit

* style

* Fix mixtral loading

* rank needed to be set to 2*r for concatenated gate up projection
  parameter so that PEFT allocates 2*r and matches the converted
  weights (using rank_pattern)

* the weights needed to be transposed to match their counterparts

* MoE in PEFT assumes (experts, in, out) but Mixtral MoE is transposed
  so we need to patch this assumption in PEFT for now
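The transpose fix in the last two bullets amounts to permuting the per-expert dimensions. A minimal plain-Python sketch standing in for torch's `w.permute(0, 2, 1)` (name assumed for illustration):

```python
def transpose_experts(w):
    """(experts, out, in) -> (experts, in, out), like torch's w.permute(0, 2, 1).

    Each expert's 2d weight is transposed individually so that the stored
    Mixtral layout lines up with the (experts, in, out) layout assumed by
    the MoE path in PEFT.
    """
    return [
        [[row[i] for row in expert] for i in range(len(expert[0]))]
        for expert in w
    ]
```

The first bullet, by contrast, is plain PEFT configuration: `rank_pattern` in `LoraConfig` maps layer-name patterns to per-layer ranks, so assigning the fused gate/up projection something like `2 * r` there makes PEFT allocate adapters wide enough for the converted, concatenated weights.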

* Make style

* Fix error messages

* hardcode checking if .bin works

* fix another test

* fix regex renaming patterns

* nits

* help debug tests

* style

* Patch `update_layer` instead of `_get_in_out_features`

The latter does not exist in released PEFT versions and
therefore is not an ideal target for this PR :)

* Handle Qwen2 conversion similarly to mixtral

* updates, explicit, simplify

* style

* nit

* fix `httpx.LocalProtocolError: Illegal header value b'unknown/None; hf_hub/1.3.2; python/3.13.2; torch/2.9.1; transformers/5.0.0.dev0;`

* some of the last nits

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: nemo <git@ningu.net>