
Fix model saving corruption for dynamically untied embeddings #45135

Closed
Cursx wants to merge 2 commits into huggingface:main from Cursx:fix-issue

Conversation


@Cursx Cursx commented Mar 31, 2026

What does this PR do?

Fixes an issue where PEFT adapters applied independently to tied embeddings (embed_tokens and lm_head) cause silent model corruption upon reloading via AutoModelForCausalLM.from_pretrained().

Root Cause:
When embeddings are untied dynamically at runtime (e.g., after vocabulary resizing or independent PEFT merging), their tensor storage diverges. PreTrainedModel.save_pretrained() correctly saves both parameter tensors, because remove_tied_weights_from_state_dict() sees that they no longer share storage. However, the saved model configuration still has tie_word_embeddings = True. On reload, from_pretrained() sees tie_word_embeddings=True and re-ties the two embeddings by overwriting one parameter with the other, silently destroying the independent delta weights.
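In effect, the tie step that runs on load behaves like the sketch below (a simplified paraphrase, not the actual transformers code; retie_on_load is a hypothetical name):

def retie_on_load(model):
    # Simplified paraphrase of re-tying on load; not the actual transformers code.
    if model.config.tie_word_embeddings:
        # Both tensors were read from the checkpoint, but aliasing lm_head back onto
        # embed_tokens discards the independently saved output-embedding delta.
        model.get_output_embeddings().weight = model.get_input_embeddings().weight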
Fix:
Added a check in save_pretrained(): if config.tie_word_embeddings is True but input_embeddings.weight.data_ptr() != output_embeddings.weight.data_ptr(), model.config.tie_word_embeddings is automatically flipped to False before the configuration is serialized, so the untied weights are not silently overwritten on load.
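A minimal sketch of the proposed check, written here as a standalone helper for illustration (the helper name is hypothetical; the PR places equivalent logic inside save_pretrained() itself):

def untie_config_if_storage_diverged(model):
    # Sketch only: if the config claims tied embeddings but the two weights no
    # longer share storage (e.g., after independent PEFT merging), record the
    # untied state so from_pretrained() will not re-tie and overwrite them.
    input_embeddings = model.get_input_embeddings()
    output_embeddings = model.get_output_embeddings()
    if (
        getattr(model.config, "tie_word_embeddings", False)
        and input_embeddings is not None
        and output_embeddings is not None
        and input_embeddings.weight.data_ptr() != output_embeddings.weight.data_ptr()
    ):
        model.config.tie_word_embeddings = False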

Fixes #45127

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?
    (Note: a minimal reproduction script is provided below to test and validate the fix, since a proper unit test would require mocking PEFT adaptation.)

Reproduction Script

Minimal repro (repro.py):
import torch
from transformers import AutoModelForCausalLM, AutoConfig

def main():
    # Make a tiny dummy model with tied embeddings
    config = AutoConfig.from_pretrained("Qwen/Qwen1.5-0.5B", trust_remote_code=True)
    config.hidden_size = 32
    config.intermediate_size = 64
    config.num_hidden_layers = 2
    config.num_attention_heads = 4
    config.num_key_value_heads = 4
    config.vocab_size = 1000
    config.tie_word_embeddings = True
    
    # Instantiate a tiny random-weight model so the repro runs quickly
    model = AutoModelForCausalLM.from_config(config)
    print("Initial tie config:", model.config.tie_word_embeddings)
    print("Are weights tied initially?", id(model.get_input_embeddings().weight) == id(model.get_output_embeddings().weight))
    
    # Simulate PEFT untying the embeddings: clone lm_head so it no longer shares storage with embed_tokens
    model.get_output_embeddings().weight = torch.nn.Parameter(model.get_output_embeddings().weight.clone())
    model.get_output_embeddings().weight.data += 1.0  # add a delta so the two weights differ
    
    print("Are weights tied after fake PEFT?", id(model.get_input_embeddings().weight) == id(model.get_output_embeddings().weight))
    
    model.save_pretrained("./test_tied_model")
    
    # Reload
    model_reloaded = AutoModelForCausalLM.from_pretrained("./test_tied_model")
    print("Reloaded tie config:", model_reloaded.config.tie_word_embeddings)
    print("Are weights tied after reload?", id(model_reloaded.get_input_embeddings().weight) == id(model_reloaded.get_output_embeddings().weight))
    
    # If the +1 delta survived the round trip, the output weight should still differ from the input weight
    print("Is the modified output weight preserved?", not torch.allclose(model_reloaded.get_output_embeddings().weight, model_reloaded.get_input_embeddings().weight))

if __name__ == "__main__":
    main()
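
Expected behavior: without the fix, the reloaded model should report tie_word_embeddings=True and the final check should print False, because re-tying on load overwrites the +1 delta; with the check in this PR, the saved config carries tie_word_embeddings=False and the final check should print True.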

Who can review?

@BenjaminBossan @githubnemo

@Cursx Cursx changed the title Fix save_pretrained() to set tie_word_embeddings=False when weights a… Fix model saving corruption for dynamically untied embeddings Mar 31, 2026
@Cursx Cursx force-pushed the fix-issue branch 2 times, most recently from 087f84e to e10c8f3 on March 31, 2026 02:59
Commit: Fix save_pretrained() to set tie_word_embeddings=False when weights are independently modified outside of Transformers (e.g., via PEFT)
@github-actions
Contributor

View the CircleCI Test Summary for this PR:

https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=45135&sha=458cc1
