
Fix #45127: Auto-fix diverged tie_word_embeddings config on save to prevent silent weight corruption #45136

Closed

Cursx wants to merge 2 commits into huggingface:main from Cursx:fix-issue

Conversation


@Cursx Cursx commented Mar 31, 2026

What does this PR do?

This PR fixes a bug in PreTrainedModel.save_pretrained() where config.tie_word_embeddings can be inconsistent with the actual weight state, leading to silent model corruption for downstream consumers.

Problem

After PEFT's merge_and_unload() (a typical scenario for Qwen, Llama, Mistral, and other tied-embedding models), embed_tokens and lm_head end up as separate tensors with different values, but config.tie_word_embeddings remains True. Currently, save_pretrained() performs no validation of tie_word_embeddings against the actual weight state, so the incorrect config is written to config.json as-is. A minimal repro follows the list below.

This causes two issues:

  1. Reloading via from_pretrained: The load-side safety check (modeling_utils.py:2535-2547) detects the inconsistency and refuses to tie, emitting a warning each time — but the config is still semantically wrong.
  2. Downstream tool consumption (GGUF converters, quantization scripts, etc.): These tools trust tie_word_embeddings: true in config.json directly, potentially causing silent weight corruption — one tensor overwrites the other, producing completely degraded outputs.
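
A minimal repro of the inconsistency, simulating the post-merge state directly. The tiny Llama config and the +1.0 perturbation are illustrative; any independent modification of lm_head reproduces the problem:

```python
# Simulate the post-merge state: lm_head gets its own tensor while
# config.tie_word_embeddings stays True. Config sizes are arbitrary
# small values for illustration.
import torch
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=32, hidden_size=16, num_hidden_layers=1,
    num_attention_heads=2, intermediate_size=32,
    tie_word_embeddings=True,
)
model = LlamaForCausalLM(config)

# Stand-in for merge_and_unload(): break the tie with a different tensor.
model.lm_head.weight = torch.nn.Parameter(model.lm_head.weight.clone() + 1.0)

model.save_pretrained("/tmp/diverged")
# Without this PR, /tmp/diverged/config.json still contains
# "tie_word_embeddings": true, which downstream tools will trust.
```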

Fix

In save_pretrained(), before writing the config to disk, we detect whether the input/output embeddings have diverged. If so, we automatically set config.tie_word_embeddings = False and emit a warning. A sketch of the detection logic follows the safety list below.

Key safety considerations:

  • Only triggers when the output embedding key (e.g., lm_head.weight) is explicitly declared in the model's _tied_weights_keys mapping as tied to the input embedding. This prevents false positives on models like Pop2Piano, which uses tie_word_embeddings=True for decoder output scaling but does not declare lm_head.weight in its _tied_weights_keys (it only ties encoder.embed_tokens and decoder.embed_tokens to shared).
  • Cross-device scenarios (model parallelism / offloading) are skipped entirely to avoid false positives and potential OOM from implicit device copies.
  • T5 family safety analysis:
    • T5: Scaling is decoupled to an independent scale_decoder_outputs field (configuration_t5.py:82-83) and tie_word_embeddings is forced to True. Not affected.
    • UMT5: Config init forcibly overrides tie_word_embeddings = True — even if saved as False, it's restored on load. Not affected.
    • LongT5, Pop2Piano, SwitchTransformers: Still read tie_word_embeddings for scaling in forward, but these are guarded by the _tied_weights_keys check described above.
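
A minimal sketch of the detection logic, assuming the standard get_input_embeddings()/get_output_embeddings() accessors. The helper name is hypothetical, and the _tied_weights_keys check is simplified to a presence test; the actual patch matches the specific output-embedding key:

```python
# Hypothetical helper invoked in save_pretrained() before config.json is
# written; not the literal patch.
import torch
from transformers.utils import logging

logger = logging.get_logger(__name__)

def _maybe_untie_diverged_embeddings(model):
    config = model.config
    if not getattr(config, "tie_word_embeddings", False):
        return
    # Guard: only act when the model declares tied output embeddings
    # (e.g. Llama's _tied_weights_keys includes "lm_head.weight"); this
    # is what keeps Pop2Piano-style models out of scope.
    tied_keys = getattr(model, "_tied_weights_keys", None) or []
    if not tied_keys:
        return
    try:
        input_emb = model.get_input_embeddings()
        output_emb = model.get_output_embeddings()
    except NotImplementedError:
        return  # expected for vision/speech backbones
    if input_emb is None or output_emb is None:
        return
    in_w, out_w = input_emb.weight, output_emb.weight
    # Cross-device setups (model parallelism / offloading) are skipped:
    # comparing would force an implicit device copy and risk OOM.
    if in_w.device != out_w.device:
        return
    if in_w.data_ptr() == out_w.data_ptr():
        return  # same storage: genuinely tied, nothing to fix
    if in_w.shape != out_w.shape or not torch.equal(in_w, out_w):
        logger.warning(
            "Input and output embeddings have diverged but "
            "`tie_word_embeddings` is True; setting it to False so the "
            "saved config matches the weights."
        )
        config.tie_word_embeddings = False
```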

Changes

modeling_utils.py:

  • Added weight divergence detection + auto-fix logic before config saving in save_pretrained()
  • Added a _tied_weights_keys guard so the auto-fix only applies when the output embedding is declared as tied to the input embedding
  • Cross-device: skip check (avoid false positives and potential OOM)
  • NotImplementedError: silently ignored (expected for vision/speech backbones)
  • Other exceptions: logged via logger.debug

test_modeling_utils.py:

  • Added test_save_pretrained_auto_fixes_diverged_tied_embeddings: constructs a tied Llama model → simulates weight divergence (PEFT merge) → verifies saved config is corrected + warning is emitted + reloaded weights are preserved correctly
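
The test follows roughly this shape (sizes and names simplified, warning assertion omitted for brevity; not the literal test code):

```python
# Rough shape of the new test; assumes the auto-fix sketched above.
import tempfile

import torch
from transformers import AutoConfig, LlamaConfig, LlamaForCausalLM

def test_save_pretrained_auto_fixes_diverged_tied_embeddings():
    config = LlamaConfig(
        vocab_size=32, hidden_size=16, num_hidden_layers=1,
        num_attention_heads=2, intermediate_size=32,
        tie_word_embeddings=True,
    )
    model = LlamaForCausalLM(config)
    # Simulate a PEFT merge: lm_head diverges from embed_tokens.
    model.lm_head.weight = torch.nn.Parameter(model.lm_head.weight.clone() + 1.0)

    with tempfile.TemporaryDirectory() as tmp:
        model.save_pretrained(tmp)
        # The saved config must be corrected...
        assert AutoConfig.from_pretrained(tmp).tie_word_embeddings is False
        # ...and both (now distinct) tensors must survive a round trip.
        reloaded = LlamaForCausalLM.from_pretrained(tmp)
        assert torch.equal(reloaded.lm_head.weight, model.lm_head.weight)
        assert not torch.equal(
            reloaded.get_input_embeddings().weight, reloaded.lm_head.weight
        )
```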

Fixes #45127

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

…re independently modified outside of Transformers (e.g., via PEFT)
@Cursx Cursx force-pushed the fix-issue branch 4 times, most recently from 9ad243f to 3f68e2c on March 31, 2026 at 09:17
@Cursx Cursx changed the title from "Fix issue" to "Fix #45127: Auto-fix diverged tie_word_embeddings config on save to prevent silent weight corruption" on Mar 31, 2026
@Cursx Cursx marked this pull request as ready for review March 31, 2026 10:31
@Rocketknight1
Member

Please don't tick the "I confirm this is not a code agent PR" box and then send me a code agent PR
