FIX Restore LoRA hotswapping functionality #45682

Open
BenjaminBossan wants to merge 1 commit into huggingface:main from BenjaminBossan:fix-restore-peft-hotswapping

Conversation

@BenjaminBossan
Member

What does this PR do?

LoRA hotswapping was added in #41297. Due to changes in #43261, it stopped working. This PR restores the functionality.

The tests already cover this functionality and are failing, but probably no one noticed because they are slow tests. On main, they fail with size mismatches, which is expected, as the padding of the LoRA weights is not being applied. With this PR, I can confirm that the tests pass locally.

Since the two PRs were released together in v5, there has never been a Transformers release with working hotswapping functionality.
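For reference, here is a minimal sketch of the hotswapping flow that the slow tests exercise, written against PEFT's documented hotswap utilities. The model and adapter repo IDs are hypothetical placeholders, and the exact Transformers entry point added in #41297 may differ:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter, prepare_model_for_compiled_hotswap

# Hypothetical model/adapter IDs, for illustration only.
base = AutoModelForCausalLM.from_pretrained("some-org/base-model")
model = PeftModel.from_pretrained(base, "some-org/lora-adapter-1")

# Pad all LoRA weights up to a common rank so that swapping in an adapter
# with a different rank does not change tensor shapes. This padding step is
# what is missing on main, hence the size-mismatch failures.
prepare_model_for_compiled_hotswap(model, target_rank=16)
model = torch.compile(model)

# ... run inference ...

# Swap the second adapter in place, without triggering recompilation.
hotswap_adapter(model, "some-org/lora-adapter-2", adapter_name="default")
```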

Notes

The hotswap path does not go through _load_pretrained_model, which means the state_dict has to be loaded explicitly if it is not already present. That functionality already existed on the TP path, so I hoisted it out to re-use the same logic. For the same reason, I also apply the weight renamings there.
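As a rough illustration of the hoisted logic (the helper name and the dict-based renaming are simplifications, not the actual code in modeling_utils):

```python
from transformers.modeling_utils import load_state_dict

def _resolve_state_dict(state_dict, checkpoint_file, renaming_map):
    # The hotswap path bypasses _load_pretrained_model, so the state_dict may
    # not have been materialized yet; load it from the checkpoint on demand.
    if state_dict is None:
        state_dict = load_state_dict(checkpoint_file)
    # Apply the same key renamings that the regular loading path uses.
    return {renaming_map.get(key, key): value for key, value in state_dict.items()}
```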

Moreover, I moved the inference model logic into a local function, again to avoid duplicating it.

Code Agent Policy

  • I confirm that this is not a pure code agent PR. Claude was used to assist with writing this PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
