FIX Adaption prompt: New way to obtain position embeddings #2276
Merged
BenjaminBossan merged 1 commit into huggingface:main on Dec 13, 2024
Conversation
This PR resolves the failing adaption prompt tests in the CI that uses transformers installed from source. In transformers PR huggingface/transformers#34858, the module.rotary_emb attribute was removed, which adaption prompt had so far assumed to be present. Instead, the position_embeddings are now computed upstream and can be taken directly from the kwargs.
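For context, here is a minimal sketch of what the fix amounts to in the adaption-prompt query-state computation. This is not the exact PEFT implementation: the function name, the head layout, and the use of the transformers `apply_rotary_pos_emb` helper are illustrative assumptions.

```python
# Sketch: query states for the adaption prompt after the transformers refactor.
# The (cos, sin) rotary position embeddings are no longer produced by a
# module.rotary_emb attribute but arrive precomputed in the attention kwargs
# as `position_embeddings`.
from transformers.models.llama.modeling_llama import apply_rotary_pos_emb


def compute_query_states(model, **kwargs):
    hidden_states = kwargs["hidden_states"]
    bsz, seq_len, _ = hidden_states.shape

    # Project to query states; the head layout here is illustrative.
    query_states = (
        model.q_proj(hidden_states)
        .view(bsz, seq_len, model.config.num_attention_heads, -1)
        .transpose(1, 2)
    )

    # Old behavior (before huggingface/transformers#34858):
    #   cos, sin = model.rotary_emb(value_states, position_ids)
    # New behavior: the model computes the embeddings once and passes them down
    # through the kwargs.
    cos, sin = kwargs["position_embeddings"]

    # Rotate the queries; the key argument is a dummy since only the rotated
    # queries are needed here.
    query_states, _ = apply_rotary_pos_emb(query_states, query_states, cos, sin)
    return query_states
```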
Member, Author
@Cyrilvallez The refactoring of rotary embeddings broke a function in PEFT that calculates the query states. This PR should hopefully fix that. It would be great if you could check if my changes make sense.
Cyrilvallez approved these changes on Dec 13, 2024
Cyrilvallez (Member) left a comment:
Makes sense to me! If needed, the rotary_emb module can now be retrieved from the Model module rather than the Attention module.
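As a hedged illustration of that note (the module names follow the LLaMA layout and are assumptions, not taken from this PR):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Before the refactor, each attention module carried its own rotary embedding:
#   rotary_emb = model.model.layers[0].self_attn.rotary_emb  # attribute removed
# After huggingface/transformers#34858, a single rotary embedding lives on the
# base model and can be retrieved from there if still needed:
rotary_emb = model.model.rotary_emb
```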
Member:
But …
githubnemo approved these changes on Dec 13, 2024
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request on May 13, 2025
cyyever pushed a commit to cyyever/peft that referenced this pull request on Sep 4, 2025
* Refactor reward processing in OnlineDPOTrainer
* Refactor completion decoding and reward processing
* remove strip
* remove warning
* Add reward_tokenizer to training script
* Add reward_tokenizer and reward_processing_class to OnlineDPOTrainer test
* propagate to xpo and nash
* style
* reduce memory requirement with inference_mode
* fix tests
* pairrm judge llmblender
* setUpClass(cls)
* Add setUpClass method to TestJudges class
* truncation left for reward tokenizer
* don't logcompletion without eval dataset
* only eval when possible