Add prefix-preserving training chat template for GPT-OSS #5109
Open
qgallouedec wants to merge 12 commits into main from
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Base automatically changed from more-test-get_training_chat_template to main on February 18, 2026 12:45
albertvillanova (Member) left a comment:
Thanks for the fix.
I think:
- The prefix-preserving fix is good and likely necessary.
- But I would say the Design note's justification is a weak point: I don't think the comparative objective guarantees preservation of an unseen special token, and this could also impact stopping and parsing.

I would prefer others to comment on this.
qgallouedec (Member, Author) replied:
100% agree. At this point it's an intuition. I'll update the comment to be more conservative.
PR Description
This PR adds support for tool-calling training with GPT-OSS models (e.g., gpt-oss-20b) in the GRPO agent training pipeline, extending the existing Qwen3 support.
Problem
For context, the main challenge is to ensure that the chat template used for training is prefix-preserving, meaning that when new messages are appended to a conversation, the previous sequence of tokens remains unchanged. This is crucial for multi-turn training.
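To make the property concrete, here is a minimal sketch of the check (the model name and messages are illustrative; with the stock GPT-OSS template the final assertion fails, which is exactly the bug this PR fixes):

```python
# Minimal sketch of the prefix-preserving check; model and messages are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
]
extended = messages + [{"role": "user", "content": "And 3 + 3?"}]

prefix_ids = tokenizer.apply_chat_template(messages)
full_ids = tokenizer.apply_chat_template(extended)

# Prefix-preserving means the shorter conversation's tokens must open the
# longer one. With the stock GPT-OSS template this fails, because the
# assistant turn's closing token changes once it is no longer the last message.
assert full_ids[: len(prefix_ids)] == prefix_ids
```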
The original GPT-OSS chat template is not prefix-preserving for two reasons:

1. `<|return|>` vs `<|end|>`: The last assistant message ends with `<|return|>`, while all other turns use `<|end|>`. When a conversation is extended with new turns, the previously final assistant message switches from `<|return|>` to `<|end|>`, breaking the prefix.
2. Conditional thinking blocks (same as Qwen3): The thinking field is only rendered for the final assistant turn (via `loop.last` in the template), so earlier assistant turns lose their thinking content when new messages are appended.
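Schematically, the first failure mode looks like this (these strings are simplified stand-ins, not the verbatim GPT-OSS rendering):

```python
# Schematic, simplified renderings; not the verbatim GPT-OSS output.
# Rendered while the assistant turn is last: it is closed with <|return|>.
before = "<|start|>user<|message|>What is 2 + 2?<|end|><|start|>assistant<|message|>4<|return|>"
# Re-rendered after a new user turn is appended: the same turn now ends with <|end|>.
after = (
    "<|start|>user<|message|>What is 2 + 2?<|end|>"
    "<|start|>assistant<|message|>4<|end|>"
    "<|start|>user<|message|>And 3 + 3?<|end|>"
)
# The earlier text is no longer a prefix of the new text, so cached token ids
# (and any per-token labels computed from them) are invalidated.
assert not after.startswith(before)
```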
This PR introduces:

- `gpt_oss_chat_template`: A reference copy of the original GPT-OSS chat template, stored in `chat_template_utils.py` for template matching.
- `gpt_oss_training_chat_template`: A modified training-safe template with two key changes:
  - Replaces `<|return|>` with `<|end|>` on the final assistant message to ensure consistent turn delimiters across all turns.
  - Sets `loop.last` to `true` so thinking blocks are always rendered, not just on the final turn.
- `get_training_chat_template()`: Now recognizes GPT-OSS templates and returns the training variant, alongside the existing Qwen3 support (see the sketch after this list).
- `is_chat_template_prefix_preserving()`: Test messages now include both `reasoning_content` and `thinking` keys, since GPT-OSS uses `thinking` while Qwen3 uses `reasoning_content`.
- `TestGetTrainingChatTemplate`: Tests are now parameterized over both GPT-OSS and Qwen3, with a helper `_assert_equal` that accounts for the expected `<|return|>` → `<|end|>` difference.
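A hypothetical usage sketch. Only the function names come from this PR; the import path, argument types, and return types are assumptions:

```python
# Hypothetical usage; import path and signatures are assumptions, only the
# function names (get_training_chat_template, is_chat_template_prefix_preserving)
# come from this PR.
from transformers import AutoTokenizer
from trl.chat_template_utils import (
    get_training_chat_template,
    is_chat_template_prefix_preserving,
)

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# Swap in the training-safe variant before multi-turn rollouts/training.
tokenizer.chat_template = get_training_chat_template(tokenizer.chat_template)

# The patched template should now pass the prefix check used by the tests.
assert is_chat_template_prefix_preserving(tokenizer)
```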
Design notes (happy to get thoughts on this!)
A consequence of the `<|return|>` → `<|end|>` replacement is that the `<|return|>` token is not seen during training. My intuition is that, since GRPO uses a comparative objective (relative ranking within groups), the model's ability to produce `<|return|>` at inference is not expected to degrade.