
fix: gemma3 27b must now have skip_tokenizer_init=False in vllm #1721

Merged
yuki-97 merged 1 commit into main from test-fix-2 on Jan 7, 2026

Conversation

@terrykong
Collaborator

@terrykong terrykong commented Jan 6, 2026

What does this PR do?

Forces skip_tokenizer_init=False in the vLLM worker for Gemma3ForConditionalGeneration models (e.g. Gemma3 27B), warning the user when their configured value is overridden.

Issues

Related issue: NVIDIA-NeMo/RL#1681 (Gemma3 models may crash when skip_tokenizer_init is True).

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
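The snippet slot above was left empty; as a purely illustrative sketch (the helper name and signature below are invented for this example, not the worker's actual API), the user-visible effect of this fix is:

```python
# Illustrative only: this helper is invented for the example; the real logic
# lives in nemo_rl/models/generation/vllm/vllm_worker.py.
def effective_skip_tokenizer_init(architectures: list[str], user_value: bool) -> bool:
    """Return the skip_tokenizer_init value the vLLM worker will actually use."""
    if "Gemma3ForConditionalGeneration" in architectures:
        # Forced off for Gemma3 to avoid crashes (see NVIDIA-NeMo/RL#1681).
        return False
    return user_value

print(effective_skip_tokenizer_init(["Gemma3ForConditionalGeneration"], True))  # → False
print(effective_skip_tokenizer_init(["LlamaForCausalLM"], True))  # → True
```

In other words, a user config that sets skip_tokenizer_init: True no longer takes effect for Gemma3 architectures.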

Before your PR is "Ready for review"

Pre-checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

Release Notes

  • Bug Fixes
    • Improved handling for Gemma3 model architecture to ensure correct configuration behavior.
    • Added warning messages to alert users when model-specific configuration requirements conflict with their settings.


Signed-off-by: Terry Kong <terryk@nvidia.com>
@terrykong terrykong requested review from yfw and yuki-97 January 6, 2026 08:17
@terrykong terrykong requested a review from a team as a code owner January 6, 2026 08:17
@terrykong terrykong added the labels CI:L1 (Run doctests, unit tests, and functional tests) and r0.5.0 on Jan 6, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Jan 6, 2026

📝 Walkthrough

Walkthrough

Added conditional handling for the Gemma3ForConditionalGeneration model architecture in the HuggingFace config patching logic. When this architecture is detected, the code forces skip_tokenizer_init to False and issues a warning if the user configured it differently.
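In isolation, the patch logic described above amounts to roughly the following (a minimal sketch, assuming a plain dict in place of the worker's self.cfg["vllm_cfg"]; the function name is invented for this example):

```python
import warnings

def patch_gemma3_config(hf_architectures: list[str], vllm_cfg: dict) -> dict:
    """Force skip_tokenizer_init=False for Gemma3, warning if the user set it."""
    if "Gemma3ForConditionalGeneration" in hf_architectures:
        if vllm_cfg.get("skip_tokenizer_init"):
            # The user asked for True; warn that it is being overridden.
            warnings.warn(
                "Gemma3ForConditionalGeneration models may crash when "
                "skip_tokenizer_init is True; forcing it to False. "
                "See https://github.com/NVIDIA-NeMo/RL/issues/1681."
            )
        vllm_cfg["skip_tokenizer_init"] = False
    return vllm_cfg
```

Other architectures are untouched, so the override stays narrowly scoped to the affected model family.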

Changes

Cohort: Gemma3 Model Configuration Handling
File(s): nemo_rl/models/generation/vllm/vllm_worker.py
Summary: Added a conditional branch that detects Gemma3ForConditionalGeneration in hf_config.architectures, enforces skip_tokenizer_init=False, and warns users if they configured it to True.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Pre-merge checks and finishing touches

✅ Passed checks (4 passed)
  • Description Check — Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — Passed. The title accurately describes the main change: adding handling for Gemma3ForConditionalGeneration to ensure skip_tokenizer_init=False in the vLLM worker configuration.
  • Docstring Coverage — Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Test Results For Major Changes — Passed. Minor targeted configuration fix for the Gemma3 27B model to prevent crashes; no new features, breaking changes, or refactoring introduced.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
nemo_rl/models/generation/vllm/vllm_worker.py (1)

391-398: Logic is correct; consider using logger for the warning.

The conditional handling ensures skip_tokenizer_init is set to False for Gemma3ForConditionalGeneration, which aligns with the PR objective. The warning appropriately informs users when their configuration is overridden.

Optional: Use logger.warning() instead of print() for consistency

While print() works, using the logger would be more consistent with logging best practices and provide better log level control:

         elif "Gemma3ForConditionalGeneration" in getattr(
             hf_config, "architectures", []
         ):
             if self.cfg["vllm_cfg"]["skip_tokenizer_init"]:
-                print(
+                logger.warning(
                     "Gemma3ForConditionalGeneration models may crash when skip_tokenizer_init is True. NeMo-RL is forcing it to False for this architecture. See https://github.com/NVIDIA-NeMo/RL/issues/1681 for more details."
                 )
             self.cfg["vllm_cfg"]["skip_tokenizer_init"] = False

Note: The logger is already initialized at line 167.
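For reference, the suggested logger-based pattern looks like this as a standalone sketch (the logger name and helper function below are assumptions for illustration; the actual file initializes its own logger):

```python
import logging

# Name assumed for this sketch; the real module uses its own logger instance.
logger = logging.getLogger("vllm_worker_example")

def warn_gemma3_override() -> None:
    # Unlike print(), logger.warning() routes through the logging system,
    # so level filtering, formatting, and handlers all apply.
    logger.warning(
        "Gemma3ForConditionalGeneration models may crash when "
        "skip_tokenizer_init is True. NeMo-RL is forcing it to False "
        "for this architecture."
    )
```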

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1720466 and 6ddd3ca.

📒 Files selected for processing (1)
  • nemo_rl/models/generation/vllm/vllm_worker.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/generation/vllm/vllm_worker.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Lint check
  • GitHub Check: sphinx-build / Build docs
  • GitHub Check: build-container / main
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR

@terrykong terrykong mentioned this pull request Jan 6, 2026
@yuki-97 yuki-97 enabled auto-merge (squash) January 6, 2026 09:29
@yuki-97 yuki-97 added and removed the CI:L1 (Run doctests, unit tests, and functional tests) label on Jan 6, 2026
@yuki-97 yuki-97 merged commit 82e6871 into main Jan 7, 2026
82 of 90 checks passed
@yuki-97 yuki-97 deleted the test-fix-2 branch January 7, 2026 04:00
chtruong814 pushed a commit that referenced this pull request Jan 7, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
parthmannan pushed a commit to parthmannan/RL that referenced this pull request Jan 15, 2026
…IA-NeMo#1721)

Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 12, 2026
…IA-NeMo#1721)

Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
…IA-NeMo#1721)

Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 9, 2026

Labels

CI:L1 (Run doctests, unit tests, and functional tests), r0.5.0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants