chore: Bump vllm to 0.11.2, torch to 2.9, transformers to 4.57.1 #1563
Conversation
❌ Submodule Fast-Forward Check Failed

Check based on commit: 82b6f95 (PR #1563 from …)

❌ Submodules that need attention:
Automodel: ❌ Commits have DIVERGED from a common ancestor

Please ensure all submodule commits are fast-forwards of the main branch before merging.
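For anyone reproducing this check locally, a minimal sketch using plain git, assuming the submodule path from this PR and `origin/main` as the upstream ref:

```bash
# Inside the submodule checkout: verify the pinned commit is a
# fast-forward of (i.e., a descendant of) the upstream main branch.
cd 3rdparty/Automodel-workspace/Automodel
git fetch origin main
# --is-ancestor exits 0 if origin/main is an ancestor of HEAD, meaning
# HEAD fast-forwards main; a non-zero exit indicates divergence.
if git merge-base --is-ancestor origin/main HEAD; then
  echo "OK: submodule commit fast-forwards main"
else
  echo "DIVERGED: submodule commit is not a fast-forward of main" >&2
fi
```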
Related: vllm-project/vllm#27562 Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Force-pushed from cb2168a to eab6019
📝 Walkthrough

This PR updates core dependencies (PyTorch 2.9.0, transformers 4.57.1, vLLM 0.11.2), refactors vLLM worker initialization to use dynamic file lookup instead of hardcoded paths, introduces in-place FP8 weight post-processing, and migrates test configurations from single-node to two-node setups.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings) | ✅ Passed checks (2)
Actionable comments posted: 4
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
- .gitmodules (1 hunks)
- 3rdparty/Automodel-workspace/Automodel (1 hunks)
- examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml (2 hunks)
- nemo_rl/models/generation/fp8.py (2 hunks)
- nemo_rl/models/generation/vllm/vllm_worker.py (2 hunks)
- pyproject.toml (5 hunks)
- tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh (1 hunks)
- tests/test_suites/nightly.txt (2 hunks)
- tools/build-custom-vllm.sh (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.sh
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.sh: Use uv run instead of python to execute scripts
Follow the Google Shell Style Guide for shell scripts
Files:
tools/build-custom-vllm.sh
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
tools/build-custom-vllm.sh
pyproject.toml
.gitmodules
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
tests/test_suites/nightly.txt
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
nemo_rl/models/generation/vllm/vllm_worker.py
nemo_rl/models/generation/fp8.py
3rdparty/Automodel-workspace/Automodel
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
tools/build-custom-vllm.sh
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
nemo_rl/models/generation/vllm/vllm_worker.py
nemo_rl/models/generation/fp8.py
examples/configs/recipes/**/*.yaml
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
When adding support for a new model, create a recipe YAML under examples/configs/recipes/ in the appropriate domain subdirectory (llm, vlm, etc.)
Files:
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
examples/configs/recipes/llm/*.yaml
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Recipe YAML files should follow the naming pattern: <algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml for LLM recipes
Files:
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
tests/test_suites/nightly.txt
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
When adding a nightly test for a new model, append the driver script path (relative to tests/test_suites/) to tests/test_suites/nightly.txt
Files:
tests/test_suites/nightly.txt
tests/test_suites/**/*.sh
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
tests/test_suites/**/*.sh: When adding support for a new model, create a corresponding driver shell script under tests/test_suites/ in the matching domain
Driver shell scripts should match the YAML base name with .sh extension and invoke training entrypoint with uv run
Files:
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
Files:
nemo_rl/models/generation/vllm/vllm_worker.py
nemo_rl/models/generation/fp8.py
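To make the TypedDict guidance above concrete, a minimal sketch with hypothetical key names (this is not code from the repo):

```python
from typing import NotRequired, TypedDict  # Python 3.12+


class PolicyConfig(TypedDict):
    """Hypothetical policy config; YAML remains the source of defaults."""

    precision: str  # required: access directly, e.g. policy_cfg["precision"]
    activation_checkpointing: NotRequired[bool]  # optional: may be absent


def setup(policy_cfg: PolicyConfig) -> None:
    # Required keys are read without hidden fallbacks.
    precision = policy_cfg["precision"]
    # Optional keys are probed explicitly; no non-None default in code.
    if policy_cfg.get("activation_checkpointing"):
        print(f"checkpointing enabled at {precision}")
```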
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
nemo_rl/models/generation/vllm/vllm_worker.py
nemo_rl/models/generation/fp8.py
🧠 Learnings (6)
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to examples/configs/recipes/llm/*.yaml : Recipe YAML files should follow the naming pattern: <algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml for LLM recipes
Applied to files:
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to examples/configs/recipes/vlm/*.yaml : Recipe YAML files should follow the naming pattern: vlm_<algo>-<model>-<nodes>n<gpus>g-<strategy>[-modifiers][.vN].yaml for VLM recipes
Applied to files:
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-09-18T13:26:43.307Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:19-26
Timestamp: 2025-09-18T13:26:43.307Z
Learning: In on-policy distillation workflows, validation can use downstream task performance (like math problem solving) as RL-like reward metrics rather than traditional distillation metrics like KL divergence. In this case, "val_reward" with "higher_is_better: true" is the correct checkpoint monitoring configuration.
Applied to files:
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml
📚 Learning: 2025-11-24T17:24:41.976Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:24:41.976Z
Learning: Applies to tests/test_suites/nightly.txt : When adding a nightly test for a new model, append the driver script path (relative to tests/test_suites/) to tests/test_suites/nightly.txt
Applied to files:
tests/test_suites/nightly.txt
📚 Learning: 2025-10-12T14:46:57.171Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1324
File: tests/test_suites/llm/distillation-qwen3-32b-to-1.7b-base-1n8g-megatron-tp2pp2cp2-pack.sh:6-11
Timestamp: 2025-10-12T14:46:57.171Z
Learning: Test scripts in tests/test_suites/llm/ follow a standard configuration pattern that includes NUM_NODES, STEPS_PER_RUN, MAX_STEPS, NUM_RUNS (calculated as `$(( (MAX_STEPS + STEPS_PER_RUN - 1) / STEPS_PER_RUN ))`), and NUM_MINUTES. These variables are part of the test infrastructure's standard interface and should not be flagged as unused even if not directly referenced within the individual script, as they are consumed by external launch tooling or common.env.
Applied to files:
tests/test_suites/nightly.txt
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
📚 Learning: 2025-11-06T22:30:22.860Z
Learnt from: ZhiyuLi-Nvidia
Repo: NVIDIA-NeMo/RL PR: 1477
File: nemo_rl/models/generation/vllm/vllm_backend.py:163-168
Timestamp: 2025-11-06T22:30:22.860Z
Learning: For Ray actor methods in the vLLM generation worker code (vllm_backend.py, vllm_worker.py, vllm_worker_async.py), error handling should use print/traceback + return False pattern rather than raising exceptions, following the Ray RPC practice where exceptions may not propagate well across process boundaries.
Applied to files:
nemo_rl/models/generation/vllm/vllm_worker.py
🪛 Ruff (0.14.6)
nemo_rl/models/generation/vllm/vllm_worker.py
177-181: Avoid specifying long messages outside the exception class
(TRY003)
187-192: Avoid specifying long messages outside the exception class
(TRY003)
221-221: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
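For reference, the fix ruff asks for with B905 looks like this; a generic sketch, not the repo's actual code at line 221:

```python
names = ["a", "b"]
vals = [1, 2]

# strict=True makes zip raise ValueError on a length mismatch instead of
# silently truncating, which is exactly what B905 flags (Python 3.10+).
for name, val in zip(names, vals, strict=True):
    print(name, val)
```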
🪛 Shellcheck (0.11.0)
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh
[warning] 6-6: NUM_NODES appears unused. Verify use (or export if used externally).
(SC2034)
🔇 Additional comments (12)
examples/configs/recipes/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.yaml (1)
9-9: LGTM—consistent 2-node config update with proper naming. The changes correctly scale the configuration from a 1-node to a 2-node setup:
- Directory/checkpoint identifiers updated consistently (1n8g → 2n8g).
- `cluster.num_nodes: 2` properly specified with `gpus_per_node: 8` (line 59).
- Filename follows the required pattern per coding guidelines: `<algo>-<model>-<nodes>n<gpus>g-<strategy-and-params>[-modifiers][-long][.vN].yaml`.
- Changes align with the broader PR context (test suite migrations and dependency updates).
Also applies to: 51-51, 56-56, 58-58
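For reference, the two cluster keys the comment cites would look roughly like this in the recipe (a sketch of just that fragment, assuming standard YAML nesting):

```yaml
cluster:
  num_nodes: 2     # scaled up from 1 for the 2n8g recipe
  gpus_per_node: 8
```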
.gitmodules (1)
14-14: Clarify submodule branch intent—potential merge blocker. Line 14 points to `yifu/bump-torch-and-hf` (a feature branch). Pinning submodules to feature branches creates build fragility: the branch can be deleted, have its history rewritten, or go stale, potentially breaking CI and releases. Before merging, confirm:
- Is this feature branch intended to be temporary, or should it remain in production?
- If temporary, ensure the Automodel repo merges this branch to `main` (or another stable ref) before this PR merges.
- If this branch must remain referenced, update the PR description to document the dependency and expected lifetime.
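For context, the branch pin lives in a .gitmodules entry shaped like the sketch below; the URL is illustrative, and only the path and the branch field come from the discussion:

```ini
[submodule "3rdparty/Automodel-workspace/Automodel"]
	path = 3rdparty/Automodel-workspace/Automodel
	; URL assumed for illustration
	url = https://github.com/NVIDIA-NeMo/Automodel.git
	; a stable ref instead of yifu/bump-torch-and-hf
	branch = main
```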
3rdparty/Automodel-workspace/Automodel (1)
1-1: Verify that the new submodule commit is compatible with the dependency bumps. The submodule pointer has been updated to a new commit. Given that this PR includes significant dependency bumps (torch 2.9, transformers 4.57.1, vLLM 0.11.2), please verify that the new Automodel commit (`910f4e0402ec3af0c3b8642639f0347732067630`) is compatible with these updated versions and does not introduce any breaking changes or version conflicts. Consider verifying:
- The new commit exists in the Automodel repository
- What changes are included in the new commit and whether they align with the dependency upgrades
- Whether any configuration or compatibility adjustments are needed in consuming code
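A quick way to check the first two points locally; a sketch assuming the submodule path used in this PR:

```bash
# Confirm the pinned commit exists in the Automodel remote and inspect it.
cd 3rdparty/Automodel-workspace/Automodel
git fetch origin
# cat-file -t prints "commit" if the object is known after the fetch.
git cat-file -t 910f4e0402ec3af0c3b8642639f0347732067630
# Summarize what the commit touches to judge compatibility with the bumps.
git show --stat 910f4e0402ec3af0c3b8642639f0347732067630
```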
tests/test_suites/nightly.txt (1)
19-19: LGTM! The nightly test suite update correctly reflects the shift from 1n8g to 2n8g configuration, and the comment update properly describes the moonlight run section.
Also applies to: 42-42
tests/test_suites/llm/grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e.sh (1)
5-11: LGTM! The `NUM_NODES=2` update correctly aligns with the 2n8g configuration indicated in the filename. The ShellCheck warning about `NUM_NODES` being unused is a false positive—based on learnings, these variables are consumed by external launch tooling or `common.env`.
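For context, the standard header these driver scripts share, per the learning recorded above, looks roughly like this (step counts are placeholders):

```bash
#!/bin/bash
# Standard test-suite interface; consumed by external launch tooling or
# common.env, so ShellCheck SC2034 "unused" warnings here are expected.
NUM_NODES=2
STEPS_PER_RUN=20   # placeholder value
MAX_STEPS=40       # placeholder value
NUM_RUNS=$(( (MAX_STEPS + STEPS_PER_RUN - 1) / STEPS_PER_RUN ))  # ceil division
NUM_MINUTES=120    # placeholder value
```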
pyproject.toml (2)
60-60: LGTM! The vllm version is consistently updated to 0.11.2 across all optional-dependencies sections (automodel, vllm, mcore).
Also applies to: 72-72, 95-95
106-108: LGTM! The build dependency group correctly updates torch to 2.9.0, maintaining consistency with the main dependencies section.
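Put together, the pins under review would look roughly like the following pyproject.toml fragment; the group names and layout are assumptions, only the versions come from the PR:

```toml
[project.optional-dependencies]
automodel = ["vllm==0.11.2"]
vllm = ["vllm==0.11.2"]
mcore = ["vllm==0.11.2"]

[dependency-groups]
build = ["torch==2.9.0"]
```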
tools/build-custom-vllm.sh (1)
69-69: LGTM! The torch installation correctly updates to 2.9.0 with the cu129 wheel index, maintaining consistency with `pyproject.toml`. The xformers version (0.0.32.post1) on line 61 is appropriately updated to be compatible with torch 2.9.
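Such an installation line might look like the sketch below; the wheel-index URL follows PyTorch's usual cuXYZ pattern and should be treated as an assumption, not a quote from the script:

```bash
# Pin torch to 2.9.0 from the CUDA 12.9 wheel index (URL pattern assumed).
uv pip install torch==2.9.0 --index-url https://download.pytorch.org/whl/cu129
```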
nemo_rl/models/generation/fp8.py (2)
426-457: LGTM! In-place FP8 weight post-processing preserves `weight_loader` compatibility. The implementation correctly:
- Checks if DeepGemm should be used before processing
- Uses `.data.copy_()` for in-place updates instead of creating new `torch.nn.Parameter` objects, preserving the `weight_loader` attribute needed for refit
- Properly references the vLLM source for traceability
The lazy imports are appropriate for optional vLLM dependencies.
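A minimal sketch of the in-place pattern being praised, with hypothetical names rather than the repo's actual code:

```python
import torch


def requantize_weight_inplace(layer: torch.nn.Module, new_weight: torch.Tensor) -> None:
    """Overwrite parameter data without replacing the Parameter object.

    Rebinding `layer.weight = torch.nn.Parameter(new_weight)` would drop
    attributes attached to the parameter (e.g. vLLM's `weight_loader`),
    breaking later refits; copying into `.data` keeps them intact.
    """
    with torch.no_grad():
        layer.weight.data.copy_(new_weight)


# Hypothetical usage: the weight_loader attribute survives the update.
layer = torch.nn.Linear(4, 4)
layer.weight.weight_loader = lambda *args: None  # stand-in for vLLM's loader
requantize_weight_inplace(layer, torch.zeros(4, 4))
assert hasattr(layer.weight, "weight_loader")
```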
459-484: LGTM! The integration of `maybe_post_process_fp8_weight_block` at the end of `process_weights_after_loading` is correct. The call order ensures `layer.weight_scale` is properly initialized before the DeepGemm-specific post-processing is applied.
nemo_rl/models/generation/vllm/vllm_worker.py (2)
19-19: LGTM! The import of `find_spec` is appropriate for the new dynamic vLLM file discovery approach.
168-194: LGTM! The helper function provides robust runtime discovery of vLLM files with clear error messages. The detailed error messages flagged by ruff (TRY003) are actually beneficial for debugging installation and version issues.
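A minimal sketch of what find_spec-based discovery looks like in general; the function and the example relative path are illustrative, not the repo's actual helper:

```python
from importlib.util import find_spec
from pathlib import Path


def locate_package_file(package: str, relative_path: str) -> Path:
    """Resolve a file inside an installed package instead of hardcoding paths."""
    spec = find_spec(package)
    if spec is None or spec.origin is None:
        raise ImportError(f"{package} is not installed or has no module origin")
    candidate = Path(spec.origin).parent / relative_path
    if not candidate.is_file():
        # A detailed message helps debug installation/version mismatches,
        # which is why the TRY003 warning is tolerated in the real helper.
        raise FileNotFoundError(
            f"{relative_path} not found under {Path(spec.origin).parent}; "
            f"the installed {package} version may have moved this file"
        )
    return candidate


# Illustrative usage (the relative path here is hypothetical):
# worker_file = locate_package_file("vllm", "v1/worker/gpu_worker.py")
```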
…DIA-NeMo#1563) Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: Charlie Truong <chtruong@nvidia.com> Signed-off-by: Peter Jin <pjin@nvidia.com> Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com> Co-authored-by: Guyue Huang <guyueh@nvidia.com> Co-authored-by: Charlie Truong <chtruong@nvidia.com> Co-authored-by: Peter Jin <pjin@nvidia.com> Co-authored-by: Dong Hyuk Chang <donghyukc@nvidia.com>
…DIA-NeMo#1563) Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: Charlie Truong <chtruong@nvidia.com> Signed-off-by: Peter Jin <pjin@nvidia.com> Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com> Co-authored-by: Guyue Huang <guyueh@nvidia.com> Co-authored-by: Charlie Truong <chtruong@nvidia.com> Co-authored-by: Peter Jin <pjin@nvidia.com> Co-authored-by: Dong Hyuk Chang <donghyukc@nvidia.com> Signed-off-by: Parth Mannan <pmannan@nvidia.com>
…DIA-NeMo#1563) Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: Charlie Truong <chtruong@nvidia.com> Signed-off-by: Peter Jin <pjin@nvidia.com> Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com> Co-authored-by: Guyue Huang <guyueh@nvidia.com> Co-authored-by: Charlie Truong <chtruong@nvidia.com> Co-authored-by: Peter Jin <pjin@nvidia.com> Co-authored-by: Dong Hyuk Chang <donghyukc@nvidia.com> Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
What does this PR do ?
Updates vllm to 0.11.2, torch to 2.9, transformers to 4.57.1. Also updates Automodel to use the `main` branch.

Issues
List issues that this PR closes (syntax):
Usage
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"
Pre checks:
Additional Information
Summary by CodeRabbit
Release Notes
Updates
Improvements