cp: fix: Fix fp8 after vllm v0.11.2 bump (1660) into r0.5.0 #1673

Merged
terrykong merged 1 commit into r0.5.0 from cherry-pick-1660-r0.5.0 on Dec 22, 2025


Conversation

@chtruong814 (Contributor) commented Dec 20, 2025

beep boop [🤖]: Hi @guyueh1 👋,

we've cherry-picked #1660 into r0.5.0 for you! 🚀

Please review and approve this cherry-pick at your convenience!

Summary by CodeRabbit

  • Improvements
    • Enhanced FP8 quantization handling for distributed inference operations.
    • Optimized weight post-processing for Mixture-of-Experts models with improved deep GEMM processing.
    • Updated KV-cache handling with FP8 quantization support.


Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
@coderabbitai (Bot) commented Dec 20, 2025

📝 Walkthrough

This pull request refactors FP8 quantization handling in NeMo-RL's generation module. Changes include switching the vLLM patch target to collective_rpc, removing conditional guards on KV-cache patching, converting weight-scale updates to in-place operations, replacing deprecated ROCm library imports, and refactoring MoE weight post-processing to use DeepGEMM optimization instead of legacy column-major alignment.

Changes

FP8 Patching & Weight Processing (nemo_rl/models/generation/fp8.py):

  • Updated the vLLM RayDistributedExecutor patch target from _run_workers to collective_rpc
  • Removed the conditional KV-cache dtype guard
  • Switched weight-scale updates to in-place copy_()
  • Replaced rocm_aiter_fused_moe imports with rocm_aiter_ops
  • Refactored MoE weight post-processing to use deepgemm_post_process_fp8_weight_block and removed legacy column-major alignment operations

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Areas requiring extra attention:

  • vllm patch target change from _run_workers to collective_rpc — verify compatibility with target vllm version and test collective RPC invocation flow
  • Removal of KV-cache conditional guard — confirm that unconditional patching doesn't cause issues in non-FP8 scenarios
  • MoE weight processing refactor — validate that deepgemm_post_process_fp8_weight_block produces equivalent or improved weight quantization compared to legacy logic
  • rocm library import replacement — ensure rocm_aiter_ops.is_fused_moe_enabled() provides equivalent functionality to replaced method

Suggested labels

CI:L2, r0.5.0

Suggested reviewers

  • guyueh1
  • terrykong

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Test Results For Major Changes ⚠️ Warning: The PR introduces an 876-line fp8.py with comprehensive FP8 support but lacks test results, validation metrics, and regression-testing documentation. Update the PR description with explicit test-execution confirmation (grpo-llama3.1-8b-instruct-1n8g8g-megatron-fp8-rollouts, grpo-llama3.1-8b-instruct-2n8g-megatron-fp8-e2e, grpo-qwen3-8b-base-1n8g-fp8-kvcache-megatron), convergence metrics, vLLM v0.11.2 validation, and CI log links or a manual validation summary.
✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly indicates an FP8 fix related to the vLLM v0.11.2 bump, directly matching the changeset's focus on updating FP8 patching and MoE weight handling after the vLLM version change.


@coderabbitai (Bot) left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
nemo_rl/models/generation/fp8.py (1)

226-226: Critical bug: list.append() called with multiple arguments.

list.append() accepts only one argument, but three are passed here. This will raise a TypeError at runtime when use_activation_pow2_scale is True.

🔎 Proposed fix
-            fp8_state.vllm_patches.append(patcher2, patcher3, patcher4)
+            fp8_state.vllm_patches.extend([patcher2, patcher3, patcher4])
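For context, list.append() takes exactly one argument, while list.extend() consumes an iterable and adds each element individually. A minimal standalone illustration of the difference (plain Python, independent of the fp8.py code; the "patcher" strings are placeholders for the actual patch objects):

```python
patches = []

# append() takes exactly one argument; passing three raises a TypeError,
# which is the runtime failure the review comment flags.
try:
    patches.append("patcher2", "patcher3", "patcher4")
except TypeError as exc:
    print(f"append failed: {exc}")

# extend() consumes an iterable, adding each element individually.
patches.extend(["patcher2", "patcher3", "patcher4"])
print(patches)  # ['patcher2', 'patcher3', 'patcher4']
```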
🧹 Nitpick comments (1)
nemo_rl/models/generation/fp8.py (1)

83-101: LGTM - API migration to collective_rpc looks correct.

The patch target change from _run_workers to collective_rpc aligns with vLLM v0.11.2's API. The interception logic correctly applies FP8 patches before the original method executes.

Consider renaming original_run_workers to original_collective_rpc for clarity, since it no longer wraps _run_workers.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d488064 and 48963d2.

📒 Files selected for processing (1)
  • nemo_rl/models/generation/fp8.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/generation/fp8.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/generation/fp8.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/generation/fp8.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/generation/fp8.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post automodel integration comment / Comment on PR
🔇 Additional comments (3)
nemo_rl/models/generation/fp8.py (3)

592-598: LGTM - In-place copy preserves Parameter identity.

Using copy_() instead of direct assignment correctly preserves the existing torch.nn.Parameter object and its weight_loader attribute during refit, which is the stated goal of this function.


637-653: LGTM - MoE weight post-processing with DeepGEMM.

The refactored logic correctly:

  1. Extracts weights (with flashinfer swap when applicable)
  2. Applies DeepGEMM post-processing conditionally
  3. Uses in-place copy_() to update layer weights, preserving Parameter objects for refit

This is consistent with the approach used in process_weights_after_loading for Linear layers.
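The three-step flow summarized above can be sketched as follows. Every name here (post_process_moe_weights, needs_flashinfer_swap, the doubling stand-in for DeepGEMM post-processing) is a hypothetical placeholder, not vLLM or NeMo-RL API:

```python
def post_process_moe_weights(w13, w2, use_deep_gemm, needs_flashinfer_swap):
    """Sketch of the three-step MoE weight post-processing described above."""
    # 1. Extract weights, swapping halves when the flashinfer layout is in use.
    if needs_flashinfer_swap:
        w13 = w13[len(w13) // 2:] + w13[:len(w13) // 2]

    # 2. Conditionally apply DeepGEMM-style post-processing
    #    (a doubling op stands in for the real block-scale transform).
    if use_deep_gemm:
        w13 = [x * 2 for x in w13]
        w2 = [x * 2 for x in w2]

    return w13, w2


def refit(layer, new_w13, new_w2):
    # 3. In-place update so the layer keeps its original weight objects,
    #    matching the copy_() approach used for Linear layers.
    layer["w13"][:] = new_w13
    layer["w2"][:] = new_w2


layer = {"w13": [1, 2, 3, 4], "w2": [5, 6]}
w13, w2 = post_process_moe_weights(layer["w13"], layer["w2"],
                                   use_deep_gemm=True,
                                   needs_flashinfer_swap=True)
refit(layer, w13, w2)
print(layer)  # {'w13': [6, 8, 2, 4], 'w2': [10, 12]}
```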


609-621: The import-path and API-call changes claimed in this review comment could not be verified. vLLM documentation and PR references show the stable API as is_rocm_aiter_moe_enabled() from the rocm_aiter_fused_moe module, not rocm_aiter_ops.is_fused_moe_enabled() from vllm._aiter_ops. Verify the actual import structure and API calls in the current code before accepting these changes.

@terrykong terrykong added the CI:L1 Run doctests, unit tests, and functional tests label Dec 21, 2025
@terrykong terrykong enabled auto-merge (squash) December 21, 2025 18:51
@terrykong terrykong merged commit bc352c4 into r0.5.0 Dec 22, 2025
96 of 104 checks passed
@terrykong terrykong deleted the cherry-pick-1660-r0.5.0 branch December 22, 2025 04:37
avenkateshha pushed a commit to avenkateshha/RL that referenced this pull request Apr 10, 2026
…IA-NeMo#1673)

Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Co-authored-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>

Labels

cherry-pick CI:L1 Run doctests, unit tests, and functional tests Run CICD


3 participants