cp: fix: Fix fp8 after vllm v0.11.2 bump (1660) into r0.5.0 (#1673)
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
📝 Walkthrough

This pull request refactors FP8 quantization handling in NeMo-RL's generation module. Changes include switching the vLLM patch target from `_run_workers` to `collective_rpc`.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
nemo_rl/models/generation/fp8.py (1)
226-226: Critical bug: `list.append()` called with multiple arguments.

`list.append()` accepts only one argument, but three are passed here. This will raise a `TypeError` at runtime when `use_activation_pow2_scale` is `True`.

🔎 Proposed fix

```diff
- fp8_state.vllm_patches.append(patcher2, patcher3, patcher4)
+ fp8_state.vllm_patches.extend([patcher2, patcher3, patcher4])
```
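A minimal standalone sketch (with stand-in patcher objects, not the real ones from `fp8.py`) showing why the fix is needed: `append` takes exactly one element, while `extend` consumes an iterable.

```python
# Stand-in objects in place of the real vllm patchers.
patcher2, patcher3, patcher4 = object(), object(), object()
patches = []

# The buggy call: list.append() accepts a single argument, so passing
# three raises TypeError at the call site.
try:
    patches.append(patcher2, patcher3, patcher4)
except TypeError as exc:
    print(f"append failed: {exc}")

# The fix: extend() adds each element of the iterable in order.
patches.extend([patcher2, patcher3, patcher4])
print(len(patches))  # 3
```

Note the failure mode matters here because the buggy path only runs when `use_activation_pow2_scale` is enabled, so it can escape casual testing.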
🧹 Nitpick comments (1)
nemo_rl/models/generation/fp8.py (1)
83-101: LGTM - API migration to `collective_rpc` looks correct.

The patch target change from `_run_workers` to `collective_rpc` aligns with vLLM v0.11.2's API. The interception logic correctly applies FP8 patches before the original method executes.

Consider renaming `original_run_workers` to `original_collective_rpc` for clarity, since it no longer wraps `_run_workers`.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
nemo_rl/models/generation/fp8.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
Files:
nemo_rl/models/generation/fp8.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
nemo_rl/models/generation/fp8.py
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
nemo_rl/models/generation/fp8.py
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
nemo_rl/models/generation/fp8.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Lint check
- GitHub Check: Lint check
- GitHub Check: Lint check
- GitHub Check: Post submodule check comment / Comment on PR
- GitHub Check: Post automodel integration comment / Comment on PR
🔇 Additional comments (3)
nemo_rl/models/generation/fp8.py (3)
592-598: LGTM - In-place copy preserves Parameter identity.

Using `copy_()` instead of direct assignment correctly preserves the existing `torch.nn.Parameter` object and its `weight_loader` attribute during refit, which is the stated goal of this function.
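A hedged sketch of why the in-place copy matters, assuming PyTorch is available. Assigning a new tensor would rebind the attribute and drop whatever was attached to the old `Parameter` (such as a `weight_loader`), while `copy_()` mutates the storage and keeps the same object. The `weight_loader` string here is a stand-in attribute, not the real vLLM loader.

```python
import torch

param = torch.nn.Parameter(torch.zeros(2, 2))
param.weight_loader = "custom_loader"  # attribute attached vLLM-style

before = param
with torch.no_grad():  # in-place write to a leaf requires no_grad
    param.copy_(torch.ones(2, 2))

# Same Parameter object, same attached attribute, new values.
assert param is before
assert param.weight_loader == "custom_loader"
print(param.sum().item())  # 4.0
```

Had the code done `layer.weight = torch.nn.Parameter(new_tensor)` instead, the `weight_loader` attribute on the old object would be lost for the next refit.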
637-653: LGTM - MoE weight post-processing with DeepGEMM.

The refactored logic correctly:
- Extracts weights (with flashinfer swap when applicable)
- Applies DeepGEMM post-processing conditionally
- Uses in-place `copy_()` to update layer weights, preserving Parameter objects for refit

This is consistent with the approach used in `process_weights_after_loading` for Linear layers.
609-621: The import path and API call changes claimed in this review comment cannot be verified. Documentation and PR references for vLLM show that the stable API is `is_rocm_aiter_moe_enabled()` from the `rocm_aiter_fused_moe` module, not `rocm_aiter_ops.is_fused_moe_enabled()` from `vllm._aiter_ops`. Verify the actual import structure and API calls in the current code before accepting these changes.
…IA-NeMo#1673) Signed-off-by: Guyue Huang <guyueh@nvidia.com> Signed-off-by: NeMo Bot <nemo-bot@nvidia.com> Co-authored-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>