
cp: fix: Fix gradient clipping of non-float32 params (#1158) into r0.4.0 #1258

Closed
chtruong814 wants to merge 1 commit into r0.4.0 from cherry-pick-1158-r0.4.0

Conversation

Contributor

chtruong814 commented Oct 2, 2025

beep boop [🤖]: Hi @jseppanen 👋,

we've cherry-picked #1158 into r0.4.0 for you! 🚀

Please review and approve this cherry-pick at your convenience!

Summary by CodeRabbit

  • New Features

    • None; no user-facing changes.
  • Refactor

    • Simplified gradient clipping in training workflows, reducing overhead and improving maintainability.
    • Standardized clipping behavior across policy workers.
  • Performance

    • Slightly faster training steps when gradients do not require clipping.
  • Chores

    • Updated internal calls to align with the streamlined clipping behavior, keeping functionality consistent.

Signed-off-by: Jarno Seppänen <jseppanen@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>

github-actions Bot commented Oct 2, 2025

⚠️ File Consistency Check

Check based on commit: 2dbc493 (PR #1258 from cherry-pick-1158-r0.4.0)

⚠️ Parallel Plans Synchronization Warning

The file nemo_rl/models/dtensor/parallelize.py was modified in this PR, but neither 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/optimized_tp_plans.py nor 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/parallelizer.py was updated.

Why this matters:
These files contain similar parallel plan implementations that should be kept synchronized to ensure consistency across the codebase.

Action required:

  • Please review if the changes in nemo_rl/models/dtensor/parallelize.py should also be applied to 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/optimized_tp_plans.py or 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/parallelizer.py
  • Update the appropriate related file(s) if necessary to maintain functional consistency
  • Request access to the NVIDIA-NeMo/Automodel repository, create a PR against the nemo-rl-submodule branch, and update the Automodel submodule in the nemo-rl index
  • Add @ffrujeri as a reviewer of this PR if you have any questions about the consistency requirements
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/dtensor/parallelize.py
  • Not modified: 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/optimized_tp_plans.py
  • Not modified: 3rdparty/Automodel-workspace/Automodel/nemo_automodel/components/distributed/parallelizer.py

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/dtensor_policy_worker.py
  • nemo_rl/models/policy/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

Contributor

coderabbitai Bot commented Oct 2, 2025

📝 Walkthrough


Removed dtype handling from gradient clipping utility and updated policy workers to match new signature. clip_grad_by_total_norm_ now builds gradients only when clipping is needed (clip_coeff < 1.0), eliminating prior dtype casting. Policy workers drop the dtype argument in their train() clipping calls; other training flow remains unchanged.
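For orientation, below is a minimal sketch of the reworked utility as described in this walkthrough; it is a reconstruction, not the exact NeMo-RL code. The eps guard and parameter names are assumptions, and the real implementation lives in nemo_rl/models/dtensor/parallelize.py.

```python
# Hypothetical sketch of the behavior described above; not the exact
# NeMo-RL implementation. `eps` is an assumed guard against division by zero.
from typing import Iterable

import torch


def clip_grad_by_total_norm_(
    parameters: Iterable[torch.nn.Parameter],
    max_grad_norm: float,
    total_norm: float,
    eps: float = 1.0e-6,
) -> None:
    """Scale gradients in place so their total norm stays within max_grad_norm."""
    clip_coeff = max_grad_norm / (total_norm + eps)
    # Gradients are collected only when clipping is actually needed, which
    # skips the list construction entirely on well-behaved steps.
    if clip_coeff < 1.0:
        grads = [p.grad for p in parameters if p.grad is not None]
        for g in grads:
            # In-place scaling keeps each gradient in its original dtype
            # (bf16 stays bf16); no .to(dtype) cast is performed.
            g.mul_(clip_coeff)
```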

Changes

Cohort / File(s) — Summary

  • Gradient clipping core change — nemo_rl/models/dtensor/parallelize.py
    Removed the dtype parameter from clip_grad_by_total_norm_. Deferred gradient collection until clipping is required and eliminated the .to(dtype) casts; updated the function signature accordingly.
  • Policy worker callsite updates — nemo_rl/models/policy/dtensor_policy_worker.py, nemo_rl/models/policy/dtensor_policy_worker_v2.py
    Updated train() to call clip_grad_by_total_norm_ without the dtype argument. No other logic changes; grad norm computation remains the same.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Trainer
  participant PolicyWorker
  participant GradUtils as Grad Utils

  Trainer->>PolicyWorker: train()
  PolicyWorker->>GradUtils: get_grad_norm(params, dtype=float32)
  GradUtils-->>PolicyWorker: total_norm
  PolicyWorker->>GradUtils: clip_grad_by_total_norm_(params, max_norm, total_norm)
  note right of GradUtils: New: lazily collect grads only if clip_coeff < 1.0<br/>No dtype casting performed
  GradUtils-->>PolicyWorker: gradients possibly clipped
  PolicyWorker-->>Trainer: step complete
```
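In code terms, the callsite change in both policy workers' train() methods amounts to dropping one argument, roughly as sketched below; get_grad_norm and its dtype keyword come from the diagram, while the surrounding names (model, max_grad_norm) are illustrative.

```python
# Hypothetical train() callsite sketch; only the two utility names come from
# this PR, the rest is illustrative and not runnable on its own.
total_norm = get_grad_norm(model.parameters(), dtype=torch.float32)

# Before this PR, a dtype was forwarded and gradients were cast to it:
#     clip_grad_by_total_norm_(model.parameters(), max_grad_norm, total_norm, dtype=torch.float32)
# After, the argument is gone and each gradient keeps its own dtype:
clip_grad_by_total_norm_(model.parameters(), max_grad_norm, total_norm)
```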

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested labels

r0.4.0

Suggested reviewers

  • joyang-nv
  • yfw
  • parthchadha

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Title Check — ⚠️ Warning
    Explanation: The title includes cherry-pick metadata and formatting noise that obscure the core change, making it longer and less clear than a concise summary of the fix.
    Resolution: Rename the pull request to a clear, single sentence such as “Fix gradient clipping for non-float32 parameters” and omit the cherry-pick prefix and target branch reference.
  • Test Results For Major Changes — ⚠️ Warning
    Explanation: The PR adjusts gradient clipping behavior for non-float32 parameters, which directly impacts numerics, yet the description contains only the automated cherry-pick notice and offers no testing or validation details to demonstrate the absence of regressions.
    Resolution: Update the PR description with test results or other evidence showing that gradient clipping continues to behave correctly after these changes, especially for non-float32 parameter cases.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed
    Check skipped: CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage — ✅ Passed
    Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d82ca75 and 2dbc493.

📒 Files selected for processing (3)
  • nemo_rl/models/dtensor/parallelize.py (1 hunks)
  • nemo_rl/models/policy/dtensor_policy_worker.py (0 hunks)
  • nemo_rl/models/policy/dtensor_policy_worker_v2.py (0 hunks)
💤 Files with no reviewable changes (2)
  • nemo_rl/models/policy/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/dtensor_policy_worker.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts
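As a quick illustration of several of these rules together, here is a hypothetical snippet with invented names, not taken from NeMo-RL:

```python
# Hypothetical example of the naming conventions above; all names invented.
G_WORKER_REGISTRY: dict = {}  # global: UPPER_SNAKE_CASE with a G_ prefix

DEFAULT_EPS = 1.0e-6  # constant: UPPER_SNAKE_CASE


class GradScalerPolicy:  # class name: PascalCase; file would be grad_scaler_policy.py
    """Public interface documented with a Google-style docstring.

    Attributes:
        eps: Small constant guarding against division by zero.
    """

    def __init__(self, eps: float = DEFAULT_EPS) -> None:
        # Externally visible members are initialized in the constructor.
        self.eps = eps

    def clip_coefficient(self, max_norm: float, total_norm: float) -> float:  # snake_case method
        return max_norm / (total_norm + self.eps)
```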

Files:

  • nemo_rl/models/dtensor/parallelize.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)
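A hypothetical sketch of these NeMo-RL-specific rules (required keys accessed directly, optionality via NotRequired, and the coverage pragma on Ray remotes); all names here are invented:

```python
# Hypothetical config sketch; names are invented for illustration.
from typing import NotRequired, TypedDict

import ray


class PolicyConfig(TypedDict):
    precision: str  # required: accessed directly, e.g. policy_cfg["precision"]
    # max_grad_norm: maximum gradient norm for clipping (float > 0);
    # the recommended default (e.g. 1.0) lives in YAML, not in code.
    max_grad_norm: NotRequired[float]


@ray.remote  # pragma: no cover
class PolicyWorker:
    def __init__(self, policy_cfg: PolicyConfig) -> None:
        # Assume required keys are present; no hidden defaults in code.
        self.precision = policy_cfg["precision"]
```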

Files:

  • nemo_rl/models/dtensor/parallelize.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/models/dtensor/parallelize.py (1)

743-751: LGTM! Gradient clipping now preserves the original dtype, lazy grads construction avoids unnecessary overhead, and all call sites correctly compute and pass total_norm.
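As a quick sanity check of that claim, here is a hypothetical toy test (not part of this PR) that runs the clip_grad_by_total_norm_ sketch from the walkthrough above on a bf16 model; max_norm=1.0 is arbitrary:

```python
# Hypothetical check that clipping preserves non-float32 grad dtypes.
import torch

model = torch.nn.Linear(4, 4, dtype=torch.bfloat16)
loss = model(torch.randn(2, 4, dtype=torch.bfloat16)).square().sum()
loss.backward()

# Mirror the production flow: compute the total grad norm in float32...
total_norm = torch.norm(
    torch.stack([p.grad.detach().to(torch.float32).norm() for p in model.parameters()])
).item()

# ...then clip without any dtype argument (sketch defined earlier).
clip_grad_by_total_norm_(list(model.parameters()), 1.0, total_norm)

# In-place scaling never casts: gradients remain bf16.
assert all(p.grad.dtype == torch.bfloat16 for p in model.parameters())
```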



Contributor

jseppanen left a comment


lgtm

@chtruong814 chtruong814 deleted the branch r0.4.0 October 2, 2025 13:05
@chtruong814 chtruong814 closed this Oct 2, 2025