
[Megatron-FSDP] Fix incorrect gradient scaling target.#3023

Merged
cspades merged 2 commits into NVIDIA:main from cspades:cye/grad-scale-bugfix
Jan 22, 2026

Conversation

@cspades cspades (Member) commented Jan 21, 2026

What does this PR do?

  • When not using per-token loss or collective averaging, we weren't actually applying the 1/DP scaling to the gradient in Megatron-FSDP. This was observed outside of Megatron-LM (e.g. via fully_shard) and explains why we don't have gradient scaling parity with FSDP2. Golden-value or FSDP2 comparison tests will be added later. (A sketch of the intended scaling follows below.)
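For context, a minimal sketch of the intended behavior (hypothetical function and names, not the Megatron-FSDP API): when each rank computes a mean loss over its local batch, the gradient reduced across the data-parallel group needs a 1/DP factor to match single-GPU training and FSDP2's default gradient averaging.

```python
# Hypothetical sketch of the intended 1/DP gradient scaling; not Megatron-FSDP code.
import torch
import torch.distributed as dist

def reduce_gradients(params, dp_group=None):
    """Sum gradients across the data-parallel group, then apply the 1/DP average."""
    dp_size = dist.get_world_size(group=dp_group)
    for p in params:
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM, group=dp_group)
            p.grad.div_(dp_size)  # the 1/DP scaling this PR restores
```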

Details

  • gbuf.data != bucket.data when the gradient is sharded, so we were applying the scaling to the accumulated gradient shard and then adding in a non-scaled reduced gradient. We end up with something like:
((((0 / DP + grad_step_1) / DP + grad_step_2) / DP + grad_step_3) / DP + grad_step_4 ... )

When the gradient accumulation period is > 1, the i-th accumulated gradient ends up scaled by DP**(i - grad_acc_steps), and when not accumulating at all, we simply never scale by DP. 😓 A standalone illustration is sketched below.
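A minimal, runnable sketch of the ordering issue (the buffer/bucket names and helper functions here are illustrative, not the actual Megatron-FSDP internals): scaling the accumulated shard before adding the next unscaled reduced gradient over-divides earlier micro-batches, whereas scaling each reduced bucket exactly once gives the intended average.

```python
# Illustrative sketch only; not Megatron-FSDP code.
import torch

DP = 4              # data-parallel world size
GRAD_ACC_STEPS = 3  # gradient accumulation period

def buggy_accumulate(gbuf, reduced_buckets):
    # Bug: the 1/DP scaling targets the accumulated shard (gbuf), then an
    # unscaled reduced gradient is added, so earlier micro-batch gradients
    # get divided by DP repeatedly: ((0/DP + g1)/DP + g2)/DP + g3 ...
    for bucket in reduced_buckets:
        gbuf.mul_(1.0 / DP)
        gbuf.add_(bucket)
    return gbuf

def fixed_accumulate(gbuf, reduced_buckets):
    # Fix: scale each reduced bucket by 1/DP before accumulating, so every
    # micro-batch gradient is scaled by 1/DP exactly once.
    for bucket in reduced_buckets:
        gbuf.add_(bucket / DP)
    return gbuf

buckets = [torch.ones(4) for _ in range(GRAD_ACC_STEPS)]
print(buggy_accumulate(torch.zeros(4), buckets))  # earlier steps over-divided by DP
print(fixed_accumulate(torch.zeros(4), buckets))  # every step scaled by exactly 1/DP
```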

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or tag the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

Signed-off-by: Cory Ye <cye@nvidia.com>
@cspades cspades self-assigned this Jan 21, 2026
@cspades cspades requested review from a team as code owners January 21, 2026 02:56
@ko3n1g ko3n1g requested a review from a team January 21, 2026 02:57
@ko3n1g ko3n1g added this to the Core 0.16 milestone Jan 21, 2026
@cspades cspades added the bug and Expert Review labels Jan 21, 2026
@shjwudp shjwudp (Contributor) commented Jan 21, 2026

/ok to test e0ac855

@shjwudp shjwudp enabled auto-merge January 21, 2026 06:11
@cspades cspades added the Final Review label and removed the Expert Review label Jan 21, 2026
@cspades cspades disabled auto-merge January 21, 2026 19:06
@cspades cspades enabled auto-merge January 21, 2026 19:06
@cspades cspades added this pull request to the merge queue Jan 21, 2026
@cspades cspades changed the title Fix incorrect gradient scaling target. [Megatron-FSDP] Fix incorrect gradient scaling target. Jan 21, 2026
Merged via the queue into NVIDIA:main with commit ba456fd Jan 22, 2026
50 of 54 checks passed
@cspades cspades deleted the cye/grad-scale-bugfix branch January 22, 2026 00:20
daiyaanarfeen pushed a commit to daiyaanarfeen/Megatron-LM that referenced this pull request Feb 23, 2026