
Make Megatron-FSDP torch.compile compatible #2425

Merged
shjwudp merged 14 commits into NVIDIA:main from shjwudp:mfsdp_torch_compile
Jan 20, 2026

Conversation

@shjwudp (Contributor) commented Nov 28, 2025

What does this PR do ?

This PR makes Megatron-FSDP compatible with torch.compile by disabling compilation for its internal FSDP hooks using @torch.compiler.disable. These hooks rely on eager-mode behavior and can confuse the compiler’s tracing and graph construction, leading to graph breaks or errors when compiling models wrapped with Megatron-FSDP. By explicitly opting these hook entry points out of compilation, the main model computation remains compilable while Megatron-FSDP continues to manage sharding and communication in eager mode.
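
For illustration, here is a minimal sketch of the mechanism; the class and hook names below are hypothetical, not the actual Megatron-FSDP internals:

```python
import torch

class IllustrativeFSDPHooks:
    """Hypothetical stand-in for Megatron-FSDP's internal hook container."""

    @torch.compiler.disable
    def pre_forward_hook(self, module, args):
        # Dynamo skips tracing this function (and, by default, anything it
        # calls), so the eager-mode parameter all-gather stays out of the graph.
        ...

    @torch.compiler.disable
    def post_backward_hook(self, *grads):
        # Gradient reduce-scatter likewise runs in eager mode; only the
        # model's own computation is captured by torch.compile.
        ...
```

The model itself can then still be compiled as usual, e.g. `compiled_model = torch.compile(fsdp_model)`.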

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@copy-pr-bot (bot) commented Nov 28, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@shjwudp changed the title from "Draft: disable m-fsdp hooks for torch compile compatible" to "Make Megatron-FSDP torch.compile compatible" Dec 16, 2025
@shjwudp (Contributor, Author) commented Dec 16, 2025

/ok to test f45c937

@ko3n1g added this to the Core 0.16 milestone Dec 16, 2025
@shjwudp marked this pull request as ready for review December 16, 2025 16:04
@shjwudp requested review from a team as code owners December 16, 2025 16:04
@shjwudp added the Expert Review label Dec 16, 2025
@shjwudp (Contributor, Author) commented Dec 16, 2025

/ok to test 2237d64

@cspades (Member) left a comment

Torch compilation definitely works now, but using compile with Megatron-FSDP seems to hurt performance for a variety of standard model architectures, likely because compilation is skipped for our collectives, so the compiled program has to work around them and ends up slower overall.

| Model | AVG STEP TIME (COMPILE=True) | AVG STEP TIME (COMPILE=False) |
| --- | --- | --- |
| Linear | 0.03973395120352507 | 0.029659099504351617 |
| CNN | 0.030622328780591488 | 0.02351114109158516 |
| Transformer | 0.36532023292034865 | 0.21396012518554927 |
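
For context, a sketch of the kind of timing loop that could produce averages like these (hypothetical, not the actual test harness; assumes a CUDA setup and pre-built batches):

```python
import time
import torch
import torch.nn.functional as F

def avg_step_time(model, optimizer, batches, warmup=5):
    # Skip the first `warmup` steps so torch.compile's one-time
    # compilation cost is excluded from the average.
    times = []
    for i, (x, y) in enumerate(batches):
        torch.cuda.synchronize()
        start = time.perf_counter()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        torch.cuda.synchronize()
        if i >= warmup:
            times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# e.g. avg_step_time(torch.compile(fsdp_model), optimizer, batches)
```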

I believe FSDP2 sees a performance improvement from compilation, possibly due to:

> Functional collectives. If you don't like DTensor, we also support "functional collectives", which are non-mutating versions of collective operations that can be used to manually implement SPMD operations in a compiler-friendly way without needing DTensor. (In fact, if you use traditional collective APIs and compile them, we will silently translate them into functional collectives for compiler passes.) When compiled, functional collectives don't necessarily force allocation of the output buffer as they can be re-inplaced. Importantly, functional collectives currently do NOT support autograd, see https://discuss.pytorch.org/t/supporting-autograd-for-collectives/219430

which is taken from @ezyang's blog (https://blog.ezyang.com/2025/08/state-of-torch-compile-august-2025/), among many other optimizations that make FSDP2 lightly compatible with and improved by torch.compile.

We'll likely need more changes to deeply support compilation, unless the model architecture benefits enough from compilation to offset both the uncompiled collectives and the compile-time overhead. I'm not experienced with compilation, so I have no immediate ideas at the moment; just wanted to write down these thoughts somewhere.
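
To make the quoted idea concrete, here is a minimal sketch of the functional-collective style, assuming an initialized process group; note that `torch.distributed._functional_collectives` is a private module whose surface may change across PyTorch versions:

```python
import torch
import torch.distributed as dist
from torch.distributed import _functional_collectives as funcol

def allreduce_mean(t: torch.Tensor) -> torch.Tensor:
    # The traditional collective mutates its input in place:
    #   dist.all_reduce(t, op=dist.ReduceOp.SUM)
    # The functional form returns a fresh tensor instead, which lets the
    # compiler trace it like any other op (and re-inplace it when safe).
    # Per the quote above, functional collectives do not support autograd.
    summed = funcol.all_reduce(t, reduceOp="sum", group=dist.group.WORLD)
    return summed / dist.get_world_size()
```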

@cspades (Member) commented Dec 16, 2025

Clicked the "update branch" button since main branch CI was broken. CI should pass now.

@cspades (Member) commented Jan 15, 2026

What is going on with main branch 👀

2026-01-15T16:49:32.7940913Z 1-task-1-0/0 [default0]:FAILED tests/unit_tests/models/test_mamba_moe_model.py::TestMambaMoEModel::test_constructor

No way this is related to our changes...

@cspades (Member) commented Jan 15, 2026

Updated branch, should be fixed by: #2970

@cspades (Member) commented Jan 15, 2026

/ok to test 14080c1

@cspades (Member) commented Jan 19, 2026

/ok to test cad4126

@shjwudp added this pull request to the merge queue Jan 20, 2026
Merged via the queue into NVIDIA:main with commit 35129e7 Jan 20, 2026
71 of 77 checks passed
@shjwudp deleted the mfsdp_torch_compile branch January 20, 2026 02:21
daiyaanarfeen pushed a commit to daiyaanarfeen/Megatron-LM that referenced this pull request Feb 23, 2026
Co-authored-by: Cory Ye <44509866+cspades@users.noreply.github.com>

Labels

Final Review