fix(moe): improve error message for GroupedMLP FP8 support #3015

Open

liuyun7345 wants to merge 6 commits into NVIDIA:main from liuyun7345:fix/improve-grouped-mlp-fp8-error-message

Conversation

@liuyun7345
Contributor

This commit improves the error message when users try to use FP8 with the legacy GroupedMLP implementation. The new message clearly explains:

  1. The legacy GroupedMLP only supports bf16
  2. For FP8 support, users should use TEGroupedMLP
  3. How to enable TEGroupedMLP (--no-moe-use-legacy-grouped-gemm)
  4. The required TransformerEngine version (>= 1.9)

Fixes #1564
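
As a rough illustration (not the exact code in this PR), here is a minimal sketch of the kind of guard and message described above. The function name and the `config.fp8` / `config.moe_use_legacy_grouped_gemm` attribute names are assumptions modeled on Megatron-Core's TransformerConfig and may differ from the actual change.

```python
# Hedged sketch only: illustrates the improved error, not the code merged in this PR.
# Assumed names: `config.fp8` and `config.moe_use_legacy_grouped_gemm` (modeled on
# Megatron-Core's TransformerConfig); the real guard may live inside GroupedMLP.__init__.
def _assert_no_fp8_with_legacy_grouped_mlp(config):
    if getattr(config, "fp8", None) is not None:
        raise ValueError(
            "The legacy GroupedMLP only supports bf16. For FP8 support, use "
            "TEGroupedMLP instead by passing --no-moe-use-legacy-grouped-gemm "
            "(i.e. moe_use_legacy_grouped_gemm=False), which requires "
            "TransformerEngine >= 1.9."
        )
```

Failing fast with the flag and the required TransformerEngine version spelled out saves users from tracing a dtype error deeper in the grouped GEMM path.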

What does this PR do?

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@liuyun7345 liuyun7345 requested review from a team as code owners January 20, 2026 15:27
@copy-pr-bot

copy-pr-bot bot commented Jan 20, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g ko3n1g requested a review from a team January 20, 2026 15:27
@yaox12
Member

yaox12 commented Jan 21, 2026

/ok to test f74eee2

@yaox12 yaox12 added the Expert Review label Jan 21, 2026
@ko3n1g ko3n1g added this to the Core 0.16 milestone Jan 21, 2026
@github-actions
Contributor

Thank you for your contribution!

NVIDIA Megatron-LM is currently transitioning to development on GitHub. We will aim to review your PR after we complete our transition and stabilize our GitHub development process.

Thank you for your understanding.

@chtruong814 chtruong814 added the needs-follow-up label Jan 23, 2026
@chtruong814 chtruong814 removed the needs-follow-up label Jan 30, 2026
@yaox12 yaox12 added the Final Review label and removed the Expert Review label Feb 1, 2026
@chtruong814 chtruong814 added the needs-follow-up label Feb 1, 2026

Labels

  • community-request
  • Final Review (apply this label to indicate that your PR is ready for final review)
  • needs-follow-up (issue needs follow-up)


Development

Successfully merging this pull request may close these issues.

[QUESTION] Does the GroupedGEMM implementation support FP8 when TransformerEngine's version is used?
