
Conversation

@zoooo0820 (Collaborator) commented Dec 16, 2025

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)

This PR is going to support Blockwise-FP8 inference on SM100.

Modifications

  1. Because the old version of DeepGEMM does not support sm100, we use a newer version of DeepGEMM for sm100. This dependency will be switched to PaddleFleet in the future.
  2. FP8 gemm / group_gemm on sm100 uses the ue8m0 scale format.
  3. In EP mode, FP8 DeepEP low_latency_dispatch uses the ue8m0 scale on sm100.
  4. For activation quantization, we use paddle.incubate.nn.functional.fp8_quant_blockwise instead of per_token_quant / per_token_quant_padding, and add ue8m0 support to masked_per_token_quant, which is used in the EP decoding phase (see the sketch after this list).
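
As a rough illustration of what a ue8m0 (power-of-two) blockwise scale means for activation quantization, here is a minimal numpy sketch. The 1x128 block size, the round-up rule, and the biased-exponent storage are assumptions for exposition, not the behavior of the actual fp8_quant_blockwise kernel.

import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude in float8_e4m3

def quant_block_ue8m0(block: np.ndarray):
    """Quantize one activation block with a power-of-two (ue8m0) scale."""
    amax = float(np.abs(block).max())
    # ue8m0 keeps only an 8-bit exponent, so the scale must be a power of two;
    # round the ideal scale amax / FP8_E4M3_MAX up to the next power of two so
    # the scaled block stays inside the representable FP8 range.
    exp = int(np.ceil(np.log2(max(amax, 1e-10) / FP8_E4M3_MAX)))
    scale = 2.0 ** exp
    q = np.clip(block / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)  # real kernel casts to fp8 here
    return q, np.uint8(exp + 127)  # scale stored as a biased exponent byte (assumed)

block = np.random.randn(128).astype(np.float32)
q, scale_byte = quant_block_ue8m0(block)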

Usage or Command

Requires a paddlepaddle develop build from 20260119 or later (inclusive).

For DeepGEMM installation: https://github.com/PFCCLab/DeepGEMM
For DeepEP installation: https://github.com/PFCCLab/DeepEP
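
To check whether the installed build is recent enough, Paddle exposes its build metadata; a quick sketch (the exact attributes may vary across builds):

import paddle

# Nightly/develop wheels report a placeholder version string; the commit
# hash identifies the actual build.
print(paddle.__version__)
print(paddle.version.commit)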

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, state the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot (bot) commented Dec 16, 2025

Thanks for your contribution!

@CLAassistant commented Dec 16, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
3 out of 4 committers have signed the CLA.

✅ fxyfxy777
✅ zoooo0820
✅ ckl117
❌ K11OntheBoat


K11OntheBoat does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Already signed the CLA but the status is still pending? Let us recheck it.

@zoooo0820 force-pushed the support_eb_sm100_fp8 branch from 5d43f4e to e608526 on December 16, 2025
@zoooo0820 force-pushed the support_eb_sm100_fp8 branch 2 times, most recently from 6079bc8 to bb9c247, on December 17, 2025
@zoooo0820 force-pushed the support_eb_sm100_fp8 branch from bb9c247 to 751a688 on December 17, 2025
@zoooo0820 force-pushed the support_eb_sm100_fp8 branch from f671841 to b4431a0 on December 18, 2025
@codecov-commenter commented Jan 23, 2026

Codecov Report

❌ Patch coverage is 29.36508% with 89 lines in your changes missing coverage. Please review.
⚠️ Please upload a report for BASE (develop@85db063).

Files with missing lines | Patch % | Lines
...oy/model_executor/layers/quantization/fp8_utils.py | 17.77% | 35 Missing and 2 partials ⚠️
...el_executor/layers/moe/fused_moe_triton_backend.py | 4.16% | 23 Missing ⚠️
..._executor/layers/moe/fused_moe_deepgemm_backend.py | 29.16% | 15 Missing and 2 partials ⚠️
...del_executor/layers/quantization/block_wise_fp8.py | 47.82% | 9 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5593   +/-   ##
==========================================
  Coverage           ?   66.46%           
==========================================
  Files              ?      385           
  Lines              ?    50892           
  Branches           ?     7944           
==========================================
  Hits               ?    33827           
  Misses             ?    14578           
  Partials           ?     2487           
Flag | Coverage Δ
GPU | 66.46% <29.36%> (?)



def _get_mn_major_tma_aligned_packed_ue8m0_tensor_torch_impl(
    x: paddle.Tensor,
):
    """Convert an FP32 tensor into a TMA-aligned, packed UE8M0 tensor."""
Collaborator commented:

Have you considered switching the comments to English?
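
For context, a ue8m0 scale is just the 8-bit exponent of a power-of-two FP32 value, so packing amounts to reinterpreting the float bits. A minimal numpy sketch of that idea follows; the MN-major layout and TMA alignment handled by the real helper are omitted here.

import numpy as np

def pack_ue8m0(scales_fp32: np.ndarray) -> np.ndarray:
    # A power-of-two float32 carries all of its information in the exponent
    # field: reinterpret the bits and shift out the 23 mantissa bits.
    bits = scales_fp32.view(np.uint32)
    return ((bits >> 23) & 0xFF).astype(np.uint8)

scales = np.array([2.0 ** -3, 1.0, 2.0 ** 5], dtype=np.float32)
print(pack_ue8m0(scales))  # biased exponents: [124 127 132]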

try:
    from deep_gemm import ceil_div
except ModuleNotFoundError:
    from fastdeploy.model_executor.ops.gpu.deep_gemm import ceil_div
Collaborator commented:

It would be better to print a warning here.

Author replied:

OK, both of these issues will be fixed together in a follow-up PR.
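
A minimal sketch of what the suggested warning could look like in that follow-up (the logging setup is an assumption, not actual FastDeploy code):

import logging

logger = logging.getLogger(__name__)

try:
    from deep_gemm import ceil_div
except ModuleNotFoundError:
    # Hypothetical message; the real fix may use FastDeploy's own logger.
    logger.warning(
        "deep_gemm not found; falling back to "
        "fastdeploy.model_executor.ops.gpu.deep_gemm.ceil_div"
    )
    from fastdeploy.model_executor.ops.gpu.deep_gemm import ceil_div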

@zoooo0820 changed the title from "[WIP] Support Ernie FP8 on sm100" to "[Feature] Support Ernie FP8 on sm100" on Jan 29, 2026
@Jiang-Jia-Jun merged commit eb80724 into PaddlePaddle:develop on Jan 29, 2026 (27 of 35 checks passed)
zoooo0820 added a commit that referenced this pull request Jan 29, 2026
ZhangYulongg pushed a commit that referenced this pull request Jan 30, 2026