
Conversation

@freeliuzc (Collaborator)

Motivation

💡 If this PR is a cherry-pick, the PR title must follow the format: add the [Cherry-Pick] label at the very beginning and append the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag to the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests; if none are added, state the reason in this PR.
  • Provide accuracy results.
  • If this PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings December 17, 2025 12:57

paddle-bot bot commented Dec 17, 2025

Thanks for your contribution!

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds support for multi-step MTP (Multi-Token Prediction) with CUDAGraph. The changes enable proper capture of CUDA graphs for MTP scenarios by adjusting capture sizes to account for multiple tokens generated per query per step.

Key Changes

  • Modified CUDA graph capture logic for MTP target model to use dynamic batch size calculation based on speculative token count
  • Updated _set_cudagraph_sizes to generate capture sizes scaled by tokens per query per step (see the sketch after this list)
  • Simplified target model capture by removing special handling for batch size 1
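
A minimal sketch of that capture-size scaling, assuming the names used in this PR (num_speculative_tokens, dec_token_per_query_per_step); the helper and the base size list are illustrative, not the actual FastDeploy implementation:

def scale_capture_sizes(base_batch_sizes, num_speculative_tokens):
    # Each decoding query emits the target token plus its draft tokens per
    # MTP step, so a graph captured for a given query batch size must cover
    # batch_size * dec_token_per_query_per_step tokens.
    dec_token_per_query_per_step = num_speculative_tokens + 1
    return [bs * dec_token_per_query_per_step for bs in base_batch_sizes]

# Example: with num_speculative_tokens = 1, query batch sizes [1, 2, 4, 8]
# map to captured token counts [2, 4, 8, 16].
print(scale_capture_sizes([1, 2, 4, 8], num_speculative_tokens=1))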

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

Files reviewed:
  • fastdeploy/worker/gpu_model_runner.py: Simplified the MTP target model capture logic, removed the batch size 1 skip condition, and updated the batch size and expected_decode_len calculations.
  • fastdeploy/config.py: Added the dec_token_per_query_per_step parameter to scale CUDA graph capture sizes appropriately for multi-step MTP.

yuanlehome previously approved these changes Dec 17, 2025

codecov-commenter commented Dec 17, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload a report for BASE (develop@ac73165).

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5624   +/-   ##
==========================================
  Coverage           ?   62.88%           
==========================================
  Files              ?      329           
  Lines              ?    41700           
  Branches           ?     6368           
==========================================
  Hits               ?    26223           
  Misses             ?    13492           
  Partials           ?     1985           
Flag Coverage Δ
GPU 62.88% <100.00%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@gongshaotian (Collaborator) left a comment:

LGTM

@gongshaotian gongshaotian merged commit 6eada49 into PaddlePaddle:develop Dec 22, 2025
15 of 18 checks passed
freeliuzc added a commit to freeliuzc/FastDeploy that referenced this pull request Dec 22, 2025
…ddle#5624)

* support multi-step mtp with cudagraph

* fix usage

* fix unit test
freeliuzc added a commit to freeliuzc/FastDeploy that referenced this pull request Dec 23, 2025
…ddle#5624)

* support multi-step mtp with cudagraph

* fix usage

* fix unit test
freeliuzc added a commit that referenced this pull request Dec 23, 2025
…5670)

* support multi-step mtp with cudagraph

* fix usage

* fix unit test
qingqing01 pushed a commit that referenced this pull request Dec 23, 2025
…5695)

* support multi-step mtp with cudagraph

* fix usage

* fix unit test

if batch_size == 1:
    logger.info("Skip token_num = 1, when capture Draft model for mtp")
else:
    assert batch_size % 2 == 0
Collaborator left a comment:

Delete the assert.

if self.scheduler_config.splitwise_role == "decode"
else self.scheduler_config.max_num_batched_tokens
),
batch_size=int(batch_size / 2),
Collaborator left a comment:

batch_size=int(capture_size / (self.speculative_config.num_speculative_tokens + 1)),
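
(Presumably capture_size counts tokens while batch_size counts queries: each query contributes num_speculative_tokens + 1 tokens per MTP decode step, so dividing by that factor recovers the query batch size, whereas the hard-coded division by 2 only holds when num_speculative_tokens is 1.)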

),
batch_size=int(batch_size / 2),
in_capturing=True,
expected_decode_len=3,
@gongshaotian (Collaborator) commented Dec 29, 2025:

This, together with the exit logic in _dummy_run(), needs to be updated.

Collaborator left a comment:

1 + draft token + draft model eos token
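
(Presumably this is the breakdown behind the expected decode length of the dummy run: one target token plus the draft token plus the draft model's EOS token, i.e. num_speculative_tokens + 2, which equals 3 when num_speculative_tokens is 1.)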

logger.info(
    f"Warm up the Target model with the num_tokens:{capture_size}, expected_decode_len:{self.speculative_config.num_speculative_tokens}"
)
if self.graph_opt_config.draft_model_use_cudagraph:
Collaborator left a comment:

Enable this startup parameter.

ckl117 pushed a commit to fxyfxy777/FastDeploy that referenced this pull request Dec 29, 2025
…ddle#5624)

* support multi-step mtp with cudagraph

* fix usage

* fix unit test