Conversation

@yuanlehome
Collaborator

Motivation

💡 If this PR is a cherry-pick, the PR title must follow the required format: add the [Cherry-Pick] label at the very beginning and append the original PR ID at the end, e.g. [Cherry-Pick][CI] Add check trigger and logic (#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag to the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, explain why in this PR.
  • Provide accuracy results.
  • If this PR targets a release branch, make sure it has already been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings on December 16, 2025 at 07:51
@paddle-bot

paddle-bot bot commented Dec 16, 2025

Thanks for your contribution!

Contributor

Copilot AI left a comment


Pull request overview

This PR removes an unused seq_lens_decoder parameter from the speculate_limit_thinking_content_length family of functions, which are used to limit thinking content length during speculative decoding.

Key Changes:

  • Removed seq_lens_decoder parameter from v1 and v2 CUDA kernel functions and their wrappers
  • Updated all test cases to remove the unused parameter
  • Removed redundant sequence length adjustment logic (now only step_idx is adjusted)

Critical Issue Found: The function call in post_process_specualate is missing required parameters after the removal.
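The redundancy argument can be pictured with a minimal pure-Python sketch. Everything here is illustrative: the helper name, the rollback rule, and the use of -1 as "no thinking limit" are assumptions, since the real logic lives in the v1/v2 CUDA kernels and the actual sequence-length bookkeeping is done afterwards by speculate_update.

```python
def limit_thinking_step_idx(step_idx, accept_num, max_think_lens):
    """Roll back step_idx for batches whose thinking content hit its limit.

    Before this PR the kernel also rewound seq_lens_decoder in lockstep
    with step_idx; that was redundant because speculate_update, which runs
    afterwards, recomputes the decoder sequence lengths itself.
    """
    out = list(step_idx)
    for i, (steps, accepted, limit) in enumerate(
        zip(step_idx, accept_num, max_think_lens)
    ):
        # limit < 0 means this batch has no thinking-length budget.
        if limit >= 0 and steps > limit:
            # Discard speculated tokens that overshot the budget, but never
            # roll back past the limit itself.
            out[i] = max(limit, steps - accepted)
    return out
```

After this change only step_idx is adjusted; seq_lens_decoder is left untouched and picked up by the later update step.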

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

Files reviewed:

  • tests/operators/test_speculate_limit_thinking_content_length.py: removed the seq_lens_decoder parameter from all test cases for both the v1 and v2 functions; updated comments to reflect the removal
  • fastdeploy/model_executor/pre_and_post_process.py: removed the seq_lens_decoder parameter from function signatures and internal calls; a bug was introduced in the call at line 444
  • custom_ops/gpu_ops/speculate_decoding/speculate_limit_thinking_content_length_v1.cu: removed seq_lens_decoder from the kernel, the wrapper function, and the operator input declaration; removed the redundant adjustment logic
  • custom_ops/gpu_ops/speculate_decoding/speculate_limit_thinking_content_length_v2.cu: removed seq_lens_decoder from the kernel, the wrapper function, and the operator input declaration; removed the redundant adjustment logic

Note on PR Description: The PR description is incomplete. The "Motivation" and "Modifications" sections are empty. Please add:

  1. Motivation: Explain that seq_lens_decoder was redundant because it was being updated in the same way as step_idx, and the actual sequence length update happens in the speculate_update function that is called after this function.
  2. Modifications: Detail that this PR removes the unused seq_lens_decoder parameter from the speculate_limit_thinking_content_length_v1 and v2 functions across CUDA kernels, Python wrappers, and test files.

    accept_num=share_inputs["accept_num"],
    seq_lens_decoder=share_inputs["seq_lens_decoder"],
    think_end_id=think_end_id,
    line_break_id=line_break_id,

Copilot AI Dec 16, 2025


Missing required parameters in the function call. speculate_limit_thinking_content_length requires the stop_flags and eos_token_ids parameters (see the function signature on lines 147-148), but they are not provided in this call. After removing seq_lens_decoder, you should add:

  • stop_flags=share_inputs["stop_flags"], after line 450
  • eos_token_ids=share_inputs["eos_token_ids"], after stop_flags (only needed for the "" strategy, but the function signature requires it)

As written, the call will raise a runtime error when the function is invoked.

Suggested change:

    line_break_id=line_break_id,
    stop_flags=share_inputs["stop_flags"],
    eos_token_ids=share_inputs["eos_token_ids"],
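The claim that the incomplete call fails at runtime can be checked with a stub. The signature below is a reduced, hypothetical subset of the real one: it keeps only the keyword parameters named in the review thread, and the token ids passed in are arbitrary placeholder values.

```python
def speculate_limit_thinking_content_length(
    *, accept_num, think_end_id, line_break_id, stop_flags, eos_token_ids
):
    """Stand-in for the real wrapper, which launches a CUDA kernel."""
    return "ok"

share_inputs = {
    "accept_num": [1],
    "stop_flags": [False],
    "eos_token_ids": [2],
}

# The call as it stood after the parameter removal: missing keyword-only
# arguments raise TypeError the moment the function is invoked.
try:
    speculate_limit_thinking_content_length(
        accept_num=share_inputs["accept_num"],
        think_end_id=3,   # arbitrary placeholder id
        line_break_id=4,  # arbitrary placeholder id
    )
    call_failed = False
except TypeError:
    call_failed = True

# With the suggested stop_flags/eos_token_ids arguments the call succeeds.
result = speculate_limit_thinking_content_length(
    accept_num=share_inputs["accept_num"],
    think_end_id=3,
    line_break_id=4,
    stop_flags=share_inputs["stop_flags"],
    eos_token_ids=share_inputs["eos_token_ids"],
)
```

Because the parameters are keyword-only, the error surfaces at the call site rather than inside the kernel, which makes this kind of mismatch easy to catch with even a smoke test.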

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (develop@c9b47f9). Learn more about missing BASE report.

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5590   +/-   ##
==========================================
  Coverage           ?   61.93%           
==========================================
  Files              ?      328           
  Lines              ?    41134           
  Branches           ?     6270           
==========================================
  Hits               ?    25476           
  Misses             ?    13743           
  Partials           ?     1915           
Flag Coverage Δ
GPU 61.93% <ø> (?)

Flags with carried forward coverage won't be shown.


@yuanlehome yuanlehome merged commit 867803a into PaddlePaddle:develop Dec 16, 2025
21 of 27 checks passed
Jiang-Jia-Jun pushed a commit that referenced this pull request Dec 17, 2025
… (#5591)

* fix bug

* fix bug

---------

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>

3 participants