
Conversation

@freeliuzc
Collaborator

Motivation

#5491

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR is being submitted to the release branch, make sure it has already been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings December 17, 2025 09:10
@paddle-bot

paddle-bot bot commented Dec 17, 2025

Thanks for your contribution!

Contributor

Copilot AI left a comment

Pull request overview

This is a Cherry-Pick PR that fixes a bug in writing the qknorm cache during speculative decoding. It corrects a logic error and a control-flow problem in three CUDA kernel functions.

  • Corrected the condition used to distinguish the encoder (prefill) and decoder phases: changed seq_lens_decoder[ori_bi] == 0 to seq_lens_encoder[ori_bi] > 0, which expresses the intent more clearly and accurately
  • Fixed a control-flow bug: replaced return inside the loop with continue, so an early exit no longer skips later valid tokens (see the sketch after this list)
  • Ensured consistent behavior across the three RoPE code paths (qk_norm, standard, and neox)
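
The snippet below is a minimal, hypothetical sketch of the two fixes, assuming a grid-stride loop over draft tokens. The kernel name and surrounding structure are illustrative and are not the actual speculate_write_cache_with_rope implementation, although seq_lens_encoder, seq_lens_decoder, and ori_bi come from the PR description.

```cuda
// Simplified sketch of the fixed control flow (not the real FastDeploy kernel).
__global__ void speculate_write_cache_kernel(const int* seq_lens_encoder,
                                             const int* seq_lens_decoder,
                                             const int* batch_ids,
                                             int token_num) {
  // Grid-stride loop: each thread may handle several tokens.
  for (int token_idx = blockIdx.x * blockDim.x + threadIdx.x;
       token_idx < token_num;
       token_idx += gridDim.x * blockDim.x) {
    const int ori_bi = batch_ids[token_idx];

    // Before the fix, the phase check was based on seq_lens_decoder[ori_bi] == 0
    // and used `return`, which terminated the thread and silently dropped any
    // later tokens it still had to process. After the fix, encoder/prefill-phase
    // requests are detected via seq_lens_encoder[ori_bi] > 0 and skipped with
    // `continue`, so the loop keeps running for the remaining tokens.
    if (seq_lens_encoder[ori_bi] > 0) {
      continue;
    }

    // ... write the qk-norm / RoPE-processed K/V for this decode token ...
  }
}
```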

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

File Description
custom_ops/gpu_ops/append_attn/speculate_write_cache_with_rope_kernel.cu Passes the seq_lens_encoder argument to the three kernel invocations to support the new encoder/decoder phase check (illustrated in the sketch below)
custom_ops/gpu_ops/append_attn/speculate_write_cache_with_rope_impl.cuh Fixes the condition logic and control flow in three CUDA kernel functions: uses seq_lens_encoder to correctly distinguish the prefill and decode phases, and replaces return with continue so the loop processes every token
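
As a rough illustration of the call-site change in speculate_write_cache_with_rope_kernel.cu, the hypothetical launcher below simply threads the extra seq_lens_encoder pointer through to the kernel from the previous sketch. The function name, launch configuration, and argument list are assumptions for illustration and differ from the real FastDeploy launcher.

```cuda
// Hypothetical host-side launcher: the new seq_lens_encoder pointer is
// forwarded to the kernel so the phase check can use encoder lengths.
void SpeculateWriteCacheWithRopeSketch(const int* seq_lens_encoder,
                                       const int* seq_lens_decoder,
                                       const int* batch_ids,
                                       int token_num,
                                       cudaStream_t stream) {
  constexpr int kBlock = 256;
  const int grid = (token_num + kBlock - 1) / kBlock;
  speculate_write_cache_kernel<<<grid, kBlock, 0, stream>>>(
      seq_lens_encoder, seq_lens_decoder, batch_ids, token_num);
}
```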

@codecov-commenter
Copy link

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (release/2.4@d67b64d). Learn more about missing BASE report.

Additional details and impacted files
@@              Coverage Diff               @@
##             release/2.4    #5617   +/-   ##
==============================================
  Coverage               ?   58.99%           
==============================================
  Files                  ?      327           
  Lines                  ?    40681           
  Branches               ?     6180           
==============================================
  Hits                   ?    24001           
  Misses                 ?    14815           
  Partials               ?     1865           
Flag Coverage Δ
GPU 58.99% <ø> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@freeliuzc freeliuzc merged commit d7d633a into PaddlePaddle:release/2.4 Dec 17, 2025
19 of 21 checks passed
