
Conversation

@DrRyanHuang
Collaborator

Add a Warm Up step for SOT
cc @SigureMo

@paddle-bot

paddle-bot bot commented Jul 21, 2025

Thanks for your contribution!

Member

@SigureMo SigureMo left a comment

LGTMeow 🐾

Member

@SigureMo SigureMo left a comment

Please merge the latest develop and run pre-commit:

pre-commit run --files fastdeploy/model_executor/graph_optimization/utils.py fastdeploy/worker/gpu_model_runner.py fastdeploy/worker/gpu_worker.py

Member

@SigureMo SigureMo left a comment

LGTMeow 🐾

Comment on lines +39 to +40
sot_warmup_guard, in_sot_warmup_mode = create_guard(False)
profile_run_guard, in_profile_run_mode = create_guard(False)
Collaborator

Wouldn't keeping just profile_run_guard be enough? Both SOT and CUDA Graph could decide based on in_profile_run_mode.

Collaborator Author

Currently it works like this (in chronological order):

  • profile_run_guard marks whether we are in the profile_run phase; in that phase we always run the dynamic graph
  • sot_warmup_guard marks whether we are in the SOT warmup phase; in that phase we convert to static graph and mark dynamic shapes
  • After the service starts, if SOT static-graph conversion is enabled, we run the static graph and no longer mark dynamic shapes

If only profile_run_guard were kept, there would be no way to tell the fake-data runs during the SOT warmup phase apart from the real-data runs after the service starts (dynamic shapes are marked only during warmup, not after the service is up).
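For readers outside this thread, the flag pattern being discussed can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual implementation: the real `create_guard` in `fastdeploy/model_executor/graph_optimization/utils.py` may differ in details, but the idea is a factory that pairs a context manager (which raises a flag while active) with a query function (which reports the flag).

```python
from contextlib import contextmanager


def create_guard(default: bool):
    # Hypothetical sketch of the create_guard factory.
    # Returns (guard, query): guard() is a context manager that sets the
    # flag to True on entry and restores the previous value on exit;
    # query() reports the flag's current value.
    state = {"value": default}

    @contextmanager
    def guard():
        previous = state["value"]
        state["value"] = True
        try:
            yield
        finally:
            state["value"] = previous

    def in_mode() -> bool:
        return state["value"]

    return guard, in_mode


# The two guards from the diff stay separate so the three phases
# (profile run -> SOT warmup -> normal serving) remain distinguishable.
sot_warmup_guard, in_sot_warmup_mode = create_guard(False)
profile_run_guard, in_profile_run_mode = create_guard(False)
```

With two independent guards, code such as the model runner can check `in_sot_warmup_mode()` to decide whether to mark dynamic shapes, while `in_profile_run_mode()` independently forces the dynamic graph during profiling, which is exactly why collapsing them into one flag would lose information.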

@gongshaotian
Collaborator

Does SOT plan to support multiple hardware backends? Do model runners such as the GCU one need to be adapted?

@DrRyanHuang
Collaborator Author

Does SOT plan to support multiple hardware backends? Do model runners such as the GCU one need to be adapted?

@gongshaotian Other hardware backends will also support warmup; let's unify those changes in the next PR.

@DrRyanHuang DrRyanHuang changed the title [SOT] Add sot warmup [SOT] Add sot warmup (NVIDIA GPU Only) Jul 22, 2025
@DrRyanHuang DrRyanHuang requested a review from gongshaotian July 22, 2025 12:39
Collaborator

@gongshaotian gongshaotian left a comment

LGTM

@gongshaotian gongshaotian merged commit 95b5af2 into PaddlePaddle:develop Jul 22, 2025
4 of 5 checks passed
@DrRyanHuang DrRyanHuang deleted the sot_warmup branch July 22, 2025 13:43
luukunn added a commit to luukunn/FastDeploy that referenced this pull request Jul 29, 2025
* [MTP Fix] Fix code and register cpp operators (PaddlePaddle#2965)

* fix rl config local rank (PaddlePaddle#2957)

* [FIX]fix rejection sampling when topp=0 using _SAMPLING_EPS (PaddlePaddle#2967)

* fix rejection sampling when topp=0

* fix

* [SOT] Add sot warmup (NVIDIA GPU Only) (PaddlePaddle#2929)

* add sot warmup

* fix code style

* change batch_size list

* add param to config

* rm free_list settings && set sot_warmup_sizes

* finish debug with dynamic dims by type annotations

* add profile_run guard

* rm sth useless

* support chunk_prefill in fa3

* 【Infer】Improve the performance block_wise_fp8 of triton_moe_backend (PaddlePaddle#2942)

* Update README.md

* Update README.md

* delete max-len (PaddlePaddle#2959)

* [CI] add codestyle_check action (PaddlePaddle#2972)

* [CI] add codestyle_check action

* [CI] Integrate codestyle check via pre-commit in GitHub Actions

* fix mtp bug in pd-split mode (PaddlePaddle#2970)

* [BugFix] Add prefill restrictions for chunked_prefill+VL (PaddlePaddle#2983)

* Fix performance degradation bug of custom_all_reduce (PaddlePaddle#2981)

* FA3 fix bug (PaddlePaddle#2987)

* polish code for prefill restrictions (PaddlePaddle#2991)

* [Feature] Support block scheduler v1 for FD (PaddlePaddle#2928)

* Support FD block scheduler v1

* Support FD block scheduler v1

* Support FD block scheduler v1

* Fix according to copilot review

* Fix according to review

* Remove is_dummy

* Fix bug when real_bsz=1

* Fix infer first token cost time

---------

Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

* update (PaddlePaddle#2978)

* [Code Simplification] fix init_distributed_environment() (PaddlePaddle#2982)

* support c4 attn && fix cache

* fix chunk_prefill

* [benchmark] add quantization for benchmark yaml (PaddlePaddle#2995)

* [Fix] fix mm ep empty run (PaddlePaddle#2999)

* add ci reuse action (PaddlePaddle#2968)

* add ci reuse action

* fix code formatting

* update

* [Feature] multi-source download (PaddlePaddle#2986)

* multi-source download

* multi-source download

* huggingface download revision

* requirement

* style

* add revision arg

* test

* pre-commit

* [LLM] update function name (PaddlePaddle#2985)

* [LLM] update function name

* [BugFix] fix multinode deployment (PaddlePaddle#2977)

* Update benchmark tools (PaddlePaddle#3004)

* update benchmark tools

* update benchmark tools

* update flake8 version to support pre-commit in python3.12 (PaddlePaddle#3000)

* update flake8 version to support pre-commit in python3.12

* polish code

* [Feature] multi source download (PaddlePaddle#3005)

* multi-source download

* multi-source download

* huggingface download revision

* requirement

* style

* add revision arg

* test

* pre-commit

* Change default download

* change requirements.txt

* modify English Documentation

* documentation

* [GCU] Update to develop (PaddlePaddle#2988)

* [Model] Provide clearer error for missing KV cache quantization scales (PaddlePaddle#3007)

* [Feature] Support_eplb (PaddlePaddle#2997)

* [Feature] support_eplb

* [Feature] support_eplb

* [Fix] fix mm ep

* Update setup.py

* [feat] add disable_chat_template in chat api as a substitute for previous raw_request (PaddlePaddle#3023)

* [feat] add disable_chat_template in chat api as a substitute for previous raw_request

* [fix] pre-commit code check

---------

Co-authored-by: GoldPancake <56388518+Deleter-D@users.noreply.github.com>
Co-authored-by: gaoziyuan <88373061+gzy19990617@users.noreply.github.com>
Co-authored-by: Sunny-bot1 <68891411+Sunny-bot1@users.noreply.github.com>
Co-authored-by: Ryan <zihaohuang@aliyun.com>
Co-authored-by: lizhenyun01 <1500424927@qq.com>
Co-authored-by: chen <103103266+ckl117@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
Co-authored-by: lizexu123 <39205361+lizexu123@users.noreply.github.com>
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
Co-authored-by: freeliuzc <lzc842650834@gmail.com>
Co-authored-by: Zero Rains <linjunlu@zerorains.top>
Co-authored-by: zhink <33270771+zhink@users.noreply.github.com>
Co-authored-by: chenjian <1435317881@qq.com>
Co-authored-by: bukejiyu <52310069+bukejiyu@users.noreply.github.com>
Co-authored-by: xiegegege <46314656+xiegegege@users.noreply.github.com>
Co-authored-by: xiaoxiaohehe001 <49090790+xiaoxiaohehe001@users.noreply.github.com>
Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
Co-authored-by: Yzc216 <101054010+Yzc216@users.noreply.github.com>
Co-authored-by: ltd0924 <32387785+ltd0924@users.noreply.github.com>
Co-authored-by: Zhang Yulong <35552275+ZhangYulongg@users.noreply.github.com>
Co-authored-by: EnflameGCU <118410644+EnflameGCU@users.noreply.github.com>
Co-authored-by: littledgg <61149469+littledgg@users.noreply.github.com>
Co-authored-by: 李泳桦 <39643373+liyonghua0910@users.noreply.github.com>