
Conversation

@yuanlehome
Collaborator

No description provided.

@paddle-bot

paddle-bot bot commented Sep 12, 2025

Thanks for your contribution!

qingqing01 previously approved these changes Sep 12, 2025
```python
)
if self.fd_config.load_config.dynamic_load_weight and self.parallel_config.tensor_parallel_size > 1:
    paddle.distributed.broadcast(model_weights_signal_tensor, src=0, group=self.parallel_config.tp_group)
self.model_weights_signal[0] = model_weights_signal_tensor.item()
```
Collaborator

On a single card there is actually no need to call item() here.
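The reviewer's point can be sketched as follows: both the broadcast and the `.item()` device-to-host copy are only needed when tensor parallelism is active; on a single card the local signal value can be used directly. This is a minimal stand-alone sketch of that guard, not FastDeploy's actual implementation: `FakeTensor`, `fake_broadcast`, and `sync_model_weights_signal` are hypothetical stand-ins for `paddle.Tensor`, `paddle.distributed.broadcast`, and the worker's update loop.

```python
class FakeTensor:
    """Minimal stand-in for a 1-element paddle.Tensor."""

    def __init__(self, value):
        self.value = value

    def item(self):
        # In real paddle this is a device-to-host copy; cheap here.
        return self.value


def fake_broadcast(tensor, src, group):
    """Stand-in for paddle.distributed.broadcast (no-op sketch)."""
    return tensor


def sync_model_weights_signal(signal_tensor, dynamic_load_weight, tp_size, tp_group):
    """Return the weight-update signal, synchronizing only when needed."""
    if dynamic_load_weight and tp_size > 1:
        # Multi-card path: all ranks must agree, so broadcast from rank 0
        # and pull the value to the host.
        fake_broadcast(signal_tensor, src=0, group=tp_group)
        return signal_tensor.item()
    # Single-card path: no collective, and per the review comment no
    # .item() round-trip is required either.
    return signal_tensor.value


print(sync_model_weights_signal(FakeTensor(1), True, 1, None))  # single card
print(sync_model_weights_signal(FakeTensor(1), True, 2, None))  # tp_size > 1
```

Keeping the `.item()` inside the multi-card branch avoids an unnecessary GPU-to-CPU synchronization on the common single-card path.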

@qingqing01 qingqing01 merged commit 2883746 into PaddlePaddle:feature/experimental_feature_20250908 Sep 13, 2025
14 of 15 checks passed
liyonghua0910 pushed a commit to liyonghua0910/FastDeploy that referenced this pull request Sep 17, 2025
Jiang-Jia-Jun pushed a commit that referenced this pull request Sep 18, 2025
* [fix] fix ep group all-reduce

* [fix] fix clear/update lock not working when workers > 1

* [chore] add preemption triggered info log

* [fix] fix code style

* fix model_weights_signal (#4092)

* fix model_weights_signal

---------

Co-authored-by: Yuanle Liu <yuanlehome@163.com>