
enable finegrained_fp8 and granite_speech cases on XPU #38036

Merged
ydshieh merged 9 commits into huggingface:main from yao-matrix:xpu
May 14, 2025

Conversation

@yao-matrix
Contributor

@ydshieh @IlyasMoutawwakil , pls help review, thx

yao-matrix added 3 commits May 9, 2025 07:26
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
@github-actions
Contributor

github-actions Bot commented May 9, 2025

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

@github-actions Bot marked this pull request as draft May 9, 2025 07:30
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Comment thread src/transformers/quantizers/quantizer_finegrained_fp8.py
self.assertEqual(quantized_model.config.quantization_config.weight_block_size, (32, 32))

@require_torch_multi_gpu
def test_quantized_model_multi_gpu(self):
Contributor Author


Regarding the ground truth for the multi-accelerator case: both XPU and A100 return device 0, but the expected ground truth is {0, 1}.
Looking a bit deeper, for the target model meta-llama/Llama-3.2-1B there are actually 2 modules_to_treat in accelerate's infer_auto_device_map, as shown below. The lm_head is tied with model, which means there is effectively only 1 module to treat, so it can naturally only be placed on device 0.

I don't know whether there are other scenarios I haven't considered, but for this case it seems the correct ground truth should be 0. @ydshieh, please let me know your insights, thanks.

[('model', LlamaModel(
(embed_tokens): Embedding(128256, 2048)
(layers): ModuleList(
(0-15): 16 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(v_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(o_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
)
(mlp): LlamaMLP(
(gate_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(up_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(down_proj): FP8Linear(in_features=8192, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((2048,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)), ('lm_head', Linear(in_features=2048, out_features=128256, bias=False))]
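The collapse described above can be sketched in pure Python. This is NOT the real accelerate implementation, and all sizes and capacities below are hypothetical; it only illustrates why tying lm_head to the embedding inside `model` leaves a single unit to place, which then lands entirely on device 0 whenever it fits there.

```python
# Simplified sketch of greedy per-module placement (hypothetical sizes,
# not accelerate's actual infer_auto_device_map implementation).

def greedy_device_map(modules, tied, max_memory):
    """modules: list of (name, size_in_bytes); tied: {child: parent};
    max_memory: {device_id: capacity_in_bytes}."""
    # Fold tied modules into their parent so they are placed together.
    sizes = {}
    for name, size in modules:
        owner = tied.get(name, name)
        sizes[owner] = sizes.get(owner, 0) + size
    # Place each remaining unit on the first device with enough free memory.
    device_map, free = {}, dict(max_memory)
    for name, size in sizes.items():
        for dev in sorted(free):
            if free[dev] >= size:
                device_map[name] = dev
                free[dev] -= size
                break
    return device_map

modules = [("model", 4_000_000_000), ("lm_head", 500_000_000)]
dm = greedy_device_map(
    modules,
    tied={"lm_head": "model"},          # lm_head tied into model
    max_memory={0: 8_000_000_000, 1: 8_000_000_000},
)
print(dm)                # {'model': 0}
print(set(dm.values()))  # {0} -- only device 0 used
```

In accelerate itself, infer_auto_device_map(model, max_memory=...) performs a much more detailed version of this walk over the module tree, but the tied-weights folding has the same effect on the top-level module list.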

Member

@IlyasMoutawwakil May 12, 2025


I believe this depends on the VRAM of your GPUs/XPUs; it will only use both if one is not enough. Otherwise, maybe it would make sense to use another device map strategy here, like "balanced".

Collaborator


I will let @SunMarc or @MekkCyber share their thoughts on this.

On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level.

@yao-matrix You are able to run this test ...? I am surprised. I will take a look at this issue

Member


embed_tokens is indeed tied to lm_head, but the layers can be dispatched to other GPUs. Setting "auto" in device_map will default to the "balanced" strategy.

Contributor Author


> I will let @SunMarc or @MekkCyber share their thoughts on this.
>
> On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level.
>
> @yao-matrix You are able to run this test ...? I am surprised. I will take a look at this issue

I removed require_read_token in my local env and ran this case.

Contributor Author

@yao-matrix May 12, 2025


@IlyasMoutawwakil @SunMarc Yes, I tried "balanced" in my local env too, with the same consideration as yours (my XPU has 64 GB VRAM), but the result is still 1 device. It seems the split granularity in infer_auto_device_map is the top-level module when the available memory is enough to fit it.
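To illustrate the granularity point: if placement happened per decoder layer instead of per top-level module, a balanced assignment would naturally spread equal-sized layers across both devices. A toy sketch with hypothetical layer sizes (not accelerate's actual algorithm):

```python
# Toy sketch: balance equal-sized decoder layers across devices by always
# assigning the next layer to the currently least-loaded device.
# Hypothetical sizes; not accelerate's actual "balanced" implementation.

def balanced_split(layer_sizes, n_devices):
    loads = [0] * n_devices
    assignment = {}
    for name, size in layer_sizes:
        dev = min(range(n_devices), key=lambda d: loads[d])
        assignment[name] = dev
        loads[dev] += size
    return assignment

# 16 decoder layers of (hypothetically) equal size, as in Llama-3.2-1B.
layers = [(f"model.layers.{i}", 125_000_000) for i in range(16)]
a = balanced_split(layers, n_devices=2)
print(sorted(set(a.values())))  # [0, 1] -- both devices get layers
```

At top-level-module granularity, the same greedy logic sees one unit and one device suffices; at per-layer granularity both devices get used, which is what the {0, 1} ground truth implicitly assumed.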

Member

@SunMarc May 13, 2025


Will investigate a bit more. @MekkCyber tested locally and it works, but when running with pytest it fails.

Collaborator


@SunMarc @MekkCyber You will need to remove the require_read_token decorator in order to run these tests (if not done already).

related issue: #38093

Collaborator


If you think no more changes are required for test_quantized_model_multi_gpu, feel free to give a ✅ 🙏.

From my side, I am just waiting on a nit change regarding a variable name.

Contributor


I think it's good to go from my side. We need to figure out why it fails with pytest, but no need to include that in this PR. Thanks @ydshieh for the advice about require_read_token.

@yao-matrix marked this pull request as ready for review May 9, 2025 07:43
@IlyasMoutawwakil requested a review from MekkCyber May 9, 2025 10:29
Comment thread src/transformers/integrations/finegrained_fp8.py
Comment thread tests/quantization/finegrained_fp8/test_fp8.py Outdated
Member

@SunMarc left a comment


Thanks! Left some comments

@yao-matrix
Contributor Author

@ydshieh , resolved comments, thx

Contributor

@MekkCyber left a comment


Thanks 🤗

@ydshieh enabled auto-merge (squash) May 14, 2025 08:46
@ydshieh merged commit 9b5ce55 into huggingface:main May 14, 2025
20 checks passed
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
…8036)

* enable finegrained_fp8 cases on XPU

Signed-off-by: Yao Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao Matrix <matrix.yao@intel.com>

* change back to auto

Signed-off-by: Yao Matrix <matrix.yao@intel.com>

* rename per comments

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
@yao-matrix deleted the xpu branch May 14, 2025 23:02
6 participants