enable finegrained_fp8 and granite_speech cases on XPU #38036
ydshieh merged 9 commits into huggingface:main
Conversation
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the "Ready for review" button.
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
self.assertEqual(quantized_model.config.quantization_config.weight_block_size, (32, 32))

@require_torch_multi_gpu
def test_quantized_model_multi_gpu(self):
About the ground truth for the multi-accelerator case: both XPU and A100 return device 0 only, while the test's ground truth is {0, 1}.
Looking a bit deeper: for the target model meta-llama/Llama-3.2-1B, accelerate's infer_auto_device_map actually sees 2 modules_to_treat, as shown below, and lm_head is tied to model. That means there is effectively only 1 module to treat, which can naturally only be placed on device 0.
I don't know whether there are other scenarios I haven't considered, but for this case it seems the correct ground truth should be 0. @ydshieh, please let me know your insights, thanks.
[('model', LlamaModel(
(embed_tokens): Embedding(128256, 2048)
(layers): ModuleList(
(0-15): 16 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(v_proj): FP8Linear(in_features=2048, out_features=512, bias=False)
(o_proj): FP8Linear(in_features=2048, out_features=2048, bias=False)
)
(mlp): LlamaMLP(
(gate_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(up_proj): FP8Linear(in_features=2048, out_features=8192, bias=False)
(down_proj): FP8Linear(in_features=8192, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((2048,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((2048,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)), ('lm_head', Linear(in_features=2048, out_features=128256, bias=False))]
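To make the behavior described above concrete, here is a heavily simplified, hypothetical sketch (not the real accelerate code; function name, sizes, and memory numbers are all illustrative) of why a greedy top-level placement with tied weights puts everything on device 0: a module tied to an already-placed one (lm_head tied to model.embed_tokens here) must be co-located with its partner, and the remaining single top-level module simply goes to the first device that fits it.

```python
# Hypothetical sketch of greedy device placement with tied weights.
# Not accelerate's actual implementation; names and sizes are made up.

def greedy_device_map(modules, device_memory, tied=()):
    """modules: list of (name, size); device_memory: {device: free size};
    tied: pairs of module names that must share a device."""
    placement = {}
    free = dict(device_memory)
    devices = list(device_memory)
    for name, size in modules:
        # If this module is tied to an already-placed one, co-locate it.
        partner = next((a if b == name else b
                        for a, b in tied if name in (a, b)), None)
        if partner in placement:
            placement[name] = placement[partner]
            continue
        # Otherwise, greedily take the first device with enough free memory.
        for dev in devices:
            if free[dev] >= size:
                placement[name] = dev
                free[dev] -= size
                break
    return placement

# Two top-level modules, as in the dump above: the whole LlamaModel and
# the tied lm_head. With enough memory on device 0, both land there.
modules = [("model", 2_000), ("lm_head", 500)]
placement = greedy_device_map(modules, {0: 4_000, 1: 4_000},
                              tied=[("model", "lm_head")])
print(placement)  # {'model': 0, 'lm_head': 0}
```

Under these (made-up) sizes, device 1 is never used, matching the observed result of a single device.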
I believe this depends on the VRAM of your GPUs/XPUs; it will only use both devices if one is not enough. Otherwise, maybe it would make sense to use another device map strategy here, like "balanced".
I will let @SunMarc or @MekkCyber share their thoughts on this.
On our CI, these tests are not collected; I believe it is due to the require_read_token decorator at the class level.
@yao-matrix You are able to run this test ...? I am surprised. I will take a look at this issue.
embed_tokens is indeed tied to lm_head, but the layers can be dispatched to other GPUs. Setting device_map="auto" will default to the "balanced" strategy.
I removed require_read_token in my local env and ran this case.
@IlyasMoutawwakil @SunMarc yes, I tried "balanced" in my local env too, with the same consideration as yours (my XPU has 64 GB VRAM), but the result is still a single device. It seems the split granularity in infer_auto_device_map is the top-level module whenever the available memory is enough to fit it.
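The granularity observation above can be sketched as follows. This is a hedged, hypothetical toy (function name, recursion shape, and sizes are all illustrative assumptions, not accelerate's real logic): placement only recurses into a module's children when the module does not fit whole on the current device, so a model that fits stays at top-level granularity and occupies a single device even with generous per-device budgets.

```python
# Hypothetical sketch: recurse into children only when a module does
# not fit whole on the current device. Not accelerate's actual code.

def place_module(name, size, children, budgets, placement, dev=0):
    """budgets: per-device free memory; children: [(name, size, children)]."""
    if size <= budgets[dev]:
        placement[name] = dev          # fits whole: no recursion, one device
        budgets[dev] -= size
        return dev
    if not children:                   # leaf too big: spill to next device
        return place_module(name, size, children, budgets, placement, dev + 1)
    for cname, csize, cchildren in children:
        dev = place_module(f"{name}.{cname}", csize, cchildren,
                           budgets, placement, dev)
    return dev

placement = {}
# 64 (GB, illustrative) per device: a ~2.5 GB model fits whole on device 0.
budgets = {0: 64, 1: 64}
place_module("model", 2.5, [], budgets, placement)
place_module("lm_head", 0.5, [], budgets, placement)
print(placement)  # {'model': 0, 'lm_head': 0}
```

Under this toy logic, device 1 would only come into play if the whole model exceeded device 0's budget, which matches the single-device result seen on a 64 GB XPU.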
Will investigate a bit more. @MekkCyber tested locally and it works, but it fails when running with pytest.
@SunMarc @MekkCyber You will need to remove the require_read_token decorator in order to run these tests (if not done so yet).
Related issue: #38093
If you think no more changes are required for test_quantized_model_multi_gpu, feel free to give a ✅ 🙏.
From my side, I am just waiting on a nit change regarding a variable name.
I think it's good to go from my side. We still need to figure out why it fails with pytest, but no need to include that in this PR. Thanks @ydshieh for the advice about require_read_token.
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
@ydshieh, resolved comments, thanks.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
enable finegrained_fp8 and granite_speech cases on XPU (#38036)

* enable finegrained_fp8 cases on XPU
* fix style
* change back to auto
* rename per comments

Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
@ydshieh @IlyasMoutawwakil, please help review, thanks.