Remove @slow for test_eager_matches_sdpa_inference #34558
Conversation
```python
for key in ["image_token_index", "image_token_id", "video_token_index", "video_token_id", "vision_start_token_id"]:
    token_index = getattr(config, key, None)
    if token_index is not None and token_index < config.get_text_config().vocab_size:
        logits_processor_kwargs["bad_words_ids"].append([token_index])
```
Make it more general: `vision_start_token_id` is required for qwen2_vl.
gante left a comment
Yay fewer slow tests 🙌
Added a few questions/suggestions to see if we can remove a few more overwritten cases 😈
```python
@parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
@require_torch_sdpa
@unittest.skip("Albert requires `head_mask` which is currently not done in this test.")
def test_eager_matches_sdpa_inference(self):
    pass
```
On skips like this, on Albert and other models: the test pulls the main input and the attention mask to manipulate them, finally sending them to the model. We could pop these items from `inputs_dict` and then pass `**inputs_dict` to the model (e.g. `model_eager(**prepared_inputs, **inputs_dict)`) -- I think then we wouldn't need to skip tests due to missing inputs 🤗
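A minimal sketch of the idea (untested; names like `prepared_inputs` and the exact pop keys are illustrative):

```python
# Pop only the inputs the test manipulates; forward everything else untouched.
main_input = inputs_dict.pop(model.main_input_name)
attention_mask = inputs_dict.pop("attention_mask", None)

# ... dtype casting / padding tweaks on main_input and attention_mask ...
prepared_inputs = {model.main_input_name: main_input, "attention_mask": attention_mask}

# Model-specific extras (e.g. Albert's head_mask, a VLM's pixel_values) ride
# along via **inputs_dict, so the test wouldn't need a skip for missing inputs.
outputs_eager = model_eager(**prepared_inputs, **inputs_dict)
outputs_sdpa = model_sdpa(**prepared_inputs, **inputs_dict)
```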
Maybe let's merge and do this in a follow-up PR. It's never worked before.
```python
@parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
@require_torch_sdpa
@slow
```
Can't we just delete the test? (It has `# Copied from tests.test_modeling_common.ModelTesterMixin.test_eager_matches_sdpa_inference` and it inherits the mixin, so it should run the original test!)
No, it fails. I didn't dive into why it's failing (input issues), though.
Interesting -- in that case, how does `# Copied from` work? 👀
`# Copied from` is only applied to files under `src`, I believe :-) but people sometimes use it in `tests/` 😆
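For context, my (possibly imperfect) understanding: the marker is a static consistency check enforced by `utils/check_copies.py` (run via `make repo-consistency`), not a runtime redirection, so the copied body still has to be physically present:

```python
# Copied from tests.test_modeling_common.ModelTesterMixin.test_eager_matches_sdpa_inference
def test_eager_matches_sdpa_inference(self):
    # The checker only verifies that this body stays in sync with the
    # referenced function; the marker itself does nothing at runtime.
    ...
```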
Tagging @zucchini-nlp to double-check VLM test changes :)
zucchini-nlp left a comment
Thanks, LGTM! Left a few questions and noted things we can clean up further.
```python
return floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size])
```

```python
@require_torch_sdpa
@slow
```
I think this test doesn't need the skip anymore, since we no longer check whether the model has SDPA layers within this test. But it can be skipped due to the same flakiness.
It still has input preparation issues; I added the reason below to another model's test class:

"Idefics requires both text and image inputs which is currently not done in this test."

As mentioned in a reply to Joao's comment, let's try to do it in a follow-up PR.
```python
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=torch_device)

input_ids[:, -1] = self.pad_token_id
```
For my understanding: any reason why the last token has to be a pad token?
To avoid an index error in the modeling code:

```python
vision_tokens = input_ids[vision_start_indices + 1]
```

If a vision-start token landed in the last position, `vision_start_indices + 1` would index past the end of the sequence.
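A minimal repro of the failure mode (illustrative only, not the real modeling code; the token id is made up):

```python
import torch

vision_start_token_id = 99  # made-up id for illustration
input_ids = torch.tensor([5, 7, 99])  # vision-start token in the last position

vision_start_indices = (input_ids == vision_start_token_id).nonzero().squeeze(1)
vision_tokens = input_ids[vision_start_indices + 1]
# IndexError: index 3 is out of bounds for dimension 0 with size 3
```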
What does this PR do?
Removes @slow for test_eager_matches_sdpa_inference and makes it less flaky.