Make TF32 tests hardware-aware for PyTorch 2.9+ #43151
Shraman123 wants to merge 4 commits into huggingface:main from
Conversation
@Shraman123 I don't think we can write torch version-specific tests, because CI does not have multiple versions of torch.
@Shraman123 Looking into this further, this seems like a PyTorch issue. Can you try filing a bug and linking the issue from Hugging Face? I just checked another comment on the issue, and in the traceback I see PyTorch Inductor still using the old API. Thanks.
khushali9
left a comment
Thank you for removing the tests, but I do not see any other files changed. Are you sure all your changes are in? Thanks
Thanks for pointing this out. After removing the TF32 test, there are no additional functional changes left in this PR. This is intentional: the behavior was determined to be PyTorch-owned rather than a Transformers bug. The purpose of this PR is therefore limited to removing a fragile test that relies on PyTorch-internal precision semantics, which cannot be reliably validated in CI. Please let me know if you'd prefer closing this PR instead, or if removing the test alone is acceptable.
khushali9
left a comment
@Shraman123 Ah, cool. Since you tested this and we determined it to be a PyTorch issue, we can just close this, as the PR does not have any changes to merge. cc @Rocketknight1
Fixes #42371
TF32 tests assumed fp32_precision == "tf32" after enabling, which is not true on
CPU-only or unsupported hardware. PyTorch reports "none" in those cases.
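A minimal sketch of the skip logic the description refers to. The helper name and the hardcoded precision strings are hypothetical; in a real test the string would come from PyTorch 2.9's device-generic precision API (e.g. `torch.backends.cuda.matmul.fp32_precision`), which reports values such as "tf32", "ieee", or "none" when TF32 is unset or unsupported:

```python
# Hypothetical helper illustrating the hardware-aware check: only run
# TF32 assertions when the backend actually reports "tf32". On CPU-only
# or unsupported hardware, enabling TF32 is a no-op and PyTorch reports
# "none", so a test asserting == "tf32" there would fail spuriously.

def tf32_effectively_enabled(reported_precision: str) -> bool:
    """Return True only when the backend reports TF32 as in effect."""
    return reported_precision == "tf32"

# Hardcoded values stand in for the string a real test would read from
# torch.backends.* (PyTorch >= 2.9); they just exercise the skip logic.
for reported in ("tf32", "ieee", "none"):
    if tf32_effectively_enabled(reported):
        print(f"{reported}: run TF32 assertions")
    else:
        print(f"{reported}: skip, TF32 not in effect on this hardware")
```

A test guarded this way skips rather than fails on hardware where TF32 cannot be enabled, which is the hardware-awareness this PR's title describes.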
This change: