Reapply: "relax tolerances for all unary float ops (#9585)", "Add SupportedTensorDtypes::BOOL (#9584)", new op_mul test (#11206) #11919
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11919
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit cec3e8c with merge base 222d9e3.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Stack from ghstack (oldest at bottom):
These were reverted because they were part of a stack with internal test failures.
Original #9585 summary:
We were requiring ourselves to compute at double-precision, but ATen actually converts non-floating-point types to `float` by default, not `double`. Use the ATen tolerances everywhere.
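As a quick illustration of this promotion behavior (a minimal sketch, assuming a standard PyTorch install), a unary float op applied to an integer tensor produces `float32`, not `float64`:

```python
import torch

# Integer inputs to unary float ops are promoted to the default
# float dtype (float32), so reference outputs and tolerances
# should be computed at float32 precision, not double.
x = torch.tensor([1], dtype=torch.int32)
print(torch.sin(x).dtype)  # torch.float32
```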
Original #9584 summary: none

Original #11206 summary:
This tests a possibly-surprising result: int8(100) * int8(100) with an output type of long is 16 in ATen, even though the output type can hold 10000. The multiplication is carried out at the common input dtype (int8), where 10000 wraps modulo 256 to 16, before the result is widened to the output type.
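A minimal sketch reproducing this behavior, assuming a standard PyTorch install (eager-mode ATen):

```python
import torch

# The product is computed at the common input dtype (int8), so
# 100 * 100 = 10000 wraps modulo 256 to 16 before being widened
# to the long output dtype.
a = torch.tensor([100], dtype=torch.int8)
b = torch.tensor([100], dtype=torch.int8)
out = torch.empty(1, dtype=torch.long)
torch.mul(a, b, out=out)
print(out)  # tensor([16])
```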
Differential Revision: [D76754823](https://our.internmc.facebook.com/intern/diff/D76754823/)