Retake: Arm backend: Add INT16 support to rescale operation #13802 #14300
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14300

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No failures, 113 pending as of commit 2149436 with merge base 957915f.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Add 16A8W quantization support and a test for the add operation in the ExecuTorch Arm backend. This follows the pattern established for linear operations, extending int16 support to add operations.

Changes:
- Add INT16 dtype validation support in op_add.py
- Add the test_add_tensor_16a8w_tosa_INT test function
- Enable test_add.py in the test targets configuration

The 16A8W configuration uses 16-bit activations with 8-bit weights, enabling higher precision for activations while maintaining weight efficiency.

Differential Revision: [D80510463](https://our.internmc.facebook.com/intern/diff/D80510463/)

[ghstack-poisoned]
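To illustrate what the 16A8W split buys, here is a minimal, self-contained sketch of signed-integer quantization ranges and rounding. It is not the ExecuTorch or TOSA API; the function names (`qparams`, `quantize`) and the symmetric-weight assumption are illustrative only.

```python
# Hypothetical sketch of a 16A8W scheme: activations quantized to int16,
# weights to int8. Names are illustrative, not the ExecuTorch API.

def qparams(num_bits: int, symmetric: bool = False):
    """Return (qmin, qmax) for a signed integer type of num_bits.

    Symmetric schemes (common for weights) drop the most negative value
    so the range is balanced around zero.
    """
    qmax = 2 ** (num_bits - 1) - 1
    qmin = -(2 ** (num_bits - 1)) + (1 if symmetric else 0)
    return qmin, qmax

def quantize(values, scale, zero_point, num_bits, symmetric=False):
    """Quantize floats to integers: clamp(round(v / scale) + zero_point)."""
    qmin, qmax = qparams(num_bits, symmetric)
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

# 16A8W: a much wider integer range for activations, compact weights.
act_qmin, act_qmax = qparams(16)                  # (-32768, 32767)
wgt_qmin, wgt_qmax = qparams(8, symmetric=True)   # (-127, 127)
```

The wider int16 activation range means a given scale covers 256x more quantization levels than int8, which is the precision gain the PR description refers to, while weights stay at 8 bits to keep the model size and memory bandwidth unchanged.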
#13802 (comment) failed to cherry-pick to main.