Arm backend: Add 16A8W support and test for add operation (#14039)
Conversation
Pull Request resolved: #13789

Add 16A8W quantization support and comprehensive tests for the add operation in the ExecutorTorch Arm backend, targeting Ethos-U55 and Ethos-U85 NPUs. This follows the pattern established for linear operations, extending int16 support to add operations with hardware-specific testing.

Changes:
- Add INT16 dtype validation support in op_add.py
- Add test_add_tensor_16a8w_tosa_INT test function with U55/U85 pipeline support
- Add U55- and U85-specific 16A8W tests with proper xfail decorators
- Fix U55/U85 test parameter usage (remove unsupported tosa_extensions, clean up quantizer function calls)
- Update xfail reasons to the consistent "Vela compilation fails with Invalid arguments" pattern

ghstack-source-id: 308053642
@exported-using-ghexport
@bypass-github-pytorch-ci-checks
@bypass-github-executorch-ci-checks

Differential Revision: [D80510463](https://our.internmc.facebook.com/intern/diff/D80510463/)
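For readers unfamiliar with the 16A8W scheme (16-bit activations, 8-bit weights), the sketch below illustrates what int16 activation quantization means for an add: both inputs are rescaled into a common output scale and accumulated in a wider integer type before clamping back to the int16 range. This is a hedged, standalone illustration of the general technique only — the function names and scales here are made up and do not reflect the actual ExecutorTorch/TOSA implementation in this PR.

```python
# Illustrative 16A8W-style integer add (NOT the ExecutorTorch implementation):
# activations live in int16, the add accumulates in a wider range, and the
# result is clamped back to int16.

INT16_MIN, INT16_MAX = -32768, 32767

def quantize(x: float, scale: float) -> int:
    """Symmetric int16 quantization: round(x / scale), clamped to int16."""
    q = round(x / scale)
    return max(INT16_MIN, min(INT16_MAX, q))

def dequantize(q: int, scale: float) -> float:
    return q * scale

def int16_add(qa: int, qb: int, scale_a: float, scale_b: float,
              scale_out: float) -> int:
    """Add two int16 activations after rescaling into a common output scale."""
    # Rescale each operand into the output scale, then accumulate; a real
    # backend would do this with integer multipliers/shifts, not floats.
    acc = round(dequantize(qa, scale_a) / scale_out) \
        + round(dequantize(qb, scale_b) / scale_out)
    return max(INT16_MIN, min(INT16_MAX, acc))

scale = 1.0 / 256  # example scale chosen for this illustration
qa = quantize(0.5, scale)
qb = quantize(0.25, scale)
qsum = int16_add(qa, qb, scale, scale, scale)
print(dequantize(qsum, scale))  # → 0.75
```

Compared with 8-bit activations, the int16 range (±32767) gives the add operation far finer resolution at the same scale, which is the motivation for extending 16A8W support beyond linear operations.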
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14039

✅ No failures as of commit 3038914 with merge base 1a7441f.
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #13789 by @Ninja91
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/Ninja91/5/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/Ninja91/5/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/Ninja91/5/orig
@diff-train-skip-merge