# [ET-VK] Statically quantized add #14649
facebook-github-bot merged 3 commits into gh/SS-JIA/334/base
Conversation
## Changes

Title says it all! This diff adds an implementation of binary operators where all tensors are quantized to 8-bit with a per-tensor scale and zero point. This is required for many convolutional neural networks.

Differential Revision: [D83437828](https://our.internmc.facebook.com/intern/diff/D83437828/)
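For context, a per-tensor statically quantized add dequantizes each int8 input with its own scale and zero point, adds in real-number space, and requantizes the result with the output's quantization parameters. The Python sketch below illustrates only the element-wise math; the actual implementation in this PR is a Vulkan compute shader, and all names here are illustrative.

```python
def quantized_add(a_q, s_a, z_a, b_q, s_b, z_b, s_out, z_out):
    """Reference math for an 8-bit statically quantized add.

    Each tensor carries a per-tensor (scale, zero_point) pair. This is an
    illustrative sketch, not the shader code from the PR.
    """
    # Dequantize each input: real_value = scale * (q - zero_point)
    acc = s_a * (a_q - z_a) + s_b * (b_q - z_b)
    # Requantize the accumulator using the output's scale and zero point
    q = round(acc / s_out) + z_out
    # Clamp to the signed 8-bit range
    return max(-128, min(127, q))
```

For example, adding a value quantized as 10 with scale 0.1 (i.e. 1.0) to a value quantized as 20 with scale 0.05 (i.e. 1.0) and requantizing with output scale 0.1 yields the int8 value 20, representing 2.0.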
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14649
Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 2 Unrelated Failures as of commit 4a9196e with merge base 049c9fc:

- NEW FAILURES - The following jobs have failed:
- FLAKY - The following job failed but was likely due to flakiness present on trunk:
- BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Merged commit 48e2d0a into gh/SS-JIA/334/base
This PR was created by the merge bot to help merge the original PR into the main branch.

- ghstack PR number: #14649 by @SS-JIA ^ Please use this as the source of truth for the PR details, comments, and reviews
- ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/334/base
- ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/334/head
- Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/333/orig
- Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/334/orig

Differential Revision: [D83437828](https://our.internmc.facebook.com/intern/diff/D83437828/)

@diff-train-skip-merge

Co-authored-by: ssjia <ssjia@devvm26340.ftw0.facebook.com>