Support sine operator on XNNPACK #14711
Conversation
🔗 Helpful links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14711
Note: Links to docs will display an error until the doc builds have completed.
❌ 2 New Failures, 1 Cancelled Job as of commit 68f6f0e with merge base 53ccfd0.
NEW FAILURES: the following jobs have failed.
CANCELLED JOB: the following job was cancelled. Please retry.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@GregoryComer has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83623086.
Force-pushed from add82a8 to 9943711.
Cosine is coming in D83623619; I'll export it as a PR once this one is merged, since I didn't use ghexport.
@GregoryComer has imported this pull request. If you are a Meta employee, you can view this in D83623086.
Summary: Wire up the unary sine operator in XNNPACK for fp32 and fp16.
Differential Revision: D83623086
Pulled By: GregoryComer
Force-pushed from 9943711 to 68f6f0e.
Cherry-pick onto 1.0?
I wasn't originally planning to, given that it's late, but it's reasonably low risk, so I'm okay with picking it if you'd like.
@pytorchbot cherry-pick --onto release/1.0 -c critical
Summary: Wire up the unary sine operator in xnnpack for fp32 and fp16. Differential Revision: D83623086 (cherry picked from commit 6efddba)
Cherry picking #14711: the cherry-pick PR is at #15144. It is recommended to link a critical cherry-pick PR with an issue; the following tracker issues were updated. (Details for the Dev Infra team: raised by workflow job.)
[0.0, 0.1, 0.5, 0.785398],
[-0.5, -0.785398, 1.5708, -1.5708],
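These inputs include the recognizable angles π/4 (0.785398) and ±π/2 (±1.5708). As a quick sanity check on the expected outputs, here is a sketch using Python's `math.sin` as the reference (not the actual test harness):

```python
import math

# The fp32/fp16 test inputs from the diff above.
inputs = [0.0, 0.1, 0.5, 0.785398, -0.5, -0.785398, 1.5708, -1.5708]

# Double-precision reference values the backend output would be compared against.
expected = [math.sin(x) for x in inputs]

for x, y in zip(inputs, expected):
    print(f"sin({x:+.6f}) = {y:+.6f}")

# 0.785398 is ~pi/4, so its sine is ~sqrt(2)/2; 1.5708 is ~pi/2, so its sine is ~1.0.
```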
XNNPACK does use a faster approximation algorithm, but it does have tests against std::sin(), with atol/rtol of 3*std::numeric_limits<T>::epsilon() and 5*std::numeric_limits<T>::epsilon(), which are also used by ET::Portable::sin(), so some tolerance will be required. I guess we can increase the tensor sizes here, find the tolerance required, and live with that for now?
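The epsilon-scaled tolerance described above can be sketched in a few lines. This is a hypothetical check, not the actual XNNPACK or ExecuTorch test code: it assumes fp32 (where `std::numeric_limits<float>::epsilon()` is 2^-23), simulates single precision by round-tripping through `struct`, and uses `math.sin` as both the stand-in kernel and the reference:

```python
import math
import struct

FP32_EPS = 2.0 ** -23  # std::numeric_limits<float>::epsilon() for fp32


def to_fp32(x: float) -> float:
    """Round a Python double to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]


def sin_within_tolerance(x: float, atol_mult: float = 3.0, rtol_mult: float = 3.0) -> bool:
    """Compare an fp32-rounded sine against the double-precision reference,
    with atol = atol_mult * eps and rtol = rtol_mult * eps, as suggested above."""
    approx = to_fp32(math.sin(to_fp32(x)))  # stand-in for the fp32 kernel output
    ref = math.sin(x)                       # double-precision reference
    return abs(approx - ref) <= atol_mult * FP32_EPS + rtol_mult * FP32_EPS * abs(ref)


# The test inputs from the diff all stay inside the 3*eps budget here.
for x in [0.0, 0.1, 0.5, 0.785398, -0.5, -0.785398, 1.5708, -1.5708]:
    assert sin_within_tolerance(x)
```

A real kernel's approximation error would come on top of the rounding error simulated here, which is why the comment suggests measuring the tolerance actually required on larger tensors.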