Conversation

driazati (Member) commented Apr 27, 2022

Following up on #11042, this changes tolerances to fix some other ONNX test failures that have come up over the past several days.

Failing test and CI run:
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/3047/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/3044/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/3068/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11032/2/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-10833/11/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11035/1/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11074/1/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11068/1/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearsigmoid[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11078/1/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-10867/10/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearsigmoid[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/ci-gpu-update/4/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearsigmoid[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/3132/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearsigmoid[cuda] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/3143/pipeline/
tests/python/frontend/onnx/test_forward.py::test_qlinearleakyrelu[llvm] https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11146/2/pipeline/

cc @areusch

@driazati driazati marked this pull request as ready for review April 27, 2022 22:11
github-actions bot (Contributor) commented May 4, 2022

It has been a while since this PR was updated, @areusch please leave a review or address the outstanding comments. @driazati if this PR is still a work in progress, please convert it to a draft until it is ready for review.

@github-actions github-actions bot requested a review from areusch May 9, 2022 18:54
areusch (Contributor) commented May 11, 2022

cc @altanh can you take a look?

github-actions bot (Contributor)

It has been a while since this PR was updated, @altanh @areusch please leave a review or address the outstanding comments. @driazati if this PR is still a work in progress, please convert it to a draft until it is ready for review.

altanh (Contributor) commented May 18, 2022

I'm suspicious that a single value is ~1e-2 off while the rest are below 1e-5... tricky. We could loosen the tolerances to that level, but that's quite a reduction in what the test can catch.
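A minimal sketch of the problem being described (hypothetical values, not TVM's actual test code): with an elementwise absolute-tolerance check, a single ~1e-2 outlier forces the atol for the entire array to be loosened, even though every other element is within 1e-5.

```python
# Hypothetical illustration: one outlier dictates the tolerance for all elements.
import numpy as np

expected = np.zeros(8, dtype=np.float32)
actual = expected.copy()
actual[:7] += 1e-6   # seven values well within a 1e-5 tolerance
actual[7] += 1e-2    # one outlier roughly 1e-2 off

def passes(atol):
    """Return True if every element is within atol of expected."""
    return bool(np.all(np.abs(actual - expected) <= atol))

print(passes(1e-5))  # False: the single outlier fails the tight tolerance
print(passes(2e-2))  # True: but only by loosening atol for every element
```

Loosening atol this way makes the check pass, but it also means a genuine 1e-2 regression in any of the other seven elements would go unnoticed, which is the trade-off raised above.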

driazati (Member, Author)

Closing in favor of #11376. Agreed that the tolerances here are too high to merge (and still not high enough to cover all the failures I've seen; some are off by atol=0.02), so they would silently indicate that nothing is wrong. We don't really have any choice but to disable these tests until someone can identify and fix the values.

@driazati driazati closed this May 19, 2022
masahi (Member) commented May 19, 2022

Yeah, for quantized ops, getting accuracy aligned with frameworks is challenging due to slight differences in how the low-level numerics are done (fixed point vs. fp32, etc.). In the PyTorch frontend, we skip the accuracy check entirely for quantized ops/models.
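A hedged sketch of the fixed-point-vs-float divergence mentioned above (illustrative only; neither TVM's nor any framework's actual kernel code, and the scale value is made up): the same requantization scale applied as a float multiply versus as an integer multiplier-and-shift can round differently at a half-way point, yielding integer outputs that differ by one quantization step.

```python
# Hypothetical illustration: two common ways to apply a requantization scale.
import numpy as np

scale = 0.1  # made-up example scale
x = np.array([24, 25, 26], dtype=np.int64)

# Floating-point path: multiply by the scale, then round to nearest
# (np.rint rounds halves to even, so 2.5 -> 2).
float_result = np.rint(x * scale).astype(np.int64)

# Fixed-point path: approximate the scale as a Q16 integer multiplier
# and round half up with an add-then-shift, as integer-only kernels often do.
multiplier = int(round(scale * (1 << 16)))         # 6554, i.e. ~0.100006
fixed_result = (x * multiplier + (1 << 15)) >> 16  # add half a step, shift

print(float_result.tolist())  # [2, 2, 3]
print(fixed_result.tolist())  # [2, 3, 3]: off by one step at x = 25
```

The two paths disagree at x = 25 (a rounding boundary), and after dequantization that off-by-one becomes an error of one full scale step, which is why exact accuracy alignment across frameworks is hard for quantized ops.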
