
Conversation

@vvchernov (Contributor) commented Nov 8, 2022

QLinearMatMul previously supported only rank-2 input tensors.
It has been extended to all ranks using _qnn.op.dense and _qnn.op.batch_matmul.
Y = X*W
Supported cases:

  1. both inputs int8, or both inputs uint8
  2. x_rank = 1, w_rank = 2
  3. x_rank = 2, w_rank = 2
  4. x_rank > 2, w_rank = 2
  5. w_rank > 2 with x_rank >= w_rank

Note: mixed input types (one int8 and one uint8) are not currently supported.
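Not part of the PR itself, just a hedged sketch of how the extended converter can be exercised: it builds a rank-3 QLinearMatMul ONNX graph with onnx.helper and imports it through relay.frontend.from_onnx. All shapes, tensor names, and quantization parameters below are illustrative assumptions.

```python
# Minimal sketch, not taken from the PR: build a rank-3 QLinearMatMul ONNX graph
# and import it through TVM's ONNX frontend. Shapes and quantization parameters
# are illustrative assumptions.
import onnx
from onnx import TensorProto, helper
from tvm import relay

# Batched case: x_rank = w_rank = 3, lowered via qnn.batch_matmul.
a_shape = [2, 3, 4]
b_shape = [2, 4, 5]
y_shape = [2, 3, 5]

node = helper.make_node(
    "QLinearMatMul",
    inputs=[
        "a", "a_scale", "a_zero_point",
        "b", "b_scale", "b_zero_point",
        "y_scale", "y_zero_point",
    ],
    outputs=["y"],
)

def scalar(name, proto_dtype, value):
    # Scale and zero-point inputs are scalar initializers in the ONNX spec.
    return helper.make_tensor(name, proto_dtype, [], [value])

graph = helper.make_graph(
    [node],
    "qlinear_matmul_rank3",
    inputs=[
        helper.make_tensor_value_info("a", TensorProto.UINT8, a_shape),
        helper.make_tensor_value_info("b", TensorProto.UINT8, b_shape),
    ],
    outputs=[helper.make_tensor_value_info("y", TensorProto.UINT8, y_shape)],
    initializer=[
        scalar("a_scale", TensorProto.FLOAT, 0.05),
        scalar("a_zero_point", TensorProto.UINT8, 128),
        scalar("b_scale", TensorProto.FLOAT, 0.02),
        scalar("b_zero_point", TensorProto.UINT8, 128),
        scalar("y_scale", TensorProto.FLOAT, 0.1),
        scalar("y_zero_point", TensorProto.UINT8, 128),
    ],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)

# Import into Relay; with rank-2 weights the converter uses qnn.dense instead.
mod, params = relay.frontend.from_onnx(model, {"a": a_shape, "b": b_shape})
print(mod)
```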

@tvm-bot (Collaborator) commented Nov 8, 2022

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@vvchernov vvchernov changed the title WIP: [QNN, ONNX] Extension of QLinearMatMul in ONNX front-end for all ranks of input tensors [QNN, ONNX] Extension of QLinearMatMul in ONNX front-end for all ranks of input tensors Nov 9, 2022
@masahi masahi merged commit b4b90d7 into apache:main Nov 10, 2022
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 10, 2022
[QNN, ONNX] Extension of QLinearMatMul in ONNX front-end for all ranks of input tensors (apache#13322)

* QLinearMatMul was extended for all ranks of a and b

* CI test for QLinearMatMul was implemented (onnx front-end)

* fix after black check

* numpy type fix

* fix weight scale and zero point, output type

* fix after pylint

* resolve different input types in tests

* skip resolved TODO

* update test coverage of QLinearMatMul

* pylint fixes

* skip test of QLinearMatMul on CUDA

Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
[QNN, ONNX] Extension of QLinearMatMul in ONNX front-end for all ranks of input tensors (apache#13322)
@vvchernov vvchernov deleted the vc/QLinearMatMul branch February 24, 2023 06:03
