
Conversation

jainris (Contributor) commented Sep 21, 2020

  • Added dilation_value attribute to dilate operator of Relay/TOPI.
    (Enables custom value for dilation, instead of always 0)
  • Added tests for dilation_value of dilate operator in Relay and TOPI.
  • Added support for quantized input in TRANSPOSE_CONV operator of TFLite.
  • Added tests for quantized input in TRANSPOSE_CONV operator of TFLite.

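The `dilation_value` change described above can be sketched in plain NumPy. This is an illustrative model of the dilate semantics only, not the actual TOPI implementation; the function name and signature here are assumptions for the sketch:

```python
import numpy as np

def dilate(data, strides, dilation_value=0.0):
    """Insert strides[i] - 1 copies of dilation_value between consecutive
    elements along each axis i (a model of the semantics this PR adds:
    the fill value is configurable instead of always 0)."""
    out_shape = tuple((d - 1) * s + 1 for d, s in zip(data.shape, strides))
    out = np.full(out_shape, dilation_value, dtype=data.dtype)
    # Original elements land at every strides[i]-th position.
    out[tuple(slice(None, None, s) for s in strides)] = data
    return out

x = np.array([[1, 2], [3, 4]], dtype=np.float32)
# Dilate by 2 along both axes, filling the gaps with -1 instead of 0.
y = dilate(x, (2, 2), dilation_value=-1.0)
# y == [[ 1, -1,  2],
#       [-1, -1, -1],
#       [ 3, -1,  4]]
```

A custom fill value matters for quantized inputs, where "zero" in the real domain corresponds to the zero point, not to the integer 0.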
jainris (Contributor, Author) commented Sep 21, 2020

anijain2305 self-assigned this Sep 21, 2020
mbaret (Contributor) left a comment

Looks almost there to me. Could we see if there's a hosted model somewhere with a transpose convolution in it that we could test with? Also pinging @giuseros, as I know you're familiar with the maths behind this.

mbaret (Contributor) commented Sep 23, 2020

Also ping @siju-samuel.

anijain2305 (Contributor) commented Sep 23, 2020

Dilation part is good.

I am not sure about the conv2d_transpose portion. My concern is that we would now have to replicate this logic in every framework parser. My suggestion would be to add a qnn.conv2d_transpose op and perform the "dilation + qnn.op.conv2d" lowering in QNN Legalize (example here - https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/qnn/op/legalizations.py#L266).

For now, we can make the transformation for all targets, not just specifically to ARM.

This will keep the option open to improve the schedule of conv2d_transpose as a whole if needed.
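The "dilation + qnn.op.conv2d" lowering suggested above rests on the identity that a strided transpose convolution equals an ordinary convolution over the zero-dilated, edge-padded input with a flipped kernel. A minimal 1-D NumPy sketch of that identity (function names here are illustrative, not TVM APIs):

```python
import numpy as np

def conv1d(x, w):
    """Valid cross-correlation (no padding)."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def conv1d_transpose(x, w, stride):
    """Direct transpose convolution: scatter-add a scaled kernel per input."""
    k = len(w)
    out = np.zeros((len(x) - 1) * stride + k)
    for i, v in enumerate(x):
        out[i * stride:i * stride + k] += v * w
    return out

def conv1d_transpose_via_dilation(x, w, stride):
    """The lowering discussed above: zero-dilate the input, pad by k - 1
    on each side, then run an ordinary convolution with the flipped kernel."""
    k = len(w)
    dilated = np.zeros((len(x) - 1) * stride + 1)
    dilated[::stride] = x
    padded = np.pad(dilated, (k - 1, k - 1))
    return conv1d(padded, w[::-1])

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0, 0.5])
assert np.allclose(conv1d_transpose(x, w, 2),
                   conv1d_transpose_via_dilation(x, w, 2))
```

Doing this rewrite once in QNN Legalize, rather than in each framework front-end, is exactly the deduplication argued for in the comment above.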

ZihengJiang added the "status: need update" label Sep 23, 2020
jainris (Contributor, Author) commented Sep 24, 2020

The quantized transpose convolution code needs some changes, so I am bringing the dilate operator changes in independently in #6550.

jainris closed this Sep 24, 2020
giuseros pushed a commit to giuseros/incubator-tvm that referenced this pull request Nov 11, 2020

This work is based on @jainris's initial PR: apache#6523

I added a relay.qnn.conv2d_transpose node. The strategy I followed is to
convert to int16 and invoke nn.conv2d_transpose (which already exists in
Relay). Main changes:

- The node declaration lives in relay/qnn/op/convolution_transpose.cc
- The int8->int16 cast and subsequent offset removal are in tvm/relay/qnn/op/legalizations.py
- I added and tested the operator in the TFLite front-end
- I added a unit test in Relay for qnn.conv2d_transpose

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>
giuseros pushed a commit to giuseros/incubator-tvm that referenced this pull request Nov 11, 2020
giuseros pushed a commit to giuseros/incubator-tvm that referenced this pull request Nov 11, 2020
giuseros pushed a commit to giuseros/incubator-tvm that referenced this pull request Nov 11, 2020
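The cast-and-offset-removal step this commit describes can be modeled in NumPy as follows. This is an illustrative sketch under assumed names, not the actual legalization code; the final requantization back to int8 is omitted:

```python
import numpy as np

def qnn_conv1d_transpose(x_q, w_q, x_zp, w_zp, stride):
    """Sketch of the legalization: cast int8/uint8 -> int16, subtract the
    zero points, then reuse an ordinary (non-quantized) transpose
    convolution on the shifted values, accumulating in int32."""
    x = x_q.astype(np.int16) - np.int16(x_zp)  # offset removal; int16 cannot overflow here
    w = w_q.astype(np.int16) - np.int16(w_zp)
    k = len(w)
    out = np.zeros((len(x) - 1) * stride + k, dtype=np.int32)
    for i, v in enumerate(x):
        out[i * stride:i * stride + k] += int(v) * w.astype(np.int32)
    return out  # int32 accumulator; a real pipeline would requantize this

x_q = np.array([130, 128, 125], dtype=np.uint8)  # input zero point 128
w_q = np.array([3, 1], dtype=np.int8)            # weight zero point 0
y = qnn_conv1d_transpose(x_q, w_q, x_zp=128, w_zp=0, stride=2)
# y == [6, 2, 0, 0, -9, -3]
```

Widening to int16 before subtracting the zero point is what makes the subtraction safe: e.g. int8 value -128 minus zero point 127 would wrap in int8 but fits comfortably in int16.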
mbaret pushed a commit that referenced this pull request Nov 26, 2020
* Add initial support for quantized transpose convolution in Relay

This work is based on @jainris's initial PR: #6523

I added a relay.qnn.conv2d_transpose node. The strategy I followed is to
convert to int16 and invoke nn.conv2d_transpose (which already exists in
Relay). Main changes:

- The node declaration lives in relay/qnn/op/convolution_transpose.cc
- The int8->int16 cast and subsequent offset removal are in tvm/relay/qnn/op/legalizations.py
- I added and tested the operator in the TFLite front-end
- I added a unit test in Relay for qnn.conv2d_transpose

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>

* Fix linting

* Addressing review comments

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020
Labels: status: need update

4 participants