Conversation

@echuraev
Contributor

When the number of output channels is less than 4, we cannot pack such a convolution into textures, although we can repack and extend tensors from 4d to 5d at runtime.

This happens because the function `PropBoundToInputs` is invoked for all stages when the InferBound pass or the `LowerSchedule` function is called.

`PropBoundToInputs` contains logic that helps the developer avoid out-of-bounds accesses: based on the output shape, it propagates bounds to the inputs.

Imagine that we want to transform a 4d tensor with 3 channels to 5d, extending its number of channels to 4, and then transform it back to a 4d tensor with 3 channels:

```
[1, 3, 6, 6] -> [1, 1, 6, 6, 4] -> [1, 3, 6, 6]
```
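The round trip above can be sketched in plain NumPy (this is only an illustration of the layout transform, not the TVM compute definitions; the helper names are made up for this example):

```python
import numpy as np

def pack_nchw_to_nchw4(x):
    # x: [N, C, H, W] with C <= 4 * ceil(C / 4); pad the channel axis up to a
    # multiple of 4 with zeros and split it into a block axis plus a lane
    # axis of 4: [N, ceil(C/4), H, W, 4]
    n, c, h, w = x.shape
    c4 = -(-c // 4)  # ceil(c / 4)
    padded = np.zeros((n, c4 * 4, h, w), dtype=x.dtype)
    padded[:, :c] = x
    return padded.reshape(n, c4, 4, h, w).transpose(0, 1, 3, 4, 2)

def unpack_nchw4_to_nchw(x, c):
    # Inverse transform: merge the block and lane axes back into a channel
    # axis and drop the padded channels.
    n, c4, h, w, lanes = x.shape
    flat = x.transpose(0, 1, 4, 2, 3).reshape(n, c4 * lanes, h, w)
    return flat[:, :c]

x = np.arange(1 * 3 * 6 * 6, dtype=np.float32).reshape(1, 3, 6, 6)
packed = pack_nchw_to_nchw4(x)            # shape (1, 1, 6, 6, 4)
restored = unpack_nchw4_to_nchw(packed, 3)  # shape (1, 3, 6, 6)
assert np.array_equal(restored, x)
```

The pack step is where the boundary question arises: lane 3 of the intermediate tensor has no corresponding input channel, so a real compute definition needs either padding or a guard there.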

In this case, we might write a boundary check in the repack-and-extend compute function to handle the case when the channel iterator goes out of bounds for the intermediate tensor.

To prevent such out-of-bounds accesses, `PropBoundToInputs` propagates bounds from the output tensor to the input. When the compute could potentially access memory out of bounds, the range is reduced to a value at which that cannot happen. For the example above, this means the channel loop that should fill the intermediate tensor will iterate over the range [0, 2] instead of [0, 3].
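A simplified Python illustration of that effect (plain loops, not TVM code): with a guard, the fill loop could safely cover all 4 lanes, but clamping the loop extent to the input's channel count means the last lane is never written by the compute.

```python
import numpy as np

C = 3  # real number of channels in the input
inp = np.arange(C, dtype=np.float32)

# Guarded fill over the full lane range [0, 3]: lane 3 is explicitly
# zero-padded, so every lane of the packed tensor is initialized.
lanes = np.empty(4, dtype=np.float32)
for cl in range(4):
    lanes[cl] = inp[cl] if cl < C else 0.0

# What the clamped bounds produce instead: the loop extent is reduced
# to C, so lane 3 is never touched by the compute at all.
clamped = np.full(4, np.nan, dtype=np.float32)
for cl in range(C):  # range clamped to [0, 2]
    clamped[cl] = inp[cl]
```

The guarded version leaves `lanes` fully defined, while in the clamped version lane 3 stays uninitialized, which is why the packed-texture path cannot be used as-is for these shapes.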

As mentioned above, to avoid this problem we use buffers and CUDA schedules instead of textures when the number of output channels is less than 4 and the tensor cannot be packed into a texture. I evaluated the performance of this approach, and it does not introduce any degradation; for such small convolutions, performance with buffers is even slightly better than with textures.

@tvm-bot
Collaborator

tvm-bot commented May 31, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@echuraev
Contributor Author

echuraev commented Jun 1, 2023

@masahi, @csullivan could you please review this PR?

@masahi masahi merged commit 9da0261 into apache:main Jun 1, 2023
@echuraev echuraev deleted the echuraev/fix_adreno_conv2d_with_3_channels branch June 1, 2023 06:00
@srkreddy1238
Contributor

Thanks for the PR @echuraev .
We also came across this and used a similar workaround for some recent popular networks.

mei-ye pushed a commit to mei-ye/tvm that referenced this pull request Jun 1, 2023