[OpenCL][Adreno] Fix conv2d when output channels < 4 #14996
Conversation
When the number of output channels is less than 4, we cannot pack such a convolution into textures, even though we could repack and extend tensors from 4d to 5d at runtime. This happens because the function `PropBoundToInputs` is invoked for all stages when the InferBound pass or the LowerSchedule function is called. `PropBoundToInputs` contains logic that helps the developer avoid out-of-bounds accesses: based on the output shape, it propagates bounds to the inputs.

Imagine that we want to transform a 4d tensor with 3 channels to 5d, extend its number of channels to 4, and then transform it back to a 4d tensor with 3 channels:

```
[1, 3, 6, 6] -> [1, 1, 6, 6, 4] -> [1, 3, 6, 6]
```

In this case, we might write a boundary check in the repacking-and-extending compute function to handle the case when the channel iterator goes out of bounds for the intermediate tensor. To avoid such problems, `PropBoundToInputs` propagates the bounds from the output tensor to the input: if an out-of-bounds access is possible, the range is shrunk to a value for which it cannot happen. For the example above, this means that the channel loop which should fill the intermediate tensor will iterate over the range [0, 2] instead of [0, 3], so the last lane is never written.

As mentioned above, to avoid this problem we use buffers and CUDA schedules instead of textures when the number of output channels is less than 4 and the tensor cannot be packed into a texture. I evaluated the performance of this approach and it does not introduce any degradation; for such small convolutions, performance with buffers is even slightly better than with textures.
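For illustration, here is a minimal TE sketch (not the code from this PR; shapes and tensor names are made up to match the example above) that reproduces the bound-propagation behavior: lowering it shows the innermost channel loop of the packing stage covering only 3 of the 4 lanes.

```python
import tvm
from tvm import te

# Example shapes from the description: [1, 3, 6, 6] -> [1, 1, 6, 6, 4] -> [1, 3, 6, 6]
N, C, H, W = 1, 3, 6, 6
data = te.placeholder((N, C, H, W), name="data")

# Repack 4d -> 5d, padding the channel dimension up to a multiple of 4.
packed = te.compute(
    (N, (C + 3) // 4, H, W, 4),
    lambda n, cb, h, w, cv: tvm.tir.if_then_else(
        cb * 4 + cv < C,
        data[n, cb * 4 + cv, h, w],
        tvm.tir.const(0.0, data.dtype),
    ),
    name="packed",
)

# Unpack 5d -> 4d, back to 3 channels.
unpacked = te.compute(
    (N, C, H, W),
    lambda n, c, h, w: packed[n, c // 4, h, w, c % 4],
    name="unpacked",
)

s = te.create_schedule(unpacked.op)
# InferBound propagates the [0, C) range of `unpacked` back to `packed`,
# so the innermost lane loop of `packed` iterates over 3 values instead of 4.
print(tvm.lower(s, [data, unpacked], simple_mode=True))
```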
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot
@masahi, @csullivan could you please review this PR?
Thanks for the PR @echuraev.