[TOPI] Depth wise Conv for NHWC #325
Conversation
@Huyuwei do you mind helping to do a review?
    s[temp].compute_inline()
    if DepthwiseConv2d.op in s.outputs:
        Output = DepthwiseConv2d
You will need to use cache_write here so this compute block stays in local registers.
        Output = DepthwiseConv2d
    else:
        Output = op.output(0)
    #s[DepthwiseConv2d].set_scope("local")
Set the scope of DepthwiseConv2d to be local.
According to @tqchen's comment in #315: besides, it's better to add topi.testing.depthwise_conv2d. Could you refer to this PR #328 and do the same thing for depthwise_conv2d_nhwc? @wetliu
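As a sketch of what such a reference implementation could look like, here is a minimal pure-Python depthwise conv in NHWC layout. The function name, signature, and the channel_multiplier=1 restriction are illustrative assumptions, not the actual topi.testing API:

```python
def depthwise_conv2d_nhwc_ref(inp, kernel, stride=1):
    """Naive depthwise conv reference (illustrative, not topi.testing).

    inp:    nested list of shape [N][H][W][C] (NHWC layout)
    kernel: nested list of shape [KH][KW][C] (channel_multiplier = 1)
    Each channel is convolved with its own filter; no cross-channel sums.
    """
    n = len(inp)
    h, w, c = len(inp[0]), len(inp[0][0]), len(inp[0][0][0])
    kh, kw = len(kernel), len(kernel[0])
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    out = [[[[0.0] * c for _ in range(ow)] for _ in range(oh)] for _ in range(n)]
    for b in range(n):
        for i in range(oh):
            for j in range(ow):
                for ch in range(c):
                    acc = 0.0
                    for di in range(kh):
                        for dj in range(kw):
                            acc += (inp[b][i * stride + di][j * stride + dj][ch]
                                    * kernel[di][dj][ch])
                    out[b][i][j][ch] = acc
    return out
```

A real topi.testing helper would use NumPy, but the loop nest above makes the per-channel reduction explicit for checking against the TVM compute definition.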
I pulled the most recent TVM and tried to build with make pylint, but I got an error. Any idea what I can do to fix it so that make pylint passes? Thank you.
This is an upstream problem; it should be fixed by #332.
Can you please check what's happening, as it cannot be built? @tqchen Is this the last version before you finish modifying conv and depthwise conv? I don't want to modify the code again and again just because the upstream conv and deconv are still under development. Thank you!
You will need to rebase on the upstream master. I think conv will be frozen for a while from now on.
OK. Thank you!
[solved] Thank you!
topi/include/topi/nn.h
    auto kw = tvm::reduce_axis(tvm::Range{0, W->shape[1]}, "kw");
    auto T = (pad_h == 0 && pad_w == 0)
                 ? I
                 : pad(I, {tvm::Expr(0), tvm::Expr(0), pad_h, pad_w});
This should be {0, pad_h, pad_w, 0} for NHWC layout.
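The point of the review comment: in NHWC the axes are (batch, height, width, channel), so the zero padding must go on axes 1 and 2, i.e. the pad-before list is {0, pad_h, pad_w, 0}; the original {0, 0, pad_h, pad_w} would pad width and channel instead. A small pure-Python sketch (illustrative, not the topi::pad API) of padding only the H and W axes:

```python
def pad_nhwc(inp, pad_h, pad_w):
    """Zero-pad an NHWC nested list on the H and W axes only.

    Mirrors the pad-before list {0, pad_h, pad_w, 0}: no padding is
    applied to batch (axis 0) or channel (axis 3). Illustrative sketch,
    not the topi::pad implementation.
    """
    c = len(inp[0][0][0])
    w = len(inp[0][0])
    zero_pixel = [0.0] * c
    out = []
    for img in inp:
        rows = []
        # pad_h zero rows on top
        for _ in range(pad_h):
            rows.append([list(zero_pixel) for _ in range(w + 2 * pad_w)])
        # pad_w zero pixels on each side of every original row
        for row in img:
            rows.append([list(zero_pixel) for _ in range(pad_w)]
                        + [list(p) for p in row]
                        + [list(zero_pixel) for _ in range(pad_w)])
        # pad_h zero rows on the bottom
        for _ in range(pad_h):
            rows.append([list(zero_pixel) for _ in range(w + 2 * pad_w)])
        out.append(rows)
    return out
```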
topi/include/topi/nn.h
    int stride_h = 1,
    int stride_w = 1,
    std::string name = "tensor",
    std::string tag = kDepthwiseConv2d_nhwc) {
Should be kDepthwiseConv2dNHWC.
Please fix the updated comments and do another pass over the code.
Thanks, this is merged.
…pache#325) (apache#326) * Added elementwise-add test * Fix typo * Fixed elem-wise ops for lists with len>2
This PR adds hash/equal/json support for shape tuple.
It still cannot pass the test case for fusion with bn and relu. The main problem is that the scheduler does not work with the scale_shift_nhwc function, which was added in nn.mapping.
Any help is valuable to me. Thank you!