[RELAY][OP] Dynamic conv2d batch size for cuda #6598
Conversation
 )
-cfg.add_flop(2 * N * CO * H * W * CI * KH * KW)
+if isinstance(N, int):
+    cfg.add_flop(2 * N * CO * H * W * CI * KH * KW)
@kevinthesun @icemelon9 @comaniac is this okay for AutoTVM?
It's okay in terms of the functionality, but the output message would be weird. Since the AutoTVM progress bar shows throughput instead of latency, users will always see 0 GFLOPS during the tuning process (https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/tuner/callback.py#L159).
Maybe we could still report the FLOPS with N=1 and emit a message saying we are tuning the kernel with N=1, but that the result can be used by the kernel with any batch size?
Yeah, I thought about 1 as well, but it may actually not be 1.
I think it's fine, since AutoTVM generally can't be used for dynamic-shape ops. Users won't see any FLOPS info when N is symbolic.
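The guard under discussion can be sketched in isolation. This is a minimal, self-contained illustration of the pattern in the diff, not TVM's real AutoTVM API: `Config` here is a hypothetical stand-in for the tuning config object, and a plain string stands in for a symbolic TIR batch variable.

```python
class Config:
    """Hypothetical stand-in for AutoTVM's task config object."""

    def __init__(self):
        self.flop = 0

    def add_flop(self, flop):
        self.flop += flop


def record_conv2d_flops(cfg, N, CO, H, W, CI, KH, KW):
    # A symbolic batch size (e.g. a TIR Var) is not a Python int,
    # so the FLOP count is skipped and the tuner would report
    # 0 GFLOPS for that task, as noted in the review above.
    if isinstance(N, int):
        cfg.add_flop(2 * N * CO * H * W * CI * KH * KW)


cfg = Config()
record_conv2d_flops(cfg, 1, 64, 56, 56, 64, 3, 3)    # static batch: counted
record_conv2d_flops(cfg, "n", 64, 56, 56, 64, 3, 3)  # symbolic batch: skipped
print(cfg.flop)
```

The `isinstance` check is the whole fix: the multiplication `2 * N * ...` would raise (or produce a symbolic expression) for a non-integer `N`, so the count is only recorded when the batch dimension is concrete.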
kevinthesun left a comment
LGTM
Thanks @zhiics @kevinthesun
This PR enables dynamic conv2d for CUDA.
CC @kevinthesun @icemelon9 @mbrookhart @comaniac