[RELAY][RUNTIME] Add compute and schedule attributes for all ops in relay/op/tensor.py #2050
Conversation
This code seems far too repetitive to me:

```python
def sqrt_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    return [topi.sqrt(inputs[0])]

register_compute("sqrt", sqrt_compute)
register_schedule("sqrt", schedule_broadcast)
```

A few things: register_compute and register_schedule could be merged into one single function. If I were to write this, I would declare a table that stores each op's name and number of inputs, traverse it, and define the functions with metaprogramming, to minimize the chance of error and to maximize reuse (other code that builds on top of Relay can simply read the table):

```python
registry = {}
registry['log'] = 1
registry['add'] = 2
...
```

Of course, it might be going too far for the TVM coding style, but fusing register_compute/register_schedule is certainly doable, and so is using a higher-order function.
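For concreteness, a minimal sketch of the table-driven style being suggested (hypothetical; the op selection is illustrative, and `register_compute`, `register_schedule`, and `schedule_broadcast` are assumed to be the helpers this PR defines in `_tensor.py`):

```python
import topi

# Hypothetical table mapping op name -> (topi kernel, expected arity).
OP_TABLE = {
    "log":  (topi.log, 1),
    "sqrt": (topi.sqrt, 1),
    "add":  (topi.add, 2),
}

def _make_compute(kernel, arity):
    # Close over the kernel and arity so each generated compute
    # function mirrors the hand-written ones in this PR.
    def compute(attrs, inputs, output_type, target):
        assert len(inputs) == arity
        return [kernel(*inputs)]
    return compute

for name, (kernel, arity) in OP_TABLE.items():
    register_compute(name, _make_compute(kernel, arity))
    register_schedule(name, schedule_broadcast)
```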
As per https://docs.tvm.ai/contribute/code_review.html#ensure-test-coverage, please add a test case for each of the operators added; for reference, see https://github.com/dmlc/tvm/blob/master/nnvm/tests/python/compiler/test_top_level1.py#L167
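For example, a test along these lines might work (a sketch only; the exact Relay testing API available at the time of this PR may differ, and the use of `relay.create_executor` here is an assumption):

```python
import numpy as np
import tvm
from tvm import relay

def test_sqrt():
    # Build a single-op Relay function and compare against numpy.
    x = relay.var("x", shape=(10, 4), dtype="float32")
    func = relay.Function([x], relay.sqrt(x))

    data = np.random.uniform(1, 10, size=(10, 4)).astype("float32")
    intrp = relay.create_executor(ctx=tvm.cpu(0), target="llvm")
    out = intrp.evaluate(func)(data)
    np.testing.assert_allclose(out.asnumpy(), np.sqrt(data), rtol=1e-5)
```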
python/tvm/relay/op/_tensor.py (outdated)

```python
import topi.cuda
from . import register


def register_schedule(op_name, schedule):
```
Please document the public-facing functions.
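For instance, a docstring along these lines (illustrative wording; the body assumes the `register` helper imported in the diff above takes an op name, an attribute key, and a value):

```python
def register_schedule(op_name, schedule):
    """Register a schedule function for an operator.

    Parameters
    ----------
    op_name : str
        The name of the operator.
    schedule : function(attrs, outputs, target) -> Schedule
        The schedule function, invoked at compile time to
        schedule the operator's outputs for the given target.
    """
    return register(op_name, "FTVMSchedule", schedule)
```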
All the registry functions should go to op.registry.py
@MarisaKirisame the argument against that style (which is roughly what NNVM has done) is that it becomes much harder for users to read and modify. Most code is effectively read-only, and code like this especially will not change often. I think it is better to err on the side of repetitive and readable than inscrutable.
```python
# zeros_like
def zeros_like_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    return [topi.full_like(inputs[0], 0.0)]
```
The output data type can likely be wrong here.
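One possible fix (a sketch, assuming the checked `output_type` carries the intended shape and dtype and that `topi.full` accepts them):

```python
def zeros_like_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    # Build the result from the checked output type rather than
    # inheriting the input's dtype via full_like.
    return [topi.full(output_type.shape, output_type.dtype, 0.0)]
```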
```python
    assert len(inputs) == 2
    return [topi.less_equal(inputs[0], inputs[1])]


register_compute("less_equal", less_equal_compute)
```
As per NNVM tradition, we should move most of the compute functions to C++ and only leave a few (e.g. conv2d) in Python.
Our current plan is to do this in a second pass.
It seems that we have several classes named as …
Merging this in, and will update the follow-up in #2059 to depend on this.
This PR ports over schedule and compute attributes for the operators in tensor.py. I hope this will serve as an example of adding this behavior for operators, so others in the community can help continue the work of making Relay operator-complete. I simply copied over the same scheduling primitives from NNVM and am using out-of-the-box compute functions. Feedback much appreciated.
For details on helping with the rest of the operators, please read #2051.
cc @tqchen @MarisaKirisame @slyubomirsky @zhiics @siju-samuel
There are two missing cases (concatenate and copy, both of which require a little more work).
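For anyone picking up the remaining operators via #2051, the per-op pattern this PR follows looks roughly like this (a sketch reconstructed from the snippets above; `schedule_broadcast`'s body is an assumption based on NNVM's use of the generic injective schedule):

```python
import topi
import topi.generic

def schedule_broadcast(attrs, outputs, target):
    # Elementwise/broadcast ops share the generic injective schedule.
    with target:
        return topi.generic.schedule_injective(outputs)

def exp_compute(attrs, inputs, output_type, target):
    # One compute function per op: check arity, call the topi kernel.
    assert len(inputs) == 1
    return [topi.exp(inputs[0])]

register_compute("exp", exp_compute)
register_schedule("exp", schedule_broadcast)
```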