
Conversation

@jroesch (Member) commented Nov 1, 2018

This PR ports over the schedule and compute attributes for the operators in tensor.py. I hope it will serve as an example of adding this behavior to operators, so others in the community can help continue the work of making Relay operator-complete.

I simply copied over the same scheduling primitives from NNVM and am using out-of-the-box compute functions. Feedback is much appreciated.

For details on helping with the rest of the operators, please read #2051.

cc @tqchen @MarisaKirisame @slyubomirsky @zhiics @siju-samuel

There are two missing cases (concatenate and copy, both of which require a little more work).

@MarisaKirisame (Contributor) commented Nov 1, 2018

This code seems far too repetitive to me.
This is a typical snippet of _tensor.py, with virtually every block looking like this:

def sqrt_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    return [topi.sqrt(inputs[0])]
register_compute("sqrt", sqrt_compute)
register_schedule("sqrt", schedule_broadcast)

There are three things here:
the definition, which asserts the input length and calls into topi
register_compute
register_schedule

register_compute and register_schedule could be merged into a single function.
sqrt_compute can be made into one line by writing a higher-order function that takes two arguments (the topi function and the number of inputs).

If I were to write this, I would declare a table that stores each name and its number of inputs, traverse it, and define the functions with metaprogramming, to minimize the chance of error and to maximize reuse (other code that builds on top of Relay can simply read the table). It would be:

registry = {}
registry['log'] = 1
registry['add'] = 2
...

Of course, this might go too far for the TVM coding style, but fusing register_compute/register_schedule is certainly doable, and so is using a higher-order function.
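For concreteness, a minimal sketch of the fused-registration and higher-order-function idea described above; the helper names (make_compute, register_op), the import path, and the choice of schedule_broadcast as the default schedule are illustrative assumptions, not code from this PR:

import topi
# register_compute, register_schedule, and schedule_broadcast are the helpers
# used elsewhere in this PR; this exact import path is an assumption.
from .op import register_compute, register_schedule, schedule_broadcast

def make_compute(topi_func, num_inputs):
    """Higher-order function: build a compute function from a topi op."""
    def compute(attrs, inputs, output_type, target):
        assert len(inputs) == num_inputs
        return [topi_func(*inputs)]
    return compute

def register_op(op_name, topi_func, num_inputs, schedule):
    """Fused registration: one call instead of two."""
    register_compute(op_name, make_compute(topi_func, num_inputs))
    register_schedule(op_name, schedule)

# Table-driven registration, as suggested above.
unary_ops = {"sqrt": topi.sqrt, "log": topi.log, "exp": topi.exp}
for name, func in unary_ops.items():
    register_op(name, func, 1, schedule_broadcast)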

@tqchen (Member) commented Nov 1, 2018

import topi.cuda
from . import register

def register_schedule(op_name, schedule):
Review comment (Member):

Please document the public-facing functions
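For illustration, a minimal sketch of what such documentation could look like for the function in the diff above; the docstring text and the "FTVMSchedule" attribute key passed to register are assumptions for the sketch, not code from this PR:

def register_schedule(op_name, schedule):
    """Register a schedule function for an operator.

    Parameters
    ----------
    op_name : str
        The name of the operator, e.g. "sqrt".

    schedule : function
        The schedule function to use when lowering the operator.
    """
    # Dispatching on an "FTVMSchedule" attribute key is an assumption here.
    return register(op_name, "FTVMSchedule", schedule)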

Review comment (Member):

All the registry functions should go to op.registry.py

@tqchen self-assigned this on Nov 1, 2018
@jroesch (Member, Author) commented Nov 1, 2018

@MarisaKirisame the argument against that style (which is roughly what NNVM has done) is that it becomes much harder for users to read and modify. Most code is effectively read-only, and code like this especially will not change often. I think it is better to err on the side of repetitive and readable than inscrutable.

# zeros_like
def zeros_like_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    return [topi.full_like(inputs[0], 0.0)]
Review comment (Member):

The output data type can likely be wrong here.
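A minimal sketch of one way to make the dtype explicit, assuming output_type carries the checked dtype (as Relay's inferred TensorType does); switching from topi.full_like to topi.full is an illustrative fix, not the change made in this PR:

def zeros_like_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 1
    # Take the shape from the input but the dtype from the inferred output
    # type, rather than inheriting the input's dtype via full_like.
    return [topi.full(inputs[0].shape, output_type.dtype, 0.0)]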


def less_equal_compute(attrs, inputs, output_type, target):
    assert len(inputs) == 2
    return [topi.less_equal(inputs[0], inputs[1])]

register_compute("less_equal", less_equal_compute)
Review comment (Member):

As per NNVM tradition, we should move most of the compute functions to C++ and only leave a few (e.g. conv2d) in Python.

Reply (Member, Author):

Our current plan is to do this in a second pass.

@zhiics (Member) commented Nov 2, 2018

It seems that we have several classes named Module, including the ones from runtime. I am totally okay with them, but might this be a little confusing for TVM beginners? Then again, I like the renaming from Environment to Module, as it is friendlier to those who are familiar with LLVM.

@tqchen merged commit 4e77eeb into apache:master on Nov 5, 2018
@tqchen (Member) commented Nov 5, 2018

Merging this in; I will update the follow-up #2059 to depend on this.

FrozenGene pushed a commit to FrozenGene/tvm that referenced this pull request Dec 27, 2018
wweic pushed a commit to neo-ai/tvm that referenced this pull request Feb 20, 2019
@jroesch deleted the compute-ops branch on February 4, 2021