Conversation

@slyubomirsky
Contributor

As the title states, these are compute and schedule primitives for operators expand_dims and squeeze, with tests included. Please review (e.g., @jroesch, @MarisaKirisame) for correctness and style.

@slyubomirsky
Contributor Author

Incidentally, is there a fundamental reason that Relay's squeeze needs to have a different calling convention from NNVM's? Namely, it takes a list instead of a tuple and does not allow a single int in place of a one-element list.

@slyubomirsky
Contributor Author

slyubomirsky commented Nov 20, 2018

Also is there a reason Relay's squeeze doesn't permit passing -1 (or negative dimensions in general) while NNVM's does?

@joshpoll
Contributor

topi's squeeze doesn't handle negative dimensions. Implementing those properly will require some thought, because resolving them requires shape computation.

@joshpoll
Contributor

The single int in place of a list could be handled by the register compute function.

@slyubomirsky
Contributor Author

I see, regarding the shape-computation issue (though I am not sure I agree with the rationale, since many other Relay functions handle axis arguments the same way; we should be consistent).

I don't think the register compute function alone can or should handle the discrepancy in calling conventions, because the Relay function nn.squeeze itself documents the list calling convention -- my question is whether there is a reason for that, or whether it should be made like NNVM's.

@tqchen
Member

tqchen commented Nov 20, 2018

It seems that we will need to add negative axis support to topi to make it compatible. Note that we only need to know the dimensionality of the shape, and that is known in topi compute.

We can provide the support for passing a single int in the python wrapping, by detecting the input type.
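The type detection suggested for the Python wrapper could look roughly like the following. This is a hypothetical sketch, not the actual TVM code; the helper name `normalize_axis_arg` is illustrative:

```python
def normalize_axis_arg(axis):
    """Accept either a single int or a list/tuple of ints for `axis`.

    Hypothetical sketch of the input-type detection suggested above for
    the Python wrapper; not the actual TVM implementation.
    """
    if axis is None:
        return None  # e.g. squeeze all size-1 dimensions
    if isinstance(axis, int):
        return [axis]  # promote a bare int to a one-element list
    return list(axis)
```

With a check like this at the wrapper boundary, the rest of the pipeline can assume it always receives a list of axes.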

check_binary_op(opfunc, ref)


def test_expand_dims():
Member


Invoke this function from __main__

Contributor Author


Will do

@slyubomirsky
Contributor Author

@tqchen thanks for the explanation. If I understand correctly, couldn't we handle negative entries in squeeze_compiler without changing the topi implementation at all? Or would we prefer to avoid this sort of preprocessing before invoking topi functions?

@tqchen
Member

tqchen commented Nov 20, 2018

This is a simple feature that would make topi more general, so we could do it on the topi side. Also, given that this is already a simple function, our goal is to move most of the compute to C++.
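Since the dimensionality of the input is known at topi compute time, resolving negative axes reduces to a small normalization step. A minimal sketch, following the usual NumPy convention (the helper name `resolve_axes` and the error handling are assumptions, not the actual topi code):

```python
def resolve_axes(axes, ndim):
    """Map possibly-negative axes onto the range [0, ndim).

    Sketch only: with ndim known, -1 resolves to ndim - 1, -2 to
    ndim - 2, and so on, mirroring NumPy's negative-axis convention.
    """
    resolved = []
    for ax in axes:
        if ax < -ndim or ax >= ndim:
            raise ValueError(
                "axis %d out of range for a %d-d tensor" % (ax, ndim))
        resolved.append(ax + ndim if ax < 0 else ax)
    return resolved
```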

@joshpoll
Contributor

We should update the relation then, right?

@tqchen
Member

tqchen commented Nov 20, 2018

@joshpoll yes

# expand_dims
@register_compute("expand_dims")
def expand_dims_compiler(attrs, inputs, output_type, target):
"""Compiler for expand_dims."""
Member


Make sure expand_dims in topi supports negative indices (I think it may already).
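For reference, NumPy's expand_dims accepts negative axes, which is presumably the behavior topi would mirror. A quick analogy (this checks NumPy, not the topi code itself):

```python
import numpy as np

x = np.zeros((3, 4))
# axis=0 inserts the new dimension at the front.
assert np.expand_dims(x, axis=0).shape == (1, 3, 4)
# axis=-1 inserts the new dimension at the end.
assert np.expand_dims(x, axis=-1).shape == (3, 4, 1)
```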

# squeeze
@register_compute("squeeze")
def squeeze_compiler(attrs, inputs, output_type, target):
"""Compiler for squeeze dims."""
Member


squeeze already supports negative indices, and None I think, so you just have to redirect to the topi call.
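The semantics described here match NumPy's squeeze, which can serve as a quick sanity check (an analogy only; topi's API may differ in detail):

```python
import numpy as np

x = np.zeros((1, 3, 1, 4))
assert np.squeeze(x, axis=0).shape == (3, 1, 4)   # positive axis
assert np.squeeze(x, axis=-2).shape == (1, 3, 4)  # negative axis
assert np.squeeze(x).shape == (3, 4)              # axis=None drops all size-1 dims
```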

Member


I think we should rename this to "compute" to match the style in the rest of the system.

@tqchen tqchen merged commit 9473dca into apache:master Nov 25, 2018
@tqchen
Member

tqchen commented Nov 25, 2018

Thanks, @joshpoll @slyubomirsky @jroesch @siju-samuel , this is merged.

@tqchen
Member

tqchen commented Nov 25, 2018

see followup changes in #2163

FrozenGene pushed a commit to FrozenGene/tvm that referenced this pull request Dec 27, 2018
wweic pushed a commit to neo-ai/tvm that referenced this pull request Feb 20, 2019