Conversation

@zhreshold
Member

ONNX -> Relay frontend migration.
This is co-authored by @siju-samuel, whose initial attempt was #2245.

  • Since Relay will not depend on NNVM, all NNVM ONNX modules are stripped out and moved into relay/frontend.
  • Reusable code is moved into a common module.

@siju-samuel
Member

@zhreshold I had some patches on top of this change. You can apply this:
0001-bug-fixes-and-more-testcases-added.patch.zip

@tqchen
Member

tqchen commented Dec 29, 2018

@srkreddy1238 @nishi-t @Huyuwei @hlu1 please help review this PR

Contributor

@srkreddy1238 left a comment

Initial review. Will revisit after CI passes.

'dilations': ('dilation', (0, 0)),
'pads': ('padding', (0, 0), revert_caffe2_pad),
'group': ('groups', 1)},
custom_check=dimension_constraint())(inputs, attr, params)
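For reference, each entry in the transforms table above maps an ONNX attribute name to a Relay attribute name, a default value, and an optional transform function. A minimal sketch of how such a table can be applied (an illustration only, not the actual AttrCvt implementation; apply_attr_transforms is a hypothetical helper):

def apply_attr_transforms(attrs, transforms):
    # attrs: dict of ONNX attributes.
    # transforms: {onnx_name: (relay_name, default[, transform_fn])}.
    out = {}
    for onnx_name, rule in transforms.items():
        relay_name, default = rule[0], rule[1]
        fn = rule[2] if len(rule) > 2 else (lambda x: x)
        # Use the transformed ONNX value if present, otherwise the default.
        out[relay_name] = fn(attrs[onnx_name]) if onnx_name in attrs else default
    return out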
Contributor

I think we shouldn't pass 3 inputs (when a bias is available).

Member Author

Will strip to the first 2 inputs.
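A minimal sketch of that fix, assuming the converter runs on the first two inputs and the optional bias is added afterwards (convert_with_optional_bias and convert_conv are hypothetical names; _op.nn.bias_add is Relay's bias-add operator):

from tvm.relay import op as _op

def convert_with_optional_bias(convert_conv, inputs, attr, params):
    # Run the AttrCvt-based converter on the data and weight inputs only.
    out = convert_conv(inputs[:2], attr, params)
    # If a third input (bias) is present, add it separately.
    if len(inputs) == 3:
        out = _op.nn.bias_add(out, inputs[2])
    return out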

},
disables=['output_shape'],
extras={'use_bias': len(inputs) == 3},
custom_check=dimension_constraint())(inputs, attr, params)
Contributor

Same as above.

return AttrCvt('upsampling')(inputs, attr)


class Shape(OnnxOpConverter):
Contributor

The Shape op is a workaround in NNVM. We may need to revisit it in Relay.
It can be disabled here and handled outside this PR.

Member Author

Will disable it for now.

'LRN': LRN.get_converter(opset),

# defs/reduction
#'ReduceMax': AttrCvt('max', transforms={'axes': 'axis'}),
Contributor

@nishi-t commented Jan 2, 2019

Did you forget to delete the comment line above?

Member Author

Yes, will clean up the comments.

@zhreshold
Member Author

@srkreddy1238 @nishi-t Can you guys help review again?

Member

@jroesch left a comment

Added a bunch of comments on the PR, mostly documentation issues; otherwise, if tests pass, this looks good.

return default

def get_relay_op(op_name):
"""Get the callable function from relay based on opname:
Member

Suggested change
"""Get the callable function from relay based on opname:
"""Get the callable function from Relay based on operator name.

Parameters
----------
op_name : str
The relay operator name.
Member

Suggested change
The relay operator name.
The Relay operator name.
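For context, a plausible sketch of the idea behind get_relay_op, assuming it looks the operator name up across Relay's operator namespaces (a simplification of the helper in the common module, not a verbatim copy):

from tvm.relay import op as _op

def get_relay_op(op_name):
    # Search the top-level, nn, and image namespaces for the operator.
    for namespace in (_op, _op.nn, _op.image):
        op = getattr(namespace, op_name, None)
        if op is not None:
            return op
    raise RuntimeError("Unable to map op_name {} to Relay".format(op_name))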

return out_shapes

def infer_channels(inputs, transpose=False):
"""A hack for getting 'channels' or 'units' since caffe2 don't provide
Member

Suggested change
"""A hack for getting 'channels' or 'units' since caffe2 don't provide
"""A hack for getting 'channels' or 'units' since caffe2 does not provide

@@ -0,0 +1,1077 @@
# pylint: disable=invalid-name, import-self, len-as-condition, unused-argument, too-many-lines
"""ONNX: Open Neural Network Exchange frontend for relay."""
Member

Suggested change
"""ONNX: Open Neural Network Exchange frontend for relay."""
"""ONNX: Open Neural Network Exchange frontend for Relay."""

return _impl

def revert_caffe2_pad(pads):
"""Caffe2 require two times the normal padding."""
Member

Suggested change
"""Caffe2 require two times the normal padding."""
"""Caffe2 requires two times the normal padding."""

inputs,
attrs,
opset):
"""Convert from onnx operator to relay operator.
Member

Suggested change
"""Convert from onnx operator to relay operator.
"""Convert ONNX operator into a Relay operator.

def from_onnx(model,
shape=None,
dtype="float32"):
"""Convert from ONNX"s model into compatible relay Function.
Member

Suggested change
"""Convert from ONNX"s model into compatible relay Function.
"""Convert a ONNX model into an equivalent Relay function.

shape=None,
dtype="float32"):
"""Convert from ONNX"s model into compatible relay Function.
Onnx graph is a python protobuf object. The companion parameters will be handled automatically.
Member

Suggested change
Onnx graph is a python protobuf object. The companion parameters will be handled automatically.
ONNX graphs are represented as a Python Protobuf object. The companion parameters will be handled automatically.
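For readers following along, a minimal usage sketch of the entry point under review; the file path and input name are illustrative:

import onnx
from tvm import relay

onnx_model = onnx.load('model.onnx')        # illustrative path
shape_dict = {'input_0': (1, 3, 224, 224)}  # hypothetical input name
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)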


This article is an introductory tutorial to deploy ONNX models with Relay.

For us to begin with, onnx module is required to be installed.
Member

Suggested change
For us to begin with, onnx module is required to be installed.
For us to begin with, the ONNX package must be installed.
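For reference, one common way to satisfy that prerequisite (the exact command depends on your environment):

pip install --user onnx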

with relay.build_config(opt_level=1):
graph, lib, params = relay.build(sym, target, params=params)

######################################################################
Member

Can we maybe just show how to use the create_executor interface in the tutorial as well? It's much simpler than building a graph runtime by hand.

Member Author

@jroesch Can you please advise how to elegantly evaluate an executor created with an unordered dict of weights?

exec = relay.build_module.create_executor('graph', mod, tvm.cpu(0), 'llvm')
# params is a dict of {'name': tvm.ndarray}
tvm_output = exec.evaluate(mod)(x, *params.values()).asnumpy()   # wrong order

How do I feed the executor the correct weights without changing the params dict to an OrderedDict?

Member

@zhreshold I realize the current API does not actually support keyword argument style for parameters.

I'm going to update it with a PR right now; it should work like this:

exec = relay.build_module.create_executor('graph', mod, tvm.cpu(0), 'llvm')
tvm_output = exec.evaluate(mod)(x, **params).asnumpy()

Does that seem elegant enough?

Member Author

Yes, that would perfectly solve my current problem.

Member Author

Will update once the PR gets merged.

Member

PR is merged.

@icemelon added status: need update and removed status: need review labels Jan 7, 2019
def _impl_v1(cls, inputs, attr, params):
# Result of this operator is prominently used by reshape operator.
# Just pass the input as it is so that reshape_like can be used there.
print("Shape: Differently implemented in relay as a bypass (dummy operator)")
Member

Use logging.warning instead of print.
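For example, the print call above could become (a sketch of the suggested fix):

import logging
logging.warning("Shape: Differently implemented in Relay as a bypass (dummy operator)")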

@icemelon
Member

@srkreddy1238 @jroesch @nishi-t please help review this PR again

@zhreshold
Member Author

@Huyuwei It seems there are both tests/python/frontend and tests/python/relay/frontend, and both already include the MXNet and Keras frontends. Can you clarify the relationship between them?

@Huyuwei
Contributor

Huyuwei commented Jan 10, 2019

@zhreshold tests/python/frontend is the right place to hold frontend tests; tests/python/relay/frontend is a duplicate and should be removed. In fact, tests/python/relay/frontend is not run in the CI.
See #2316 (comment)

@icemelon
Member

@Huyuwei I'll remove the duplicate tests in a separate PR.
@zhreshold Let's use tests/python/frontend for frontend tests.

@zhreshold
Member Author

OK, working on that.

@icemelon
Member

@Huyuwei @jroesch If you are both okay with this PR, I'll then merge it.

@icemelon added status: need review and removed status: need update labels Jan 11, 2019
Member

@jroesch left a comment

LGTM 👍

@icemelon merged commit 30a5a60 into apache:master Jan 11, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Feb 20, 2019