[FRONTEND][TENSORFLOW] Enhancements. #1923
Conversation
cc @masahi @sgrechanik-h @Huyuwei @FrozenGene, welcome to review.
# Infer shapes if passed explicitly
node_output = self._nodes[node.name]
if shape:
shape={} will behave the same way as shape=None. Is this intended? I'm thinking about some hypothetical corner cases, such as when there are no inputs.
It's an optional arg that overrides any other way of getting output shapes. Leaving it as {} will result in wrong shapes being inferred for a graph that has inputs.
I feel a model with no inputs is unrealistic and can be ignored here.
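For context, a minimal sketch of how the explicit shape argument is meant to be used (the placeholder name and shape below are hypothetical, and the call assumes the nnvm TensorFlow frontend this PR extends):

import nnvm

# graph_def: a tf.GraphDef loaded from a frozen .pb (loading elided).
# An explicit shape dict overrides shapes inferred from the GraphDef; passing
# an empty dict behaves like shape=None, i.e. shapes are inferred instead.
shape_dict = {'input': (1, 224, 224, 3)}  # hypothetical placeholder name/shape
sym, params = nnvm.frontend.from_tensorflow(graph_def, shape=shape_dict)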
@sgrechanik-h please have a look.
@tqchen Do you think we could accommodate them in TVM as docs & utils?
In the interest of a minimal codebase, let us keep it as a separate thing, but we can generate the proto files and download them in the testcase. Thanks @srkreddy1238
@tqchen What do you suggest?
Maybe we can build up another set of tests that runs nightly, and keep the normal fast test jobs to make sure of general coverage.
Another possible approach is to provide a conversion dry run, which converts the models but does not run them. This can considerably reduce the test time and could be placed in the CI.
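A conversion-only check could look roughly like the sketch below (the model path and shape dict are placeholders, and the frontend call assumes the nnvm TensorFlow importer; this is an illustration, not the proposed CI job):

import tensorflow as tf
import nnvm

def dry_run_convert(pb_path, shape_dict):
    # Load the frozen GraphDef and convert it, without building or running a module.
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return nnvm.frontend.from_tensorflow(graph_def, shape=shape_dict)

sym, params = dry_run_convert('mobilenet_v2_frozen.pb', {'input': (1, 224, 224, 3)})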
+1 for the nightly build. A conversion dry run is not as straightforward, since checkpoint-to-protobuf conversion needs https://github.com/tensorflow/models (to generate the initial protobuf) and freeze_graph built from the tensorflow source to embed the checkpoint into it. I may revisit simplifying/automating this later for the nightly build. For now I will leave a doc at docs/frontend/tensorflow.md.
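For reference, the checkpoint-to-frozen-protobuf step mentioned above looks roughly like the sketch below, using the TensorFlow 1.x freeze_graph tool; the paths and the output node name are placeholders:

from tensorflow.python.tools import freeze_graph

# Embed the checkpoint weights into the GraphDef to produce a single frozen .pb.
freeze_graph.freeze_graph(input_graph='model.pbtxt',
                          input_saver='',
                          input_binary=False,
                          input_checkpoint='model.ckpt',
                          output_node_names='output_node',  # placeholder name
                          restore_op_name='save/restore_all',
                          filename_tensor_name='save/Const:0',
                          output_graph='frozen_model.pb',
                          clear_devices=True,
                          initializer_nodes='')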
(force-pushed from 07df8cb to 152fb65)
@nishi-t welcome to review.
inputs = []
for i in node.input:
    if i in self._nodes:
        inputs.append(self._nodes[i])
I found a case in my current work (PR #2001) that causes a KeyError on self._nodes[i]. I think the KeyError needs to be handled, and I'd appreciate your help.
For example:
import numpy as np
import tensorflow as tf
# compare_tf_with_tvm is the helper defined in TVM's TensorFlow frontend
# test_forward.py.

def check_split_concat(ishape, **kwargs):
    inp_array = np.random.uniform(size=ishape).astype(np.float32)
    with tf.Graph().as_default():
        in1 = tf.placeholder(shape=inp_array.shape, dtype=inp_array.dtype)
        splited = tf.split(in1, **kwargs)
        tf.concat(splited, axis=1)
        compare_tf_with_tvm(inp_array, 'Placeholder:0', 'concat:0')

check_split_concat((5, 30), num_or_size_splits=[15, 15], axis=1)
In the above code, the concat node has three inputs (split, split:1, concat/axis), but split:1 is not registered in _nodes (the details of the graphdef in this case are here). Moreover, the number of these split:X inputs grows with the number of splits.
update: fixed the link to my gist
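For illustration only (as the follow-up comment notes, this was to be resolved separately from this PR), one way to resolve multi-output tensor names like split:1 is to strip the output index before the _nodes lookup; the helper below is a hypothetical sketch, not the actual frontend code:

def resolve_input(nodes, name):
    # Multi-output nodes appear in node.input as '<node>:<output_index>',
    # e.g. 'split:1', while only the base node name is registered in nodes.
    node_name, _, idx = name.partition(':')
    return nodes[node_name], int(idx) if idx else 0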
Please disregard my comment above. I understand it is a problem that should be resolved separately from this PR. Thanks ;)
* Generalize the shape with explicit argument.
* Supported entire range of mobilenet_v2 models.
* Cast op updated to latest tensorflow.
* Documentation updates.
* CheckNumerics op handling without exception.
* Test data from tensorflow official releases.
cc @nishi-t please have another look.
nishi-t left a comment:
Looks good to me
Thanks! This is now merged.
* [FRONTEND][TENSORFLOW] Enhancements.
* Generalize the shape with explicit argument.
* Supported entire range of mobilenet_v2 models.
* Cast op updated to latest tensorflow.
* Documentation updates.
* CheckNumerics op handling without exception.
* Test data from tensorflow official releases.
* CI error.
* self review
* Enhanced reshape handling.
* docs.
* tutorials
* review comments.
* review.
Thanks for contributing to TVM! Please refer to the guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from others in the community.