[Unity][Op] introduce shape_to_tensor op
#14447
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot
```diff
 ctx->ReportFatal(Diagnostic::Error(call)
                  << op << " requires the input " << op->arguments[i]->name
-                 << " to be Tensor. However, the given one is "
+                 << " to be Tensor. However, the given one has a "
```
This is unrelated to this PR, but I found the current message confusing during debugging.
```python
    shape_tuple: tvm.runtime.ShapeTuple
        Shape tuple that we want to convert to NDArray at runtime
    """
    return tvm.nd.array([int(v) for v in shape_tuple])
```
Do we assume it's always on CPU?
Yes. Shape tuples and shape computation happen on the CPU side.
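As a quick illustrative check (not part of the PR), `tvm.nd.array` places the converted shape on the CPU when no device is given:

```python
import tvm

# Convert a ShapeTuple the same way as the snippet above; with no device
# argument, tvm.nd.array defaults to tvm.cpu(0).
st = tvm.runtime.ShapeTuple([2, 3, 4])
arr = tvm.nd.array([int(v) for v in st])
print(arr.device)   # cpu(0)
print(arr.numpy())  # [2 3 4]
```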
yongwww left a comment:
LGTM!
In Unity, we have a clear distinction between tensor and shape: we have `ShapeExpr` and `ShapeStructInfo` in the AST, and `ShapeTuple` in the runtime container. Meanwhile, most operators and their TOPI implementations are defined over tensors. For example, `relax.take` is defined as follows:
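A hedged sketch of that definition, assuming the Relax Python op surface (the `_ffi_api` shim and docstring here are illustrative, not verbatim TVM source):

```python
from typing import Optional

from tvm.relax import Expr
from tvm.relax.op import _ffi_api  # FFI shim into the C++ op registration


def take(x: Expr, indices: Expr, axis: Optional[int] = None) -> Expr:
    """Take elements from tensor `x` along `axis` at positions `indices`.

    Both `x` and `indices` are tensor expressions; there is no variant that
    accepts a ShapeTuple/ShapeExpr directly.
    """
    return _ffi_api.take(x, indices, axis)  # type: ignore
```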
To allow shape computation, this PR introduces a `shape_to_tensor` op that converts a `ShapeTuple` to an `NDArray` at runtime. This enables common shape computation patterns like the following:
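A minimal sketch of such a pattern (an assumed example, not the PR's original listing), using the Python `BlockBuilder` API and assuming the op is exposed as `relax.op.shape_to_tensor`:

```python
import numpy as np
import tvm
from tvm import relax

# A tensor with a symbolic (m, n) shape.
m, n = tvm.tir.Var("m", "int64"), tvm.tir.Var("n", "int64")
x = relax.Var("x", relax.TensorStructInfo([m, n], "float32"))

bb = relax.BlockBuilder()
with bb.function("shape_compute", [x]):
    shape = bb.emit(relax.op.shape_of(x))                    # shape value
    shape_tensor = bb.emit(relax.op.shape_to_tensor(shape))  # 1-D int tensor
    # Once the shape is a tensor, ordinary tensor ops such as take apply,
    # e.g. picking out the first dimension.
    indices = relax.const(np.array([0], dtype="int64"))
    first_dim = bb.emit(relax.op.take(shape_tensor, indices, axis=0))
    bb.emit_func_output(first_dim)

mod = bb.get()
```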
It's worth noting that the `tensor_to_shape` op was already introduced in #14282, so a roundtrip between shape and tensor is now possible.
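A hedged sketch of that roundtrip, assuming the counterpart is exposed as `relax.op.tensor_to_shape`:

```python
import tvm
from tvm import relax

s = relax.Var("s", relax.ShapeStructInfo([2, 3, 4]))

bb = relax.BlockBuilder()
with bb.function("roundtrip", [s]):
    as_tensor = bb.emit(relax.op.shape_to_tensor(s))     # shape -> tensor
    back = bb.emit(relax.op.tensor_to_shape(as_tensor))  # tensor -> shape
    bb.emit_func_output(back)

mod = bb.get()
```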
Currently, this op requires special handling in the `FoldConstant` pass, since that pass can only evaluate TIR `PrimFunc`s, not `PackedFunc`s. Once we extend `FoldConstant` to support `PackedFunc` evaluation, we should be able to remove this special handling.

cc @jwfromm @yongwww @psrivas2 @slyubomirsky @tqchen