# [Relax][PyTorch] Enable decomposition for unary ops and refactor tests #18401
## Conversation
**Summary of Changes**

Hello @tlopex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the PyTorch Relax frontend by formalizing and testing the decomposition of various unary operators. The changes ensure that these operations are correctly translated into a sequence of more fundamental Relax primitives, which is important for optimization and broader compatibility. The update both modifies existing test configurations and introduces new, detailed test cases to validate the decomposition logic for several key unary functions.
**Code Review**
This pull request enables decomposition for unary op tests and refactors some of them into separate tests. The changes look good overall, but I found an issue in the selu decomposition test where the scale factor is missing.
```python
with R.dataflow():
    lv: R.Tensor((1, 3, 10, 10), dtype="float32") = R.exp(input)
    lv1: R.Tensor((1, 3, 10, 10), dtype="float32") = R.subtract(
        R.const(1.0, "float32"), lv
    )
    lv2: R.Tensor((1, 3, 10, 10), dtype="float32") = R.nn.relu(lv1)
    lv3: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(
        R.const(-1.6732631921768188, "float32"), lv2
    )
    lv4: R.Tensor((1, 3, 10, 10), dtype="float32") = R.nn.relu(input)
    lv5: R.Tensor((1, 3, 10, 10), dtype="float32") = R.add(lv3, lv4)
    gv: R.Tuple(R.Tensor((1, 3, 10, 10), dtype="float32")) = (lv5,)
    R.output(gv)
return gv
```
The decomposition of selu seems to be missing the scale factor. According to the PyTorch documentation, selu(x) = scale * (max(0,x) + min(0, alpha * (exp(x) - 1))), which is equivalent to scale * elu(x, alpha). The current implementation only computes elu(x, alpha) but omits the final multiplication by scale (1.0507009873554805).
Also, the value of alpha used here (1.67326319...) is slightly different from the one specified in the documentation (1.67326324...), which might be due to floating-point precision differences, but the missing scale factor is a more significant issue.
Suggested change:

```python
with R.dataflow():
    lv: R.Tensor((1, 3, 10, 10), dtype="float32") = R.exp(input)
    lv1: R.Tensor((1, 3, 10, 10), dtype="float32") = R.subtract(
        R.const(1.0, "float32"), lv
    )
    lv2: R.Tensor((1, 3, 10, 10), dtype="float32") = R.nn.relu(lv1)
    lv3: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(
        R.const(-1.6732632423543772, "float32"), lv2
    )
    lv4: R.Tensor((1, 3, 10, 10), dtype="float32") = R.nn.relu(input)
    lv5: R.Tensor((1, 3, 10, 10), dtype="float32") = R.add(lv3, lv4)
    lv6: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(
        lv5, R.const(1.0507009873554805, "float32")
    )
    gv: R.Tuple(R.Tensor((1, 3, 10, 10), dtype="float32")) = (lv6,)
    R.output(gv)
return gv
```
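As a quick numeric sanity check of the formula above (a PyTorch sketch, not part of the PR), the decomposed expression equals `elu(x, alpha)`, and multiplying by `scale` recovers `selu(x)`:

```python
# Sketch: the decomposed form max(0, x) - alpha * max(0, 1 - exp(x))
# equals elu(x, alpha); scaling it recovers selu(x).
import torch

alpha = 1.6732632423543772
scale = 1.0507009873554805
x = torch.randn(1, 3, 10, 10)

decomposed = -alpha * torch.relu(1.0 - torch.exp(x)) + torch.relu(x)
assert torch.allclose(decomposed, torch.nn.functional.elu(x, alpha=alpha), atol=1e-6)
assert torch.allclose(scale * decomposed, torch.nn.functional.selu(x), atol=1e-6)
```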
cc @mshr-h |
## Related Issue
- #18401

## Why
- When `run_ep_decomposition=True` is enabled, PyTorch decomposes pad operators into lower-level operations (illustrated in the sketch below):
  - Constant mode → `constant_pad_nd.default`
  - Reflect/Replicate modes → `index.Tensor` with `None` indices
  - Circular mode → `copy.default` and `slice` operations
- Some of the decomposed operators were not supported, causing failures

## How
- Added support for the `constant_pad_nd.default` and `copy.default` operators
- Fixed `_index_tensor` to handle `None` indices by:
  - Using the `take` operation when only one dimension is indexed (an optimization)
  - Converting `None` to an explicit `arange` in the general case
- Updated `test_pad` to use `run_ep_decomposition=True`
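For reference, the decompositions listed above can be observed directly with `torch.export`; this is an illustrative sketch (the exact aten ops printed may vary across PyTorch versions):

```python
# Sketch: inspect how torch.export decomposes F.pad in constant mode.
# Per the list above, constant mode should lower to constant_pad_nd.default;
# reflect/replicate lower to index.Tensor, circular to copy/slice ops.
import torch
from torch.export import export

class PadModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.pad(x, (1, 1, 1, 1), mode="constant", value=0.0)

ep = export(PadModel(), (torch.randn(1, 3, 10, 10),)).run_decompositions()
print(ep.graph_module.graph)  # look for aten.constant_pad_nd.default
```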
## Related Issue
- #18401

## Why
- When `run_ep_decomposition=True` is enabled, PyTorch decomposes binary operators into lower-level operations, some of which were not supported, causing errors

## How
- Added support for `bitwise_and.Tensor`, `bitwise_and.Scalar`, `bitwise_xor.Tensor`, and `bitwise_xor.Scalar` (see the sketch below)
- Updated `test_binary` to use `run_ep_decomposition=True`
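As a hedged illustration (not code from the PR), tensor/tensor and tensor/scalar uses of these ops should map to the `.Tensor` and `.Scalar` overloads named above in the exported graph:

```python
# Sketch: integer bitwise ops whose exported graph is expected to use
# the bitwise_and.Tensor and bitwise_xor.Scalar overloads.
import torch
from torch.export import export

class BitwiseModel(torch.nn.Module):
    def forward(self, a, b):
        return (a & b) ^ 1  # tensor & tensor, then tensor ^ scalar

args = (torch.randint(0, 8, (4,)), torch.randint(0, 8, (4,)))
ep = export(BitwiseModel(), args).run_decompositions()
print(ep.graph_module.graph)
```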
## Related Issue
- #18401

## How
- Refactored `_index_tensor` to handle broadcasting between index tensors (see the indexing example below)
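For context, this is the PyTorch advanced-indexing behavior `_index_tensor` has to reproduce (an illustrative example, not test code from the PR):

```python
# Sketch: index tensors broadcast against each other, and the remaining
# (unindexed) dimensions are appended to the result shape.
import torch

x = torch.randn(4, 5, 6)
rows = torch.tensor([[0], [2]])   # shape (2, 1)
cols = torch.tensor([[1, 3, 4]])  # shape (1, 3)
y = x[rows, cols]                 # indices broadcast to (2, 3)
assert y.shape == (2, 3, 6)
```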
This PR enables the decomposition flag for all unary op tests and refactors some of the decomposed tests.
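For readers unfamiliar with the flag, here is a minimal sketch of what enabling decomposition amounts to. It uses only public `torch.export` APIs; that the frontend's `run_ep_decomposition=True` option performs roughly this step before translation is an assumption based on the descriptions above:

```python
# Minimal sketch (assumption: run_ep_decomposition=True roughly corresponds
# to running ExportedProgram.run_decompositions() before translation).
import torch
from torch.export import export
from tvm.relax.frontend.torch import from_exported_program

class SeluModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.selu(x)

ep = export(SeluModel(), (torch.randn(1, 3, 10, 10),))
ep = ep.run_decompositions()     # selu lowers to exp/relu/mul/add primitives
mod = from_exported_program(ep)  # translate the decomposed graph to Relax
mod.show()
```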