Conversation
!test
Review updated until commit f2ed3b9
@wujingyue do you plan on refactoring this based on yesterday's discussion?

This PR is orthogonal to that.
API-wise, it seems simpler to me to keep the knob than to infer it in the program. For example, the current changes will raise an error for cases the knob used to handle; having the knob keeps those simple. For the original case in discussion #5177 (comment):
Yes, this is intentional. All tests passing means we don't have a use case for this. (We might in the future but YAGNI.) Therefore, removing the knob is strictly simpler because the function has fewer parameters and the user has fewer chances to make mistakes.
Good ideas! Yes, I'll add these checks in this PR.
Force-pushed from 2ed8669 to 2b2bc7d.
Done
naoyam left a comment:
I'm torn on this change. On one hand, I agree that this version would be easier to use because there's no option and its functionality is sufficient. On the other hand, automatic removal of reduction IDs is not something we do commonly, so I consider it a deviation from the common behavior. Also, since this is a common building block that could be used throughout the whole system, I'd generally prefer an explicit and verbose interface over an implicit and smarter one.
The current code in fact uses ignore_reductions=true more often than false (line 247 at 697d93d).
Thanks! I hear your concerns. I'll keep them in mind and revert this change if we run into the problems you anticipated.
csrc/transform_replay.cpp (outdated):
if (logical.size() > new_logical.size()) {
  logical = TensorDomain::noReductions(logical);
  ignore_reductions = true;
} else if (logical.size() < new_logical.size()) {
  new_logical = TensorDomain::noReductions(new_logical);
  ignore_reductions = true;
} else {
  ignore_reductions = false;
}
Why not: if the sizes are the same, ignore_reductions = false; otherwise, true?
My earlier comment was intended as:
if (logical.size() != new_logical.size()) {
  logical = TensorDomain::noReductions(logical);
  new_logical = TensorDomain::noReductions(new_logical);
  ignore_reductions = true;
} else {
  ignore_reductions = false;
}
Sorry about the ambiguity. The current implementation still conditionally clears reduction axes from logical and new_logical; I was suggesting that we clear all reduction axes from both whenever the sizes are unequal.
Got it. It's related to #5221 (comment). This PR makes the contract stricter: when the sizes don't match, the extra IterDomains, and only those, are reductions.
Therefore, this PR NVF_ERRORs the following example:
self = [i0, i1, r0, r1]
new_self = [i0', i1', r2]
because the expected behavior is unclear: if r0 and r1 are loop-transformed, should we transform r2 like r0, like r1, or not transform it at all? (Recall that when the sizes do match we map and replay reductions as well.)
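To make the stricter contract concrete, here is a minimal standalone sketch. The Axis struct, the free-function noReductions, and inferIgnoreReductions are hypothetical stand-ins invented for illustration; the real logic lives in csrc/transform_replay.cpp and operates on IterDomain/TensorDomain.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for an IterDomain; only the reduction flag matters.
struct Axis {
  bool is_reduction;
};

// Mirrors TensorDomain::noReductions: keep only non-reduction axes.
std::vector<Axis> noReductions(const std::vector<Axis>& axes) {
  std::vector<Axis> out;
  for (const Axis& a : axes) {
    if (!a.is_reduction) {
      out.push_back(a);
    }
  }
  return out;
}

// Computes ignore_reductions from the two domains instead of taking it as a
// knob. Returns false where the PR would NVF_ERROR: the sizes differ, yet
// stripping reductions from the longer side does not reconcile them, i.e. the
// extra IterDomains are not the only reductions involved.
bool inferIgnoreReductions(
    std::vector<Axis> logical,
    std::vector<Axis> new_logical,
    bool& ignore_reductions) {
  if (logical.size() > new_logical.size()) {
    logical = noReductions(logical);
    ignore_reductions = true;
  } else if (logical.size() < new_logical.size()) {
    new_logical = noReductions(new_logical);
    ignore_reductions = true;
  } else {
    ignore_reductions = false;
  }
  return logical.size() == new_logical.size();
}
```

For the ambiguous example above, self = [i0, i1, r0, r1] against new_self = [i0', i1', r2], stripping reductions from the longer side leaves 2 vs. 3 IterDomains, so the check fails and the replay is rejected.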
With this change, ignore_reductions is computed instead of given as a knob.