Review updated until commit 89f3173
!test
Referenced lines:

    NVFUSER_DECLARE_CLONE_AND_CREATE
    static ForLoop* createFromIterDomain(Val* index, IterDomain* iter_domain);
I'm on the fence about this. The method is coupled with the ForLoop class, so I moved it here to save some typing. The downside is weaker access control, because createFromIterDomain can now access private fields/methods of ForLoop.
!test

!test
nsarka left a comment:
LGTM, I just had a minor question
Referenced lines:

    std::vector<Val*> cloned_outs = ir_cloner.clone(group.outputs());
    // All expressions in the group are expected to be stream parallelized in
Do we enforce this constraint? If so, is there an assertion somewhere?
We don't, but we should. I'm waiting for an isResharding-like method to do that easily.
Referenced lines:

    // Finds the stream IterDomain in the outputs of a segment.
    IterDomain* findStreamIterDomain(const std::vector<Val*>& outs) {
      for (auto* out : ir_utils::filterByType<TensorView>(outs)) {
So we are finding the stream ID in any of the outputs of a segment? Why not use the above variation directly with any of the segment outputs, as they must have mapped stream IDs?
Because I'm not sure about CPU-scalar TensorViews from composite ops. But I should probably harden the check to enforce that every TensorView has a Stream IterDomain. Wdyt?
In their blackbox state, it does not look like we can currently support SDPA ops, for example. So adding an assert makes sense to signal something is wrong. I guess this is something I need to fix in PropagateShardingsPass also.
> In their blackbox state, it does not look like we can currently support SDPA ops, for example.
Why not? At least batch and/or head can be easily parallelized on stream without changing the implementation of the SDPA op, assuming ShardByStreams are added properly, of course.
Referenced lines:

    auto* out = ops::newValLike(in, *in->getDataType())->as<TensorView>();
    TransformReplay::selfReplay(in->domain(), out->domain());
    // This is conservative and suboptimal. Consider reusing the algorithm in
No. It's one of the cases where out's contiguity ought to be different from in's due to the slicing effect.
Oh okay, got it!
So in such cases the replay may in fact overwrite a correct contiguity, as most users of selfReplay create the new TensorDomain using the ops API, which sets the contiguity correctly. This is something we should consider for #5316.
!test
For #5289