preseg optimization pass - optimization for cat #2373
Merged
jjsjann123 merged 156 commits into main (Oct 19, 2024)
Conversation
jjsjann123 (Collaborator, Author) commented Jun 26, 2024

!build

!build --pybench

Don't see any odd failures in pybench. I'll merge this PR after CI clears.

!build

The failures are in distributed tests and don't seem related to this change, but I'm going to try merging main again.

!build
Adding a pre-segmentation optimization pass that optimizes cat (`PadOp` + `CatOp`) to avoid multiple kernels. The pass propagates `PadOp` further towards its producer as described below. The goal is to either have the pad op applied directly on inputs to the fusion, or to move it to a point where segmentation at the pad leaves a no-op fusion segment before the pad.

For details on the propagation logic, please see the code comments:
- Note [ PadOp Propagation Rule ]
- Note [ Handling TV with Multiple Uses via Frontier ]

After propagating `PadOp`, we also need to replace the `CatOp` with a series of binary adds, since its inputs are no longer directly produced by `PadOp`.
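The two rewrites above can be illustrated numerically. This is a minimal NumPy sketch (not the nvFuser implementation; `pad_to` is a hypothetical helper) showing (1) that cat is equivalent to zero-padding each input to the output extent and summing, which is why `CatOp` can be replaced by binary adds once the pads move away, and (2) that a zero-preserving unary op such as relu commutes with zero-padding, which is the intuition behind propagating `PadOp` toward producers:

```python
import numpy as np

def pad_to(x, out_len, offset):
    """Zero-pad 1-D x into a length-out_len buffer starting at offset."""
    out = np.zeros(out_len, dtype=x.dtype)
    out[offset:offset + len(x)] = x
    return out

# (1) cat decomposed into zero-pads followed by an elementwise add.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])
cat_ref = np.concatenate([a, b])                    # [1, 2, 3, 4, 5]
cat_via_add = pad_to(a, 5, 0) + pad_to(b, 5, 3)     # same result
assert np.array_equal(cat_ref, cat_via_add)

# (2) a zero-preserving unary op commutes with zero-padding, so the
# pad can be hoisted past it toward the producer.
relu = lambda t: np.maximum(t, 0.0)
x = np.array([-1.0, 2.0, -3.0])
assert np.array_equal(pad_to(relu(x), 5, 1), relu(pad_to(x, 5, 1)))
```

Note that (2) only holds for ops that map zero to zero (and, for zero-padding, only when the pad value is the op's fixed point), which is the kind of condition the propagation rule has to check before moving a pad past its producer.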