Enable reuse of dummy wgrad tensor #1651
Merged
ksivaman merged 5 commits into NVIDIA:main on Apr 8, 2025
Conversation
Commits:
1. Use dummy wgrads for lower memory consumption (Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>; Vasudevan Rengasamy <vrengasamy@nvidia.com>)
2. Bug fix to avoid sharing gradients (Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>)
3. Disable automatic use of batch_p2p_comm for CP2 (Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>)
4. Change weight to origin_weight for LN_LINEAR (Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>)
5. [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci (Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>)
Member
/te-ci pytorch L0 L1
timmoon10 reviewed on Apr 7, 2025
Comment on lines 691 to 701:
```diff
 if getattr(weight, "zero_out_wgrad", False):
-    wgrad = torch.zeros(
-        weight.main_grad.shape,
-        dtype=weight.dtype,
-        device=torch.cuda.current_device(),
-        requires_grad=False,
+    wgrad = get_dummy_wgrad(
+        list(weight.main_grad.shape),
+        weight.dtype,
+        zero=True,
     )
 else:
-    wgrad = torch.empty(
-        weight.main_grad.shape,
-        dtype=weight.dtype,
-        device=torch.cuda.current_device(),
-        requires_grad=False,
+    wgrad = get_dummy_wgrad(
+        list(weight.main_grad.shape),
+        weight.dtype,
     )
```
Collaborator
We could clean this up:
Suggested change:
```diff
-if getattr(weight, "zero_out_wgrad", False):
-    wgrad = get_dummy_wgrad(
-        list(weight.main_grad.shape),
-        weight.dtype,
-        zero=True,
-    )
-else:
-    wgrad = get_dummy_wgrad(
-        list(weight.main_grad.shape),
-        weight.dtype,
-    )
+wgrad = get_dummy_wgrad(
+    list(weight.main_grad.shape),
+    weight.dtype,
+    zero=getattr(weight, "zero_out_wgrad", False),
+)
```
We could do a similar change in LayerNormLinear.
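For context on why a placeholder suffices in both modules: with fused wgrad accumulation the wgrad GEMM writes directly into weight.main_grad, so the gradient tensor returned to autograd is never read; it only needs the right shape and dtype. Below is a minimal sketch of that pattern, assuming a fuse_wgrad_accumulation flag on the autograd ctx; finalize_wgrad is a hypothetical helper for illustration, not code from this PR:

```python
import torch


def finalize_wgrad(ctx, weight: torch.nn.Parameter, grad_weight: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: choose the tensor that backward() returns as the
    # weight gradient.
    if ctx.fuse_wgrad_accumulation:  # assumed flag on the autograd ctx
        # The real gradient already lives in weight.main_grad, so the returned
        # tensor's contents are ignored; a shared dummy buffer is enough.
        # get_dummy_wgrad is the helper added by this PR.
        return get_dummy_wgrad(
            list(weight.main_grad.shape),
            weight.dtype,
            zero=getattr(weight, "zero_out_wgrad", False),
        )
    # Unfused path: return the gradient the GEMM actually computed.
    return grad_weight
```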
```diff
     return _multi_stream_cublas_workspace


+def get_dummy_wgrad(shape: list, dtype: torch.dtype, zero=False) -> torch.Tensor:
```
Collaborator
This could be simplified with lru_cache.
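A hedged sketch of that lru_cache simplification (the private _cached_dummy_wgrad helper and its zeroing behavior are assumptions for illustration, not the merged implementation; note that functools.lru_cache requires hashable arguments, so the list shape must be converted to a tuple):

```python
import functools

import torch


@functools.lru_cache(maxsize=None)
def _cached_dummy_wgrad(shape: tuple, dtype: torch.dtype) -> torch.Tensor:
    # One reusable buffer per (shape, dtype); callers never read its contents.
    # Assumes the process sticks to one CUDA device after the first call.
    return torch.empty(
        shape,
        dtype=dtype,
        device=torch.cuda.current_device(),
        requires_grad=False,
    )


def get_dummy_wgrad(shape: list, dtype: torch.dtype, zero=False) -> torch.Tensor:
    wgrad = _cached_dummy_wgrad(tuple(shape), dtype)
    if zero:
        # Preserve the zero_out_wgrad behavior even when reusing the buffer.
        wgrad.zero_()
    return wgrad
```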
```diff
     send_dst = cp_global_ranks[(rank + 1) % cp_size * cp_size_a2a + rank_a2a]
     recv_src = cp_global_ranks[(rank - 1) % cp_size * cp_size_a2a + rank_a2a]
-    batch_p2p_comm = int(os.getenv("NVTE_BATCH_MHA_P2P_COMM", "0")) or (cp_size == 2)
+    batch_p2p_comm = int(os.getenv("NVTE_BATCH_MHA_P2P_COMM", "0"))
```
Collaborator
What's the motivation for this test change? It seems orthogonal to the functional changes.
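For background, batch_p2p_comm typically selects between launching the ring exchange's send/recv as one grouped call versus independent requests; with the change above, cp_size == 2 no longer forces the batched mode. A hedged sketch of the pattern the flag toggles (function and variable names are illustrative, not the exact TransformerEngine code):

```python
import torch.distributed as dist


def ring_exchange_step(send_tensor, recv_tensor, send_dst, recv_src, batch_p2p_comm):
    # One step of the context-parallel ring: send to the next rank while
    # receiving from the previous one.
    if batch_p2p_comm:
        # Batched mode: both ops are issued through a single grouped call.
        ops = [
            dist.P2POp(dist.isend, send_tensor, send_dst),
            dist.P2POp(dist.irecv, recv_tensor, recv_src),
        ]
        reqs = dist.batch_isend_irecv(ops)
    else:
        # Unbatched mode: independent isend/irecv requests.
        reqs = [
            dist.isend(send_tensor, send_dst),
            dist.irecv(recv_tensor, recv_src),
        ]
    for req in reqs:
        req.wait()
```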
timmoon10 approved these changes on Apr 7, 2025
wdykas pushed a commit to wdykas/TransformerEngine that referenced this pull request on Apr 14, 2025:
* Use dummy wgrads for lower memory consumption
  Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Bug fix to avoid sharing gradients.
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Disable automatic use of batch_p2p_comm for CP2
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Change weight to origin_weight for LN_LINEAR
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Signed-off-by: Peter Dykas <wdykas@nvidia.com>
ptrendx pushed a commit that referenced this pull request on May 1, 2025:
* Use dummy wgrads for lower memory consumption
  Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Bug fix to avoid sharing gradients.
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Disable automatic use of batch_p2p_comm for CP2
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* Change weight to origin_weight for LN_LINEAR
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  For more information, see https://pre-commit.ci
  Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Signed-off-by: Vasudevan Rengasamy <vrengasamy@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Description
Reuse a single cached dummy wgrad tensor, obtained through a new get_dummy_wgrad() helper, instead of allocating a fresh torch.zeros/torch.empty buffer on every backward call, lowering memory consumption.

Changes
* Use dummy wgrads for lower memory consumption
* Bug fix to avoid sharing gradients
* Disable automatic use of batch_p2p_comm when cp_size == 2
* Change weight to origin_weight for LayerNormLinear