Support scaled optimizer state in distributed Adam optimizer #1771
Merged
crcrpar merged 15 commits into NVIDIA:master (Feb 8, 2024)
Conversation
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Call _check_params_shard_dtypes within _local_step. Fuse scaling factor computation. Signed-off-by: Tim Moon <tmoon@nvidia.com>
Shows up in PyTorch builds starting 20240118. Signed-off-by: Tim Moon <tmoon@nvidia.com>
crcrpar approved these changes Feb 8, 2024
This PR adds basic support for scaled optimizer state, as discussed in the MS-AMP paper. The idea is that per-tensor scaling factors combined with FP16/FP8 optimizer state result in lower memory usage than FP32 optimizer state, with no degradation in convergence. This implementation is not quite the same as the MS-AMP FP8 optimizer, since it uses only FP16 optimizer state and per-parameter-fragment scaling factors rather than per-parameter ones. It is a preliminary implementation, and its performance could be improved with custom kernels (e.g. a kernel to compute scaling factors, or a fused kernel combining the FP16-FP32 casts with the Adam step).
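The core trick can be sketched as follows. This is a minimal illustration of storing an optimizer-state tensor in FP16 alongside a scaling factor, not the actual Apex implementation; the helper names (`scaled_cast`, `scaled_uncast`) and the headroom margin are hypothetical.

```python
import torch

FP16_MAX = torch.finfo(torch.float16).max  # 65504.0

def scaled_cast(t: torch.Tensor, margin: float = 16.0):
    """Store a tensor as FP16 plus a per-tensor scale (hypothetical helper).

    The scale maps the tensor's max magnitude near the top of the FP16
    range (with some headroom), so small-magnitude state keeps precision.
    """
    amax = t.abs().max().clamp(min=1e-12)
    scale = (FP16_MAX / margin) / amax
    return (t.float() * scale).to(torch.float16), scale

def scaled_uncast(t_fp16: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover the FP32 value from the scaled FP16 storage."""
    return t_fp16.float() / scale

# Example: an Adam first-moment shard with small-magnitude entries,
# which plain FP16 would represent poorly without scaling.
exp_avg = torch.randn(1024, dtype=torch.float32) * 1e-3
q, scale = scaled_cast(exp_avg)
roundtrip = scaled_uncast(q, scale)
err = (roundtrip - exp_avg).abs().max().item()
```

In the PR, the scaling factor is tracked per parameter fragment rather than per whole tensor, so each shard of a parameter's state gets its own scale.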
In the process of debugging, I've also made some other performance optimizations and bugfixes: