Support scaled optimizer state in distributed Adam optimizer#1771

Merged
crcrpar merged 15 commits into NVIDIA:master from timmoon10:distopt-scaled-state
Feb 8, 2024
Conversation

@timmoon10
Contributor

This PR adds basic support for scaled optimizer state, as discussed in the MS-AMP paper. The idea is that per-tensor scaling factors combined with FP16/FP8 optimizer state result in lower memory usage than FP32 optimizer state, with no degradation in convergence. This implementation is not quite the same as the MS-AMP FP8 optimizer: it uses only FP16 optimizer state, and it applies per-parameter-fragment scaling factors rather than per-parameter ones. It is a preliminary implementation, and its performance could be improved with custom kernels (e.g. a kernel to compute scaling factors, or a fused kernel combining the FP16-FP32 casts with the Adam step).
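As a rough illustration of the idea (not the PR's actual code), the sketch below quantizes an FP32 optimizer-state fragment to FP16 with a per-fragment scaling factor and dequantizes it for the Adam step. The function names `quantize_fragment` and `dequantize_fragment` are hypothetical; the headroom factor is an assumption to avoid FP16 overflow from rounding.

```python
# Hypothetical sketch of per-fragment scaled optimizer state.
# Moments are stored in FP16 together with a per-fragment scale chosen
# so the fragment's dynamic range fits in FP16; the FP32 values are
# recovered by dividing the scale back out before the Adam update.
import torch

FP16_MAX = torch.finfo(torch.float16).max  # 65504.0


def quantize_fragment(frag_fp32: torch.Tensor):
    """Scale an FP32 state fragment into FP16 range and cast to FP16."""
    amax = frag_fp32.abs().max()
    # Factor of 2 headroom (an assumption) so rounding cannot overflow FP16.
    scale = FP16_MAX / (2 * amax) if amax > 0 else torch.tensor(1.0)
    return (frag_fp32 * scale).to(torch.float16), scale


def dequantize_fragment(frag_fp16: torch.Tensor, scale: torch.Tensor):
    """Recover an FP32 fragment for use in the optimizer step."""
    return frag_fp16.to(torch.float32) / scale


# Round trip: the stored state uses half the memory of FP32,
# at the cost of FP16's ~2^-11 relative precision.
state = torch.randn(1024) * 1e-3
q, s = quantize_fragment(state)
recovered = dequantize_fragment(q, s)
```

Storing one scale per parameter fragment (rather than per parameter, as in MS-AMP) matches how the distributed Adam optimizer already shards parameters across ranks, so each rank can compute its scales locally.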

In the process of debugging, I've also made some other performance optimizations and bugfixes.

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Call _check_params_shard_dtypes within _local_step. Fuse scaling factor computation.

Shows up in PyTorch builds starting 20240118.

