
Fix stale parameter index map in DistributedOptimizer#2872

Open
ahmadki wants to merge 2 commits into NVIDIA:main from ahmadki:ahmadki/issue_2777

Conversation


@ahmadki ahmadki commented Jan 8, 2026

What does this PR do ?

During mixed-precision training, model_param_group_index_map becomes stale after parameters are reordered by dtype. The map is initialized based on original parameter order, but _build_model_and_main_param_groups reorders parameters (FP32 first, then FP16/BF16) for checkpoint consistency. The map was never updated to reflect this reordering. This caused get_parameter_state_dp_zero to access wrong parameters when saving checkpoints, resulting in size mismatch errors.
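The staleness described above can be illustrated with a small, torch-free sketch (stand-in parameter objects, not Megatron code): an index map keyed on the original parameter order disagrees with positions after a dtype-based reordering, until it is rebuilt.

```python
class FakeParam:
    """Stand-in for a model parameter; type() mimics torch.Tensor.type()."""
    def __init__(self, name, dtype):
        self.name, self.dtype = name, dtype
    def type(self):
        return self.dtype

params = [
    FakeParam("w0", "torch.cuda.HalfTensor"),
    FakeParam("w1", "torch.cuda.FloatTensor"),
    FakeParam("w2", "torch.cuda.BFloat16Tensor"),
]

# Map built from the ORIGINAL parameter order.
index_map = {p: i for i, p in enumerate(params)}

# Reordering analogous to _build_model_and_main_param_groups: FP32 first,
# then FP16/BF16, each class keeping its relative order.
reordered = [p for p in params if p.type() == "torch.cuda.FloatTensor"] + \
            [p for p in params if p.type() != "torch.cuda.FloatTensor"]

# The stale map now points at the wrong slots: w1 moved to position 0,
# but the map still records its old position 1.
assert index_map[reordered[0]] == 1

# Rebuild after reordering, as this PR does, to restore consistency.
index_map = {p: i for i, p in enumerate(reordered)}
assert [index_map[p] for p in reordered] == [0, 1, 2]
```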

Solution: Rebuild model_param_group_index_map after parameter reordering to keep it synchronized with optimizer.param_groups.

Fixes #2777

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @megatron-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review may be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@ahmadki ahmadki requested review from a team as code owners January 8, 2026 15:38
@copy-pr-bot

copy-pr-bot bot commented Jan 8, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


Phlip79 commented Jan 12, 2026

/ok to test bfe8a04

@Phlip79 Phlip79 left a comment

Thanks for the comments and tests!

@Phlip79 Phlip79 added Final Review Apply this label to indicate that your PR is ready for final review. complexity: medium labels Jan 12, 2026
Parameter shards should never participate in autograd, inconsistent with FP16/BF16
@ahmadki ahmadki force-pushed the ahmadki/issue_2777 branch from bfe8a04 to 537fa1a on January 13, 2026 16:40
Comment on lines +586 to +625
(self.model_param_group_index_map, self.opt_group_ranges) = (
    self._build_optimizer_group_ranges(self.optimizer.param_groups, self.gbuf_ranges)
)

# Allocate main param shards.
(
    self.model_float16_groups,
    self.model_fp32_groups,
    self.shard_float16_groups,
    self.shard_fp32_groups,
    self.shard_fp32_from_float16_groups,
) = self._build_model_and_main_param_groups(
    self.gbuf_ranges, self.model_param_gbuf_map, self.opt_group_ranges, config
)

if isinstance(self.optimizer, HybridDeviceOptimizer):
    self.optimizer = HybridDeviceOptimizer(
        params=[g["orig_group"] for g in self.opt_group_ranges], **self.optimizer.defaults
    )
else:
    self.optimizer.param_groups = [g["orig_group"] for g in self.opt_group_ranges]
    self.optimizer.load_state_dict(self.optimizer.state_dict())

# Rebuild model_param_group_index_map to reflect parameter reordering.
# The _build_model_and_main_param_groups method reorders parameters by dtype
# (FP32 first, then FP16/BF16), so we need to update the mapping to match
# the new positions in optimizer.param_groups.
for group_index, group_range in enumerate(self.opt_group_ranges):
    param_order = 0
    # First, add FP32 params (in the same order as they appear in group_range["params"])
    for model_param in group_range["params"]:
        if model_param.type() == 'torch.cuda.FloatTensor':
            self.model_param_group_index_map[model_param] = (group_index, param_order)
            param_order += 1
    # Then, add FP16/BF16 params (in the same order as they appear in group_range["params"])
    for model_param in group_range["params"]:
        if model_param.type() in ['torch.cuda.HalfTensor', 'torch.cuda.BFloat16Tensor']:
            self.model_param_group_index_map[model_param] = (group_index, param_order)
            param_order += 1
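As an aside, the two-pass rebuild in the diff could be condensed into a single stable sort, since Python's sort preserves relative order within each dtype class. This is a hypothetical alternative sketch (`rebuild_index_map` and the `_P` stand-in are not Megatron code), which also ignores that the original restricts the second pass to FP16/BF16 tensor types:

```python
FP32 = 'torch.cuda.FloatTensor'

def rebuild_index_map(opt_group_ranges):
    """Hypothetical condensed rebuild: a stable sort with a boolean key
    places FP32 params first (key False) and FP16/BF16 after (key True),
    matching the two explicit passes in the PR."""
    index_map = {}
    for group_index, group_range in enumerate(opt_group_ranges):
        ordered = sorted(group_range["params"], key=lambda p: p.type() != FP32)
        for param_order, model_param in enumerate(ordered):
            index_map[model_param] = (group_index, param_order)
    return index_map

class _P:
    """Torch-free stand-in whose type() mimics torch.Tensor.type()."""
    def __init__(self, dtype):
        self._t = dtype
    def type(self):
        return self._t

groups = [{"params": [_P('torch.cuda.HalfTensor'),
                      _P(FP32),
                      _P('torch.cuda.BFloat16Tensor')]}]
m = rebuild_index_map(groups)
# The FP32 param lands at position 0 within its group.
assert m[groups[0]["params"][1]] == (0, 0)
```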

A contributor left a comment

Wouldn't it be possible to directly build model_param_group_index_map in the desired order in _build_optimizer_group_ranges?

@asolergi-nv

/ok to test 537fa1a

# (FP32 first, then FP16/BF16), so we need to update the mapping to match
# the new positions in optimizer.param_groups.
for group_index, group_range in enumerate(self.opt_group_ranges):
    param_order = 0
A contributor left a comment
Call this param_idx?

    # First, add FP32 params (in the same order as they appear in group_range["params"])
    for model_param in group_range["params"]:
        if model_param.type() == 'torch.cuda.FloatTensor':
            self.model_param_group_index_map[model_param] = (group_index, param_order)
A contributor left a comment
Does this have repercussions on DDP's param_index_map too?


Labels

complexity: medium Final Review Apply this label to indicate that your PR is ready for final review.


Development

Successfully merging this pull request may close these issues.

[BUG] DistributedOptimizer issues in Mixed-Precision: Stale model_param_group_index_map and .copy_ failure during get/load parameter_state

5 participants