
feat: add Qwen3.5 CP support for MCore path #2312

Draft
zpqiu wants to merge 3 commits into NVIDIA-NeMo:main from zpqiu:qwen3.5_mcore_cp

Conversation

zpqiu (Contributor) commented Apr 22, 2026

What does this PR do ?

Add a one-line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
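A possible usage sketch (assumption: only `sequence_packing.delegate_pack_to_model` is named by this PR; the surrounding config keys are illustrative):

```python
# Hypothetical override sketch for running the Qwen3.5 VL MCore path with CP > 1.
# Only sequence_packing.delegate_pack_to_model is introduced by this PR; the
# exact parent keys below are assumptions, not verified NeMo-RL config paths.
overrides = {
    "policy.megatron_cfg.context_parallel_size": 2,         # CP > 1 (assumed key)
    "policy.sequence_packing.enabled": True,                 # assumed key
    "policy.sequence_packing.delegate_pack_to_model": True,  # flag added by this PR
}
```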

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests.
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

zpqiu added 3 commits April 22, 2026 08:24
- Megatron-Bridge: 95e5f38f → 53f4c398 (latest main)
- Megatron-LM:    d30c3ae5 → 546a448b (dev)

The dev-branch Megatron-LM is required for Qwen3.5 VL's GDN + context
parallelism support (NVIDIA/Megatron-LM#2642, NVIDIA/Megatron-LM#2644).
Sync the proxy setup.py CACHED_DEPENDENCIES to match the updated dev
pyproject.toml (nv-grouped-gemm, flash_mla, nvidia-resiliency-ext VCS pin,
emerging_optimizers python-version marker, etc.) and regenerate uv.lock.

Signed-off-by: Zhaopeng Qiu <alexq@nvidia.com>
Qwen3.5 VL's mbridge wrapper (Qwen3VLModel) always runs its own
preprocess_packed_seqs internally to pack + CP-shard from a [B, max_seq]
input + attention_mask. NeMo-RL's existing path pre-packs and CP-shards
before the forward, which collides with mbridge's preprocessing and
produces shape mismatches at GDN / RoPE / MoE when CP > 1.

Add a sequence_packing.delegate_pack_to_model flag. When true,
_prepare_vlm_batch_for_megatron keeps the batch in [B, max_seq] layout,
builds a bool attention_mask from the (aligned) padded lengths, and
hands the model a PackedSeqParams whose cu_seqlens_padded matches what
mbridge will derive internally. The model owns packing and CP-sharding
from there.

For the target-side path (logprob / loss post-processing), we also
produce a packed [1, T] view of input_ids; downstream code already
slices per-sequence via cu_seqlens_padded.

PP > 1 is supported by absorbing the pad_full_seq_to deficit into the
last sequence (same technique as _pack_sequences_for_megatron), so the
decoder-side packed length is constant across microbatches.
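For illustration, a minimal sketch of the cu_seqlens / cu_seqlens_padded bookkeeping described above, assuming Megatron-core's PackedSeqParams dataclass; the helper name, alignment value, and dtype choices are assumptions, not the actual NeMo-RL code:

```python
# Illustrative only: derive packed-sequence offsets from per-sequence lengths in a
# [B, max_seq] batch so they match what mbridge derives internally.
import torch
from megatron.core.packed_seq_params import PackedSeqParams

def build_packed_seq_params(seq_lens: torch.Tensor, align: int = 64) -> PackedSeqParams:
    """seq_lens: [B] unpadded lengths; align: assumed CP/TE alignment of padded lengths."""
    padded_lens = ((seq_lens + align - 1) // align) * align
    # With PP > 1, the pad_full_seq_to deficit would be absorbed into the last
    # entry of padded_lens so the packed length is constant across microbatches.
    zero = torch.zeros(1, dtype=torch.int32, device=seq_lens.device)
    cu_seqlens = torch.cat([zero, torch.cumsum(seq_lens, 0).to(torch.int32)])
    cu_seqlens_padded = torch.cat([zero, torch.cumsum(padded_lens, 0).to(torch.int32)])
    return PackedSeqParams(
        qkv_format="thd",
        cu_seqlens_q=cu_seqlens,
        cu_seqlens_kv=cu_seqlens,
        cu_seqlens_q_padded=cu_seqlens_padded,
        cu_seqlens_kv_padded=cu_seqlens_padded,
        max_seqlen_q=int(padded_lens.max()),
        max_seqlen_kv=int(padded_lens.max()),
    )
```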

Additional fixes needed for this path:
- community_import.py: set calculate_per_token_loss when CP > 1, which
  Qwen3VLModel asserts.
- setup.py: clarify the 'CP > 1 requires sequence_packing' error message
  to mention delegate_pack_to_model for VLM models.

Signed-off-by: Zhaopeng Qiu <alexq@nvidia.com>
_get_tokens_on_this_cp_rank slices a tensor with a list of slices, which
PyTorch 2.9 deprecates ('Using a non-tuple sequence for multidimensional
indexing is deprecated'). Every GDN/attention layer triggers this on the
packed-CP path, flooding the worker logs. Casting to tuple matches the
recommended API and is functionally identical.
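A minimal reproduction of the deprecation and the fix (illustrative; the real slice list is built inside _get_tokens_on_this_cp_rank):

```python
import torch

x = torch.randn(2, 8, 16)
# A per-dimension list of slices, e.g. selecting this CP rank's chunk along dim 1.
idx = [slice(None), slice(0, 4), slice(None)]

# y = x[idx]        # PyTorch 2.9 warns: "Using a non-tuple sequence for
#                   # multidimensional indexing is deprecated"
y = x[tuple(idx)]   # recommended form; the result is identical
```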

Signed-off-by: Zhaopeng Qiu <alexq@nvidia.com>
copy-pr-bot (Bot) commented Apr 22, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

github-actions (Bot) commented

✅ Submodule Fast-Forward Check Results

Check based on commit: 16ae363 (PR #2312 from qwen3.5_mcore_cp)

✅ Submodules that are properly updated:

Megatron-Bridge: ✅ PR branch is ahead of main branch (fast-forward)
Megatron-LM: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

zpqiu added the CI:L1 (Run doctests, unit tests, and functional tests) label on Apr 22, 2026
zpqiu (Contributor, Author) commented Apr 22, 2026

/ok to test 16ae363


Labels

CI:L1 (Run doctests, unit tests, and functional tests)
