fix: Step-3.5-Flash layer_types mismatch and related recipe fixes #1916
Merged
Conversation

Contributor: /ok to test 2d72807
Author: /claude review
Author: /ok to test 2d72807

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
akoumpa
approved these changes
Apr 21, 2026
akoumpa
added a commit
that referenced
this pull request
Apr 21, 2026
fix: Step-3.5-Flash layer_types mismatch and related recipe fixes (#1916)

* fix: add tiktoken dep, patch Step-3.5-Flash layer_types mismatch, tune Qwen MoE recipes

  - Add tiktoken to base deps for Moonlight's TikToken-based remote tokenizer.
  - Retry AutoConfig.from_pretrained when upstream configs ship layer_types longer than num_hidden_layers (e.g. stepfun-ai/Step-3.5-Flash) by truncating layer_types in the raw config dict and rebuilding via the resolved config class (dynamic module or CONFIG_MAPPING).
  - Bump qwen3_moe_30b_hellaswag hf_kl_threshold 1e-3 -> 1e-2 and qwen3_moe_30b_uccl_ep ep_size 16 -> 8.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
  Signed-off-by: hemildesai <hemild@nvidia.com>

* Update uv lock

  Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>

* Apply suggestion from @claude[bot]

  Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>

---------

Signed-off-by: hemildesai <hemild@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: NeMo Bot <nemo-bot@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <153118171+akoumpa@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
hemildesai
added a commit
that referenced
this pull request
Apr 21, 2026
fix: Step-3.5-Flash layer_types mismatch and related recipe fixes (#1916)
akoumpa
added a commit
that referenced
this pull request
Apr 21, 2026
fix: Step-3.5-Flash layer_types mismatch and related recipe fixes (#1916) (#1936)
linnanwang
pushed a commit
that referenced
this pull request
Apr 24, 2026
fix: Step-3.5-Flash layer_types mismatch and related recipe fixes (#1916)
Summary
- Add `tiktoken` to base deps so Moonlight's TikToken-based remote tokenizer can load.
- Handle upstream configs where `layer_types` is longer than `num_hidden_layers` (e.g. `stepfun-ai/Step-3.5-Flash` ships 48 vs 45). `get_hf_config` now catches the validation error, truncates `layer_types` in the raw config dict, and rebuilds via the resolved config class (remote dynamic module or `CONFIG_MAPPING`).
- Bump `qwen3_moe_30b_hellaswag` `hf_kl_threshold` 1e-3 → 1e-2; `qwen3_moe_30b_uccl_ep` `ep_size` 16 → 8.

Test plan
- Unit tests in `tests/unit_tests/_transformers/test_model_init.py` cover the helper (dynamic-module path, `CONFIG_MAPPING` fallback, no-op when lengths match, unresolved class) and `get_hf_config` retry behavior (triggers the fix on validator error, reraises unrelated `ValueError`, preserves the helpful "does not recognize this architecture" message).
- Ran `_load_config_with_layer_types_fix` against `stepfun-ai/Step-3.5-Flash` — config loads with matching lengths (45/45).

🤖 Generated with Claude Code
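For illustration, the truncation step described in the summary can be sketched as below. `truncate_layer_types` is a hypothetical name (the PR's actual helper is `_load_config_with_layer_types_fix`), and this sketch operates on a plain config dict rather than wiring through `AutoConfig.from_pretrained` and the resolved config class:

```python
def truncate_layer_types(raw_config: dict) -> dict:
    """Return a copy of a raw HF config dict with layer_types truncated to
    num_hidden_layers when the upstream checkpoint ships extra entries
    (e.g. Step-3.5-Flash: 48 layer_types entries vs 45 hidden layers)."""
    fixed = dict(raw_config)  # shallow copy; leave the caller's dict intact
    layer_types = fixed.get("layer_types")
    num_layers = fixed.get("num_hidden_layers")
    if (
        isinstance(layer_types, list)
        and isinstance(num_layers, int)
        and len(layer_types) > num_layers
    ):
        # Drop the surplus tail so per-layer metadata matches the layer count.
        fixed["layer_types"] = layer_types[:num_layers]
    return fixed


# Mirrors the mismatch the PR fixes: 48 entries for a 45-layer model.
cfg = {"layer_types": ["full_attention"] * 48, "num_hidden_layers": 45}
assert len(truncate_layer_types(cfg)["layer_types"]) == 45
```

In the real fix, a dict like `fixed` would then be passed back through the resolved config class (remote dynamic module or `CONFIG_MAPPING`) so validation succeeds on the second attempt; when the lengths already match, the helper is a no-op, matching the unit-test cases listed above.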