[megatron] support glm_moe_lite #7833
```diff
@@ -192,10 +192,11 @@ def test_convert_precision(hf_model, mg_model, template, torch_dtype=torch.float
     _param = next(mg_language_model.parameters())
     mg_dtype = _param.dtype
     mg_device = _param.device
-    # router to bfloat16
-    for n, m in mg_language_model.named_modules():
-        if n.endswith('router'):
-            m.to(mg_dtype)
+    if args.hf_model_type == 'minimax_m2':
+        # router to bfloat16
+        for n, m in mg_language_model.named_modules():
+            if n.endswith('router'):
+                m.to(mg_dtype)
```
Comment on lines +195 to +199

**Contributor:**

This model-specific logic for … For example, you could add a …
```python
with torch.inference_mode(), _model_cpu_forward_context(
        mg_modules, torch_dtype, 'cuda', share_embedding=share_embedding, target_device=mg_device):
    mg_logits = forward_step_helper(mg_model, mg_inputs, dtype=torch_dtype)
```
The `TODO` comment acknowledges this is a temporary modification. However, adding model-specific logic for `glm4v_moe` and `glm4_moe_lite` inside the generic `_set_mlp_state` function makes the code harder to maintain. As more models with special requirements are added, this function could become cluttered with `if`/`elif` statements.

A better approach would be to abstract this model-specific logic. You could introduce a method in the `GPTBridge` that can be overridden by model-specific bridge subclasses, or use a dispatch mechanism based on `hf_model_type`. This would improve code organization and make it easier to add or modify support for different models in the future.
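The refactor the reviewer suggests could be sketched as follows. This is only an illustration, not code from the PR: `GPTBridge` is a real class in the project, but the hook name `post_process_language_model`, the `MiniMaxM2Bridge` subclass, and the `BRIDGE_REGISTRY` dispatch table are hypothetical names introduced here to show the pattern.

```python
# Hypothetical sketch of the reviewer's suggestion: move model-specific
# post-processing out of generic code into per-model bridge subclasses,
# dispatched on hf_model_type instead of inline if/elif checks.

class GPTBridge:
    """Simplified stand-in for the generic HF <-> Megatron bridge."""

    def post_process_language_model(self, mg_language_model, mg_dtype):
        # Default: no model-specific adjustments.
        pass


class MiniMaxM2Bridge(GPTBridge):
    """Carries the minimax_m2-specific router dtype cast."""

    def post_process_language_model(self, mg_language_model, mg_dtype):
        # Cast router modules to the Megatron dtype (e.g. bfloat16),
        # mirroring the guarded loop in the diff above.
        for name, module in mg_language_model.named_modules():
            if name.endswith('router'):
                module.to(mg_dtype)


# Dispatch table: unknown model types fall back to the generic bridge.
BRIDGE_REGISTRY = {'minimax_m2': MiniMaxM2Bridge}


def get_bridge(hf_model_type: str) -> GPTBridge:
    return BRIDGE_REGISTRY.get(hf_model_type, GPTBridge)()
```

With this shape, adding support for a new model such as `glm4_moe_lite` becomes a new subclass plus a registry entry, rather than another branch inside a shared function.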