
Fix MoE multi-lora training#161

Merged
tastelikefeet merged 4 commits into modelscope:main from tastelikefeet:fix/0416-2
Apr 16, 2026

Conversation

@tastelikefeet
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information that belongs to this PR.

Experiment results

Paste your experiment results here (if needed).

Contributor

@gemini-code-assist Bot left a comment


Code Review

This pull request introduces support for configurable target_modules in multi-LoRA implementations for both Megatron and Transformers, defaulting to 'all-linear'. It also adds 'Qwen3.6' to the model template mapping and implements logic to preserve the 'all-linear' configuration during model saving. Feedback focuses on ensuring state consistency during the save process by using a try...finally block and restoring performance-critical checks in activate_adapter to avoid redundant calls to set_adapter.
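The two review suggestions can be illustrated with a short sketch. The names below (MultiLoraModel, peft_model, and the attributes on it) are hypothetical stand-ins and do not correspond to the actual identifiers in src/twinkle/model/multi_lora.py; the sketch only shows the try...finally pattern for preserving the 'all-linear' setting during saving and the early-return guard in activate_adapter that avoids redundant set_adapter calls.

```python
# Hypothetical sketch of the review suggestions; class, method, and
# attribute names are illustrative, not the real twinkle code.

class MultiLoraModel:
    def __init__(self, peft_model, target_modules="all-linear"):
        self.peft_model = peft_model
        # 'all-linear' is expanded to concrete module names when the
        # adapter is attached, but the original string is kept so it can
        # be written back into the config when the adapter is saved.
        self.target_modules = target_modules
        self.active_adapter = None

    def activate_adapter(self, adapter_name: str) -> None:
        # Early return avoids a redundant (and potentially expensive)
        # set_adapter call when this adapter is already active.
        if adapter_name == self.active_adapter:
            return
        self.peft_model.set_adapter(adapter_name)
        self.active_adapter = adapter_name

    def save_pretrained(self, save_dir: str) -> None:
        cfg = self.peft_model.peft_config[self.active_adapter]
        expanded = cfg.target_modules
        # Temporarily restore the user-facing 'all-linear' value so the
        # saved adapter config stays portable across model revisions.
        cfg.target_modules = self.target_modules
        try:
            self.peft_model.save_pretrained(save_dir)
        finally:
            # try...finally guarantees the in-memory config is restored
            # even if saving raises, keeping runtime state consistent.
            cfg.target_modules = expanded
```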

Comment thread src/twinkle/model/megatron/megatron.py
Comment thread src/twinkle/model/multi_lora.py Outdated
Comment thread src/twinkle/model/multi_lora.py Outdated
@tastelikefeet merged commit ccf3a06 into modelscope:main on Apr 16, 2026
1 of 3 checks passed