Conversation
Signed-off-by: zjgemi <liuxin_zijian@163.com>
Walkthrough

The recent updates to the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Workflow
    participant Model
    User->>Workflow: Start Training
    Workflow->>Model: Initialize Model
    alt Finetuning Enabled
        Model->>Workflow: Check finetune parameters
        Workflow->>Model: Set to finetune mode
    end
    Workflow->>Model: Execute Training
    Model->>Workflow: Training Results
    Workflow->>User: Return Results
```
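The diagram's branch maps to a simple flag decision. Below is a minimal Python sketch of that control flow; the function and its arguments are hypothetical stand-ins, not dpgen2 code — only the flag names `do_finetune` and `finetune_mode` come from this PR:

```python
def choose_finetune_mode(do_finetune: bool, iteration: int) -> str:
    """Sketch of the branch in the diagram: finetuning applies only to
    the first iteration; later iterations train from previous models."""
    if do_finetune and iteration == 0:
        return "finetune"  # finetune the pretrained model in the train step
    return "no"            # regular training / init-model path

# The first iteration finetunes; every later one falls back to "no".
assert choose_finetune_mode(True, 0) == "finetune"
assert choose_finetune_mode(True, 1) == "no"
assert choose_finetune_mode(False, 0) == "no"
```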
Recent review details

Configuration used: CodeRabbit UI

Files selected for processing (2)
Files skipped from review as they are similar to previous changes (2)
for more information, see https://pre-commit.ci
```diff
         },
     )
-    return dpgen_step, finetune_step
+    return dpgen_step
```
**Fix the return type of `workflow_concurrent_learning`.**

The return type should be `Step` instead of `Tuple[Step, Optional[Step]]` to match the updated logic.

```diff
-    ) -> Tuple[Step, Optional[Step]]:
+    ) -> Step:
```
Tools

GitHub Check: pyright

```
[failure] 603-603:
Expression of type "Step" cannot be assigned to return type "Tuple[Step, Step | None]"
  "Step" is incompatible with "Tuple[Step, Step | None]" (reportGeneralTypeIssues)
```
Actionable comments posted: 1
Outside diff range, codebase verification and nitpick comments (1)
dpgen2/entrypoint/args.py (1)
377-382: **Clarify the documentation string for `do_finetune`.**

The documentation string for `do_finetune` is clear but can be further simplified for better readability.

```diff
- "Finetune the pretrained model during the first iteration. If it is set to True, then in the first iteration, "
- 'the internal flag finetune_mode is set to "finetune". In this step, we finetune the pretrained model in the '
- 'train step. After that, in the following iterations, init_model_policy is forced to be "yes", the flag '
- 'finetune_mode is set as "no", which means we use `--init-frz-model` or `--init-model` to train based on '
- 'models from the previous iteration. The "do_finetune" flag is set to False by default, while the internal '
- 'flag finetune_mode is set to "no", which means anything related to finetuning will not be done.'
+ "Finetune the pretrained model during the first iteration. If set to True, the internal flag `finetune_mode` "
+ 'is set to "finetune" during the first iteration to finetune the pretrained model. In subsequent iterations, '
+ 'the `init_model_policy` is forced to "yes" and `finetune_mode` is set to "no", using `--init-frz-model` or '
+ '`--init-model` to train based on models from the previous iteration. By default, `do_finetune` is False, '
+ 'and `finetune_mode` is "no", meaning finetuning is not performed.'
```
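As a reading aid, here is a hedged sketch of the semantics this docstring describes — plain Python, not the actual dpgen2 implementation; only the flag names follow the docstring:

```python
def resolve_train_flags(do_finetune: bool, iteration: int,
                        init_model_policy: str) -> dict:
    """Mirror the documented behavior of do_finetune across iterations."""
    if do_finetune and iteration == 0:
        # First iteration: finetune the pretrained model in the train step.
        return {"finetune_mode": "finetune",
                "init_model_policy": init_model_policy}
    if do_finetune:
        # Later iterations: init_model_policy is forced to "yes" and
        # training starts from the previous iteration's models
        # (e.g. via --init-frz-model / --init-model).
        return {"finetune_mode": "no", "init_model_policy": "yes"}
    # do_finetune is False: nothing related to finetuning is done.
    return {"finetune_mode": "no", "init_model_policy": init_model_policy}

assert resolve_train_flags(True, 0, "no")["finetune_mode"] == "finetune"
assert resolve_train_flags(True, 2, "no")["init_model_policy"] == "yes"
```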
```diff
         new_template_script = step.outputs.parameters["template_script"].value
-        expected_list = [{"foo": "bar"} for i in range(self.numb_models)]
-        assert new_template_script == expected_list
+        assert new_template_script == self.template_script
```
**Remove unnecessary assertion.**

The assertion `assert new_template_script == self.template_script` is redundant as it does not provide additional value.

```diff
-        assert new_template_script == self.template_script
```
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master     #247      +/-   ##
==========================================
- Coverage   83.84%   83.75%   -0.10%
==========================================
  Files          97       97
  Lines        5368     5318      -50
==========================================
- Hits         4501     4454      -47
+ Misses        867      864       -3
```

☔ View full report in Codecov by Sentry.
dpgen2/op/run_dp_train.py (Outdated)
```diff
-        elif (
-            do_init_model or finetune_mode == "train-init"
-        ) and not init_model_with_finetune:
+        elif do_init_model and not init_model_with_finetune:
```
It seems that `_make_train_command_old` is not used anymore; we may simply remove the function.
`_make_train_command_old` has been removed.
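For orientation, a minimal sketch of where the simplified condition sits in command construction — hypothetical and heavily trimmed; the real `_make_train_command` in dpgen2 takes more arguments, and the file names here are made up:

```python
def make_train_command(do_init_model: bool,
                       init_model_with_finetune: bool,
                       finetune_mode: str) -> list:
    """Sketch: the 'train-init' check is gone because finetuning no
    longer routes through the init-model branch."""
    cmd = ["dp", "train", "input.json"]
    if finetune_mode == "finetune":
        cmd += ["--finetune", "pretrained.pb"]
    elif do_init_model and not init_model_with_finetune:
        # The simplified condition from the diff above.
        cmd += ["--init-frz-model", "init.pb"]
    return cmd

assert "--init-frz-model" in make_train_command(True, False, "no")
assert "--finetune" in make_train_command(False, False, "finetune")
```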
Summary by CodeRabbit

- New Features
- Bug Fixes
- Refactor: … `PrepRunDPTrain` class.
- Chores