
[moe] support low level zero optim#4429

Merged
ver217 merged 5 commits into hpcaitech:feature/moe from oahzxl:optim
Aug 14, 2023

Conversation

@oahzxl
Contributor

@oahzxl oahzxl commented Aug 14, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234

📝 What does this PR do?

MoE params should not be stored in working_groups because they follow a different parallel strategy, so we need to store them separately in param_groups instead of working_groups.
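The split described in this PR can be sketched as follows; the is_moe_param helper and the moe_info attribute it checks are illustrative stand-ins, not the exact ColossalAI implementation:

```python
# Sketch: keep MoE params out of the ZeRO working groups, since they follow
# an expert-parallel strategy rather than plain data parallelism.
# is_moe_param and the moe_info attribute are illustrative assumptions.

def is_moe_param(param) -> bool:
    # Assume MoE layers tag their params with a moe_info attribute.
    return getattr(param, "moe_info", None) is not None

def split_param_groups(params):
    # Partition params: non-MoE params go to the flattened/sharded
    # working group; MoE params are kept in their own group so the
    # optimizer can handle them with their own parallel strategy.
    working_group, moe_group = [], []
    for p in params:
        (moe_group if is_moe_param(p) else working_group).append(p)
    return working_group, moe_group

class _Param:
    """Minimal stand-in for a parameter in this sketch."""
    def __init__(self, moe=False):
        if moe:
            self.moe_info = {"ep_size": 2}

params = [_Param(), _Param(moe=True), _Param()]
working, moe = split_param_groups(params)
print(len(working), len(moe))  # 2 1
```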

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests.
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

@github-actions
Contributor

The code coverage for the changed files is 91%.

Complete report:
Name                                           Stmts   Miss  Cover
------------------------------------------------------------------
colossalai/zero/low_level/low_level_optim.py     329     30    91%
------------------------------------------------------------------
TOTAL                                            329     30    91%

Comment thread colossalai/zero/low_level/low_level_optim.py
Contributor

@ver217 ver217 left a comment


By the way, can you create a moe param interface like dtensor? https://github.com/hpcaitech/ColossalAI/blob/main/colossalai/tensor/d_tensor/api.py

@github-actions
Contributor

The code coverage for the changed files is 71%.

Complete report:
Name                                             Stmts   Miss  Cover
--------------------------------------------------------------------
colossalai/engine/gradient_handler/__init__.py       6      0   100%
colossalai/nn/layer/moe/experts.py                 128    104    19%
colossalai/nn/layer/moe/moe_param.py                 5      1    80%
colossalai/zero/low_level/low_level_optim.py       330     30    91%
--------------------------------------------------------------------
TOTAL                                              469    135    71%

@oahzxl
Contributor Author

oahzxl commented Aug 14, 2023

By the way, can you create a moe param interface like dtensor? https://github.com/hpcaitech/ColossalAI/blob/main/colossalai/tensor/d_tensor/api.py

done

@github-actions
Contributor

The code coverage for the changed files is 71%.

Complete report:
Name                                             Stmts   Miss  Cover
--------------------------------------------------------------------
colossalai/engine/gradient_handler/__init__.py       6      0   100%
colossalai/nn/layer/moe/experts.py                 128    104    19%
colossalai/tensor/moe_tensor/api.py                  5      1    80%
colossalai/zero/low_level/low_level_optim.py       330     30    91%
--------------------------------------------------------------------
TOTAL                                              469    135    71%

@ver217 ver217 merged commit 769fde5 into hpcaitech:feature/moe Aug 14, 2023
@oahzxl oahzxl deleted the optim branch August 16, 2023 03:45
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Sep 15, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Sep 15, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Oct 26, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor


2 participants