
[autoparallel] Patch meta information of torch.nn.Embedding#2760

Merged
YuliangLiu0306 merged 60 commits into hpcaitech:main from
Cypher30:feature/embedding_metainfo
Feb 17, 2023

Conversation

@Cypher30
Contributor

@Cypher30 Cypher30 commented Feb 16, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234
Resolved #2735

📝 What does this PR do?


In this PR, I patch the meta information of torch.nn.Embedding and fix some small errors introduced by several earlier meta-information patch PRs.

NOTE: the temporary memory used during the backward pass of torch.nn.Embedding behaves inconsistently (sometimes no temporary memory is allocated at all), so for now I simply set it to zero; in NLP tasks the temporary memory cost is significantly smaller than the gradient memory cost.
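To illustrate the accounting described above, here is a minimal sketch of how the memory counts for an Embedding lookup might be tallied, with the backward temporary memory pinned to zero as in this PR. The function name and returned keys are purely illustrative and do not reflect Colossal-AI's actual meta-registry API.

```python
# Hypothetical sketch: per-element memory counts for a torch.nn.Embedding
# lookup of `num_indices` tokens. Names are illustrative only.

def embedding_mem_counts(num_embeddings: int, embedding_dim: int,
                         num_indices: int) -> dict:
    """Return element counts for one Embedding forward/backward."""
    weight = num_embeddings * embedding_dim   # parameter table
    output = num_indices * embedding_dim      # forward activation
    grad_weight = weight                      # dense weight gradient
    temp_bwd = 0                              # set to zero per the note above
    return {
        "fwd_out": output,
        "param": weight,
        "bwd_grad": grad_weight,
        "bwd_temp": temp_bwd,
    }
```

As the sketch makes explicit, the gradient buffer matches the full weight table, which is why it dominates the (here zeroed-out) temporary backward memory in typical NLP workloads.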

The tests are skipped when running on torch 1.11.0, so I attach the test results here:
Screenshot 2023-02-16 at 17 12 30

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

Cypher30 and others added 30 commits July 14, 2022 16:07
@Cypher30 Cypher30 added Run Build and Test auto-parallel related to the auto-parallel feature labels Feb 16, 2023
@github-actions
Contributor

The code coverage for the changed files is 39%.

Click me to view the complete report
Name                                                                                   Stmts   Miss  Cover
----------------------------------------------------------------------------------------------------------
colossalai/auto_parallel/meta_profiler/meta_registry/__init__.py                           7      0   100%
colossalai/auto_parallel/meta_profiler/meta_registry/activation.py                        42     30    29%
colossalai/auto_parallel/meta_profiler/meta_registry/embedding.py                         23     14    39%
tests/test_auto_parallel/test_tensor_shard/test_metainfo/test_activation_metainfo.py      63     40    37%
tests/test_auto_parallel/test_tensor_shard/test_metainfo/test_embedding_metainfo.py       45     26    42%
----------------------------------------------------------------------------------------------------------
TOTAL                                                                                    180    110    39%


@github-actions
Contributor

The code coverage for the changed files is 47%.

Click me to view the complete report
Name                                                                                  Stmts   Miss  Cover
---------------------------------------------------------------------------------------------------------
colossalai/auto_parallel/meta_profiler/meta_registry/__init__.py                          7      0   100%
colossalai/auto_parallel/meta_profiler/meta_registry/embedding.py                        23     14    39%
tests/test_auto_parallel/test_tensor_shard/test_metainfo/test_embedding_metainfo.py      45     26    42%
---------------------------------------------------------------------------------------------------------
TOTAL                                                                                    75     40    47%

@YuliangLiu0306 YuliangLiu0306 merged commit a2b43e3 into hpcaitech:main Feb 17, 2023

Labels

auto-parallel related to the auto-parallel feature


Development

Successfully merging this pull request may close these issues.

[FEATURE]: Patch meta information of torch.nn.Embedding

2 participants