Merged: #169 (81 commits, changes from all commits)
424629f
[shardformer/sequence parallel] Cherry pick commit to new branch (#4450)
FoolPlayer Aug 16, 2023
6ef33f7
[shardformer] support DDP in HybridPlugin/add tp+dp tests (#4446)
Aug 16, 2023
26e29d5
[devops] add large-scale distributed test marker (#4452)
ver217 Aug 16, 2023
a78daf6
[shardformer] support interleaved pipeline (#4448)
Gy-Lu Aug 16, 2023
7c8be77
[shardformer/sequence parallel] support gpt2 seq parallel with pp/dp/…
FoolPlayer Aug 18, 2023
0ecd71e
[shardformer] bloom support sequence parallel (#4465)
flybird11111 Aug 18, 2023
a27e0bb
[shardformer] bert support sequence parallel. (#4455)
flybird11111 Aug 18, 2023
8739aa7
[shardformer] Pipeline/whisper (#4456)
CjhHa1 Aug 18, 2023
1c7df56
[shardformer] support tp+zero for shardformer (#4472)
Aug 21, 2023
5545114
rename chatglm to chatglm2 (#4484)
CjhHa1 Aug 22, 2023
351351a
[shardformer/sequence parallel] not support opt of seq-parallel, add …
FoolPlayer Aug 22, 2023
59e252e
[shardformer] chatglm support sequence parallel (#4482)
flybird11111 Aug 22, 2023
e04436a
[shardformer] tests for 3d parallel (#4493)
CjhHa1 Aug 23, 2023
3353e55
[shardformer] vit/llama/t5 ignore the sequence parallelism flag and s…
flybird11111 Aug 24, 2023
de8a65b
[shardformer] opt fix. (#4514)
flybird11111 Aug 25, 2023
44eab2b
[shardformer] support sharded checkpoint IO for models of HybridParal…
Aug 25, 2023
376533a
[shardformer] zero1+pp and the corresponding tests (#4517)
CjhHa1 Aug 28, 2023
c554b7f
[shardformer/fix overlap bug] fix overlap bug, add overlap as an opti…
FoolPlayer Aug 28, 2023
0387a47
[shardformer] fix emerged bugs after updating transformers (#4526)
Aug 29, 2023
1467e3b
[coati] add chatglm model (#4539)
yingliu-hpc Aug 29, 2023
e241b74
[shardformer] Add overlap support for gpt2 (#4535)
FoolPlayer Aug 29, 2023
1c43bfd
[coati] update ci
ver217 Aug 30, 2023
661a1ef
Merge pull request #4541 from ver217/coati/chatglm
yingliu-hpc Aug 30, 2023
c648dc0
fix colossalai version in coati examples
yingliu-hpc Aug 30, 2023
d367b88
[shardformer] fix opt test hanging (#4521)
flybird11111 Aug 30, 2023
9f852f2
keep requirements same with main branch
yingliu-hpc Aug 30, 2023
ec18fc7
[shardformer] support pp+tp+zero1 tests (#4531)
flybird11111 Aug 30, 2023
2c787d7
[shardformer] fix submodule replacement bug when enabling pp (#4544)
Aug 31, 2023
c9625db
[shardformer] support sharded optimizer checkpointIO of HybridParalle…
Aug 31, 2023
38ccb8b
[shardformer] support from_pretrained when loading model with HybridP…
Sep 1, 2023
508ca36
[pipeline] 1f1b schedule receive microbatch size (#4589)
ver217 Sep 1, 2023
63ecafb
[checkpointio] optimize zero optim checkpoint io (#4591)
ver217 Sep 4, 2023
7a978eb
[DOC] hotfix/llama2news (#4595)
binmakeswell Sep 4, 2023
8d7b022
[doc] add llama2 benchmark (#4604)
binmakeswell Sep 4, 2023
aaeb520
Merge pull request #4542 from hpcaitech/chatglm
yingliu-hpc Sep 4, 2023
24c0768
[shardformer] Pytree fix (#4533)
CjhHa1 Sep 4, 2023
0a94fcd
[shardformer] update bert finetune example with HybridParallelPlugin …
flybird11111 Sep 4, 2023
e79b1e8
[checkpointio] support huggingface from_pretrained for all plugins (#…
Sep 4, 2023
a39a5c6
Merge branch 'main' into feature/shardformer
ver217 Sep 4, 2023
86d2258
[shardformer] Add overlap optional for HybridParallelPlugin (#4615)
FoolPlayer Sep 5, 2023
ec08668
[shardformer] update shardformer readme (#4617)
flybird11111 Sep 5, 2023
e71d245
[test] ignore gpt2 shardformer test (#4619)
ver217 Sep 5, 2023
807e01a
[zero] hotfix master param sync (#4618)
ver217 Sep 5, 2023
bd18678
[test] fix gemini checkpoint and gpt test (#4620)
ver217 Sep 5, 2023
89fe027
[legacy] move trainer to legacy (#4545)
ver217 Aug 31, 2023
8accecd
[legacy] move engine to legacy (#4560)
ver217 Sep 4, 2023
ac178ca
[legacy] move builder and registry to legacy (#4603)
ver217 Sep 4, 2023
fae6c92
Merge branch 'main' into feature/shardformer
ver217 Sep 5, 2023
efba0f4
Merge pull request #4612 from hpcaitech/feature/shardformer
ver217 Sep 5, 2023
9709b8f
[release] update version (#4623)
ver217 Sep 6, 2023
c3d5fa3
[shardformer] Support customized policy for llamav2 based model with …
eric8607242 Sep 7, 2023
660eed9
[pipeline] set optimizer to optional in execute_pipeline (#4630)
Sep 7, 2023
295b38f
[example] update vit example for hybrid parallel plugin (#4641)
Sep 7, 2023
a686f9d
[devops] fix concurrency group and compatibility test (#4665)
ver217 Sep 8, 2023
7486ed7
[shardformer] update llama2/opt finetune example and fix llama2 polic…
flybird11111 Sep 9, 2023
536397c
[devops] fix concurrency group (#4667)
ver217 Sep 11, 2023
554aa95
[legacy] move communication and nn to legacy and refactor logger (#4671)
ver217 Sep 11, 2023
eedaa3e
[shardformer]fix gpt2 double head (#4663)
flybird11111 Sep 11, 2023
bce0f16
[Feature] The first PR to Add TP inference engine, kv-cache manager a…
tiandiao123 Sep 11, 2023
1d45473
[doc] Update booster user documents. (#4669)
Sep 12, 2023
8844691
[shardformer] update shardformer readme (#4689)
flybird11111 Sep 12, 2023
d8ceeac
[hotfix] fix typo in hybrid parallel io (#4697)
Sep 12, 2023
9c2feb2
fix some typo with colossalai/device colossalai/tensor/ etc. (#4171)
digger-yu Sep 12, 2023
068372a
[doc] add potential solution for OOM in llama2 example (#4699)
Sep 13, 2023
c7d6975
[shardformer] fix GPT2DoubleHeadsModel (#4703)
flybird11111 Sep 13, 2023
e2c0e7f
[hotfix] Fix import error: colossal.kernel without triton installed (…
yuanheng-zhao Sep 14, 2023
20190b4
[shardformer] to fix whisper test failed due to significant accuracy …
flybird11111 Sep 14, 2023
ce97790
[doc] fix llama2 code link (#4726)
binmakeswell Sep 14, 2023
f911d5b
[doc] Add user document for Shardformer (#4702)
Sep 15, 2023
8c2dda7
[format] applied code formatting on changed files in pull request 472…
github-actions[bot] Sep 15, 2023
50e5602
[doc] add shardformer support matrix/update tensor parallel documents…
Sep 15, 2023
e4fc57c
Optimized some syntax errors in the documentation and code under appl…
digger-yu Sep 15, 2023
4616263
[shardformer] update pipeline parallel document (#4725)
flybird11111 Sep 15, 2023
cd4e61d
[legacy] remove deterministic data loader test
ppt0011 Sep 15, 2023
6a03c93
[shardformer] update seq parallel document (#4730)
FoolPlayer Sep 15, 2023
608cffa
[example] add gpt2 HybridParallelPlugin example (#4653)
FoolPlayer Sep 15, 2023
73eb3e8
Merge pull request #4738 from ppt0011/main
ppt0011 Sep 15, 2023
451c346
[doc] polish shardformer doc (#4735)
Sep 15, 2023
ac27979
[shardformer] add custom policy in hybrid parallel plugin (#4718)
oahzxl Sep 15, 2023
4c4482f
[example] llama2 add fine-tune example (#4673)
flybird11111 Sep 15, 2023
d151dca
[doc] explaination of loading large pretrained models (#4741)
Sep 15, 2023
8 changes: 4 additions & 4 deletions .github/workflows/build_on_pr.yml
@@ -61,7 +61,7 @@ jobs:
     run:
       shell: bash
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-repare-cache
       cancel-in-progress: true
     steps:
       - name: Copy testmon cache
@@ -87,7 +87,7 @@ jobs:
       anyLibraryFileChanged: ${{ steps.find-lib-change.outputs.any_changed }}
     runs-on: ubuntu-latest
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-detect-change
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v2
@@ -147,7 +147,7 @@ jobs:
     run:
       shell: bash
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-run-test
       cancel-in-progress: true
     steps:
       - name: Checkout TensorNVMe
@@ -208,7 +208,7 @@ jobs:

       - name: Execute Unit Testing
         run: |
-          CURL_CA_BUNDLE="" PYTHONPATH=$PWD pytest --testmon --testmon-cov=. --durations=10 tests/
+          CURL_CA_BUNDLE="" PYTHONPATH=$PWD pytest -m "not largedist" --testmon --testmon-forceselect --testmon-cov=. --durations=10 tests/
         env:
           DATA: /data/scratch/cifar-10
           NCCL_SHM_DISABLE: 1
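The new `-m "not largedist"` filter in the test command above relies on a custom pytest marker (added in commit 26e29d5, "[devops] add large-scale distributed test marker"). As a hedged sketch, such a marker is typically registered in the project's pytest configuration; the file below is illustrative, not taken from this PR:

```ini
; pytest.ini (illustrative sketch, not the actual repository file)
[pytest]
markers =
    largedist: marks tests that require a large-scale multi-GPU cluster
```

A test decorated with `@pytest.mark.largedist` is then deselected by `pytest -m "not largedist"`, so the ordinary PR pipeline skips tests that cannot run on a single CI node.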
7 changes: 3 additions & 4 deletions .github/workflows/compatiblity_test_on_dispatch.yml
@@ -44,7 +44,7 @@ jobs:
     name: Test for PyTorch Compatibility
     needs: matrix_preparation
     if: github.repository == 'hpcaitech/ColossalAI'
-    runs-on: [self-hosted, gpu]
+    runs-on: [self-hosted, 8-gpu]
     strategy:
       fail-fast: false
       matrix: ${{fromJson(needs.matrix_preparation.outputs.matrix)}}
@@ -64,7 +64,7 @@ jobs:
       - name: Install tensornvme
         run: |
           cd TensorNVMe
-          conda install cmake
+          apt update && apt install -y cmake
           pip install -r requirements.txt
           pip install -v .
       - uses: actions/checkout@v2
@@ -83,8 +83,7 @@ jobs:
           fi
       - name: Install Colossal-AI
         run: |
-          pip install -r requirements/requirements.txt
-          pip install -v --no-cache-dir .
+          CUDA_EXT=1 pip install -v .
           pip install -r requirements/requirements-test.txt
       - name: Unit Testing
         run: |
10 changes: 5 additions & 5 deletions .github/workflows/compatiblity_test_on_pr.yml
@@ -13,7 +13,7 @@ jobs:
     outputs:
       matrix: ${{ steps.set-matrix.outputs.matrix }}
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-prepare-matrix
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v3
@@ -35,7 +35,7 @@ jobs:
     name: Test for PyTorch Compatibility
     needs: matrix_preparation
     if: github.repository == 'hpcaitech/ColossalAI'
-    runs-on: [self-hosted, gpu]
+    runs-on: [self-hosted, 8-gpu]
     strategy:
       fail-fast: false
       matrix: ${{fromJson(needs.matrix_preparation.outputs.matrix)}}
@@ -44,7 +44,7 @@ jobs:
       options: --gpus all --rm -v /data/scratch/cifar-10:/data/scratch/cifar-10
     timeout-minutes: 120
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-run-test-${{ matrix.container }}
       cancel-in-progress: true
     steps:
       - name: Install dependencies
@@ -58,7 +58,7 @@ jobs:
       - name: Install tensornvme
         run: |
           cd TensorNVMe
-          conda install cmake
+          apt update && apt install -y cmake
           pip install -r requirements.txt
           pip install -v .
       - uses: actions/checkout@v2
@@ -78,7 +78,7 @@ jobs:

       - name: Install Colossal-AI
         run: |
-          pip install -v --no-cache-dir .
+          CUDA_EXT=1 pip install -v .
           pip install -r requirements/requirements-test.txt
       - name: Unit Testing
         run: |
6 changes: 3 additions & 3 deletions .github/workflows/compatiblity_test_on_schedule.yml
@@ -32,7 +32,7 @@ jobs:
     name: Test for PyTorch Compatibility
     needs: matrix_preparation
     if: github.repository == 'hpcaitech/ColossalAI'
-    runs-on: [self-hosted, gpu]
+    runs-on: [self-hosted, 8-gpu]
     strategy:
       fail-fast: false
       matrix: ${{fromJson(needs.matrix_preparation.outputs.matrix)}}
@@ -54,7 +54,7 @@ jobs:
      - name: Install tensornvme
        run: |
          cd TensorNVMe
-         conda install cmake
+         apt update && apt install -y cmake
          pip install -r requirements.txt
          pip install -v .
      - uses: actions/checkout@v2
@@ -75,7 +75,7 @@ jobs:

      - name: Install Colossal-AI
        run: |
-         pip install -v --no-cache-dir .
+         CUDA_EXT=1 pip install -v .
          pip install -r requirements/requirements-test.txt

      - name: Unit Testing
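All three compatibility workflows switch from a plain `pip install` to `CUDA_EXT=1 pip install -v .`. In ColossalAI's installation convention of this period, the `CUDA_EXT` environment variable asks `setup.py` to compile the CUDA kernel extensions ahead of time instead of JIT-compiling them on first use. A minimal, hypothetical sketch of that env-gated pattern (module names are illustrative, not the project's actual build script):

```python
import os


def select_ext_modules():
    """Return the optional extension list, gated by the CUDA_EXT env var.

    Hypothetical sketch of the pattern a setup.py can use: when the
    installer runs `CUDA_EXT=1 pip install .`, CUDA kernels are listed
    for compilation at install time; otherwise the list stays empty and
    kernels are JIT-compiled when first needed.
    """
    if os.environ.get("CUDA_EXT", "0") == "1":
        return ["colossalai._C.cuda_kernels"]  # placeholder module name
    return []


if __name__ == "__main__":
    # Toggle behaviour by exporting CUDA_EXT=1 before running.
    print(select_ext_modules())
```

The trade-off is an expensive one-time build during installation in exchange for no compilation pauses during the test run itself, which matters on CI where every job starts from a fresh container.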
4 changes: 2 additions & 2 deletions .github/workflows/doc_check_on_pr.yml
@@ -17,7 +17,7 @@ jobs:
       github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
     runs-on: ubuntu-latest
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-check-i18n
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v2
@@ -35,7 +35,7 @@ jobs:
       github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
     runs-on: ubuntu-latest
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-check-doc
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v2
4 changes: 2 additions & 2 deletions .github/workflows/doc_test_on_pr.yml
@@ -20,7 +20,7 @@ jobs:
       any_changed: ${{ steps.changed-files.outputs.any_changed }}
       changed_files: ${{ steps.changed-files.outputs.all_changed_files }}
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-detect-change
       cancel-in-progress: true
     name: Detect changed example files
     steps:
@@ -63,7 +63,7 @@ jobs:
     run:
       shell: bash
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-run-doctest
       cancel-in-progress: true
     steps:
       - name: Checkout ColossalAI-Documentation
Expand Down
4 changes: 2 additions & 2 deletions .github/workflows/example_check_on_pr.yml
@@ -21,7 +21,7 @@ jobs:
       anyChanged: ${{ steps.setup-matrix.outputs.anyChanged }}
     name: Detect changed example files
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-detect-change
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v3
@@ -81,7 +81,7 @@ jobs:
       options: --gpus all --rm -v /data/scratch/examples-data:/data/
     timeout-minutes: 10
     concurrency:
-      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+      group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-run-example-${{ matrix.directory }}
       cancel-in-progress: true
     steps:
       - uses: actions/checkout@v3
3 changes: 1 addition & 2 deletions .github/workflows/run_chatgpt_examples.yml
@@ -28,9 +28,8 @@ jobs:
       - name: Checkout ColossalAI
         uses: actions/checkout@v2

-      - name: Install ColossalAI and ChatGPT
+      - name: Install ChatGPT
         run: |
-          pip install -e .
           cd applications/Chat
           pip install -v .
           pip install -r examples/requirements.txt
3 changes: 1 addition & 2 deletions .github/workflows/run_chatgpt_unit_tests.yml
@@ -30,9 +30,8 @@ jobs:
       - name: Checkout ColossalAI
         uses: actions/checkout@v2

-      - name: Install ColossalAI and ChatGPT
+      - name: Install ChatGPT
         run: |
-          pip install -e .
           cd applications/Chat
           pip install -v .
           pip install -r requirements-test.txt
32 changes: 32 additions & 0 deletions LICENSE
@@ -396,3 +396,35 @@ Copyright 2021- HPC-AI Technology Inc. All rights reserved.
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

---------------- LICENSE FOR VLLM TEAM ----------------

from VLLM TEAM:

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://github.com/vllm-project/vllm/blob/main/LICENSE

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

---------------- LICENSE FOR LIGHTLLM TEAM ----------------

from LIGHTLLM TEAM:

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://github.com/ModelTC/lightllm/blob/main/LICENSE

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
15 changes: 12 additions & 3 deletions README.md
@@ -25,6 +25,7 @@
 </div>

 ## Latest News
+* [2023/09] [70 Billion Parameter LLaMA2 Model Training Accelerated by 195%](https://www.hpc-ai.tech/blog/70b-llama2-training)
 * [2023/07] [HPC-AI Tech Raises 22 Million USD in Series A Funding](https://www.hpc-ai.tech/blog/hpc-ai-tech-raises-22-million-usd-in-series-a-funding-to-fuel-team-expansion-and-business-growth)
 * [2023/07] [65B Model Pretraining Accelerated by 38%, Best Practices for Building LLaMA-Like Base Models Open-Source](https://www.hpc-ai.tech/blog/large-model-pretraining)
 * [2023/03] [ColossalChat: An Open-Source Solution for Cloning ChatGPT With a Complete RLHF Pipeline](https://medium.com/@yangyou_berkeley/colossalchat-an-open-source-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline-5edf08fb538b)
@@ -50,7 +51,7 @@
   <li>
     <a href="#Parallel-Training-Demo">Parallel Training Demo</a>
     <ul>
-      <li><a href="#LLaMA">LLaMA</a></li>
+      <li><a href="#LLaMA2">LLaMA 1/2</a></li>
       <li><a href="#GPT-3">GPT-3</a></li>
       <li><a href="#GPT-2">GPT-2</a></li>
       <li><a href="#BERT">BERT</a></li>
@@ -217,8 +218,16 @@ Acceleration of [AlphaFold Protein Structure](https://alphafold.ebi.ac.uk/)
 <p align="right">(<a href="#top">back to top</a>)</p>

 ## Parallel Training Demo
+### LLaMA2
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/llama2_pretraining.png" width=600/>
+</p>
+
+- 70 billion parameter LLaMA2 model training accelerated by 195%
+[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama2)
+[[blog]](https://www.hpc-ai.tech/blog/70b-llama2-training)
+
-### LLaMA
+### LLaMA1
 <p align="center">
 <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/images/LLaMA_pretraining.png" width=600/>
 </p>
@@ -463,7 +472,7 @@ To cite this project, you can use the following BibTeX citation.
 }
 ```

 Colossal-AI has been accepted as official tutorial by top conferences [NeurIPS](https://nips.cc/), [SC](https://sc22.supercomputing.org/), [AAAI](https://aaai.org/Conferences/AAAI-23/),
 [PPoPP](https://ppopp23.sigplan.org/), [CVPR](https://cvpr2023.thecvf.com/), [ISC](https://www.isc-hpc.com/), [NVIDIA GTC](https://www.nvidia.com/en-us/on-demand/session/gtcspring23-S51482/) ,etc.

 <p align="right">(<a href="#top">back to top</a>)</p>
6 changes: 2 additions & 4 deletions applications/Chat/README.md
@@ -200,7 +200,6 @@ We provide an online inference server and a benchmark. We aim to run inference o
 We support 8-bit quantization (RTN), 4-bit quantization (GPTQ), and FP16 inference.

 Online inference server scripts can help you deploy your own services.
-
 For more details, see [`inference/`](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat/inference).

 ## Coati7B examples
@@ -428,7 +427,7 @@ Thanks so much to all of our amazing contributors!
 </a>
 </div>

-- An open-source low cost solution for cloning [ChatGPT](https://openai.com/blog/chatgpt/) with a complete RLHF pipeline. [[demo]](https://chat.colossalai.org)
+- An open-source low-cost solution for cloning [ChatGPT](https://openai.com/blog/chatgpt/) with a complete RLHF pipeline. [[demo]](https://chat.colossalai.org)

 <p id="ChatGPT_scaling" align="center">
 <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/chatgpt/ChatGPT%20scaling.png" width=800/>
@@ -469,8 +468,7 @@ Coati is developed by ColossalAI Team:
 - [ofey404](https://github.com/ofey404)
 - [Wenhao Chen](https://github.com/CWHer)

-The Phd student from [(HPC-AI) Lab](https://ai.comp.nus.edu.sg/) also contributed a lot to this project.
-
+The PhD student from [(HPC-AI) Lab](https://ai.comp.nus.edu.sg/) also contributed a lot to this project.
 - [Zangwei Zheng](https://github.com/zhengzangw)
 - [Xue Fuzhao](https://github.com/XueFuzhao)