From 2eba0ea20fa8a36cd973fe233cf9cad00e09718b Mon Sep 17 00:00:00 2001
From: kingkingofall <83848390+kingkingofall@users.noreply.github.com>
Date: Tue, 4 Apr 2023 13:31:17 +0800
Subject: [PATCH 1/2] fix stage 2

fix stage 2
---
 applications/Chat/examples/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/applications/Chat/examples/README.md b/applications/Chat/examples/README.md
index 49401ec30db5..6c02606eab93 100644
--- a/applications/Chat/examples/README.md
+++ b/applications/Chat/examples/README.md
@@ -57,7 +57,7 @@ You can run the `examples/train_rm.sh` to start a reward model training.
 You can also use the following cmd to start training a reward model.
 
 ```
-torchrun --standalone --nproc_per_node=4 train_reward_model.py
+torchrun --standalone --nproc_per_node=4 train_reward_model.py \
     --pretrain "/path/to/LLaMa-7B/" \
     --model 'llama' \
     --strategy colossalai_zero2 \

From 241478ac0ae44aebdec98c78736632b87ed67250 Mon Sep 17 00:00:00 2001
From: kingkingofall <83848390+kingkingofall@users.noreply.github.com>
Date: Wed, 5 Apr 2023 10:08:24 +0800
Subject: [PATCH 2/2] add torch

---
 applications/Chat/inference/README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/applications/Chat/inference/README.md b/applications/Chat/inference/README.md
index 6c23bc73cd60..434677c98fa5 100644
--- a/applications/Chat/inference/README.md
+++ b/applications/Chat/inference/README.md
@@ -51,6 +51,7 @@ Please ensure you have downloaded HF-format model weights of LLaMA models.
 Usage:
 
 ```python
+import torch
 from transformers import LlamaForCausalLM
 
 USE_8BIT = True  # use 8-bit quantization; otherwise, use fp16