bash train.sh
/data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/initialize.py:48: UserWarning: `config` is deprecated and will be removed soon.
warnings.warn("`config` is deprecated and will be removed soon.")
[10/12/23 17:34:50] INFO colossalai - colossalai - INFO: /data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/initialize.py:63 launch
[10/12/23 17:34:50] INFO colossalai - colossalai - INFO: /data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/initialize.py:63 launch
INFO colossalai - colossalai - INFO: Distributed environment is initialized, world size: 4
INFO colossalai - colossalai - INFO: Distributed environment is initialized, world size: 4
[10/12/23 17:34:50] INFO colossalai - colossalai - INFO: /data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/initialize.py:63 launch
[10/12/23 17:34:50] INFO colossalai - colossalai - INFO: /data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/initialize.py:63 launch
INFO colossalai - colossalai - INFO: Distributed environment is initialized, world size: 4
INFO colossalai - colossalai - INFO: Distributed environment is initialized, world size: 4
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Configuration file will be saved at: config_file
Tensorboard logs will be saved at: Saved_trained_model_tensorboard/Test-2023-10-12-17-34-45
Model checkpoint will be saved at: Saved_trained_model/Test-2023-10-12-17-34-45
Load dataset: ['spliced_tokenized_output_arrow/part-00000', 'spliced_tokenized_output_arrow/part-00001', 'spliced_tokenized_output_arrow/part-00002', 'spliced_tokenized_output_arrow/part-00003', 'spliced_tokenized_output_arrow/part-00004', 'spliced_tokenized_output_arrow/part-00005', 'spliced_tokenized_output_arrow/part-00006', 'spliced_tokenized_output_arrow/part-00007', 'spliced_tokenized_output_arrow/part-00008', 'spliced_tokenized_output_arrow/part-00009']
Max CUDA memory after data loader: 0.00 MB
Flash-attention enabled successfully
Model params: 6.59 B
Load pretrained model checkpoint from New_Model/
Booster init max CUDA memory: 2972.95 MB
Booster init max CPU memory: 23649.47 MB
Epoch 0: 0%| | 0/787 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/data_lc/LLM/Colossal-LLaMA-2/train.py", line 385, in <module>
    main()
  File "/data_lc/LLM/Colossal-LLaMA-2/train.py", line 327, in main
    batch_output = model(**batch)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/colossalai/zero/gemini/gemini_ddp.py", line 248, in forward
    outputs = self.module(*args, **kwargs)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1038, in forward
    outputs = self.model(
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 925, in forward
    layer_outputs = decoder_layer(
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 635, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: attention_forward() got an unexpected keyword argument 'padding_mask'
(The same traceback is raised on all four ranks; their interleaved output has been deduplicated here.)
Epoch 0: 0%| | 0/787 [00:02<?, ?it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 205737) of binary: /data_lc/envs/coloai/bin/python
Traceback (most recent call last):
File "/data_lc/envs/coloai/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.13.1', 'console_scripts', 'torchrun')())
File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/data_lc/envs/coloai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-10-12_17:36:22
host : ubuntu
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 205738)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2023-10-12_17:36:22
host : ubuntu
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 205739)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2023-10-12_17:36:22
host : ubuntu
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 205740)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-10-12_17:36:22
host : ubuntu
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 205737)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Error: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 train.py --pretrained New_Model/ --dataset spliced_tokenized_output_arrow/part-00000 spliced_tokenized_output_arrow/part-00001 spliced_tokenized_output_arrow/part-00002 spliced_tokenized_output_arrow/part-00003 spliced_tokenized_output_arrow/part-00004 spliced_tokenized_output_arrow/part-00005 spliced_tokenized_output_arrow/part-00006 spliced_tokenized_output_arrow/part-00007 spliced_tokenized_output_arrow/part-00008 spliced_tokenized_output_arrow/part-00009 --plugin gemini_auto --save_interval 400 --save_dir Saved_trained_model/Test-2023-10-12-17-34-45 --tensorboard_dir Saved_trained_model_tensorboard/Test-2023-10-12-17-34-45 --num_epochs 1 --micro_batch_size 2 --lr 1e-4 --mixed_precision bf16 --grad_clip 1.0 --weight_decay 0.01 --warmup_steps 100 --use_flash_attn --freeze_non_embeds_params on 127.0.0.1, is localhost: True, exception: Encountered a bad command exit code!
Command: 'cd /data_lc/LLM/Colossal-LLaMA-2 && export SHELL="/bin/bash" WANDB_BASE_URL="http://cd8.host.8head.com:45001" CONDA_EXE="/home/ubuntu/miniconda3/bin/conda" PWD="/data_lc/LLM/Colossal-LLaMA-2" LOGNAME="root" XDG_SESSION_TYPE="tty" CONDA_PREFIX="/data_lc/envs/coloai" MOTD_SHOWN="pam" HOME="/root" LANG="en_US.UTF-8" LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:" CONDA_PROMPT_MODIFIER="(coloai) " SSH_CONNECTION="192.9.0.1 5395 192.9.1.2 22" LESSCLOSE="/usr/bin/lesspipe %s %s" XDG_SESSION_CLASS="user" TERM="xterm" 
LESSOPEN="| /usr/bin/lesspipe %s" USER="root" CONDA_SHLVL="2" DISPLAY="localhost:10.0" SHLVL="2" XDG_SESSION_ID="42" CONDA_PYTHON_EXE="/home/ubuntu/miniconda3/bin/python" LD_LIBRARY_PATH="/data_lc/LLM/Colossal-LLaMA-2/cuda11.7/lib64::/usr/local/cuda/lib64" XDG_RUNTIME_DIR="/run/user/0" SSH_CLIENT="192.9.0.1 5395 22" CONDA_DEFAULT_ENV="coloai" OMP_NUM_THREADS="4" CUDA_HOME="/data_lc/LLM/Colossal-LLaMA-2/cuda11.7" XDG_DATA_DIRS="/usr/local/share:/usr/share:/var/lib/snapd/desktop" PATH="/data_lc/LLM/Colossal-LLaMA-2/cuda11.7/bin:/usr/local/cuda-11.8/bin:/data_lc/envs/coloai/bin:/home/ubuntu/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin" DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/0/bus" SSH_TTY="/dev/pts/2" CONDA_PREFIX_1="/home/ubuntu/miniconda3" CPATH="/usr/include:" OLDPWD="/data_lc/LLM/Colossal-LLaMA-2/flash_attention/csrc" _="/data_lc/envs/coloai/bin/colossalai" && torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 train.py --pretrained New_Model/ --dataset spliced_tokenized_output_arrow/part-00000 spliced_tokenized_output_arrow/part-00001 spliced_tokenized_output_arrow/part-00002 spliced_tokenized_output_arrow/part-00003 spliced_tokenized_output_arrow/part-00004 spliced_tokenized_output_arrow/part-00005 spliced_tokenized_output_arrow/part-00006 spliced_tokenized_output_arrow/part-00007 spliced_tokenized_output_arrow/part-00008 spliced_tokenized_output_arrow/part-00009 --plugin gemini_auto --save_interval 400 --save_dir Saved_trained_model/Test-2023-10-12-17-34-45 --tensorboard_dir Saved_trained_model_tensorboard/Test-2023-10-12-17-34-45 --num_epochs 1 --micro_batch_size 2 --lr 1e-4 --mixed_precision bf16 --grad_clip 1.0 --weight_decay 0.01 --warmup_steps 100 --use_flash_attn --freeze_non_embeds_params'
Exit code: 1
Stdout: already printed
Stderr: already printed
====== Training on All Nodes =====
127.0.0.1: failure
====== Stopping All Nodes =====
127.0.0.1: finish
🐛 Describe the bug
When running train.sh for colossal-llama2 under the applications directory, the error output shown in the log above is produced.
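For context, the `TypeError` indicates a signature mismatch: the installed transformers version passes an extra `padding_mask` keyword into the attention layer's `forward`, while the flash-attention replacement (`attention_forward`) was written against an older signature that does not accept it. The sketch below is purely illustrative (the function names and bodies are hypothetical, not the actual Colossal-AI patch); it shows why a `**kwargs` catch-all in the patched forward avoids the crash, which is the usual shape of the fix when library versions drift like this:

```python
# Illustrative only: mimics the version mismatch behind the traceback.

# Old-style patched forward: a fixed keyword list, so any keyword the
# newer caller adds (e.g. `padding_mask`) raises TypeError.
def attention_forward_old(hidden_states, attention_mask=None):
    return hidden_states

# Tolerant variant: **kwargs silently absorbs keywords introduced by
# newer transformers releases that the patch does not use.
def attention_forward_new(hidden_states, attention_mask=None, **kwargs):
    return hidden_states

try:
    attention_forward_old("h", padding_mask=None)
    crashed = False
except TypeError:
    # Same failure mode as the training run above.
    crashed = True

assert crashed
assert attention_forward_new("h", padding_mask=None) == "h"
```

Alternatively, pinning transformers to the version the Colossal-LLaMA-2 requirements specify (rather than a newer one) should make the caller and the patched forward agree again.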
Environment
------------ Environment ------------
Colossal-AI version: 0.3.3
PyTorch version: 1.13.1
System CUDA version: 11.7
CUDA version required by PyTorch: 11.7