Describe the bug
Merging LoRA weights into the base model fails with an AttributeError.
To Reproduce
Steps to reproduce the behavior:
- Go to 'LMFlow'
- Fine-tune with the command
./scripts/run_finetune_with_lora.sh --model_name_or_path /model/Qwen1.5-14B-Chat --dataset_path /finetune --output_lora_path /lora/loraft-Qwen1.5-14B-Chat
- Run the merge command
bash ./scripts/run_merge_lora.sh --model_name_or_path Qwen/Qwen1.5-14B-Chat --lora_model_path /lora/loraft-Qwen1.5-14B-Chat --output_model_path /ft-model/loraft-Qwen1.5-14B-Chat
Expected behavior
The LoRA weights are merged into the base model and the merged model is saved to the output path.
Error
Traceback (most recent call last):
File "/home/jimmy/LMFlow/examples/merge_lora.py", line 74, in <module>
main()
File "/home/jimmy/LMFlow/examples/merge_lora.py", line 69, in main
model.merge_lora_weights()
File "/home/jimmy/LLM-FT/LMFlow/src/lmflow/models/hf_decoder_model.py", line 625, in merge_lora_weights
self.get_backend_model().merge_and_unload()
File "/home/jimmy/LLM-FT/LMFlow/src/lmflow/models/hf_model_mixin.py", line 537, in get_backend_model
return self.backend_model
AttributeError: 'HFDecoderModel' object has no attribute 'backend_model'
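For context, the step that fails here (`merge_and_unload`) is, per-layer, just folding the low-rank update into the frozen base weight: W_merged = W + (alpha / r) * B @ A. A minimal sketch of that arithmetic (names and shapes are illustrative, not LMFlow's or peft's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # toy layer sizes and LoRA hyperparameters

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection (pretend it was trained)

# Fold the adapter into the base weight; after this the A/B matrices
# can be discarded, which is what "merge and unload" refers to.
W_merged = W + (alpha / r) * B @ A

# The merged forward pass matches base forward pass + scaled adapter path.
x = rng.normal(size=d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * B @ (A @ x))
```

The AttributeError above suggests the merge is attempted before `backend_model` is ever set on the model object, i.e. before any weights exist to merge, rather than a problem with the merge arithmetic itself.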
Environment: