6 changes: 4 additions & 2 deletions README.md
@@ -22,7 +22,7 @@ pip3 install -e .
pip3 install -r requirements.txt
pip3 install transformers # likely you want the latest version of transformers
pip3 install flash-attn==2.5.8 --no-build-isolation
'''
```


## Running commands
@@ -31,4 +31,6 @@ Example for Llama-3-8B-Instruct
```
lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3-8B-Instruct,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,trust_remote_code=True --batch_size auto --tasks gsm8k --seed 0 --gen_kwargs temperature=0.0 --output_path Meta-Llama-3-8B-Instruct_gsm8k_temperature=0.0_results.json --log_samples
lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3-8B-Instruct,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,trust_remote_code=True --batch_size auto --tasks gsm1k --seed 0 --gen_kwargs temperature=0.0 --output_path Meta-Llama-3-8B-Instruct_gsm1k_temperature=0.0_results.json --log_samples
'''
```
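The two invocations above differ only in the task name and output path. As a convenience, a small shell loop (a sketch, assuming bash and the same model/settings shown above) runs both tasks without repeating the full command:

```
# Run the same evaluation settings over both tasks.
for TASK in gsm8k gsm1k; do
  lm_eval --model vllm \
    --model_args pretrained=meta-llama/Meta-Llama-3-8B-Instruct,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,trust_remote_code=True \
    --batch_size auto --tasks ${TASK} --seed 0 --gen_kwargs temperature=0.0 \
    --output_path Meta-Llama-3-8B-Instruct_${TASK}_temperature=0.0_results.json --log_samples
done
```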

**Note:** due to API changes (model API versions change over time), nondeterminism of API results, and the slightly different versions of the HuggingFace transformers library required to run each model, the precise results may not be exactly reproducible.
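One way to make runs easier to compare later (a sketch, not part of this repo's workflow; the output file name is arbitrary) is to record the exact package versions used for each evaluation:

```
# Capture the exact Python package versions used for this run,
# so a later run can be checked against the same environment.
pip3 freeze > package_versions_for_this_run.txt
```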