If you are submitting a bug report, please fill in the following details and use the tag [bug].
Describe the bug
I need to load unsloth/llama-3.2-3b-instruct (a drop-in replacement for meta-llama/llama-3.2-3b-instruct, since my access request for the gated model was rejected) from a local folder, because I keep the model weights on an NFS share.
Code example
# %%
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformer_lens import HookedTransformer

model_name = "Llama-3.2-3B-Instruct"
print(f"Loading {model_name} model")
# Local copy of unsloth/Llama-3.2-3B-Instruct on the NFS mount
model_path = os.path.join(
    "/media/tao/disk4T/jason/transformers",
    f"unsloth/{model_name}",
)
hf_model = AutoModelForCausalLM.from_pretrained(
    model_path,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_path,
)
# %%
# Pass the locally loaded HF model/tokenizer, but use the official
# meta-llama model name so TransformerLens resolves the architecture config
model = HookedTransformer.from_pretrained(
    model_name=f"meta-llama/{model_name}",
    hf_model=hf_model,
    tokenizer=tokenizer,
    device="cuda",
    n_devices=2,
)
model.generate("hello world", max_new_tokens=200, do_sample=True, temperature=0.3)
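For reference, a quick sanity check (a rough sketch, reusing the hf_model and tokenizer loaded above) to confirm that the local checkpoint itself generates fine before the HookedTransformer conversion step:

# %%
# Sanity check (sketch): generate with the plain HF model loaded from the
# NFS path; if this works, the problem is likely in the conversion to
# HookedTransformer rather than in loading from the local folder
inputs = tokenizer("hello world", return_tensors="pt").to(hf_model.device)
out = hf_model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))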
System Info
Describe the characteristics of your environment:
- Describe how transformer_lens was installed: pip
- What OS are you using? Linux
- Python version: 3.11
Additional context
Add any other context about the problem here.
Checklist