Merged · Changes from all commits · 34 commits
b7ed014 · Update qwen2.md · Aravind-11 · Apr 2, 2025
f575b89 · Update qwen2.md · Aravind-11 · Apr 2, 2025
fbd92fb · Update qwen2.md · Aravind-11 · Apr 2, 2025
671239d · Update qwen2.md · Aravind-11 · Apr 2, 2025
9ecb393 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
3c2c2d0 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
f8757c2 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
687c614 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
a9e82c0 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
d511823 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
dc3c8fe · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
11696de · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
2855091 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
d805987 · Update qwen2.md · Aravind-11 · Apr 2, 2025
01b7b05 · Merge branch 'huggingface:main' into main · Aravind-11 · Apr 2, 2025
7d5cf1f · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
ebebcf3 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
c2dd36f · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
c38d33a · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
3fecacb · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
33de8ef · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
9438cb3 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
b894f4b · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
c94b34e · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
c5c463e · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
c75e622 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
fa11097 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
ef96105 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
f9ae4f8 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
2ebbfc3 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
1ddbd9e · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
e182841 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
8094176 · Update docs/source/en/model_doc/qwen2.md · Aravind-11 · Apr 2, 2025
1ffecb1 · Merge branch 'main' into main · Aravind-11 · Apr 2, 2025
136 changes: 111 additions & 25 deletions docs/source/en/model_doc/qwen2.md
@@ -14,50 +14,136 @@ rendered properly in your Markdown viewer.

-->

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# Qwen2

[Qwen2](https://huggingface.co/papers/2407.10671) is a family of large language models (pretrained, instruction-tuned, and mixture-of-experts variants) available in sizes from 0.5B to 72B parameters. The models are built on the Transformer architecture with enhancements such as group query attention (GQA), rotary positional embeddings (RoPE), a mix of sliding window and full attention, and dual chunk attention with YARN for handling long contexts. Qwen2 models support multiple languages and context lengths of up to 131,072 tokens.
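
These architectural choices are exposed as fields on [`Qwen2Config`]. A quick, optional way to inspect them (a minimal sketch; the checkpoint name is just an example):

```python
from transformers import Qwen2Config

config = Qwen2Config.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
# GQA: fewer key/value heads than query heads
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-window settings and maximum context length
print(config.use_sliding_window, config.sliding_window, config.max_position_embeddings)
```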

You can find all the official Qwen2 checkpoints under the [Qwen2](https://huggingface.co/collections/Qwen/qwen2-6659360b33528ced941e557f) collection.

> [!TIP]
> Click on the Qwen2 models in the right sidebar for more examples of how to apply Qwen2 to different language tasks.

The examples below demonstrate how to generate text with [`Pipeline`], [`AutoModel`], and from the command line using the instruction-tuned models.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="Qwen/Qwen2-1.5B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map=0
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the Qwen2 model family."},
]
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"][-1]["content"])
```

</hfoption>
<hfoption id="AutoModel">

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Format the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

generated_ids = model.generate(
    model_inputs.input_ids,
    cache_implementation="static",
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
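
Because the example above requests `cache_implementation="static"`, the KV cache has fixed shapes, which also makes the model a good candidate for `torch.compile`. A minimal, optional sketch (assuming a recent PyTorch build and a CUDA device):

```python
# Optional: compile the forward pass; the static cache keeps tensor shapes
# fixed, letting torch.compile avoid recompilation during decoding.
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
```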

</hfoption>
<hfoption id="transformers-cli">

```bash
# pip install -U flash-attn --no-build-isolation
transformers-cli chat --model_name_or_path Qwen/Qwen2-7B-Instruct --torch_dtype auto --attn_implementation flash_attention_2 --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 4-bits.

```python
# pip install -U bitsandbytes
# pip install -U flash-attn --no-build-isolation
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the weights to 4-bit NF4, with double quantization for extra savings.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
    attn_implementation="flash_attention_2"
)

inputs = tokenizer("The Qwen2 model family is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
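
To sanity-check the savings, you can print the quantized model's memory footprint; `get_memory_footprint` is a standard [`PreTrainedModel`] helper that returns the size in bytes:

```python
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```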

## Notes

- Ensure your Transformers installation is up to date; Qwen2 requires Transformers >= 4.37.0 for full support.
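
  A quick, optional way to confirm the installed version meets this requirement:

  ```python
  import transformers

  # Qwen2 support landed in v4.37.0; check the installed version.
  print(transformers.__version__)
  ```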

## Qwen2Config
