From f69b866aafaf55db309510c28a502924f8513876 Mon Sep 17 00:00:00 2001 From: Avigyan Sinha Date: Sat, 29 Mar 2025 15:40:34 +0530 Subject: [PATCH 01/10] feat: updated model card for qwen_2.5_vl --- docs/source/en/model_doc/qwen2_5_vl.md | 193 ++++++++++--------------- 1 file changed, 73 insertions(+), 120 deletions(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index b2c138999e6f..d19379c58ccd 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -14,45 +14,75 @@ rendered properly in your Markdown viewer. --> -# Qwen2.5-VL - -
+
+
PyTorch FlashAttention -SDPA +SDPA
-## Overview - -The [Qwen2.5-VL](https://qwenlm.github.io/blog/qwen2_5-vl/) model is an update to [Qwen2-VL](https://arxiv.org/abs/2409.12191) from Qwen team, Alibaba Group. +# Qwen2.5-VL -The abstract from this update is the following: +The [Qwen2.5-VL](https://qwenlm.github.io/blog/qwen2_5-vl/) model is a multimodal vision-language model developed by the Qwen team, Alibaba Group, combining an enhanced ViT encoder with the Qwen2.5 LLM. As an update to [Qwen2-VL](https://arxiv.org/abs/2409.12191), it offers improved visual reasoning and video understanding capabilities. The model uses a refined ViT architecture with SwiGLU and RMSNorm, making it more aligned with the LLM’s structure. -*Qwen2.5-VL marks a major step forward from Qwen2-VL, built upon the latest Qwen2.5 LLM. We've accelerated training and testing through the strategic implementation of window attention within the ViT. The ViT architecture itself has been refined with SwiGLU and RMSNorm, aligning it more closely with the LLM's structure. A key innovation is the expansion of native dynamic resolution to encompass the temporal dimension, in addition to spatial aspects. Furthermore, we've upgraded MRoPE, incorporating absolute time alignment on the time axis to allow the model to effectively capture temporal dynamics, regardless of frame rate, leading to superior video understanding.* +Qwen2.5-VL introduces window attention in the ViT, accelerating both training and inference. It also supports dynamic resolution across both spatial and temporal dimensions, making it highly effective for video analysis. The upgraded MRoPE (Multi-Resolutional Rotary Positional Encoding) now includes absolute time alignment on the time axis, enabling it to capture temporal dynamics across varying frame rates, enhancing its video comprehension abilities. -## Usage example +You can find all the original Qwen2.5-VL checkpoints under the [Qwen2.5-VL](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5) collection. -### Single Media inference +> [!TIP] +> Click on the Qwen2.5-VL models in the right sidebar for more examples of how to apply Qwen2.5-VL to different vision and language tasks. -The model can accept both images and videos as input. Here's an example code for inference. +The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class. 
-```python + + +```py import torch -from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor +from transformers import pipeline +pipe = pipeline( + task="image-text-to-text", + model="Qwen/Qwen2.5-VL-7B-Instruct", + device="cuda", + torch_dtype=torch.bfloat16 +) +messages = [ + { + "role": "user", + "content": [ + { + "type": "image", + "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg", + }, + { "type": "text", "text": "Describe this image."}, + ] + } +] +pipe(text=messages,max_new_tokens=20, return_full_text=False) -# Load the model in half-precision on the available device(s) -model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", device_map="auto") -processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") +``` + + -conversation = [ +```py +import torch +from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor + +model = Qwen2_5_VLForConditionalGeneration.from_pretrained( + "Qwen/Qwen2.5-VL-7B-Instruct", + torch_dtype=torch.float16, + device_map="auto", + attn_implementation="sdpa" +) +processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") +messages = [ { "role":"user", "content":[ { "type":"image", - "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" + "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" }, { "type":"text", @@ -60,120 +90,33 @@ conversation = [ } ] } -] - -inputs = processor.apply_chat_template( - conversation, - add_generation_prompt=True, - tokenize=True, - return_dict=True, - return_tensors="pt" -).to(model.device) - - -# Inference: Generation of the output -output_ids = model.generate(**inputs, max_new_tokens=128) -generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] -output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) -print(output_text) -# Video -conversation = [ - { - "role": "user", - "content": [ - {"type": "video", "path": "/path/to/video.mp4"}, - {"type": "text", "text": "What happened in the video?"}, - ], - } ] +image = Image.open(requests.get("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg", stream=True).raw) inputs = processor.apply_chat_template( - conversation, - video_fps=1, + messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt" -).to(model.device) - -# Inference: Generation of the output -output_ids = model.generate(**inputs, max_new_tokens=128) -generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] -output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) -print(output_text) -``` - -### Batch Mixed Media Inference +).to("cuda:0") -The model can batch inputs composed of mixed samples of various types such as images, videos, and text. Here is an example. 
- -```python -# Conversation for the first image -conversation1 = [ - { - "role": "user", - "content": [ - {"type": "image", "path": "/path/to/image1.jpg"}, - {"type": "text", "text": "Describe this image."} - ] - } -] - -# Conversation with two images -conversation2 = [ - { - "role": "user", - "content": [ - {"type": "image", "path": "/path/to/image2.jpg"}, - {"type": "image", "path": "/path/to/image3.jpg"}, - {"type": "text", "text": "What is written in the pictures?"} - ] - } -] - -# Conversation with pure text -conversation3 = [ - { - "role": "user", - "content": "who are you?" - } -] - - -# Conversation with mixed midia -conversation4 = [ - { - "role": "user", - "content": [ - {"type": "image", "path": "/path/to/image3.jpg"}, - {"type": "image", "path": "/path/to/image4.jpg"}, - {"type": "video", "path": "/path/to/video.jpg"}, - {"type": "text", "text": "What are the common elements in these medias?"}, - ], - } +generated_ids = model.generate(**inputs, max_new_tokens=128) +generated_ids_trimmed = [ + out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] - -conversations = [conversation1, conversation2, conversation3, conversation4] -# Preparation for batch inference -ipnuts = processor.apply_chat_template( - conversations, - video_fps=1, - add_generation_prompt=True, - tokenize=True, - return_dict=True, - return_tensors="pt" -).to(model.device) - - -# Batch Inference -output_ids = model.generate(**inputs, max_new_tokens=128) -generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] -output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) +output_text = processor.batch_decode( + generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False +) print(output_text) ``` + + + +### Notes -### Usage Tips +Qwen2.5-VL is a multimodal conversational model designed for image, video, and text understanding, making it highly versatile for both general-purpose and fine-tuned downstream tasks such as image captioning, visual question answering (VQA), scene description, and video understanding. #### Image Resolution trade-off @@ -199,6 +142,16 @@ This ensures each image gets encoded using a number between 256-1024 tokens. The By default, images and video content are directly included in the conversation. When handling multiple images, it's helpful to add labels to the images and videos for better reference. 
Users can control this behavior with the following settings: ```python +import torch +from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor + +model = Qwen2_5_VLForConditionalGeneration.from_pretrained( + "Qwen/Qwen2.5-VL-7B-Instruct", + torch_dtype=torch.float16, + device_map="auto", + attn_implementation="sdpa" +) +processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") conversation = [ { "role": "user", From 97107f021d14cd7faf4c9b188e9617919e39594d Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Tue, 1 Apr 2025 04:28:27 +0530 Subject: [PATCH 02/10] applied suggested change 1 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index d19379c58ccd..408c3f2a439d 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -23,7 +23,7 @@ rendered properly in your Markdown viewer. # Qwen2.5-VL -The [Qwen2.5-VL](https://qwenlm.github.io/blog/qwen2_5-vl/) model is a multimodal vision-language model developed by the Qwen team, Alibaba Group, combining an enhanced ViT encoder with the Qwen2.5 LLM. As an update to [Qwen2-VL](https://arxiv.org/abs/2409.12191), it offers improved visual reasoning and video understanding capabilities. The model uses a refined ViT architecture with SwiGLU and RMSNorm, making it more aligned with the LLM’s structure. +[Qwen2.5-VL](https://huggingface.co/papers/2502.13923) is a multimodal vision-language model, available in 3B, 7B, and 72B parameters, pretrained on 4.1T tokens. The model introduces window attention in the ViT encoder to accelerate training and inference, dynamic FPS sampling on the spatial and temporal dimensions for better video understanding across different sampling rates, and an upgraded MRoPE (multi-resolutional rotary positional encoding) mechanism to better capture and learn temporal dynamics. Qwen2.5-VL introduces window attention in the ViT, accelerating both training and inference. It also supports dynamic resolution across both spatial and temporal dimensions, making it highly effective for video analysis. The upgraded MRoPE (Multi-Resolutional Rotary Positional Encoding) now includes absolute time alignment on the time axis, enabling it to capture temporal dynamics across varying frame rates, enhancing its video comprehension abilities. From a07c7dddfbd7c5fc7a1612d6dc0e2f254d29e654 Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Tue, 1 Apr 2025 04:36:20 +0530 Subject: [PATCH 03/10] applied suggested change 2 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 408c3f2a439d..0039831321ef 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -25,7 +25,6 @@ rendered properly in your Markdown viewer. [Qwen2.5-VL](https://huggingface.co/papers/2502.13923) is a multimodal vision-language model, available in 3B, 7B, and 72B parameters, pretrained on 4.1T tokens. 
The model introduces window attention in the ViT encoder to accelerate training and inference, dynamic FPS sampling on the spatial and temporal dimensions for better video understanding across different sampling rates, and an upgraded MRoPE (multi-resolutional rotary positional encoding) mechanism to better capture and learn temporal dynamics. -Qwen2.5-VL introduces window attention in the ViT, accelerating both training and inference. It also supports dynamic resolution across both spatial and temporal dimensions, making it highly effective for video analysis. The upgraded MRoPE (Multi-Resolutional Rotary Positional Encoding) now includes absolute time alignment on the time axis, enabling it to capture temporal dynamics across varying frame rates, enhancing its video comprehension abilities. You can find all the original Qwen2.5-VL checkpoints under the [Qwen2.5-VL](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5) collection. From f68bbe3f720fb0e16db60ae216edefbd5925b9c3 Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Tue, 1 Apr 2025 09:43:54 +0530 Subject: [PATCH 04/10] applied suggested change 3 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 0039831321ef..191d08c8a346 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -115,7 +115,10 @@ print(output_text) ### Notes -Qwen2.5-VL is a multimodal conversational model designed for image, video, and text understanding, making it highly versatile for both general-purpose and fine-tuned downstream tasks such as image captioning, visual question answering (VQA), scene description, and video understanding. +- Use Qwen2.5-VL for video inputs by setting `"type": "video"` as shown below. +- Use Qwen2.5-VL for a mixed batch of inputs (images, videos, text) as show below. +- Use the `min_pixels` and `max_pixels` parameters in [`AutoProcessor`] to set the resolution. Higher resolution can require more compute whereas reducing the resolution can save memory. +- Add labels when handling multiple images or videos for better reference as shown below. #### Image Resolution trade-off From 9942344721e336fec08584cb3ac8362bfc718ba7 Mon Sep 17 00:00:00 2001 From: Avigyan Sinha Date: Tue, 1 Apr 2025 14:13:18 +0530 Subject: [PATCH 05/10] fix: made requested changes for quantization and notes --- docs/source/en/model_doc/qwen2_5_vl.md | 215 ++++++++++++++----------- 1 file changed, 119 insertions(+), 96 deletions(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 191d08c8a346..e1248b119b66 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -113,111 +113,134 @@ print(output_text) -### Notes - -- Use Qwen2.5-VL for video inputs by setting `"type": "video"` as shown below. -- Use Qwen2.5-VL for a mixed batch of inputs (images, videos, text) as show below. -- Use the `min_pixels` and `max_pixels` parameters in [`AutoProcessor`] to set the resolution. Higher resolution can require more compute whereas reducing the resolution can save memory. -- Add labels when handling multiple images or videos for better reference as shown below. - -#### Image Resolution trade-off - -The model supports a wide range of resolution inputs. 
By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs. +Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. +The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4. ```python -min_pixels = 224*224 -max_pixels = 2048*2048 -processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) -``` - -In case of limited GPU RAM, one can reduce the resolution as follows: - -```python -min_pixels = 256*28*28 -max_pixels = 1024*28*28 -processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) -``` -This ensures each image gets encoded using a number between 256-1024 tokens. The 28 comes from the fact that the model uses a patch size of 14 and a temporal patch size of 2 (14 x 2 = 28). - -#### Multiple Image Inputs - -By default, images and video content are directly included in the conversation. When handling multiple images, it's helpful to add labels to the images and videos for better reference. Users can control this behavior with the following settings: - -```python -import torch -from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor - +quantization_config = TorchAoConfig("int4_weight_only", group_size=128) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-VL-7B-Instruct", - torch_dtype=torch.float16, + torch_dtype=torch.bfloat16, device_map="auto", - attn_implementation="sdpa" + quantization_config=quantization_config ) -processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") -conversation = [ - { - "role": "user", - "content": [ - {"type": "image"}, - {"type": "text", "text": "Hello, how are you?"} - ] - }, - { - "role": "assistant", - "content": "I'm doing well, thank you for asking. How can I assist you today?" - }, - { - "role": "user", - "content": [ - {"type": "text", "text": "Can you describe these images and video?"}, - {"type": "image"}, - {"type": "image"}, - {"type": "video"}, - {"type": "text", "text": "These are from my vacation."} - ] - }, - { - "role": "assistant", - "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?" - }, - { - "role": "user", - "content": "It was a trip to the mountains. Can you see the details in the images and video?" - } -] - -# default: -prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True) -# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. 
Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n' - - -# add ids -prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True) -# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n' ``` +### Notes -#### Flash-Attention 2 to speed up generation - -First, make sure to install the latest version of Flash Attention 2: - -```bash -pip install -U flash-attn --no-build-isolation -``` - -Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`. - -To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model: - -```python -from transformers import Qwen2_5_VLForConditionalGeneration - -model = Qwen2_5_VLForConditionalGeneration.from_pretrained( - "Qwen/Qwen2.5-VL-7B-Instruct", - torch_dtype=torch.bfloat16, - attn_implementation="flash_attention_2", -) -``` +- Use Qwen2.5-VL for video inputs by setting `"type": "video"` as shown below. + ```python + conversation = [ + { + "role": "user", + "content": [ + {"type": "video", "path": "/path/to/video.mp4"}, + {"type": "text", "text": "What happened in the video?"}, + ], + } + ] + + inputs = processor.apply_chat_template( + conversation, + video_fps=1, + add_generation_prompt=True, + tokenize=True, + return_dict=True, + return_tensors="pt" + ).to(model.device) + + # Inference: Generation of the output + output_ids = model.generate(**inputs, max_new_tokens=128) + generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)] + output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) + print(output_text) + ``` +- Use Qwen2.5-VL for a mixed batch of inputs (images, videos, text). Add labels when handling multiple images or videos for better reference + as show below. + ```python + import torch + from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor + + model = Qwen2_5_VLForConditionalGeneration.from_pretrained( + "Qwen/Qwen2.5-VL-7B-Instruct", + torch_dtype=torch.float16, + device_map="auto", + attn_implementation="sdpa" + ) + processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") + conversation = [ + { + "role": "user", + "content": [ + {"type": "image"}, + {"type": "text", "text": "Hello, how are you?"} + ] + }, + { + "role": "assistant", + "content": "I'm doing well, thank you for asking. How can I assist you today?" 
+ }, + { + "role": "user", + "content": [ + {"type": "text", "text": "Can you describe these images and video?"}, + {"type": "image"}, + {"type": "image"}, + {"type": "video"}, + {"type": "text", "text": "These are from my vacation."} + ] + }, + { + "role": "assistant", + "content": "I'd be happy to describe the images and video for you. Could you please provide more context about your vacation?" + }, + { + "role": "user", + "content": "It was a trip to the mountains. Can you see the details in the images and video?" + } + ] + + # default: + prompt_without_id = processor.apply_chat_template(conversation, add_generation_prompt=True) + # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n' + + + # add ids + prompt_with_id = processor.apply_chat_template(conversation, add_generation_prompt=True, add_vision_id=True) + # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nPicture 1: <|vision_start|><|image_pad|><|vision_end|>Hello, how are you?<|im_end|>\n<|im_start|>assistant\nI'm doing well, thank you for asking. How can I assist you today?<|im_end|>\n<|im_start|>user\nCan you describe these images and video?Picture 2: <|vision_start|><|image_pad|><|vision_end|>Picture 3: <|vision_start|><|image_pad|><|vision_end|>Video 1: <|vision_start|><|video_pad|><|vision_end|>These are from my vacation.<|im_end|>\n<|im_start|>assistant\nI'd be happy to describe the images and video for you. Could you please provide more context about your vacation?<|im_end|>\n<|im_start|>user\nIt was a trip to the mountains. Can you see the details in the images and video?<|im_end|>\n<|im_start|>assistant\n' + ``` + +- Use the `min_pixels` and `max_pixels` parameters in [`AutoProcessor`] to set the resolution. + + ```python + min_pixels = 224*224 + max_pixels = 2048*2048 + processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) + ``` + + Higher resolution can require more compute whereas reducing the resolution can save memory as follows: + + ```python + min_pixels = 256*28*28 + max_pixels = 1024*28*28 + processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) + ``` +- We can implement flash attention 2 to speed up generation as shown below. + ```python + from transformers import Qwen2_5_VLForConditionalGeneration + + model = Qwen2_5_VLForConditionalGeneration.from_pretrained( + "Qwen/Qwen2.5-VL-7B-Instruct", + torch_dtype=torch.bfloat16, + attn_implementation="flash_attention_2", + ) + ``` + Do ensure that flash attention 2 is installed: + ```bash + pip install -U flash-attn --no-build-isolation + ``` + + Also, you should have hardware that is compatible with FlashAttention 2. 
Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`. From 70e64e161fcb39c4ed974b3c328b499994a1c0dc Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Thu, 3 Apr 2025 05:35:43 +0530 Subject: [PATCH 06/10] suggeested model card change 4 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index e1248b119b66..8cbc54d389d6 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -99,7 +99,7 @@ inputs = processor.apply_chat_template( tokenize=True, return_dict=True, return_tensors="pt" -).to("cuda:0") +).to("cuda") generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ From 12de7f29c4d8b9b176e6168baaded410d6bb073d Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Thu, 3 Apr 2025 05:36:45 +0530 Subject: [PATCH 07/10] updated model card wiht suggested change 5 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 8cbc54d389d6..c503f0b0cf79 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -114,6 +114,7 @@ print(output_text) Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. + The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4. 
```python From 2ec452a8a35e37dac8a419ea49ff12d52572448b Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Thu, 3 Apr 2025 05:39:56 +0530 Subject: [PATCH 08/10] updated model card wiht suggested change 6 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index c503f0b0cf79..3e3b6664f2b0 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -42,7 +42,7 @@ from transformers import pipeline pipe = pipeline( task="image-text-to-text", model="Qwen/Qwen2.5-VL-7B-Instruct", - device="cuda", + device=0, torch_dtype=torch.bfloat16 ) messages = [ From b5af9a590d1ecd7edb61698264516d7110044a97 Mon Sep 17 00:00:00 2001 From: Avigyan Sinha <72064090+arkhamHack@users.noreply.github.com> Date: Thu, 3 Apr 2025 05:40:25 +0530 Subject: [PATCH 09/10] updated model card wiht suggested change 7 Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --- docs/source/en/model_doc/qwen2_5_vl.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 3e3b6664f2b0..85fed44aa318 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -92,7 +92,6 @@ messages = [ ] -image = Image.open(requests.get("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg", stream=True).raw) inputs = processor.apply_chat_template( messages, add_generation_prompt=True, From e480dc2f615f70c296cb81e5992d3fe652ffa78d Mon Sep 17 00:00:00 2001 From: Avigyan Sinha Date: Thu, 3 Apr 2025 05:47:18 +0530 Subject: [PATCH 10/10] feat: applied requested changes --- docs/source/en/model_doc/qwen2_5_vl.md | 22 +++------------------- 1 file changed, 3 insertions(+), 19 deletions(-) diff --git a/docs/source/en/model_doc/qwen2_5_vl.md b/docs/source/en/model_doc/qwen2_5_vl.md index 85fed44aa318..2d38fe82e614 100644 --- a/docs/source/en/model_doc/qwen2_5_vl.md +++ b/docs/source/en/model_doc/qwen2_5_vl.md @@ -117,6 +117,9 @@ Quantization reduces the memory burden of large models by representing the weigh The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4. ```python +import torch +from transformers import TorchAoConfig, Gemma3ForConditionalGeneration, AutoProcessor + quantization_config = TorchAoConfig("int4_weight_only", group_size=128) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-VL-7B-Instruct", @@ -225,25 +228,6 @@ model = Qwen2_5_VLForConditionalGeneration.from_pretrained( max_pixels = 1024*28*28 processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) ``` -- We can implement flash attention 2 to speed up generation as shown below. - ```python - from transformers import Qwen2_5_VLForConditionalGeneration - - model = Qwen2_5_VLForConditionalGeneration.from_pretrained( - "Qwen/Qwen2.5-VL-7B-Instruct", - torch_dtype=torch.bfloat16, - attn_implementation="flash_attention_2", - ) - ``` - Do ensure that flash attention 2 is installed: - ```bash - pip install -U flash-attn --no-build-isolation - ``` - - Also, you should have hardware that is compatible with FlashAttention 2. 
Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`. - - - ## Qwen2_5_VLConfig [[autodoc]] Qwen2_5_VLConfig
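Taken together, the patches above leave the model card with an int4 quantization example that is split across several hunks. The sketch below consolidates that path into one self-contained snippet, under two assumptions flagged here because they are not spelled out verbatim in any single hunk: `Qwen2_5_VLForConditionalGeneration` is taken as the model class the quantization import is meant to expose (it is the class the snippet instantiates), and the chat-template inputs are reused from the AutoModel example earlier in the card. Treat it as a sketch of the intended final state rather than the exact rendered model card.

```python
# Minimal consolidated sketch of the card's torchao int4 weight-only example.
# Assumption: Qwen2_5_VLForConditionalGeneration is the intended model class
# to import alongside TorchAoConfig and AutoProcessor.
import torch
from transformers import TorchAoConfig, Qwen2_5_VLForConditionalGeneration, AutoProcessor

# int4 weight-only quantization config, as used in the card's example
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Chat-template inputs reused from the AutoModel example earlier in the card
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True))
```

Because the quantization is weight-only, activations stay in `torch.bfloat16` and the generation call is identical to the unquantized examples in the card.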