
Fix: add None check for extractors in video_processor_class_from_name #42774

Open
Blaizzy wants to merge 1 commit into huggingface:main from Blaizzy:pc/fix-video-processor-error

Conversation

@Blaizzy commented Dec 10, 2025



What does this PR do?

Fixes `TypeError: argument of type 'NoneType' is not iterable` when loading VLM processors without torchvision installed.

The Problem

When torchvision is not available, the values of the `VIDEO_PROCESSOR_MAPPING_NAMES` dictionary are set to `None` (lines 84-87 in `video_processing_auto.py`). However, `video_processor_class_from_name()` performs the membership test `class_name in extractors` without first verifying that `extractors` is not `None`, which raises the `TypeError`.
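For context, here is a minimal sketch of the pattern described above (an illustration, not the verbatim transformers source; the single-entry mapping and the local availability helper are placeholders):

```python
from collections import OrderedDict

def is_torchvision_available():
    # Local stand-in for the availability check transformers performs.
    try:
        import torchvision  # noqa: F401
        return True
    except ImportError:
        return False

VIDEO_PROCESSOR_MAPPING_NAMES = OrderedDict(
    [("qwen3_vl", "Qwen3VLVideoProcessor")]
)

if not is_torchvision_available():
    # Every value becomes None, which later trips the `in` membership test.
    VIDEO_PROCESSOR_MAPPING_NAMES = OrderedDict(
        (model_type, None) for model_type in VIDEO_PROCESSOR_MAPPING_NAMES
    )
```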

The Fix

Added a None check before the membership test:

```python
# Before
if class_name in extractors:

# After
if extractors is not None and class_name in extractors:
```

This allows the function to safely skip entries whose video processor class is `None` (i.e., when torchvision is unavailable) and continue searching the other mappings, or return `None` gracefully, so the error users eventually see points at the root cause (the missing torchvision backend) rather than an opaque `TypeError`.
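The underlying Python behaviour is easy to confirm in isolation: the `in` operator raises exactly this error when its right-hand side is `None`, while the guarded form short-circuits to `False`.

```python
>>> extractors = None
>>> "Qwen3VLVideoProcessor" in extractors
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument of type 'NoneType' is not iterable
>>> extractors is not None and "Qwen3VLVideoProcessor" in extractors
False
```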

Traceback

Before fix:

```
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mlx/bin/mlx_vlm.generate", line 7, in <module>
    sys.exit(main())
  ...
  File ".../transformers/models/auto/video_processing_auto.py", line 94, in video_processor_class_from_name
    if class_name in extractors:
       ^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable
```

After fix:

  File "/Users/prince_canuma/Documents/Projects/LLMs/transformers/transformers/src/transformers/utils/import_utils.py", line 1863, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/Users/prince_canuma/Documents/Projects/LLMs/transformers/transformers/src/transformers/utils/import_utils.py", line 1849, in requires_backends
    raise ImportError("".join(failed))
ImportError: 
Qwen3VLVideoProcessor requires the Torchvision library but it was not found in your environment. Check out the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.

Reproduction

```bash
# In an environment without torchvision
mlx_vlm.generate --model Qwen/Qwen3-VL-2B-Instruct --prompt "Describe this image." --image "image.png"
```
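For a reproduction that does not depend on mlx_vlm, the function can also be called directly (a sketch assuming transformers is installed in an environment without torchvision):

```python
# Run in an environment where torchvision is not installed.
from transformers.models.auto.video_processing_auto import (
    video_processor_class_from_name,
)

# Pre-fix this call raised the TypeError shown above; post-fix it no
# longer crashes here, and the error users eventually see is the
# informative ImportError from the after-fix traceback.
video_processor_class_from_name("Qwen3VLVideoProcessor")
```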

@github-actions (Contributor)

[For maintainers] Suggested jobs to run (before merge)

run-slow: auto

@Rocketknight1 (Member)

Makes sense to me but cc @zucchini-nlp @molbap @yonigozlan to check if this is the right thing to do when we don't have torchvision

```diff
 def video_processor_class_from_name(class_name: str):
     for module_name, extractors in VIDEO_PROCESSOR_MAPPING_NAMES.items():
-        if class_name in extractors:
+        if extractors is not None and class_name in extractors:
```
Member

Seems like this must have been an equality check (`class_name == extractors`), because in the video mapping we have only one class per model. I doubt that the `class_name in extractors` membership test is needed.
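A sketch of the reviewer's alternative, assuming each value in the mapping is a single class name or `None` (illustration only, not a committed patch; the resolution step is elided):

```python
def video_processor_class_from_name(class_name: str):
    # With one class name per model type, an equality check suffices and
    # also sidesteps the `in None` TypeError: `class_name == None` is
    # simply False, never an error.
    for module_name, extractor in VIDEO_PROCESSOR_MAPPING_NAMES.items():
        if class_name == extractor:
            ...  # resolve the class from module_name and return it
    return None
```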

