Fix torch only support for fast Processors #42824
Blaizzy wants to merge 20 commits into huggingface:main from
Conversation
cc @yonigozlan @molbap but not sure who to tag for MLX!

@ArthurZucker has some context as well
ArthurZucker left a comment

LGTM, @yonigozlan is it fine for you as well?
yonigozlan left a comment

Hello @Blaizzy! Sorry for the delay. I'm ok with merging this if it improves compatibility with mlx! We don't have any mlx image processors in the library for now, but I'm guessing this is for use with custom image processors?

As discussed with Arthur, we'll move towards having different backends for image processors, so we could add one for mlx. However, image processors are quite different across models, so we would need custom ones for each backend and model, as making image processing operations library-independent seems difficult. Anyway, these are issues for another PR ;)
```python
class Pop2PianoFeatureExtractionTester:
    def __init__(
```
Not sure if these modifications are related to this PR or if they were added here by mistake?
They were a mistake 😅. I was following the formatter feedback.
Thank you very much @yonigozlan! Yes, that is the path I'll take in the meantime, because the majority of models can return mlx tensors from transformers, but a few use torch and torchvision explicitly, which forces torch as a dependency.
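To make the backend question above concrete, here is a minimal, hypothetical sketch of the kind of dispatch `BatchFeature.convert_to_tensors()` performs when picking a tensor constructor. The real method also handles `"pt"`, `"tf"`, and `"jax"`, and its `"mlx"` branch uses `mlx.core.array` and `is_mlx_array`; numpy stands in here so the sketch runs without Apple Silicon:

```python
import numpy as np

def get_converter(tensor_type):
    """Return (is_tensor, as_tensor) callables for the requested backend.

    Simplified stand-in for the dispatch in BatchFeature.convert_to_tensors;
    the real "mlx" branch uses mlx.core.array / is_mlx_array, but mlx only
    installs on Apple Silicon, so numpy keeps this sketch runnable anywhere.
    """
    if tensor_type in ("mlx", "np"):
        return (lambda x: isinstance(x, np.ndarray), np.asarray)
    raise ValueError(f"Unsupported tensor type: {tensor_type}")

# Convert every feature that is not already a tensor of the target backend.
is_tensor, as_tensor = get_converter("mlx")
batch = {"pixel_values": [[0.0, 1.0], [2.0, 3.0]]}
converted = {k: (v if is_tensor(v) else as_tensor(v)) for k, v in batch.items()}
```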
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
```python
if isinstance(value, np.ndarray):
    return mx.array(value)
else:
    return mx.array(value)
```
```diff
-if isinstance(value, np.ndarray):
-    return mx.array(value)
-else:
-    return mx.array(value)
+return mx.array(value)
```
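The suggested change is behavior-preserving: both branches call the same constructor, and `mx.array` accepts numpy arrays, Python lists, and scalars alike, so the `isinstance` check adds nothing. A minimal sketch of the collapsed helper, with numpy's `asarray` standing in for `mlx.core.array` since mlx requires Apple Silicon:

```python
import numpy as np

def as_tensor(value, array_fn=np.asarray):
    # The original if/else branched on isinstance(value, np.ndarray)
    # but called the constructor identically in both branches, so a
    # single return is equivalent. array_fn stands in for mlx.core.array.
    return array_fn(value)

# Same result whether the input is a list or already a numpy array:
from_list = as_tensor([1, 2, 3])
from_array = as_tensor(np.array([1, 2, 3]))
```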
[For maintainers] Suggested jobs to run (before merge) run-slow: pop2piano |
View the CircleCI Test Summary for this PR: https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=42824&sha=4adaad |
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. |
Hey @Blaizzy! I fixed some tiny formatting errors and moved the test to (…). However, I can't use mlx on my machine, and we don't have mlx support in our CI. Would you mind running the test (…)?
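Since neither the reviewer's machine nor CI has mlx, a quick pre-flight check before running the mlx-specific test locally could look like this (a sketch; `mlx` only installs on Apple Silicon, so the check itself must not import it unconditionally):

```python
import importlib.util
import platform

def mlx_available() -> bool:
    """True if the mlx package is importable (Apple Silicon only)."""
    return importlib.util.find_spec("mlx") is not None

# Report the platform and whether the mlx-specific test can run here.
print(f"platform: {platform.machine()}, mlx available: {mlx_available()}")
```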
What does this PR do?
Adds MLX tensor support to `BatchFeature` and removes the PyTorch-only restriction in fast image processing utilities.

Changes:
- `is_mlx_array` export added to `transformers/utils/__init__.py`
- MLX support added in `BatchFeature.convert_to_tensors()` (`feature_extraction_utils.py`)
- PyTorch-only restriction removed from fast image processing utilities (`image_processing_utils_fast.py`)

This enables using `return_tensors="mlx"` with vision models and fast image processors on Apple Silicon, which previously raised an error.

Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
Who can review?