Add fast image processor Janus, Deepseek VL, Deepseek VL hybrid #39739
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
zucchini-nlp left a comment
Great, thanks! Yep, indeed, adding the fast image processor from the model release is better. I will nudge users to do so.
| if kwargs.get("image_mean", None) is None: | ||
| background_color = (127, 127, 127) | ||
| else: | ||
| background_color = tuple([int(x * 255) for x in kwargs.get("image_mean")]) | ||
| if kwargs.get("high_res_image_mean", None) is None: | ||
| background_color = (127, 127, 127) | ||
| else: | ||
| background_color = tuple([int(x * 255) for x in kwargs.get("high_res_image_mean")]) |
I see it is copied from the "slow" processor, but it looks weird. I guess this was meant to use `image_mean` and, if that isn't set, fall back to `high_res_image_mean`. Can we prettify a bit here for readability?
Yes, I meant to ask you; I'm not sure what the original code intended to do in the slow processor. I copied it anyway for consistency, but it doesn't really make sense: the second assignment overwrites the first, so the value will always depend only on `high_res_image_mean`. Maybe it would make more sense to have two different background colors?
The values for high res and low res are identical, which is probably why we didn't see issues. Having two values makes sense, even if they end up being the same value.
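For illustration, here is a minimal sketch of the two-value fix discussed above. The helper name and the `high_res_background_color` variable are assumptions for this sketch, not the final API:

```python
def _background_colors(kwargs):
    """Hypothetical helper: derive separate low-res and high-res background
    colors instead of letting the second assignment overwrite the first."""
    image_mean = kwargs.get("image_mean")
    high_res_image_mean = kwargs.get("high_res_image_mean")
    # Low-res background color, from image_mean when provided.
    background_color = (
        (127, 127, 127) if image_mean is None else tuple(int(x * 255) for x in image_mean)
    )
    # High-res background color, derived independently (this is the overwrite
    # bug in the copied snippet above).
    high_res_background_color = (
        (127, 127, 127)
        if high_res_image_mean is None
        else tuple(int(x * 255) for x in high_res_image_mean)
    )
    return background_color, high_res_background_color
```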
```python
for shape, stacked_high_res_padded_images in high_res_padded_images.items():
    if do_resize:
        stacked_images = self.resize(
            image=stacked_high_res_padded_images, size=size, min_size=min_size, interpolation=interpolation
        )
```
Hmm, in the slow processors, for the plain pixel_values we don't resize the padded high-res images but the original PIL images. Can you verify this is correct?
Actually, I struggled to get the equivalence tests to pass because of that, but it looks like we do resize from the resized (and padded, since padding is done inside resize in the slow processor) high-res images in the slow image processor.
Equivalence tests pass like this, but didn't pass when resizing from the original PIL image.
Happy to change both the slow and fast image processors if that's not intended, but that would be a breaking change.
Oh right, the naming is the same lol. I guess that's intended then, because the converted model passes the equivalence tests. It just looks weird.
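To pin down the order of operations being verified in this thread, a hedged sketch using PIL: the function name, sizes, and helpers below are illustrative assumptions, not the actual DeepseekVLHybrid processor API.

```python
from PIL import ImageOps

def process(images, high_res_size=1024, size=384, background_color=(127, 127, 127)):
    """Illustrative only: mirrors the order of operations discussed above."""
    high_res, low_res = [], []
    for img in images:
        # 1. High-res branch: resize and pad to a square canvas (in the slow
        #    processor, padding happens inside resize()).
        hr = ImageOps.pad(img, (high_res_size, high_res_size), color=background_color)
        # 2. Low-res pixel_values: resized *from the padded high-res image*,
        #    not from the original PIL image.
        lr = hr.resize((size, size))
        high_res.append(hr)
        low_res.append(lr)
    return high_res, low_res
```

Resizing from the padded image bakes the padding color into the low-res output, which is why, per the discussion above, equivalence with the slow processor only holds with this order.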
…b.com/yonigozlan/transformers into add-fast-image-proc-janus-deepseek-vl
[For maintainers] Suggested jobs to run (before merge) run-slow: auto, deepseek_vl, deepseek_vl_hybrid, janus
…ingface#39739) * add fast image processor Janus, deepseek_vl, deepseek_vl_hybrid * fix after review
As the title says.
Cc @zucchini-nlp as I think you reviewed these models?
Also, it would be great to have fast image processors available on release for the newest models; don't hesitate to ping me on those PRs, I'm happy to help!