Add Idefics2/3 and SmolVLM Fast image processors + improvements for fast image processors #38157
Conversation
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the …
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
The branch was force-pushed from c6f31c4 to cd404b7.
Hey! Very very nice PR 🤗, looks like a clear win and it's nice to have the benchmarks attached 🤗
Would only try to simplify `group_images_by_shape` in order to avoid having the exact same logic twice!
src/transformers/image_transforms.py
Outdated
```python
grouped_images = {}
grouped_images_index = {}
for i, image in enumerate(images):
    shape = image.shape[1:]
    if shape not in grouped_images:
        grouped_images[shape] = []
    grouped_images[shape].append(image)
    grouped_images_index[i] = (shape, len(grouped_images[shape]) - 1)
```
This is the same as `_flatten_nested_images` with one level less, no? Maybe we could have a function doing both and use it here to group things by semantics?
Or otherwise we could maybe add a dummy nesting dim if it's not nested, and only unnest for the return? Because we basically do the exact same thing twice with an extra dimension in the `if not is_nested` branch and the other branch, so it would be cleaner to only do it once.
> Or otherwise we could maybe add a dummy nesting dim if it's not nested, and only unnest for the return? Because we basically do the exact same thing twice with an extra dimension in the `if not is_nested` branch and the other branch, so it would be cleaner to only do it once.
I think this could get confusing, and would need some reworking for processors that currently use `grouped_images_index` between `group_images_by_shape` and `reorder_images`, such as Pixtral.
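For readers following along, the pattern under discussion is the group/process/reorder round-trip used by the fast image processors. Here is a minimal, self-contained sketch of that round-trip, not the actual transformers implementation; the shapes and the resize step are just for illustration:

```python
import torch


def group_images_by_shape(images: list["torch.Tensor"]):
    """Group a flat list of image tensors by (H, W) so same-shaped images can be batched."""
    grouped_lists: dict[tuple[int, int], list["torch.Tensor"]] = {}
    grouped_images_index: dict[int, tuple[tuple[int, int], int]] = {}
    for i, image in enumerate(images):
        shape = tuple(image.shape[1:])  # (H, W), channel dim excluded
        grouped_lists.setdefault(shape, []).append(image)
        grouped_images_index[i] = (shape, len(grouped_lists[shape]) - 1)
    # Stack each group into a single batched tensor
    grouped_images = {shape: torch.stack(group) for shape, group in grouped_lists.items()}
    return grouped_images, grouped_images_index


def reorder_images(processed_images, grouped_images_index):
    """Put per-group results back into the original flat order."""
    return [processed_images[shape][idx] for shape, idx in grouped_images_index.values()]


# Round-trip example: resize everything to 112x112, batching same-shaped images together
images = [torch.rand(3, 224, 224), torch.rand(3, 336, 336), torch.rand(3, 224, 224)]
grouped, index = group_images_by_shape(images)
processed = {
    shape: torch.nn.functional.interpolate(batch, size=(112, 112), mode="bilinear")
    for shape, batch in grouped.items()
}
outputs = reorder_images(processed, index)
assert len(outputs) == 3 and outputs[0].shape == (3, 112, 112)
```

The nested case adds one more level of indexing on top of this, which is the duplication the comments above are trying to avoid.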
```python
    processed_images: dict[tuple[int, int], "torch.Tensor"],
    grouped_images_index: dict[Union[int, tuple[int, int]], tuple[tuple[int, int], int]],
    is_nested: bool = False,
) -> list["torch.Tensor"]:
```
Maybe just infer `is_nested` from the data instead of having it as an arg everywhere in the functions?
I don't think there is an easy way to infer this: some image processing methods, notably the ones that split images into patches, take non-nested images as input but output patches that are not easily distinguishable from nested images. For the former we need `is_nested=False` and for the latter `is_nested=True`.
Alright then, thanks for the explanation
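To make the point above concrete, here is a small illustration (with hypothetical shapes) of why the flag cannot be inferred from the data: a batch of images split into patches and a genuinely nested batch look structurally identical.

```python
import torch

# A non-nested batch of 2 images that the processor split into 4 patches each...
patched = [[torch.rand(3, 112, 112) for _ in range(4)] for _ in range(2)]

# ...versus a genuinely nested input: 2 samples, each containing 4 separate images
nested = [[torch.rand(3, 112, 112) for _ in range(4)] for _ in range(2)]


def shapes(batch):
    return [[tuple(t.shape) for t in inner] for inner in batch]


# Same nesting depth, same tensor shapes: nothing in the data distinguishes them.
# Only the caller knows which case it is, hence the explicit is_nested argument.
assert shapes(patched) == shapes(nested)
```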
src/transformers/utils/args_doc.py
Outdated
```python
disable_grouping = {
    "description": """
        Whether to disable grouping of images by size to process them individually and not in batches.
    """,
    "shape": None,
```
Would be nice to add a line about the benchmark -> on GPU it's much better to group, on CPU it isn't - just so that users have an idea of the impact.
I think you forgot here haha - but fine if not super relevant in general (though for idefics and smolvlm it's clear)
Ah you're right, added comments about that elsewhere but forgot to add it here. Adding it now!
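As a rough sketch of the guidance in that comment (grouping pays off on GPU, much less so on CPU), a caller could resolve an unset `disable_grouping` from the device of the inputs. This is only an illustration of the idea, with a hypothetical helper name, not the processors' actual default logic:

```python
from typing import Optional

import torch


def resolve_disable_grouping(images: list["torch.Tensor"], disable_grouping: Optional[bool]) -> bool:
    """Hypothetical helper: if the caller didn't choose, disable grouping on CPU
    (batched processing of same-shaped images brings little benefit there) and
    keep it enabled on GPU (where batching is significantly faster)."""
    if disable_grouping is not None:
        return disable_grouping
    device = images[0].device if images else torch.device("cpu")
    return device.type == "cpu"


# CPU tensors -> grouping disabled, images processed individually
print(resolve_disable_grouping([torch.rand(3, 224, 224)], disable_grouping=None))  # True
```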
```python
if is_torch_available():
    import torch
    pass
```
the usual
Nice! This feels much less redundant than before, great work! 🤗 Feel free to merge after the last very small nits
What does this PR do?
Several things added to this PR:
Thanks a lot to @sushmanthreddy and @rootonchair for their PRs on idefics2/3 image processors (here and here)
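As a quick usage reference, the new fast processors should be reachable through the usual `use_fast=True` switch; a small sketch (the checkpoint name and image URL are just examples):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

# Example checkpoint; any Idefics2/3 or SmolVLM checkpoint should work once this PR is merged
processor = AutoImageProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct", use_fast=True)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Fast processors return torch tensors; same-shaped images are processed as batches
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)
```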
Here are the results for Idefics2 and Idefics3/SmolVLM (benchmark charts attached in the PR):
- Idefics2 time per image, with and without varying image sizes
- Idefics2 speedups, with and without varying image sizes
- Idefics3/SmolVLM time per image, with and without varying image sizes
- Idefics3/SmolVLM speedups, with and without varying image sizes