Add Idefics2/3 and SmolVLM Fast image processors + improvements for fast image processors by yonigozlan · Pull Request #38157 · huggingface/transformers

Merged

Conversation

yonigozlan
Member
@yonigozlan yonigozlan commented May 15, 2025

What does this PR do?

Several things added to this PR:

  • Idefics2/3 + SmolVLM fast image processors. Cc @andimarafioti :)
  • Improvements to the base fast image processors to better handle nested images
    • group_images_by_shape and reorder_images can now handle nested images, flattening them for processing and then rebuilding the original nesting
  • Improvements/uniformization of fast image processor tests (use torch.testing.assert_close)
  • Grouping is now disabled by default when processing on CPU and enabled by default on GPU for all processors. As the benchmarks below suggest, grouping images is almost always slower on CPU but almost always faster on GPU. This seems to be the case for other image processors as well.
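To make the group-then-reorder idea above concrete, here is a minimal illustrative sketch (not the actual transformers implementation; function names and the nested-list "images" are stand-ins for the real tensor-based helpers): same-shape images are batched together for processing, and an index remembers where each image came from so the original order can be rebuilt.

```python
# Illustrative sketch of grouping images by shape for batched processing,
# then restoring the original order. Images are nested lists standing in
# for tensors; helper names are hypothetical.

def group_by_shape(images):
    """Map each shape to the images that have it, and record each image's
    (shape, position-in-group) so the flat order can be rebuilt later."""
    grouped, index = {}, {}
    for i, image in enumerate(images):
        shape = (len(image), len(image[0]))  # (height, width)
        grouped.setdefault(shape, []).append(image)
        index[i] = (shape, len(grouped[shape]) - 1)
    return grouped, index

def reorder(processed, index):
    """Rebuild the flat list in its original order from grouped results."""
    return [processed[shape][pos] for shape, pos in (index[i] for i in sorted(index))]

# Two 2x2 images and one 1x3 image
imgs = [[[1, 2], [3, 4]], [[0, 0, 0]], [[5, 6], [7, 8]]]
grouped, index = group_by_shape(imgs)
# "Process" each same-shape group as one batch (here: negate every pixel)
processed = {s: [[[-v for v in row] for row in im] for im in batch]
             for s, batch in grouped.items()}
restored = reorder(processed, index)
```

The nested-image support added in this PR works the same way, with an extra flatten/unflatten step around this core.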

Thanks a lot to @sushmanthreddy and @rootonchair for their PRs on idefics2/3 image processors (here and here)

Here are the results for idefics2 and idefics3/smolvlm:

Idefics2 time per image:

[image: time_per_image_all_configs]

With different image sizes:

[images: time_per_image_all_configs ×4]

Idefics2 speedups:

[image: speedup_vs_slow]

With different image sizes:

[images: speedup_vs_slow ×4]

Idefics3/SmolVLM time per image:

[image: time_per_image_all_configs]

With different image sizes:

[images: time_per_image_all_configs ×4]

Idefics3/SmolVLM speedups:

[image: speedup_vs_slow]

With different image sizes:

[images: speedup_vs_slow ×4]

Contributor

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

@github-actions github-actions bot marked this pull request as draft May 15, 2025 15:17
@yonigozlan yonigozlan marked this pull request as ready for review May 15, 2025 15:20
@yonigozlan yonigozlan requested a review from ArthurZucker May 15, 2025 15:41
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yonigozlan yonigozlan force-pushed the improve-support-nested-fast-image-proc branch from c6f31c4 to cd404b7 Compare May 15, 2025 19:36
@yonigozlan yonigozlan mentioned this pull request May 15, 2025
5 tasks
Member
@Cyrilvallez Cyrilvallez left a comment


Hey! Very very nice PR 🤗, looks like a clear win and it's nice to have the benchmarks attached 🤗
Would only try to simplify group_images_by_shape in order to avoid having the exact same logic twice!

Comment on lines 917 to 924
grouped_images = {}
grouped_images_index = {}
for i, image in enumerate(images):
    shape = image.shape[1:]
    if shape not in grouped_images:
        grouped_images[shape] = []
    grouped_images[shape].append(image)
    grouped_images_index[i] = (shape, len(grouped_images[shape]) - 1)
Member

This is the same as _flatten_nested_images with one level less no? Maybe we could have a function doing both and use it here to group things by semantic?

Member

Or otherwise we could maybe add a dummy nesting dim if it's not nested, and only unnest for the return? Because we basically do the exact same things twice with an extra dimension in the if not is_nested and the other branch, so would be cleaner to only do it once
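The dummy-nesting idea suggested here could look roughly like the following sketch (hypothetical helper names, nested lists standing in for tensors): a flat input is wrapped in one extra level so that a single nested-aware code path handles both cases, and the index keys are flattened back for the non-nested return.

```python
# Sketch of the "dummy nesting dim" suggestion: run one nested-aware
# grouping path for both flat and nested inputs.

def group_nested(nested_images):
    """Group images across a list-of-lists by shape; index keys are (outer, inner)."""
    grouped, index = {}, {}
    for outer, inner_list in enumerate(nested_images):
        for inner, image in enumerate(inner_list):
            shape = (len(image), len(image[0]))
            grouped.setdefault(shape, []).append(image)
            index[(outer, inner)] = (shape, len(grouped[shape]) - 1)
    return grouped, index

def group_images(images, is_nested):
    if not is_nested:
        images = [images]                 # add a dummy nesting level
    grouped, index = group_nested(images)
    if not is_nested:
        # unnest only for the return: flatten (0, i) keys back to plain ints
        index = {inner: v for (_, inner), v in index.items()}
    return grouped, index

flat_grouped, flat_index = group_images([[[1, 2], [3, 4]], [[0, 0, 0]]], is_nested=False)
```

As discussed below, this changes the shape of grouped_images_index for flat inputs unless the keys are flattened, which is why the author found it risky for processors that consume the index directly.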

Member Author

Or otherwise we could maybe add a dummy nesting dim if it's not nested, and only unnest for the return? Because we basically do the exact same things twice with an extra dimension in the if not is_nested and the other branch, so would be cleaner to only do it once

I think this could get confusing, and it would need some reworking for processors like pixtral that currently use grouped_images_index between group_images_by_shape and reorder_images.

    processed_images: dict[tuple[int, int], "torch.Tensor"],
    grouped_images_index: dict[Union[int, tuple[int, int]], tuple[tuple[int, int], int]],
    is_nested: bool = False,
) -> list["torch.Tensor"]:
Member

Maybe just infer is_nested from data instead of having it as an arg everywhere in the functions?

Member Author

I don't think there is an easy way to infer it. Some image processing methods, notably the ones that split images into patches, take non-nested images as input but output patches that are not easily distinguishable from nested images. For the former we need is_nested=False, while for the latter we need is_nested=True.
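The ambiguity described here can be shown with a small illustrative example (nested lists standing in for tensors; split_into_patches is a hypothetical helper): splitting a single image into patches produces exactly the same list-of-lists structure as a genuinely nested batch, so only the caller knows which case it is.

```python
# Why is_nested cannot be inferred from the data alone: one image split into
# patches is structurally identical to a nested batch of small images.

def split_into_patches(image, size):
    """Split an H x W image (nested list) into size x size patches, row-major."""
    return [
        [row[c:c + size] for row in image[r:r + size]]
        for r in range(0, len(image), size)
        for c in range(0, len(image[0]), size)
    ]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
patches = split_into_patches(image, 2)               # one image -> list of 2x2 "images"
nested_batch = [[[1, 2], [5, 6]], [[3, 4], [7, 8]]]  # a real nested input

# patches == nested_batch, so the two cases are indistinguishable by structure
```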

Member

Alright then, thanks for the explanation

Comment on lines 207 to 211
disable_grouping = {
    "description": """
Whether to disable grouping of images by size to process them individually and not in batches.
""",
    "shape": None,
Member

Would be nice to add a line about the benchmark -> on gpu much better to do it, on cpu no - just so that users have an idea of the impact
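The device-dependent default being discussed could be resolved along these lines (a hypothetical sketch, not the actual transformers code; the function name is made up):

```python
# Hypothetical sketch of the default this PR describes: grouping enabled on
# GPU, disabled on CPU, unless the caller sets disable_grouping explicitly.

def resolve_disable_grouping(disable_grouping, device):
    """Return the effective disable_grouping flag.

    The PR's benchmarks show grouping by shape is almost always faster on GPU
    and almost always slower on CPU, hence the device-dependent default.
    """
    if disable_grouping is not None:
        return disable_grouping        # explicit user choice wins
    return device == "cpu"             # default: group on GPU, don't on CPU
```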

Member

I think you forgot here haha - but fine if not super relevant in general (though for idefics and smolvlm it's clear)

Member Author

Ah you're right, added comments about that elsewhere but forgot to add it here. Adding it now!

Comment on lines 26 to 27
if is_torch_available():
    import torch
    pass
Member

the usual

@yonigozlan yonigozlan requested a review from Cyrilvallez June 18, 2025 20:03
Member
@Cyrilvallez Cyrilvallez left a comment


Nice! This feels much less redundant than before, great work! 🤗 Feel free to merge after the last very small nits

Comment on lines 207 to 211
disable_grouping = {
    "description": """
Whether to disable grouping of images by size to process them individually and not in batches.
""",
    "shape": None,
Member

I think you forgot here haha - but fine if not super relevant in general (though for idefics and smolvlm it's clear)

@yonigozlan yonigozlan enabled auto-merge (squash) June 23, 2025 14:13
@yonigozlan yonigozlan merged commit d29482c into huggingface:main Jun 23, 2025
20 checks passed