[WiP] Add EoMT Model #37610
Conversation
Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the "Ready for review" button.
Image segmentation model, so cc @qubvel @NielsRogge!
@qubvel A rough draft is ready for inference 🤗. Now I am adding support for training; the original implementation uses mask annealing to determine the probability for attention masks. Is it required in this HF implementation as well (I don't see it in any other model), or are we fine with having a fixed probability for the attention mask?
Hey @yaswanth19, do you mean the mask probability changes during model training? Would be nice to have, but not sure it can be easily implemented tbh. Maybe we can just add a callback for the Trainer to change it? Similar to a learning-rate callback (I mean not adding it to Transformers itself, but to the docs/fine-tuning guide).
Yup exactly, and adding a Trainer callback seems like a good idea. I will check the feasibility of the implementation; if it's simple enough we can implement it in the model itself, else pivot to a Trainer callback.
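As a rough illustration of the callback idea (not part of this PR), a minimal sketch could look like the following, assuming the EoMT model exposes a mutable `attn_mask_probability` attribute; that attribute name and the linear schedule are assumptions, not the final API:

```python
from transformers import TrainerCallback


class MaskAnnealingCallback(TrainerCallback):
    """Sketch only: linearly anneal the attention-mask probability over training.
    `attn_mask_probability` is an assumed attribute name on the EoMT model."""

    def __init__(self, model, start_prob: float = 1.0, end_prob: float = 0.0):
        self.model = model
        self.start_prob = start_prob
        self.end_prob = end_prob

    def on_step_begin(self, args, state, control, **kwargs):
        # Interpolate linearly from start_prob to end_prob across all training steps.
        if state.max_steps:
            progress = state.global_step / state.max_steps
            self.model.attn_mask_probability = (
                self.start_prob + (self.end_prob - self.start_prob) * progress
            )
```

Such a callback would be passed via `Trainer(..., callbacks=[MaskAnnealingCallback(model)])`, keeping the annealing schedule out of the modeling code, as discussed above.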
@yaswanth19 Thanks for your great work! I have a few thoughts; let me know what you think.
Thanks @tommiekerssies for your initial thoughts.
Yup, will add the complete test suite once I have an implementation ready.
Thanks for bringing this to my attention. I can make some corrections to the initialization later on, once we have the end-to-end code ready. IMO, most users don't init from scratch and will either finetune the model or just perform inference. That said, I will look at the timm implementation and init in the same way.
Ideally yes 😅 But that's not the library coding standard (don't want to introduce a hard dependency on timm).
Transformers has a one model, one file philosophy, and because of that I have copied the Mask2Former loss completely here. It's a subjective call with the modular file, in the sense that we could expose the Mask2Former loss and import it here for EoMT (which would require additional changes in mask2former), but that can be discussed during reviews with the core maintainer.
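As a hedged sketch of the modular-file alternative mentioned above (whether the loss can be reused without touching mask2former is exactly the open question), a hypothetical modular_eomt.py could subclass the existing loss and let the modular converter expand it into the generated modeling file:

```python
# modular_eomt.py (sketch, assuming the Mask2Former loss can be reused unchanged)
from transformers.models.mask2former.modeling_mask2former import Mask2FormerLoss


class EomtLoss(Mask2FormerLoss):
    # No overrides needed if the matching and loss terms are identical;
    # the modular converter would inline this into the generated modeling file.
    pass
```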
Thanks for the clarifications! Regarding mask annealing, I agree that 0 for now is fine. That means effectively disabling masked attention and mask annealing, which is what the current code already does, so no changes needed on that front. For weight initialization and the ViT backbone, I understand the constraints around using timm. In that case, I'd just make sure that the non-ViT parameters (query embeddings, mask MLP, upscale blocks, class head) aren't using any custom initializations and instead follow PyTorch defaults. Should be a quick fix. Let me know if you'd like me to look at any part in more detail.
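For illustration only, one way to follow that suggestion in the usual `_init_weights` hook is to custom-initialize just the ViT-style backbone modules and let every other module fall through to PyTorch defaults; the `is_backbone` flag below is a hypothetical marker used only in this sketch, not the actual EoMT implementation:

```python
import torch.nn as nn


def _init_weights(self, module):
    # Sketch: custom init only for Linear/LayerNorm layers that belong to the
    # ViT backbone (marked here with a hypothetical `is_backbone` flag).
    # Query embeddings, mask MLP, upscale blocks and the class head hit no
    # branch, so they keep PyTorch's default initialization.
    if getattr(module, "is_backbone", False):
        if isinstance(module, nn.Linear):
            nn.init.trunc_normal_(module.weight, std=0.02)
            if module.bias is not None:
                nn.init.zeros_(module.bias)
        elif isinstance(module, nn.LayerNorm):
            nn.init.ones_(module.weight)
            nn.init.zeros_(module.bias)
```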
Hi @qubvel, I'm working on refactoring the training logic for EoMT, and I'm running into a design challenge: in the original single-class implementation, they call _predict (which uses the class predictor head) on intermediate layer outputs to build attention masks. Because everything lives in one class, this is straightforward. Refer: https://github.com/tue-mps/eomt/blob/c311b377d3189c976163e4ceb2156d90bb7db88f/models/eomt.py#L130

In our modular HF version, the encoder (EoMTEncoder) only runs the transformer blocks, and _predict (with mask_head, upscale_block, and class_predictor) lives in EoMTModel or EoMTForUniversalSegmentation. That separation means the encoder loop can't access _predict, so we can't reconstruct the original training flow.

I have two solutions in mind; LMK your thoughts on the approaches below and suggest any better alternative:

1.) Club all the classes into a single one so the encoder loop can call _predict directly, mirroring the original implementation.

2.) Move _predict, which includes mask_head and upscale_block, into EoMTEncoder and somehow pass the class_head. Pass these modules into the encoder class so that inside its forward loop it can call _predict, build the attention mask, and feed it into the next block. IMO this is a bit dirty and the flow is tangled 😅.

Here the _predict func is the same code that Mask2Former uses to get mask_logits and class_logits from the model output.
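To make option 2 above more concrete, here is a rough sketch; all names (run_blocks_with_mask_annealing, predict_fn, num_masked_layers) and the thresholding rule are assumptions for illustration, not the actual EoMT code:

```python
def run_blocks_with_mask_annealing(layers, hidden_states, predict_fn, num_masked_layers):
    """Sketch of option 2: iterate over the transformer blocks and, for the last
    `num_masked_layers` blocks, call the shared prediction logic (`_predict`,
    i.e. mask head + class head) on the intermediate states to build an
    attention mask that is fed to the next block."""
    attention_mask = None
    for idx, block in enumerate(layers):
        hidden_states = block(hidden_states, attention_mask=attention_mask)
        if idx >= len(layers) - num_masked_layers:
            mask_logits, _class_logits = predict_fn(hidden_states)
            # Binarize the predicted masks into an attention mask for the next
            # block (thresholding is a simplification of the original rule).
            attention_mask = mask_logits.sigmoid() > 0.5
    return hidden_states
```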
@qubvel A ping for your comments on the above issue/challenge so that I can refactor it this weekend 😅
@yaswanth19 can you send me your email address so we can set up a Slack channel to discuss the integration, rather than having to discuss everything on the Github thread?
Yup sure @NielsRogge, here is my email id: yaswanthgali8@gmail.com
Nice work @yaswanth19! For fixing the initialisation, maybe you could have a look here, as the Mask2Former implementation in HuggingFace suffers from the same bug: #35877 (comment)
What does this PR do?
Fixes #37171 and is a continuation of #37392
What's Left: