[decomp] Remove pixel_shuffle from core aten decomps #118921
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/118921. ✅ No failures as of commit 1f02229 with merge base 6692f2c. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
@angelayi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
(Force-pushed from f7233c7 to eecb6d6.)
What's the rationale for this function being a core aten op? The initial rationale was that it is "difficult / impossible to write a decomp for it", and this is clearly not the case.
pixel_shuffle is a core aten op (https://pytorch.org/docs/main/torch.compiler_ir.html#core-aten-ir), so we should not decompose it. #118239 added a decomp for it, which is causing a failure in an internal test (https://www.internalfb.com/intern/test/281475090561210/) that special-cases on the pixel_shuffle operator.
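For context, a pixel_shuffle decomposition can be written entirely in terms of view/permute/reshape primitives. The sketch below is illustrative only and is not necessarily identical to the decomp added in #118239:

```python
import torch

def pixel_shuffle_decomp(x: torch.Tensor, upscale_factor: int) -> torch.Tensor:
    # pixel_shuffle rearranges (N, C*r*r, H, W) -> (N, C, H*r, W*r).
    # Illustrative sketch for 4D inputs; the upstream decomp may differ
    # (e.g. in how it handles arbitrary leading batch dimensions).
    n, c, h, w = x.shape
    r = upscale_factor
    oc = c // (r * r)
    x = x.view(n, oc, r, r, h, w)          # split channel dim into (oc, r, r)
    x = x.permute(0, 1, 4, 2, 5, 3)        # interleave: (n, oc, h, r, w, r)
    return x.reshape(n, oc, h * r, w * r)  # merge spatial blocks
```

For 4D inputs this matches `torch.nn.functional.pixel_shuffle` bit-for-bit, since both perform the same data movement.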
(Force-pushed from eecb6d6 to 1f02229.)
Sure, I'm not opposed to removing it from the core aten ops, but I will land this first in order to fix the internal issues.
Couldn't a fix also be to simply remove the core aten tag?
No, because someone internally is special-casing on this operator to do some quantization, but I can follow up to see if I can rewrite it to not special-case on this operator.
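To illustrate what "special-casing on this operator" means here: a graph pass might look for pixel_shuffle call nodes directly, and once the op is decomposed into view/permute/reshape, such a match never fires. A minimal hypothetical sketch using torch.fx (the actual internal quantization pass is not shown in this thread, and this helper is an assumption for illustration):

```python
import torch
import torch.fx

def find_op_nodes(gm: torch.fx.GraphModule, target):
    # Collect call_function nodes whose target is the given op/function.
    # A pass that special-cases pixel_shuffle would branch on these matches;
    # after decomposition, only view/permute/reshape nodes remain and the
    # match comes back empty.
    return [
        n for n in gm.graph.nodes
        if n.op == "call_function" and n.target is target
    ]

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.pixel_shuffle(x, 2)

gm = torch.fx.symbolic_trace(M())
matches = find_op_nodes(gm, torch.nn.functional.pixel_shuffle)
```

Here `matches` contains the single pixel_shuffle node in the traced graph; running the same search over a graph where the op was decomposed would return an empty list.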
SGTM as a temporary fix. Moving forward, this should probably be a decomposition rather than a core aten op. @angelayi to follow up on this point.
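For reference, whether an op currently has a registered decomposition can be checked against the default decomposition table via `torch._decomp`. This is a private API, so treat the sketch below as version-dependent; whether the pixel_shuffle key is present depends on which of the PRs in this thread are in your build:

```python
import torch
from torch._decomp import core_aten_decompositions

# Build the default core-ATen decomposition table (a dict mapping
# OpOverloads to their decomposition functions) and check whether
# pixel_shuffle is decomposed in this particular PyTorch build.
table = core_aten_decompositions()
has_decomp = torch.ops.aten.pixel_shuffle.default in table
print(f"pixel_shuffle decomposed: {has_decomp}")
```

This is the table consumed by export/compile flows; removing an entry from it (as this PR does) means the op is preserved in the graph rather than lowered to its constituent view/permute/reshape ops.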
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
pixel_shuffle is a core aten op (https://pytorch.org/docs/main/torch.compiler_ir.html#core-aten-ir), so we should not decompose it. #118239 added a decomp for it, which is causing a failure in an internal test (https://www.internalfb.com/intern/test/281475090561210/) that special-cases on the pixel_shuffle operator.
Pull Request resolved: #118921
Approved by: https://github.com/SherlockNoMad, https://github.com/lezcano
@angelayi any updates on this one?
ping @angelayi
Sorry for the delay! We fixed the internal use case, so we can remove it from the core aten list now.
Great! Thank you! @isuruf mind reverting this PR?
Oh, I reverted it in #119899
Summary: #118239 added a decomposition for pixel_shuffle, so pixel_shuffle no longer needs to be a Core ATen Op. We have also fixed the internal use case so that it no longer special-cases on pixel_shuffle, allowing us to revert the changes in #118921.
Test Plan: CI
Differential Revision: D53766709
Pull Request resolved: #119899
Approved by: https://github.com/peterbell10, https://github.com/lezcano
Summary: #118239 added a decomposition for pixel_shuffle, so pixel_shuffle no longer needs to be a Core ATen Op. We have also fixed the internal use case so that it no longer special-cases on pixel_shuffle, allowing us to revert the changes in #118921.
Test Plan: CI
Differential Revision: D53860966
Pull Request resolved: #120092
Approved by: https://github.com/ydwu4
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler