[decomp] Remove pixel_shuffle from core aten decomps by angelayi · Pull Request #118921 · pytorch/pytorch · GitHub

[decomp] Remove pixel_shuffle from core aten decomps #118921


Closed
wants to merge 1 commit into from

Conversation

angelayi
Contributor
@angelayi commented Feb 1, 2024

pytorch-bot bot commented Feb 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118921


✅ No Failures

As of commit 1f02229 with merge base 6692f2c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

@angelayi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@lezcano
Collaborator
lezcano commented Feb 2, 2024

What's the rationale for this function to be a core aten op? The initial rationale was that it is "difficult / impossible to write a decomp for it", and this is clearly not the case.
Should we revise this one and potentially remove it from the core aten ops?

pixel_shuffle is a core aten op
(https://pytorch.org/docs/main/torch.compiler_ir.html#core-aten-ir), so
we should not decompose it.

#118239 added a decomp for it,
which is causing an internal test failure
(https://www.internalfb.com/intern/test/281475090561210/) that
special-cases on the pixel_shuffle operator.
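For context, the decomposition being removed rewrites pixel_shuffle as a reshape/permute/reshape sequence. Below is a minimal NumPy sketch of that rearrangement (an illustration of the idea, not PyTorch's actual decomp code): it maps an input of shape (N, C*r^2, H, W) to (N, C, H*r, W*r).

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, upscale_factor: int) -> np.ndarray:
    """Rearrange (N, C*r^2, H, W) -> (N, C, H*r, W*r) via reshape/permute."""
    n, c_r2, h, w = x.shape
    r = upscale_factor
    c = c_r2 // (r * r)
    # Split the channel dim into (C, r, r)...
    x = x.reshape(n, c, r, r, h, w)
    # ...interleave the two r factors with the spatial dims: (N, C, H, r, W, r)...
    x = x.transpose(0, 1, 4, 2, 5, 3)
    # ...and fold them into the enlarged spatial dims.
    return x.reshape(n, c, h * r, w * r)

x = np.arange(16).reshape(1, 4, 2, 2)
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 1, 4, 4)
```

Because the whole operation is expressible with plain reshapes and a transpose, it fails the "difficult / impossible to write a decomp" criterion lezcano raises above, which is why the follow-up discussion moves toward dropping the core aten tag instead.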
@angelayi angelayi force-pushed the angelayi/remove_pixel branch from eecb6d6 to 1f02229 Compare February 2, 2024 17:50
@facebook-github-bot
Contributor

@angelayi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@angelayi
Contributor Author
angelayi commented Feb 2, 2024

Should we revise this one and potentially remove it from the core aten ops?

Sure, I'm not opposed to removing it from the core aten ops, but I will land this in order to fix the internal issues.

@lezcano
Collaborator
lezcano commented Feb 3, 2024

Couldn't a fix also be to simply remove the core aten tag?
If you want to land this one first that's alright, but could you follow up with one removing this function from the core ATen ops and adding it as a decomp as an example, so that we know how to do this in the future?

@angelayi
Contributor Author
angelayi commented Feb 3, 2024

Couldn't a fix also be to simply remove the core aten tag?

No, because someone internally is special-casing on this operator to do some quantization, but I can follow up to see if I can rewrite it to not special-case on this operator.

Collaborator
@lezcano lezcano left a comment


SGTM as a temporary fix. Moving forward this should probably be a decomposition rather than a core aten op. @angelayi to follow up on this point.

@facebook-github-bot
Contributor

@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


pytorch-bot bot pushed a commit that referenced this pull request Feb 8, 2024
pixel_shuffle is a core aten op
(https://pytorch.org/docs/main/torch.compiler_ir.html#core-aten-ir) so we should not decompose it.

#118239 added a decomp for it, which is causing an internal test failure
(https://www.internalfb.com/intern/test/281475090561210/) that special-cases on the pixel_shuffle operator.

Pull Request resolved: #118921
Approved by: https://github.com/SherlockNoMad, https://github.com/lezcano
@lezcano
Collaborator
lezcano commented Feb 9, 2024

@angelayi any updates on this one?

@lezcano
Collaborator
lezcano commented Feb 14, 2024

ping @angelayi

angelayi added a commit to angelayi/pytorch that referenced this pull request Feb 14, 2024
Summary: pytorch#118239 added a decomposition for pixel_shuffle. We have also fixed the internal use case so that it no longer special cases on pixel_shuffle, allowing us to revert the changes in pytorch#118921

Test Plan: CI

Differential Revision: D53766709
@angelayi
Contributor Author

Sorry for the delay! We fixed the internal use case, so we can remove it from the core aten list now.

@lezcano
Collaborator
lezcano commented Feb 14, 2024

Great! Thank you! @isuruf mind reverting this PR?

@angelayi
Contributor Author

Oh, I reverted it in #119899

pytorchmergebot pushed a commit that referenced this pull request Feb 14, 2024
Summary: #118239 added a decomposition for pixel_shuffle, so pixel_shuffle no longer needs to be a Core ATen Op. We have also fixed the internal use case so that it no longer special cases on pixel_shuffle, allowing us to revert the changes in #118921.

Test Plan: CI

Differential Revision: D53766709

Pull Request resolved: #119899
Approved by: https://github.com/peterbell10, https://github.com/lezcano
angelayi added a commit to angelayi/pytorch that referenced this pull request Feb 16, 2024
Summary:

pytorch#118239 added a decomposition
for pixel_shuffle, so pixel_shuffle no longer needs to be a Core ATen Op. We
have also fixed the internal use case so that it no longer special cases on
pixel_shuffle, allowing us to revert the changes in
pytorch#118921.

Test Plan: CI

Differential Revision: D53860966
pytorchmergebot pushed a commit that referenced this pull request Feb 20, 2024
Summary:
#118239 added a decomposition
for pixel_shuffle, so pixel_shuffle no longer needs to be a Core ATen Op. We
have also fixed the internal use case so that it no longer special cases on
pixel_shuffle, allowing us to revert the changes in
#118921.

Test Plan: CI

Differential Revision: D53860966

Pull Request resolved: #120092
Approved by: https://github.com/ydwu4
@github-actions github-actions bot deleted the angelayi/remove_pixel branch March 16, 2024 01:49
6 participants