[MTIA Aten Backend] Migrate "_unsafe_view" and "view" ops from out-of-tree to pytorch in-tree #153670
Conversation
This appears to be a diff that was exported from Phabricator, but the PR author does not have sufficient permissions to run CI. @andyanwang, please do step 2 of the internal wiki to get write access so you do not need CI approvals in the future. If you think this is a mistake, please contact the PyTorch Dev Infra team.
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153670

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 unrelated failure) As of commit 44086b6 with merge base 3aa8477.

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D74672464
…-tree to pytorch in-tree (pytorch#153670)

Summary:

# Context

The MTIA New Aten Backend work is essentially to move MTIA operators from pytorch out-of-tree to in-tree, with the following benefits:
1. Avoid duplicate code copied from pytorch, e.g. view ops implementation, util functions.
2. Utilize TensorIterator and structured kernel codegen, and avoid manual implementation of broadcasting, dtype casting, asserting, etc.
3. Eliminate MTIA's own codegen flow, which is unnecessary complexity.
4. Overall, make MTIA's aten backend more pytorch-native.

Differential Revision: D74672464
Force-pushed from 5bb6aca to 8b59023.
Attention! native_functions.yaml was changed

If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one that adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.

Caused by:
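For context, entries in native_functions.yaml declare an operator's schema and map dispatch keys to kernels; migrating an op in-tree means adding the backend to that mapping so the generated dispatcher routes calls to it. A rough sketch of what such an entry looks like (the exact fields for view in-tree may differ, and this fragment is illustrative rather than copied from this PR):

```yaml
# Sketch of a native_functions.yaml entry. Listing a backend under
# "dispatch" routes calls to that backend's kernel via codegen,
# replacing a separately maintained out-of-tree registration.
- func: view(Tensor(a) self, SymInt[] size) -> Tensor(a)
  variants: method
  dispatch:
    CPU, CUDA, MTIA: view
```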
@pytorchbot label "topic: not user facing"
Force-pushed from 8b59023 to 44086b6.
Summary:

Context

The MTIA New Aten Backend work is essentially to move MTIA operators from pytorch out-of-tree to in-tree, with the following benefits:
1. Avoid duplicate code copied from pytorch, e.g. view ops implementation, util functions.
2. Utilize TensorIterator and structured kernel codegen, and avoid manual implementation of broadcasting, dtype casting, asserting, etc.
3. Eliminate MTIA's own codegen flow, which is unnecessary complexity.
4. Overall, make MTIA's aten backend more pytorch-native.

Differential Revision: D74672464
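To illustrate the semantics that view and _unsafe_view kernels must preserve, a view returns a tensor that aliases the original storage rather than copying it. The following is a minimal pure-Python toy sketching that contract; it is unrelated to the actual MTIA implementation, and the Tensor class here is entirely hypothetical:

```python
class Tensor:
    """Toy tensor: flat storage plus a shape, to illustrate view semantics."""

    def __init__(self, storage, shape):
        self.storage = storage  # shared list, standing in for device memory
        self.shape = shape

    def numel(self):
        n = 1
        for d in self.shape:
            n *= d
        return n

    def view(self, *shape):
        # A view must alias the same storage, never copy it,
        # and the new shape must cover the same number of elements.
        new = Tensor(self.storage, shape)
        assert new.numel() == self.numel(), "view size mismatch"
        return new


x = Tensor(list(range(6)), (6,))
y = x.view(2, 3)
y.storage[0] = 42         # a write through the view...
print(x.storage[0])       # ...is visible in the original: prints 42
```

Because every backend's view op must honor this aliasing contract identically, implementing it once in-tree (rather than per-backend out-of-tree) removes a class of subtle divergence bugs.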