Introduce aoti_call_delegate HOP #145630
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/145630
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (4 Unrelated Failures)
As of commit 133a4e8 with merge base 5a527fa.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D68359391
Summary: Previously, the AOTI compile node was represented as a kernel-less custom op in the exported program. That node was not runnable in eager mode, which is a common practice for numerical validation during lowering. This PR introduces a new HOP to address this.

The schema is as follows:

```
aoti_call_delegate(lowered_module: AOTInductorEPModule, original_gm: fx.GraphModule, weights: List[Tensor], inputs: List[Tensor])
```

A few problems are exposed by the HOP:
- AOTI expects an FX graph with weights as getattr nodes, i.e. a stateful graph, whereas HOPs expect graph_module arguments to be stateless and the export serializer also expects a stateless graph. Currently, to make AOTI happy, `original_gm` is kept stateful and its serialization is bypassed.
- As a result, the HOP is not re-traceable, since functionalization on a stateful graph module argument will fail.

Test Plan: buck2 test 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:cpu_lowering_utils_test

Reviewed By: zhxchen17

Differential Revision: D68359391
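For illustration, here is a minimal sketch (assumptions, not the PR's actual implementation) of how a delegate-call HOP with a schema like the one above can be made eager-runnable: define the operator and register a dense implementation that simply runs the original graph module, which is what numerical validation during lowering needs. The names `AOTICallDelegateSketch`, `aoti_call_delegate_sketch`, and `_eager_impl` are hypothetical.

```python
from typing import List

import torch
from torch._C import DispatchKey
from torch._ops import HigherOrderOperator


class AOTICallDelegateSketch(HigherOrderOperator):
    # Hypothetical HOP mirroring the schema:
    # (lowered_module, original_gm, weights, inputs) -> outputs
    def __init__(self) -> None:
        super().__init__("aoti_call_delegate_sketch")


aoti_call_delegate_sketch = AOTICallDelegateSketch()


@aoti_call_delegate_sketch.py_impl(DispatchKey.CompositeExplicitAutograd)
def _eager_impl(
    lowered_module,
    original_gm: torch.fx.GraphModule,
    weights: List[torch.Tensor],
    inputs: List[torch.Tensor],
) -> List[torch.Tensor]:
    # Eager fallback: ignore the compiled delegate and run the original graph.
    # Because `original_gm` is stateful here (weights live as getattr nodes),
    # only the runtime inputs are passed; `weights` stays in the schema so the
    # dependency is visible to tracing and serialization.
    return original_gm(*inputs)
```

A complete operator would additionally need registrations for autograd, functionalization, and proxy tracing (which is where the stateful-graph limitation described above shows up); this sketch only covers the dense eager path.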
AOTI_LOWERED_MODULE = "AOTInductorEPModule"

class AOTICallDelegate(HigherOrderOperator):
Is it valuable to make this op more general? Couldn't it also work for MTIA too?
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.