
Is it possible to add a parameter in torch.onnx.export to skip the prim::PythonOp subgraph process when exporting the autograd function? #90263


Closed
cslearner opened this issue Dec 6, 2022 · 3 comments
Labels
module: onnx Related to torch.onnx triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@cslearner

🚀 The feature, motivation and pitch

Torch 1.13 adds a new feature: inlined prim::PythonOp for autograd function export (#74765). When tracing, all operations inside an autograd function are recorded as a subgraph of the autograd function node. However, this subgraph appears to be useless when a custom symbolic function is implemented in the autograd function definition or registered via register_custom_op_symbolic, in which case the autograd function is exported as a single node anyway.
In my case, some operations in my custom autograd functions may also be incompatible with this subgraph processing or the passes that follow. Would it be possible to add a parameter to the torch.onnx.export interface to control this subgraph tracing, or to simply skip subgraph tracing when a symbolic function is registered via register_custom_op_symbolic or implemented in the autograd function definition?
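
To make the scenario concrete, here is a minimal sketch (the op and module names are illustrative, not from my code): an autograd function that defines its own symbolic static method, so it should export as a single custom node, yet tracing still records its forward ops as a prim::PythonOp subgraph first.

```python
import torch


class MyRelu(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x > 0).type_as(grad_out)

    @staticmethod
    def symbolic(g, x):
        # Export the whole function as one custom ONNX node.
        return g.op("custom_domain::MyRelu", x)


class Model(torch.nn.Module):
    def forward(self, x):
        return MyRelu.apply(x)


torch.onnx.export(
    Model(),
    torch.randn(2, 3),
    "model.onnx",
    opset_version=13,
    custom_opsets={"custom_domain": 1},
)
```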

Alternatives

No response

Additional context

No response

@bdhirsh added the module: onnx and triaged labels Dec 7, 2022
@cslearner (Author)

I tried setting GLOBALS.in_onnx_export = False to skip subgraph generation for autograd functions during ONNX export. However, this value always reads as true in the C++ code. Since is_in_onnx_export is a Python function, shouldn't this line actually be py::module::import("torch.onnx.__init__").attr("is_in_onnx_export")() — that is, is it currently missing the trailing () call?
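
For reference, a rough sketch of what I am trying on the Python side (GLOBALS lives in the private torch.onnx._globals module in torch 1.13, so this relies on internal API and is an assumption about where the flag is kept):

```python
import torch
from torch.onnx._globals import GLOBALS  # private module in torch 1.13

# The flag the Python side consults:
print(torch.onnx.is_in_onnx_export())  # False outside an export call

# The attempted override; per the observation above, the C++ check
# still sees the value as true during export.
GLOBALS.in_onnx_export = False
```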

@thiagocrepaldi (Collaborator)
thiagocrepaldi commented Feb 27, 2023

@cslearner prim::PythonOp is a direct consequence of having operator_export_type=ONNX_FALLTHROUGH. I haven't tested, but have you tried using operator_export_type=ONNX or operator_export_type=ONNX_ATEN_FALLBACK instead?
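
For concreteness, something along these lines (untested, with a placeholder model standing in for one that uses an autograd function):

```python
import torch

model = torch.nn.Linear(3, 3)  # stand-in for a model using an autograd function

# Pass an explicit operator_export_type so the exporter does not fall
# through to prim::PythonOp (torch 1.13 export signature).
torch.onnx.export(
    model,
    torch.randn(2, 3),
    "model.onnx",
    opset_version=13,
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```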

@shubhambhokare1 do you think it would be possible to replace prim::PythonOp with com.microsoft::PythonOp? ORT introduced com.microsoft::PythonOp for training purposes, but maybe this feature could be adapted to also work for inference. @pengwa and @wschin, what do you think?

@pengwa
pengwa commented Feb 28, 2023

I think the ask here is to have an option to disable autograd function inlining in the torch exporter. Is it possible to do that? (I had the same question too :))

I guess they might not be using ORT for training or inference. @thiagocrepaldi

@abock added this to ONNX Jun 14, 2023
@github-project-automation bot moved this to Inbox in ONNX Jun 14, 2023
@justinchuby closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 13, 2025
@github-project-automation bot moved this from Inbox to Done in ONNX Mar 13, 2025
Projects
Status: Done
Development

No branches or pull requests

6 participants