[ONNX] Inline prim::PythonOp for Autograd Function Export #74765
Conversation
✅ No failures (0 pending) as of commit 4f9ce2c — looks good so far; there are no failures yet. (Automatically generated by Dr. CI; more details on the Dr. CI page. Please report bugs/suggestions to the internal Dr. CI Users group.)
Force-pushed from f9ae28e to 562e715
Thanks Shubham! This will make for a much better user experience when exporting autograd Functions.
Please rebase with master again to resolve the conflict.
Force-pushed from 687300f to 6b4be40
Please rebase with master to resolve the conflict.
Force-pushed from 11ae6e2 to 81c2eb4
@@ -590,17 +626,26 @@ static void _trace_post_record(
    auto unpacked = graph->createTupleUnpack(node->output())->insertAfter(node);
    node = unpacked;
  }

  std::vector<torch::jit::Node*> subgraph_trace_outputs;
ping
Please rebase with master, then check the ONNX CI; the failures may be related.
Force-pushed from 08876fa to c0c4af5
Review threads on torch/csrc/jit/passes/onnx/pattern_conversion/autograd_function_process.cpp (three, one marked outdated) — all resolved.
namespace torch {
namespace jit {

void convertSubgraphToSubBlock(Block* block) {
nit: I wonder if we could create the sub-block directly for the autograd node? Though that might affect other existing JIT passes.
Will create an issue to optimize this conversion.
Please comment with a link to the issue if possible. A sketch of what this conversion involves follows below.
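For readers following this thread: the sketch below shows what a subgraph-to-sub-block conversion like `convertSubgraphToSubBlock` involves, assuming the standard `torch::jit` IR APIs. The function name `subgraphToSubBlock` and its overall shape are illustrative only; this is not the actual pass from the PR.

```cpp
#include <torch/csrc/jit/ir/ir.h>

#include <unordered_map>

using namespace torch::jit;

// Illustrative sketch: move the Graph held in a node's Subgraph attribute
// into a sub-block owned by the node itself, cloning nodes and remapping
// values, so later passes can see through the wrapper node.
void subgraphToSubBlock(Node* node) {
  std::shared_ptr<Graph> subgraph = node->g(attr::Subgraph);
  Block* body = node->addBlock();

  // Map each subgraph input to a fresh block input so cloned nodes resolve.
  std::unordered_map<Value*, Value*> env;
  for (Value* in : subgraph->inputs()) {
    env[in] = body->addInput()->copyMetadata(in);
  }

  // Clone the subgraph's nodes into the block, rewiring uses through `env`.
  for (Node* n : subgraph->nodes()) {
    Node* clone = body->appendNode(body->owningGraph()->createClone(
        n, [&](Value* v) { return env.at(v); }));
    for (size_t i = 0; i < n->outputs().size(); ++i) {
      env[n->outputs()[i]] = clone->outputs()[i];
    }
  }

  // Expose the subgraph's outputs as outputs of the new block.
  for (Value* out : subgraph->outputs()) {
    body->registerOutput(env.at(out));
  }
  node->removeAttribute(attr::Subgraph);
}
```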
Force-pushed from 8310353 to 4f9ce2c
@albanD Rebased, all CI is green except …
Any updates @albanD?
@malfet please help import and land this. I think it's dragging on too long, and @shubhambhokare1 might need another rebase now, since the last rebase was done for the merge.
@malfet has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@pytorchbot merge -f "Internal changes are OK"
@pytorchbot successfully started a merge job. Check the current status here.
…74765)

Summary:
Add a flag (inline_autograd) to enable inline export of models containing autograd functions. Currently, this flag should only be used in TrainingMode.EVAL and not for training.

An example: if a model contains an ``autograd.Function`` such as

```
class AutogradFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        result = result.log()
        ctx.save_for_backward(result)
        return result
```

then the model is exported as

```
graph(%0 : Float):
  %1 : Float = ^AutogradFunc(%0)
  return (%1)
```

If inline_autograd is set to True, this will instead be exported as

```
graph(%0 : Float):
  %1 : Float = onnx::Exp(%0)
  %2 : Float = onnx::Log(%1)
  return (%2)
```

If one of the ops within the autograd function is not supported, that particular node is exported as-is, mirroring ONNX_FALLTHROUGH mode.

Fixes: #61813

Pull Request resolved: #74765
Approved by: https://github.com/BowenBao, https://github.com/malfet

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/95d873855e6b5a7b44e102d3aec81d6db3215c0f
Original Phabricator Test Plan: Imported from GitHub, without a `Test Plan:` line.

Reviewed By: george-qi, kit1980

Differential Revision: D37738323

fbshipit-source-id: 03ff75a809403b134c2a545952706cbeac8d0065
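As a usage illustration (a minimal sketch, not from the PR itself): assuming the flag lands as an `inline_autograd` keyword on `torch.onnx.export`, as the summary describes, exporting the example above might look like the following. The `Model` wrapper and the output file name are made up for this example.

```python
import torch

class AutogradFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        result = result.log()
        ctx.save_for_backward(result)
        return result

# Hypothetical wrapper module so there is something to export.
class Model(torch.nn.Module):
    def forward(self, x):
        return AutogradFunc.apply(x)

model = Model().eval()  # per the summary, the flag is EVAL-only for now
x = torch.randn(2, 3)

# `inline_autograd=True` is the flag named in the PR summary; its exact
# placement on torch.onnx.export is an assumption here. Unsupported ops
# inside the autograd function would be left as-is, as with
# OperatorExportTypes.ONNX_FALLTHROUGH.
torch.onnx.export(
    model,
    (x,),
    "model.onnx",
    training=torch.onnx.TrainingMode.EVAL,
    inline_autograd=True,
)
```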