Custom autograd functions are not inlined when export mode is ONNX_ATEN_FALLBACK #85027
Labels
module: onnx
Related to torch.onnx
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
I am working with a partner who is using operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK to export a greater number of models without error, with the trade-off that inference runs with the PyTorch ATen library included.
They are trying to add support for models that contain custom autograd functions and have been testing with PyTorch nightly (since this PR went in: #74765).
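The setup can be sketched minimally as follows. This is a hypothetical example, not the partner's code: `MyRelu` and `Model` are illustrative names, and the export call is wrapped in a try/except so the script also runs on builds where this export currently fails.

```python
import io
import torch

# Hypothetical custom autograd function whose forward uses only
# ops that ONNX can represent (clamp), so inlining should be possible.
class MyRelu(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0).type_as(grad_output)

class Model(torch.nn.Module):
    def forward(self, x):
        return MyRelu.apply(x)

# Export with the ATen fallback mode described in this issue. Per the
# report, this currently raises instead of inlining the function, so the
# failure is caught and printed rather than allowed to crash the repro.
buf = io.BytesIO()
try:
    torch.onnx.export(
        Model(),
        torch.randn(2, 3),
        buf,
        operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
    )
    print("export succeeded")
except Exception as e:
    print(f"export failed: {e}")
```

Swapping the `operator_export_type` value for `ONNX` or `ONNX_FALLTHROUGH` in the same script reproduces the other behaviors listed below.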
What we would like to see is:
- When operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK and all functionality is supported by ONNX, then any custom autograd functions are inlined and the graph is exported with all ONNX ops.
- When there are aten ops in the custom autograd function, then any custom autograd functions are inlined and the graph is exported with the aten ops.
What we observe currently:
- operator_export_type=OperatorExportTypes.ONNX
  - If the functionality in the custom autograd function is supported by ONNX ops, the graph is exported to ONNX correctly.
  - Export fails if the custom autograd function has an aten op, with the error message "Could not export operator aten:XXX".
- operator_export_type=OperatorExportTypes.ONNX_FALLTHROUGH
  - The custom autograd function is exported as a PythonOp node, whether or not its functionality is supported by ONNX.
- operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK
  - Export fails with the error "Couldn't export Python operator".
A small sample demonstrating the above findings is here: https://github.com/natke/custom_autograd/blob/main/model.py.
Versions