Description
🐛 Describe the bug
I’m attempting to export my model to ONNX using opset version 18, but I’m encountering an error related to the aten::linalg_inv
operator not being supported. I’ve tried multiple opset versions and updated all relevant packages, but the issue persists.
The error message I receive is:
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::linalg_inv' to ONNX opset version 18 is not supported.
Environment Details:
- Torch version: 2.4.1+cu124
- Torchvision version: 2.4.1+cu124
- ONNX version: 1.16.2
- ONNXRuntime version: 1.19.2 (including GPU support)
- OS: Windows 11
Steps to Reproduce:
- Define a model using the torch.linalg.inv() operation.
- Attempt to export the model to ONNX using torch.onnx.export.
- The export fails due to the unsupported aten::linalg_inv operator.
Here’s a sample of the export code I’m using:
input_names = ['source', 'target_frame', 'source_head_segmap', 'seg_target', 'source_head_blend']
output_names = ['output']
torch.onnx.export(
    reconstruction_module,
    (source, target_frame, source_head_segmap, seg_target, source_head_blend),
    onnx_output_path,
    export_params=True,
    opset_version=18,
    do_constant_folding=True,
    input_names=input_names,
    output_names=output_names,
    verbose=True
)
Expected Behavior:
The model should export successfully to ONNX format, with the linalg_inv operation supported.
Actual Behavior:
The export fails with the following error message:
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::linalg_inv' to ONNX opset version 18 is not supported.
What I’ve Tried:
- Tested opset versions 11 through 18.
- Updated all related packages (torch, torchvision, onnx, onnxruntime, onnxruntime-gpu).
- Searched for potential workarounds or alternatives, but found no solutions that maintain the model’s functionality.
Request:
Could you please provide support for the aten::linalg_inv operator in ONNX export, or suggest a potential workaround that retains the matrix inversion functionality?
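If the matrices being inverted have a small fixed size, one workaround that stays entirely within standard, exportable ONNX ops is a closed-form inverse. Below is a minimal sketch for batches of 2x2 matrices (inv_2x2 is a hypothetical helper name, and it assumes the matrices are non-singular); the same cofactor approach extends to 3x3:

```python
import torch

def inv_2x2(m: torch.Tensor) -> torch.Tensor:
    # Closed-form inverse of a batch of 2x2 matrices with shape (..., 2, 2),
    # built only from indexing, stack, and elementwise ops, all of which
    # have standard ONNX exports. Assumes det != 0.
    a = m[..., 0, 0]
    b = m[..., 0, 1]
    c = m[..., 1, 0]
    d = m[..., 1, 1]
    det = a * d - b * c
    # Adjugate matrix: [[d, -b], [-c, a]]
    adj = torch.stack([
        torch.stack([d, -b], dim=-1),
        torch.stack([-c, a], dim=-1),
    ], dim=-2)
    return adj / det[..., None, None]
```

Replacing torch.linalg.inv with such a helper in the model definition sidesteps the unsupported aten::linalg_inv node entirely, at the cost of hard-coding the matrix size.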
Thank you for your time and assistance!
Versions
- torch: 2.4.1+cu124
- torchvision: 2.4.1+cu124
- onnxruntime: 1.19.2
- onnxruntime-gpu: 1.19.2
- onnx: 1.16.2
- OS: Windows 11
- CUDA: 12.5
- GPU: 4060