onnx export error #107922
Comments
Please share the onnx version you use. ONNX Script requires an onnx version >= 1.14. Thanks!
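A minimal sketch for printing the installed versions (not from the thread; it uses importlib.metadata so it works whether or not the packages expose __version__):

# Sketch: report the installed onnx / onnxscript versions.
from importlib.metadata import version

for pkg in ("onnx", "onnxscript"):
    print(pkg, version(pkg))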
windows:
linux:
Could you install the latest onnx (1.15) and onnxscript (uninstall onnxscript-preview first)? Thanks!
Traceback (most recent call last):
pip install --upgrade onnxscript
pip install packaging. We will fix this in the next release.
PS F:\ai_export> pip install --upgrade onnxscript

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
After installing, it still doesn't work.
dynamo doesn't support Windows yet. I suggest using WSL or, if you like, hacking the check_if_dynamo_supported function above to always return True.
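A hedged sketch of that workaround (unsupported, at your own risk; it assumes the check lives in torch._dynamo.eval_frame, which is where it sits in recent PyTorch releases, and it may not be enough to make export actually succeed on Windows):

# Unsupported workaround sketch: neutralize dynamo's platform check so the
# dynamo-based export path does not bail out on Windows. The real function
# raises on unsupported platforms, so a no-op stands in for "supported".
import torch._dynamo.eval_frame as eval_frame

eval_frame.check_if_dynamo_supported = lambda: None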
Looks like a duplicate of #112844.
pip install --upgrade onnxscript packaging

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
Using WSL, it still doesn't work.
Please check the function call as the signature has changed:
There are also known issues reported in #112844.
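For reference, a hedged sketch of the updated call shape, mirroring the export code posted later in this thread (model, args, and path are placeholders for your module, example inputs, and output file):

# Sketch: dynamo-based export via torch.onnx.export(..., dynamo=True),
# as used in the export_datacov_onnx function further down this thread.
import torch

torch.onnx.export(
    model,
    args,
    path,
    input_names=["wav_data"],
    output_names=["ans"],
    dynamo=True,
    report=True,
)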
python ./main.py

from user code:

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
import torch

class DataCov(nn.Module):
    ...

def export():
    ...

if __name__ == '__main__':
    ...
@WangHHY19931001 Hi, did you solve this error?
It still errors:

/home/yafeng_wang/miniconda3/envs/onnx_export/lib/python3.12/site-packages/torch/onnx/_internal/exporter.py:137: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
Set the following environment variable.
https://github.com/pytorch/pytorch/blob/main/torch/onnx/__init__.py#L473 |
It still doesn't work.
From my checking, dynamo export works, but the subsequent code in the PyTorch tutorials does not, because all of the code there assumes the legacy exporter.
We are in the process of updating the tutorials. For now, please test with
Yes, as I said, export works. I could export my model.
PyTorch ONNX Conversion Report
Error messages

No errors

Exported program

ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, b_transform_0_spectrogram_window: "f32[1536]", b_transform_0_mel_scale_fb: "f32[769, 128]", x1: "f32[1, 1, 576000]"):
# File: /home/dingli/miniconda3/envs/ai_mos_export_onnx/lib/python3.12/site-packages/torchaudio/transforms/_transforms.py:110 in forward, code: return F.spectrogram(
view: "f32[1, 576000]" = torch.ops.aten.view.default(x1, [-1, 576000]); x1 = None
view_1: "f32[1, 1, 576000]" = torch.ops.aten.view.default(view, [1, 1, 576000]); view = None
reflection_pad1d: "f32[1, 1, 577536]" = torch.ops.aten.reflection_pad1d.default(view_1, [768, 768]); view_1 = None
view_2: "f32[1, 577536]" = torch.ops.aten.view.default(reflection_pad1d, [1, 577536]); reflection_pad1d = None
unfold: "f32[1, 751, 1536]" = torch.ops.aten.unfold.default(view_2, -1, 1536, 768); view_2 = None
mul: "f32[1, 751, 1536]" = torch.ops.aten.mul.Tensor(unfold, b_transform_0_spectrogram_window); unfold = b_transform_0_spectrogram_window = None
_fft_r2c: "c64[1, 751, 769]" = torch.ops.aten._fft_r2c.default(mul, [2], 0, True); mul = None
transpose: "c64[1, 769, 751]" = torch.ops.aten.transpose.int(_fft_r2c, 1, 2); _fft_r2c = None
view_3: "c64[1, 1, 769, 751]" = torch.ops.aten.view.default(transpose, [1, 1, 769, 751]); transpose = None
abs_1: "f32[1, 1, 769, 751]" = torch.ops.aten.abs.default(view_3); view_3 = None
pow_1: "f32[1, 1, 769, 751]" = torch.ops.aten.pow.Tensor_Scalar(abs_1, 2.0); abs_1 = None
# File: /home/dingli/miniconda3/envs/ai_mos_export_onnx/lib/python3.12/site-packages/torchaudio/transforms/_transforms.py:412 in forward, code: mel_specgram = torch.matmul(specgram.transpose(-1, -2), self.fb).transpose(-1, -2)
transpose_1: "f32[1, 1, 751, 769]" = torch.ops.aten.transpose.int(pow_1, -1, -2); pow_1 = None
matmul: "f32[1, 1, 751, 128]" = torch.ops.aten.matmul.default(transpose_1, b_transform_0_mel_scale_fb); transpose_1 = b_transform_0_mel_scale_fb = None
transpose_2: "f32[1, 1, 128, 751]" = torch.ops.aten.transpose.int(matmul, -1, -2); matmul = None
return (transpose_2,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.BUFFER: 3>, arg=TensorArgument(name='b_transform_0_spectrogram_window'), target='transform.0.spectrogram.window', persistent=True), InputSpec(kind=<InputKind.BUFFER: 3>, arg=TensorArgument(name='b_transform_0_mel_scale_fb'), target='transform.0.mel_scale.fb', persistent=True), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='transpose_2'), target=None)])
Range constraints: {}

ONNX model

<
ir_version=10,
opset_imports={'pkg.onnxscript.torch_lib.common': 1, '': 18, 'pkg.onnxscript.torch_lib': 1},
producer_name='pytorch',
producer_version='2.7.0.dev20250107+cpu',
domain=None,
model_version=None,
>
graph(
name=main_graph,
inputs=(
%"wav_data"<FLOAT,[1,1,576000]>
),
outputs=(
%"ans"<FLOAT,[1,1,128,751]>
),
initializers=(
%"transform.0.spectrogram.window"<FLOAT,[1536]>,
%"transform.0.mel_scale.fb"<FLOAT,[769,128]>
),
) {
0 | # node_Constant_0
%"val_0"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[2]>(array([ -1, 576000]), name=None)}
1 | # node_Cast_1
%"val_1"<?,?> ⬅️ ::Cast(%"val_0") {to=INT64}
2 | # node_Reshape_2
%"view"<FLOAT,[1,576000]> ⬅️ ::Reshape(%"wav_data", %"val_1") {allowzero=0}
3 | # node_Constant_3
%"val_2"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[3]>(array([ 1, 1, 576000]), name=None)}
4 | # node_Cast_4
%"val_3"<?,?> ⬅️ ::Cast(%"val_2") {to=INT64}
5 | # node_Reshape_5
%"view_1"<FLOAT,[1,1,576000]> ⬅️ ::Reshape(%"view", %"val_3") {allowzero=0}
6 | # node_Constant_6
%"val_4"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[2]>(array([768, 768]), name=None)}
7 | # node_aten_reflection_pad1d_7
%"reflection_pad1d"<FLOAT,[1,1,577536]> ⬅️ pkg.onnxscript.torch_lib::aten_reflection_pad1d(%"view_1", %"val_4")
8 | # node_Constant_8
%"val_5"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[2]>(array([ 1, 577536]), name=None)}
9 | # node_Cast_9
%"val_6"<?,?> ⬅️ ::Cast(%"val_5") {to=INT64}
10 | # node_Reshape_10
%"view_2"<FLOAT,[1,577536]> ⬅️ ::Reshape(%"reflection_pad1d", %"val_6") {allowzero=0}
11 | # node__aten_unfold_onnx_11
%"unfold"<FLOAT,[1,751,1536]> ⬅️ pkg.onnxscript.torch_lib::_aten_unfold_onnx(%"view_2") {dim=1, size=1536, step=768, target_end=751, perm=[0, 1, 2]}
12 | # node_Mul_12
%"mul"<FLOAT,[1,751,1536]> ⬅️ ::Mul(%"unfold", %"transform.0.spectrogram.window")
13 | # node_Constant_13
%"val_7"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[1]>(array([-1]), name=None)}
14 | # node_Unsqueeze_14
%"val_8"<?,?> ⬅️ ::Unsqueeze(%"mul", %"val_7")
15 | # node_Constant_15
%"val_9"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[1]>(array([0]), name=None)}
16 | # node_Unsqueeze_16
%"val_10"<?,?> ⬅️ ::Unsqueeze(%"val_8", %"val_9")
17 | # node_DFT_17
%"val_11"<?,?> ⬅️ ::DFT(%"val_10") {axis=3, inverse=False, onesided=True}
18 | # node_Squeeze_18
%"_fft_r2c"<FLOAT,[1,751,769,2]> ⬅️ ::Squeeze(%"val_11", %"val_9")
19 | # node_Shape_19
%"val_12"<?,?> ⬅️ ::Shape(%"val_8") {start=0}
20 | # node_Constant_20
%"val_13"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[1]>(array([2]), name=None)}
21 | # node_Gather_21
%"val_14"<?,?> ⬅️ ::Gather(%"val_12", %"val_13") {axis=0}
22 | # node_ReduceProd_22
%"val_15"<?,?> ⬅️ ::ReduceProd(%"val_14") {keepdims=0, noop_with_empty_axes=0}
23 | # node_CastLike_23
%"val_16"<?,?> ⬅️ ::CastLike(%"val_15", %"_fft_r2c")
24 | # node_Transpose_24
%"transpose"<FLOAT,[1,769,751,2]> ⬅️ ::Transpose(%"_fft_r2c") {perm=[0, 2, 1, 3]}
25 | # node_Constant_25
%"val_17"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[4]>(array([ 1, 1, 769, 751]), name=None)}
26 | # node_aten_view_complex_26
%"view_3"<FLOAT,[1,1,769,751,2]> ⬅️ pkg.onnxscript.torch_lib::aten_view_complex(%"transpose", %"val_17")
27 | # node_ReduceL2_27
%"abs_1"<FLOAT,[1,1,769,751]> ⬅️ ::ReduceL2(%"view_3", %"val_7") {keepdims=False, noop_with_empty_axes=0}
28 | # node_Constant_28
%"val_18"<?,?> ⬅️ ::Constant() {value=Tensor<FLOAT,[]>(array(2., dtype=float32), name=None)}
29 | # node_Pow_29
%"pow_1"<FLOAT,[1,1,769,751]> ⬅️ ::Pow(%"abs_1", %"val_18")
30 | # node_Transpose_30
%"transpose_1"<FLOAT,[1,1,751,769]> ⬅️ ::Transpose(%"pow_1") {perm=[0, 1, 3, 2]}
31 | # node_aten_matmul_31
%"matmul"<FLOAT,[1,1,751,128]> ⬅️ pkg.onnxscript.torch_lib::aten_matmul(%"transpose_1", %"transform.0.mel_scale.fb")
32 | # node_Transpose_32
%"ans"<FLOAT,[1,1,128,751]> ⬅️ ::Transpose(%"matmul") {perm=[0, 1, 3, 2]}
return %"ans"<FLOAT,[1,1,128,751]>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib::aten_reflection_pad1d(
inputs=(
%"self"<?,?>,
%"padding"<?,?>
),
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"int64_0_1d"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_0_1d')}
1 | # n1
%"int64_1_1d"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_1_1d')}
2 | # n2
%"int64_0_1d_0"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_0_1d_0')}
3 | # n3
%"start"<?,?> ⬅️ ::Slice(%"padding", %"int64_0_1d", %"int64_1_1d", %"int64_0_1d_0")
4 | # n4
%"int64_1_1d_1"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_1_1d_1')}
5 | # n5
%"int64_2_1d"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_2_1d')}
6 | # n6
%"int64_0_1d_2"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_0_1d_2')}
7 | # n7
%"end"<?,?> ⬅️ ::Slice(%"padding", %"int64_1_1d_1", %"int64_2_1d", %"int64_0_1d_2")
8 | # n8
%"tmp"<?,?> ⬅️ ::Constant() {value_ints=[0]}
9 | # n9
%"tmp_3"<?,?> ⬅️ ::Constant() {value_ints=[0]}
10 | # n10
%"padding_onnx"<?,?> ⬅️ ::Concat(%"tmp", %"start", %"tmp_3", %"end") {axis=0}
11 | # n11
%"return_val"<?,?> ⬅️ ::Pad(%"self", %"padding_onnx") {mode=reflect}
return %"return_val"<?,?>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib::_aten_unfold_onnx(
inputs=(
%"self"<?,?>
),
attributes={
dim: UNDEFINED,
size: UNDEFINED,
step: UNDEFINED,
target_end: UNDEFINED,
perm: UNDEFINED
}
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"tmp"<?,?> ⬅️ ::Constant() {value_int=RefAttr('value_int', INT, ref_attr_name='dim')}
1 | # n1
%"tmp_0"<?,?> ⬅️ ::Constant() {value_ints=[-1]}
2 | # n2
%"dims"<?,?> ⬅️ ::Reshape(%"tmp", %"tmp_0")
3 | # n3
%"seq_result"<?,?> ⬅️ ::SequenceEmpty()
4 | # n4
%"i"<?,?> ⬅️ ::Constant() {value_int=0}
5 | # n5
%"target_end"<?,?> ⬅️ ::Constant() {value_int=RefAttr('value_int', INT, ref_attr_name='target_end')}
6 | # n6
%"target_end_cast"<?,?> ⬅️ ::CastLike(%"target_end", %"i")
7 | # n7
%"cond"<?,?> ⬅️ ::Less(%"i", %"target_end_cast")
8 | # n8
%"seq_result_8"<?,?>, %"i_9"<?,?> ⬅️ ::Loop(None, %"cond", %"seq_result", %"i") {body=
graph(
name=loop_body,
inputs=(
%"infinite_loop"<INT64,[]>,
%"cond"<BOOL,[]>,
%"seq_result_1"<?,?>,
%"i_2"<?,?>
),
outputs=(
%"cond_out"<BOOL,[]>,
%"seq_result_4"<?,?>,
%"i_5"<?,?>
),
) {
0 | # n0
%"step"<?,?> ⬅️ ::Constant() {value_int=RefAttr('value_int', INT, ref_attr_name='step')}
1 | # n1
%"step_cast"<?,?> ⬅️ ::CastLike(%"step", %"i_2")
2 | # n2
%"tmp_3"<?,?> ⬅️ ::Mul(%"i_2", %"step_cast")
3 | # n3
%"int64_m1_1d"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[1]>(name='int64_m1_1d')}
4 | # n4
%"starts"<?,?> ⬅️ ::Reshape(%"tmp_3", %"int64_m1_1d")
5 | # n5
%"size"<?,?> ⬅️ ::Constant() {value_int=RefAttr('value_int', INT, ref_attr_name='size')}
6 | # n6
%"size_cast"<?,?> ⬅️ ::CastLike(%"size", %"starts")
7 | # n7
%"ends"<?,?> ⬅️ ::Add(%"starts", %"size_cast")
8 | # n8
%"slice_result"<?,?> ⬅️ ::Slice(%"self", %"starts", %"ends", %"dims")
9 | # n9
%"slice_result_float32"<?,?> ⬅️ ::Cast(%"slice_result") {to=1}
10 | # n10
%"seq_result_4"<?,?> ⬅️ ::SequenceInsert(%"seq_result_1", %"slice_result_float32")
11 | # n11
%"int64_1"<?,?> ⬅️ ::Constant() {value=TensorProtoTensor<INT64,[]>(name='int64_1')}
12 | # n12
%"int64_1_cast"<?,?> ⬅️ ::CastLike(%"int64_1", %"i_2")
13 | # n13
%"i_5"<?,?> ⬅️ ::Add(%"i_2", %"int64_1_cast")
14 | # n14
%"target_end_6"<?,?> ⬅️ ::Constant() {value_int=RefAttr('value_int', INT, ref_attr_name='target_end')}
15 | # n15
%"target_end_6_cast"<?,?> ⬅️ ::CastLike(%"target_end_6", %"i_5")
16 | # n16
%"cond_7"<?,?> ⬅️ ::Less(%"i_5", %"target_end_6_cast")
17 | # n17
%"cond_out"<BOOL,[]> ⬅️ ::Identity(%"cond_7")
return %"cond_out"<BOOL,[]>, %"seq_result_4"<?,?>, %"i_5"<?,?>
}}
9 | # n9
%"concat_result"<?,?> ⬅️ ::ConcatFromSequence(%"seq_result_8") {axis=RefAttr('axis', INT, ref_attr_name='dim'), new_axis=1}
10 | # n10
%"result"<?,?> ⬅️ ::Transpose(%"concat_result") {perm=RefAttr('perm', INTS, ref_attr_name='perm')}
11 | # n11
%"return_val"<?,?> ⬅️ ::CastLike(%"result", %"self")
return %"return_val"<?,?>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib::aten_view_complex(
inputs=(
%"self"<?,?>,
%"size"<?,?>
),
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"size_0"<?,?> ⬅️ ::Cast(%"size") {to=7}
1 | # n1
%"tmp"<?,?> ⬅️ ::Constant() {value_ints=[2]}
2 | # n2
%"complex_size"<?,?> ⬅️ ::Concat(%"size_0", %"tmp") {axis=0}
3 | # n3
%"return_val"<?,?> ⬅️ ::Reshape(%"self", %"complex_size")
return %"return_val"<?,?>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib::aten_matmul(
inputs=(
%"self"<?,?>,
%"other"<?,?>
),
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"return_val"<?,?> ⬅️ ::MatMul(%"self", %"other")
return %"return_val"<?,?>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib.common::Rank(
inputs=(
%"input"<?,?>
),
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"tmp"<?,?> ⬅️ ::Shape(%"input")
1 | # n1
%"return_val"<?,?> ⬅️ ::Size(%"tmp")
return %"return_val"<?,?>
}
<
opset_imports={'': 18},
>
def pkg.onnxscript.torch_lib.common::IsScalar(
inputs=(
%"input"<?,?>
),
outputs=(
%"return_val"<?,?>
),
) {
0 | # n0
%"tmp"<?,?> ⬅️ ::Shape(%"input")
1 | # n1
%"tmp_0"<?,?> ⬅️ ::Size(%"tmp")
2 | # n2
%"tmp_1"<?,?> ⬅️ ::Constant() {value_int=0}
3 | # n3
%"return_val"<?,?> ⬅️ ::Equal(%"tmp_0", %"tmp_1")
return %"return_val"<?,?>
}

Analysis

PyTorch ONNX Conversion Analysis

Model Information

The model has 0 parameters and 99968 buffers (non-trainable parameters).

Number of parameters per dtype: defaultdict(<class 'int'>, {})
Number of buffers per dtype: defaultdict(<class 'int'>, {torch.float32: 99968})

Inputs:
Outputs:
The FX graph has 18 nodes in total. Number of FX nodes per op:
Of the call_function nodes, the counts of operators used are:
ONNX Conversion Information

All operators in the model have registered ONNX decompositions.

Decomposition comparison

Ops exist only in the ExportedProgram before decomposition:

Ops exist only in the ExportedProgram after decomposition:
Collecting environment information...
OS: Ubuntu 24.04.1 LTS (x86_64)
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
CPU:
Versions of relevant libraries:
Still getting some errors with the test.

test code:

# Imports needed by the snippets below.
import numpy as np
import onnx
import onnxruntime as ort
import torch
import torch.nn as nn
import torchaudio

def test_data_cov_onnx(onnx_path):
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
providers = [
'CUDAExecutionProvider',
'DmlExecutionProvider',
'CPUExecutionProvider'
]
session = ort.InferenceSession(onnx_path, sess_options,
providers=providers)
src_wav = torch.randn((1, 1, 48000 * 12))
ort_inputs = {session.get_inputs()[0].name: src_wav.numpy(), }
ort_outs = session.run(None, ort_inputs)
ort_outs = ort_outs[0]
ort_outs = torch.from_numpy(ort_outs)
model = DataCov()
model.eval()
deal_1 = model(src_wav)
print(f'Torch Output Shape: {deal_1.shape}, ONNX Output Shape: {ort_outs.shape}')
print(f'Torch Output Min/Max: {torch.min(deal_1)}, {torch.max(deal_1)}')
print(f'ONNX Output Min/Max: {torch.min(ort_outs)}, {torch.max(ort_outs)}')
print(f'Torch Output Mean/Std: {torch.mean(deal_1)}, {torch.std(deal_1)}')
print(f'ONNX Output Mean/Std: {torch.mean(ort_outs)}, {torch.std(ort_outs)}')
np.testing.assert_allclose(deal_1.detach().numpy(), ort_outs.detach().numpy(), rtol=1e-02, atol=1e-04)

onnx export code:

def export_datacov_onnx(path):
model = DataCov()
model.eval()
src_wav = torch.randn((1, 1, 48000 * 12), requires_grad=True)
input_names = ["wav_data"]
output_names = ["ans"]
args = (src_wav,)
torch.onnx.export(
model,
args,
path,
export_params=True,
opset_version=19,
do_constant_folding=True,
verbose=False,
input_names=input_names,
output_names=output_names,
dynamo=True,
report=True
)
onnx_model = onnx.load(path)
onnx.checker.check_model(onnx_model)

model:

class DataCov(nn.Module):
def __init__(self):
super(DataCov, self).__init__()
self.transform = nn.Sequential(
torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
)
def forward(self, x1):
return self.transform(x1)
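A hedged debugging sketch for the numeric mismatch above (not a confirmed fix): compare against CPUExecutionProvider only, so that provider-specific DFT/MatMul kernels on CUDA or DirectML are ruled out, and report the largest absolute difference instead of asserting a tolerance. It reuses onnx_path and DataCov from the snippets above.

import numpy as np
import onnxruntime as ort
import torch

def compare_on_cpu(onnx_path):
    # Force the CPU execution provider to remove provider-specific kernel differences.
    session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
    src_wav = torch.randn((1, 1, 48000 * 12))
    (onnx_out,) = session.run(None, {session.get_inputs()[0].name: src_wav.numpy()})
    torch_out = DataCov().eval()(src_wav).detach().numpy()
    # Report the worst-case deviation between the two outputs.
    print("max abs diff:", np.abs(onnx_out - torch_out).max())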
🐛 Describe the bug
code:
linux error:
windows error:
Versions
linux versions:
windows version: