torch.onnx.export causes floating point exception with core dump for empty slice assignment #110056

Open
jiwoong-choi opened this issue Sep 26, 2023 · 6 comments
Labels
low priority We're unlikely to get around to doing this in the near future module: onnx Related to torch.onnx OSS contribution wanted PR from open source contributors welcome to solve this issue. triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@jiwoong-choi

🐛 Describe the bug

A simple slice assignment inside a module's forward causes a floating point exception when the module is exported to ONNX via torch.onnx.export.

  • The full error message on an Ubuntu server is Floating point exception (core dumped).
  • Note that I was able to reproduce this behavior only on the Ubuntu server. On a MacBook M1 Pro, the export produced only a few warning messages and did not fail. See the last section for more details.

The code for reproducing the error

Let's call it repro.py

import io
import torch


class SliceAssignZeros(torch.nn.Module):
    def forward(self, x: torch.Tensor):
        x[1:-1] = 0
        return x


def main():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('-s', '--size', type=int, default=2)
    args = parser.parse_args()

    x = torch.ones(args.size)
    model = SliceAssignZeros()
    print(f'input: {x}')
    print(f'output: {model(x)}')
    
    with io.BytesIO() as f:
        torch.onnx.export(model, (x, ), f)


if __name__ == '__main__':
    main()

Steps to reproduce

1. Set up environment

See the details about the environment in the Versions section.

conda create -n issue python=3.10 -y
conda activate issue
conda install pytorch -c pytorch -y
pip install onnx

2. Run the following command

python repro.py -s 2

NOTE: If you pass any positive integer other than 2 to the -s flag, the script runs normally.

The output

In my opinion, the expected behavior is either:

  1. the model is exported into ONNX normally; or
  2. torch warns (or throws an exception) with a message indicating that the index range used for the slice assignment is empty (see the sketch below).

However, the error message on Ubuntu wasn't helpful for finding the root cause of the core dump. (An empty slice assignment doesn't look very relevant to a floating point exception, in my opinion.)
It was nice that the code worked on macOS, but the warning messages were still not very helpful.
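
For illustration, a minimal sketch of such an emptiness check (slice_is_empty is a hypothetical helper, not part of torch):

def slice_is_empty(s: slice, length: int) -> bool:
    # slice.indices clamps start/stop/step to the given dimension length
    start, stop, step = s.indices(length)
    return len(range(start, stop, step)) == 0

# x[1:-1] selects nothing when len(x) == 2
print(slice_is_empty(slice(1, -1), 2))  # True -> warn or skip the assignment
print(slice_is_empty(slice(1, -1), 3))  # False -> assignment is meaningful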

Ubuntu 22.04

input: tensor([1., 1.])
output: tensor([1., 1.])
Floating point exception (core dumped)

macOS Ventura 13.4.1

input: tensor([1., 1.])
output: tensor([1., 1.])
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py:306: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/utils.py:689: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/Users/choijiwoong/miniconda3/envs/issue/lib/python3.10/site-packages/torch/onnx/utils.py:1186: UserWarning: ComputeShapeFromReshape(), shape_ratio overflows, skip shape inference. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:495.)
  _C._jit_pass_onnx_graph_shape_type_inference(
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Versions

Ubuntu server

(issue) $ python collect_env.py 
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: 
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000

Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   43 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Vendor ID:                       AuthenticAMD
Model name:                      AMD Ryzen Threadripper PRO 3955WX 16-Cores
CPU family:                      23
Model:                           49
Thread(s) per core:              2
Core(s) per socket:              16
Socket(s):                       1
Stepping:                        0
Frequency boost:                 enabled
CPU max MHz:                     4402.7339
CPU min MHz:                     2200.0000
BogoMIPS:                        7786.12
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization:                  AMD-V
L1d cache:                       512 KiB (16 instances)
L1i cache:                       512 KiB (16 instances)
L2 cache:                        8 MiB (16 instances)
L3 cache:                        64 MiB (4 instances)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[conda] blas                      1.0                         mkl  
[conda] mkl                       2023.1.0         h213fc3f_46343  
[conda] numpy                     1.26.0                   pypi_0    pypi
[conda] pytorch                   2.0.1              py3.10_cpu_0    pytorch
[conda] pytorch-mutex             1.0                         cpu    pytorch

(issue) $ conda env export
name: issue
channels:
  - pytorch
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - blas=1.0=mkl
  - bzip2=1.0.8=h7b6447c_0
  - ca-certificates=2023.08.22=h06a4308_0
  - filelock=3.9.0=py310h06a4308_0
  - gmp=6.2.1=h295c915_3
  - gmpy2=2.1.2=py310heeb90bb_0
  - intel-openmp=2023.1.0=hdb19cb5_46305
  - jinja2=3.1.2=py310h06a4308_0
  - ld_impl_linux-64=2.38=h1181459_1
  - libffi=3.4.4=h6a678d5_0
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libstdcxx-ng=11.2.0=h1234567_1
  - libuuid=1.41.5=h5eee18b_0
  - markupsafe=2.1.1=py310h7f8727e_0
  - mkl=2023.1.0=h213fc3f_46343
  - mpc=1.1.0=h10f8cd9_1
  - mpfr=4.0.2=hb69a4c5_1
  - mpmath=1.3.0=py310h06a4308_0
  - ncurses=6.4=h6a678d5_0
  - networkx=3.1=py310h06a4308_0
  - openssl=3.0.11=h7f8727e_2
  - pip=23.2.1=py310h06a4308_0
  - python=3.10.13=h955ad1f_0
  - pytorch=2.0.1=py3.10_cpu_0
  - pytorch-mutex=1.0=cpu
  - readline=8.2=h5eee18b_0
  - setuptools=68.0.0=py310h06a4308_0
  - sqlite=3.41.2=h5eee18b_0
  - sympy=1.11.1=py310h06a4308_0
  - tbb=2021.8.0=hdb19cb5_0
  - tk=8.6.12=h1ccaba5_0
  - typing_extensions=4.7.1=py310h06a4308_0
  - tzdata=2023c=h04d1e81_0
  - wheel=0.38.4=py310h06a4308_0
  - xz=5.4.2=h5eee18b_0
  - zlib=1.2.13=h5eee18b_0
  - pip:
      - numpy==1.26.0
      - onnx==1.14.1
      - protobuf==4.24.3
prefix: /home/jiwoongchoi/anaconda3/envs/issue

macOS laptop

(issue) $ python collect_env.py 
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.1
Libc version: N/A

Python version: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Pro

Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[conda] numpy                     1.26.0                   pypi_0    pypi
[conda] pytorch                   2.0.1                  py3.10_0    pytorch

(issue) $ conda env export
name: issue
channels:
  - pytorch
  - defaults
dependencies:
  - bzip2=1.0.8=h620ffc9_4
  - ca-certificates=2023.08.22=hca03da5_0
  - filelock=3.9.0=py310hca03da5_0
  - gmp=6.2.1=hc377ac9_3
  - gmpy2=2.1.2=py310h8c48613_0
  - jinja2=3.1.2=py310hca03da5_0
  - libcxx=14.0.6=h848a8c0_0
  - libffi=3.4.4=hca03da5_0
  - markupsafe=2.1.1=py310h1a28f6b_0
  - mpc=1.1.0=h8c48613_1
  - mpfr=4.0.2=h695f6f0_1
  - mpmath=1.3.0=py310hca03da5_0
  - ncurses=6.4=h313beb8_0
  - networkx=3.1=py310hca03da5_0
  - openssl=3.0.11=h1a28f6b_2
  - pip=23.2.1=py310hca03da5_0
  - python=3.10.13=hb885b13_0
  - pytorch=2.0.1=py3.10_0
  - readline=8.2=h1a28f6b_0
  - setuptools=68.0.0=py310hca03da5_0
  - sqlite=3.41.2=h80987f9_0
  - sympy=1.11.1=py310hca03da5_0
  - tk=8.6.12=hb8d0fd4_0
  - typing_extensions=4.7.1=py310hca03da5_0
  - tzdata=2023c=h04d1e81_0
  - wheel=0.38.4=py310hca03da5_0
  - xz=5.4.2=h80987f9_0
  - zlib=1.2.13=h5a0b063_0
  - pip:
      - numpy==1.26.0
      - onnx==1.14.1
      - protobuf==4.24.3
prefix: /Users/choijiwoong/miniconda3/envs/issue
@albanD albanD added module: onnx Related to torch.onnx triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Sep 26, 2023
@github-project-automation github-project-automation bot moved this to Inbox in ONNX Sep 26, 2023
@jiwoong-choi (Author)

I've tested the same code (command: python repro.py -s 2) with torch==2.1.2 and torch==2.2.1.
Both versions gave the error message:

RuntimeError: minus_one_pos != -1 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/jit/passes/onnx/shape_type_inference.cpp":537, please report a bug to PyTorch. There are no examples for shape_has_zero = true && minus_one_pos == -1.

@justinchuby (Collaborator) commented Mar 27, 2024

For new use cases, we recommend the torch.onnx.dynamo_export path. A PR would be accepted, but this is unlikely to be picked up soon otherwise. Thanks!

@justinchuby justinchuby added low priority We're unlikely to get around to doing this in the near future OSS contribution wanted PR from open source contributors welcome to solve this issue. labels Mar 27, 2024
@ustczhouyu

I've tested the same code (command: python repro.py -s 2) with torch==2.1.2 and torch==2.2.1. Both versions gave the error message:

RuntimeError: minus_one_pos != -1 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/jit/passes/onnx/shape_type_inference.cpp":537, please report a bug to PyTorch. There are no examples for shape_has_zero = true && minus_one_pos == -1.

Have you solved this issue? I get this error too.

@jiwoong-choi (Author)

@ustczhouyu
You need to find and fix the part of your model that causes the error.
For example, in my case, exactly this line in the model performs a vacuous slice assignment when either h or w is an empty slice object.
The workaround that I made is to add a guard for the problematic case:

            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    # slicing with an empty range can cause `torch.onnx.export` failure
                    if (h.stop or H) > (h.start or 0) and (w.stop or W) > (w.start or 0):
                        img_mask[:, h, w, :] = cnt
                    cnt += 1

The other way is to use torch.onnx.dynamo_export, as @justinchuby said. As this issue is marked as low priority, it looks like the PyTorch team is putting more effort into torch.onnx.dynamo_export than into torch.onnx.export.
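
For completeness, a minimal sketch of that path (assuming torch >= 2.1; the API may change between releases):

import torch


class SliceAssignZeros(torch.nn.Module):
    # same module as in the repro script above
    def forward(self, x: torch.Tensor):
        x[1:-1] = 0
        return x


# dynamo_export returns an ONNXProgram object that can be saved to disk
onnx_program = torch.onnx.dynamo_export(SliceAssignZeros(), torch.ones(3))
onnx_program.save("slice_assign_zeros.onnx")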

@YixuanSeanZhou

@jiwoong-choi, do you have any tips on how you pinpointed the line that causes the problem? When I checked my export stack trace, I was not able to easily identify the problematic op:

  File "external/pypi__torch_2_5_1_cu121_x86_64/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "external/pypi__torch_2_5_1_cu121_x86_64/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: minus_one_pos != -1 INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/onnx/shape_type_inference.cpp":537, please report a bug to PyTorch. There are no examples for shape_has_zero = true && minus_one_pos == -1.

@jiwoong-choi (Author)

@YixuanSeanZhou There's no easy way to find the line causing the problem. What I usually do is a manual, binary-search-like method (a minimal sketch follows the list):

  1. First, remove the second half of your model's forward implementation, returning some intermediate tensors instead.
  2. Rerun the export with the truncated model. If the problem persists, the problematic line must be in the first half of your model code; otherwise, it must be in the second half.
  3. Repeat this process until you are left with just a few lines of code.
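
For example, a throwaway wrapper along these lines (stage1 is a hypothetical name; substitute whatever submodules make up the first half of your forward):

import torch


class FirstHalf(torch.nn.Module):
    def __init__(self, inner: torch.nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        # stop at an intermediate tensor so torch.onnx.export still has
        # an output to trace; `stage1` stands in for the first half
        return self.inner.stage1(x)

If exporting FirstHalf(model) still fails, the problematic op is in the first half; otherwise it is in the removed second half. Repeat on the failing half until only a few lines remain.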
