🐛 Describe the bug
I'm encountering a segmentation fault when attempting to convert a sparse complex tensor to dense. The crash happens in PyTorch's complex number addition operator (operator+=).
The crash consistently occurs when converting a coalesced sparse tensor with complex128 values to a dense tensor. The values are extreme (very large exponentials), but note that the indices are also far outside the declared size of (5, 1, 6), which I suspect is the actual trigger.
Minimal reproduction code:
import torch

# Create the indices tensor (shape: sparse_dim x nnz, here 3 x 3)
indices = torch.tensor([
    [-8109290083833025126, -8961984329693039708, -8290743100148801578],
    [5855392221454183905, 2438243064451908769, -4615048816986169106],
    [-5549288894191271100, -7619493035809941689, 4719066866956746817]
])

# Create the values tensor with extreme values
values = torch.tensor([
    complex(3.8517e-121, -2.4940e-90),
    complex(-1.1143e+21, -1.4531e+287),
    complex(8.1792e+179, 9.2145e-213)
], dtype=torch.complex128)

# Create the sparse tensor
size = (5, 1, 6)
sparse_tensor = torch.sparse_coo_tensor(indices, values, size, dtype=torch.complex128)

# This works fine
coalesced_tensor = sparse_tensor.coalesce()

# This crashes with a segmentation fault
dense_tensor = coalesced_tensor.to_dense()
Output:
Segmentation fault (core dumped)
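For what it's worth, the indices here are far outside the declared size of (5, 1, 6). The sketch below (hedged: check_invariants is the documented torch.sparse_coo_tensor flag, and I'm assuming it validates index bounds) shows the indices are all out of range and that the constructor can reject them up front instead of crashing later:

import torch

size = (5, 1, 6)
indices = torch.tensor([
    [-8109290083833025126, -8961984329693039708, -8290743100148801578],
    [5855392221454183905, 2438243064451908769, -4615048816986169106],
    [-5549288894191271100, -7619493035809941689, 4719066866956746817]
])
values = torch.tensor([
    complex(3.8517e-121, -2.4940e-90),
    complex(-1.1143e+21, -1.4531e+287),
    complex(8.1792e+179, 9.2145e-213)
], dtype=torch.complex128)

# Every index falls outside [0, size[d]) for its dimension.
bounds = torch.tensor(size).unsqueeze(1)            # shape (3, 1)
print(((indices < 0) | (indices >= bounds)).any())  # tensor(True)

# With invariant checking enabled, construction should raise a RuntimeError
# about the out-of-bounds indices instead of deferring the crash to to_dense().
torch.sparse_coo_tensor(indices, values, size,
                        dtype=torch.complex128, check_invariants=True)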
Partial stack trace (from UndefinedBehaviorSanitizer via my tooling):
#0 0x7fd55acc201d in c10::complex<double>& c10::complex<double>::operator+=<double>(c10::complex<double> const&) /workspace/pytorch/c10/util/complex.h:216:11
#1 0x7fd55b7e4dd8 in void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long)::operator()(long, long) const /workspace/pytorch/aten/src/ATen/native/sparse/SparseTensorMath.cpp:613:20
#2 0x7fd55b7e552d in void at::parallel_for<void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long)>(long, long, long, void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long) const&)::'lambda'(long, long)::operator()(long, long) const /workspace/pytorch/aten/src/ATen/Parallel-inl.h:36:9
#3 0x7fd55b7e523e in void at::internal::invoke_parallel<void at::parallel_for<void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long)>(long, long, long, void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long) const&)::'lambda'(long, long)>(long, long, long, void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long) const&) (.omp_outlined_debug__) /workspace/pytorch/aten/src/ATen/ParallelOpenMP.h:41:9
#4 0x7fd55b7e565e in void at::internal::invoke_parallel<void at::parallel_for<void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long)>(long, long, long, void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long) const&)::'lambda'(long, long)>(long, long, long, void at::native::add_dense_sparse_worker_non_hybrid_cpu<c10::complex<double>>(at::Tensor&, c10::Scalar const&, at::Tensor const&, at::Tensor const&, at::Tensor const&)::'lambda'(long, long) const&) (.omp_outlined) /workspace/pytorch/aten/src/ATen/ParallelOpenMP.h:25:1
#5 0x7fd530fdf122 in __kmp_invoke_microtask (/lib/x86_64-linux-gnu/libomp.so.5+0xe1122) (BuildId: b9876f64e9413d635a015d2c7b475ecfbcdaea10)
#6 0x7fd530f471a2 (/lib/x86_64-linux-gnu/libomp.so.5+0x491a2) (BuildId: b9876f64e9413d635a015d2c7b475ecfbcdaea10)
#7 0x7fd530f456a5 (/lib/x86_64-linux-gnu/libomp.so.5+0x476a5) (BuildId: b9876f64e9413d635a015d2c7b475ecfbcdaea10)
#8 0x7fd530fb78b7 (/lib/x86_64-linux-gnu/libomp.so.5+0xb98b7) (BuildId: b9876f64e9413d635a015d2c7b475ecfbcdaea10)
#9 0x7fd5310c9aa3 (/lib/x86_64-linux-gnu/libc.so.6+0x9caa3) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
#10 0x7fd531156a33 in clone (/lib/x86_64-linux-gnu/libc.so.6+0x129a33) (BuildId: 42c84c92e6f98126b3e2230ebfdead22c235b667)
UndefinedBehaviorSanitizer can not provide additional info.
SUMMARY: UndefinedBehaviorSanitizer: SEGV /workspace/pytorch/c10/util/complex.h:216:11 in c10::complex<double>& c10::complex<double>::operator+=<double>(c10::complex<double> const&)
Looking at the stack trace, the crash happens in the complex number addition operator (operator+=) in complex.h:216. The segmentation fault occurs during the sparse-to-dense conversion, specifically in the add_dense_sparse_worker_non_hybrid_cpu function, which is called by to_dense().
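My guess at the mechanism (a hedged reconstruction, not the actual C++ source): the worker flattens each sparse index into an offset into the contiguous dense output buffer and then does the equivalent of dense_flat[offset] += value. With indices this far out of range, the offset lands nowhere near the 5 * 1 * 6 = 30-element buffer, so the operator+= write is out of bounds. The Python below mirrors that row-major offset arithmetic:

# Hypothetical reconstruction of the row-major flattening the worker performs
# per nonzero; `index` is the first column of the `indices` tensor above.
size = (5, 1, 6)
index = (-8109290083833025126, 5855392221454183905, -5549288894191271100)

offset = 0
for i, d in zip(index, size):
    offset = offset * d + i  # row-major: offset = (i0 * 1 + i1) * 6 + i2

print(offset)  # astronomically far outside the valid range [0, 30)

# In C++ this arithmetic would also overflow int64_t, so the pointer that
# operator+= dereferences is effectively arbitrary -- consistent with the SEGV.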
Colab: https://colab.research.google.com/drive/1isZlXmkYFFp97IOaTWpZ9XJIvuwzopzr?usp=sharing
Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025043+3b5b5c1ec4a3-1~exp1~20240731145144.92)
CMake version: version 4.0.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9684X 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 2
BogoMIPS: 5099.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 2.3 GiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.15.0
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip