Description
The following piece of code fails for cuda on an NVIDIA A100 when using 2.7.0+cu128. Note that it passes on an AMD MI300 system, and also passes for cpu.
The convolution is designed such that all elements in the output tensor should be identical (992.0). However, with 2.7.0+cu128 on the A100, the last element of the output tensor is 993.0:
AssertionError: Mismatch at index [0,0,64]: 993.0 != 992.0
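For reference, a quick check of why every output element should be 992.0 (this derivation is an addition, not part of the original report): with input[0, c, l] = (c+1)*(l+1) and a [-1, 0, 1] kernel, each output position reduces to x[c, i+2] - x[c, i] = 2*(c+1) per channel, summed over the 31 input channels.

# Sanity check of the expected value (illustrative, not from the report):
# the [-1, 0, 1] kernel gives (c+1)*((i+3) - (i+1)) = 2*(c+1) per channel,
# so every output element is 2 * sum_{c=0}^{30} (c+1), independent of position.
C_in = 31
expected = 2 * sum(c + 1 for c in range(C_in))  # 2 * 496
print(expected)  # 992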
This is a regression from 2.7.0+cu126.
Note that this test can be flaky; it does not always fail.
To recreate:
uv venv --python 3.12.9
uv pip install torch==2.7.0 --index-url https://download.pytorch.org/whl/cu128
source .venv/bin/activate
python script.py
import torch
import torch.nn as nn


def conv1d_test(device):
    """
    The test creates an input tensor where each element is (channel+1)*(position+1),
    applies a Conv1d with a [-1, 0, 1] kernel pattern, and verifies all output values
    equal the expected result (992.0).
    """
    torch.manual_seed(42)
    L_in, K, C_in, C_out = (67, 3, 31, 1)
    dtype, device = torch.float32, torch.device(device)
    # 1. Create input tensor: (1, C_in, L_in) where input[0,c,l] = (c+1)*(l+1)
    c_vals = torch.arange(1, C_in + 1, dtype=dtype, device=device).unsqueeze(1)
    l_vals = torch.arange(1, L_in + 1, dtype=dtype, device=device).unsqueeze(0)
    input_sequence = (c_vals * l_vals).unsqueeze(0)
    conv = nn.Conv1d(C_in, C_out, K, bias=False)
    kernel = torch.tensor([-1.0, 0.0, 1.0], dtype=dtype, device=device)
    conv.weight.data = nn.Parameter(kernel.view(1, 1, K).expand(C_out, C_in, K))
    result = conv(input_sequence)
    expected_val = 992.0
    for i in range(result.shape[2]):
        assert abs(result[0, 0, i] - expected_val) <= 1e-6, (
            f"Mismatch at index [0,0,{i}]: {result[0, 0, i]} != {expected_val}"
        )
    print(f"Conv1d test passed on {device}")


if __name__ == "__main__":
    conv1d_test("cpu")
    conv1d_test("cuda")
Versions
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.1.8 (++20240731024944+3b5b5c1ec4a3-1~exp1~20240731145000.144)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 17 2025, 21:01:58) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.15.0-1074-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7J13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3673.0950
CPU min MHz: 0.0000
BogoMIPS: 4899.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy @jerryzh168