🐛 Describe the bug
Applying torch.nn.LayerNorm to a jagged-layout NestedTensor works on the original tensor, but fails after transposing dims 1 and 2: the call raises a ValueError stating that the input is expected to be a contiguous jagged layout NestedTensor.

import torch

ln = torch.nn.LayerNorm(16)
t = torch.nested.nested_tensor([torch.rand((2, 8, 16)) for _ in range(4)], layout=torch.jagged)
print(t.shape)  # torch.Size([4, j1, 8, 16])
s1 = ln(t)                   # works
s2 = ln(t.transpose(2, 1))   # raises ValueError (traceback below)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 217, in forward
    return F.layer_norm(
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nn/functional.py", line 2891, in layer_norm
    return handle_torch_function(
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/overrides.py", line 1737, in handle_torch_function
    result = torch_func_method(public_api, types, args, kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 302, in __torch_function__
    return func(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nn/functional.py", line 2900, in layer_norm
    return torch.layer_norm(
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nested/_internal/nested_tensor.py", line 286, in __torch_dispatch__
    return fn(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nested/_internal/ops.py", line 182, in inner
    check_schema(schema_str, func, *args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_perm_a/nacl/research/obs/lessig-dev-kas-cell-forecast/ai-obs-experimental-transformer/pyenv/lib/python3.10/site-packages/torch/nested/_internal/ops.py", line 119, in check_schema
    raise ValueError(
ValueError: NestedTensor native_layer_norm_default(input: jt, normalized_shape: any, weight: any?, bias: any?, eps: any): expected input to be a contiguous jagged layout NestedTensor
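Since LayerNorm(16) normalizes only over the trailing dimension and transpose(2, 1) leaves that dimension in place, a possible workaround (a sketch, not verified against this nightly) is to apply the layer norm before transposing and transpose the result afterwards:

import torch

ln = torch.nn.LayerNorm(16)
t = torch.nested.nested_tensor([torch.rand((2, 8, 16)) for _ in range(4)], layout=torch.jagged)

# ln only touches the last dim (size 16), which transpose(2, 1) does not move,
# so normalizing the contiguous tensor first and transposing the result
# should be numerically equivalent to ln(t.transpose(2, 1)).
s2_workaround = ln(t).transpose(2, 1)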
Versions
PyTorch version: 2.5.0.dev20240710+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.10 (main, Feb 9 2023, 14:42:48) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] (64-bit runtime)
Python platform: Linux-4.18.0-477.43.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7H12 64-Core Processor
Stepping: 0
CPU MHz: 3299.987
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5200.23
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] flake8==7.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240710+cu124
[pip3] triton==2.3.1
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ