Description
🐛 Describe the bug
Nested tensors fail to go through `Conv2d`:

```python
import torch

convolution = torch.nn.Conv2d(1280, 1280, 3)
set1 = torch.randn(24, 1280, 12, 12)
set2 = torch.randn(8, 1280, 24, 24)
nested = torch.nested.nested_tensor([set1, set2])
out = convolution(nested)
```
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/hostroot/vgen/temp.ipynb Cell 33 line 7
      5 tik = time.time()
      6 for i in range(100):
----> 7     out = convolution(nested)
      8 tok = time.time()
      9 print(tok - tik)

File /opt/sd/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/sd/lib/python3.11/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
    462 def forward(self, input: Tensor) -> Tensor:
--> 463     return self._conv_forward(input, self.weight, self.bias)

File /opt/sd/lib/python3.11/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
    455 if self.padding_mode != 'zeros':
    456     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                     weight, bias, self.stride,
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: Internal error: NestedTensorImpl doesn't support sizes. Please file an issue on https://github.com/pytorch/nestedtensor
```
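Until `Conv2d` supports nested tensors, one possible workaround (not from the issue, just a sketch) is to `unbind()` the nested tensor into its regular constituent tensors and run the convolution per component. Channel counts here are reduced from the repro above for brevity:

```python
import torch

# Same setup as the repro, but with 16 channels instead of 1280.
conv = torch.nn.Conv2d(16, 16, 3)
set1 = torch.randn(24, 16, 12, 12)
set2 = torch.randn(8, 16, 24, 24)
nested = torch.nested.nested_tensor([set1, set2])

# conv(nested) raises the RuntimeError above; each unbound component,
# however, is an ordinary 4D tensor that Conv2d accepts.
outputs = [conv(t) for t in nested.unbind()]
print([tuple(o.shape) for o in outputs])  # [(24, 16, 10, 10), (8, 16, 22, 22)]
```

This loses the single-kernel dispatch a nested-tensor path would give, so it is a functional workaround rather than a performance-equivalent one.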
Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.35
Python version: 3.11.6 (main, Oct 23 2023, 22:48:54) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.14.322-246.539.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 5999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect