## 🐛 Bug
According to the documentation on `nn.MaxPool1d`:

> If `padding` is non-zero, then the input is implicitly zero-padded on both sides for `padding` number of points. `dilation` controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what `dilation` does.
However, the input is never padded with zeros, either explicitly or implicitly. Doing a bit of source diving I found that the maximum operation initializes with negative infinity, not zero, and only considers entries from the original, unpadded input tensor. Therefore it would be correct to say that the max-pooling operation uses implicit negative infinity padding but not zero-padding.
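As a quick sanity check (a minimal sketch of my own; the explicit `F.pad` call is purely illustrative and is not how PyTorch implements pooling internally), pooling with `padding=1` gives the same result as pooling an input that has been explicitly padded with negative infinity:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[-1., -2.]]])

# Implicit padding as performed by max_pool1d (stride defaults to kernel_size).
implicit = F.max_pool1d(x, kernel_size=2, padding=1)

# Explicitly pad both sides with -inf, then pool with no implicit padding.
padded = F.pad(x, (1, 1), mode='constant', value=float('-inf'))
explicit = F.max_pool1d(padded, kernel_size=2, padding=0)

print(implicit)  # tensor([[[-1., -2.]]])
print(explicit)  # tensor([[[-1., -2.]]]) -- identical, consistent with -inf padding
```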
This appears to be a bug in either the API or the documentation (of course, PEBCAK is always a possibility).
## To Reproduce

Steps to reproduce the behavior:

- Install PyTorch
- Run the following code:
```python
>>> import torch
>>> from torch.nn.functional import max_pool1d
>>> x = torch.tensor([[[-1., -2.]]])
>>> max_pool1d(x, kernel_size=2, padding=1)
tensor([[[-1., -2.]]])
>>> max_pool1d(x.cuda(), kernel_size=2, padding=1)
tensor([[[-1., -2.]]], device='cuda:0')
```
## Expected behavior

If zero-padding were being used, we would expect the output to be `tensor([[[0., 0.]]])`, since zero is larger than every element of the input tensor.
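For contrast, explicitly zero-padding the input before pooling (again just an illustrative sketch) produces exactly this output:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[-1., -2.]]])

# Explicit zero padding yields [0., -1., -2., 0.].
zero_padded = F.pad(x, (1, 1), mode='constant', value=0.)
print(F.max_pool1d(zero_padded, kernel_size=2, padding=0))
# tensor([[[0., 0.]]]) -- what the documented zero-padding would produce
```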
## Environment

```
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Red Hat Enterprise Linux Workstation release 7.7 (Maipo)
GCC version: (GCC) 6.3.0
CMake version: version 3.11.4
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 430.40
cuDNN version: 7.6.3

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] Could not collect
```
## Additional context

This issue is based on a question originally asked on Stack Overflow by user trsvchn.