Max-pooling uses implicit negative infinity padding, not zero-padding as indicated in documentation · Issue #33384 · pytorch/pytorch · GitHub
@josh-gleason

Description

🐛 Bug

According to the documentation on nn.MaxPool1d:

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

However, the input is never padded with zeros, either explicitly or implicitly. Doing a bit of source diving, I found that the maximum operation initializes its running value with negative infinity, not zero, and only considers entries from the original, unpadded input tensor. It would therefore be correct to say that the max-pooling operation uses implicit negative-infinity padding, not zero-padding.
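What the kernel effectively does can be illustrated with a small pure-Python reference (a sketch of the observed behaviour, not PyTorch's actual implementation; the function name and signature are made up for illustration):

```python
def max_pool1d_ref(x, kernel_size, padding=0, stride=None):
    """Mirror the observed behaviour: the running maximum starts at
    -inf, and out-of-range (padded) positions are skipped, which is
    equivalent to padding with -inf rather than with zeros."""
    stride = stride or kernel_size  # PyTorch's default stride is kernel_size
    n = len(x)
    out = []
    for start in range(-padding, n + padding - kernel_size + 1, stride):
        best = float("-inf")  # initialized with -inf, not 0
        for k in range(kernel_size):
            i = start + k
            if 0 <= i < n:  # only entries of the original input are considered
                best = max(best, x[i])
        out.append(best)
    return out

max_pool1d_ref([-1.0, -2.0], kernel_size=2, padding=1)  # → [-1.0, -2.0]
```

This reproduces the result shown in the repro steps: the padded positions never contribute, so negative inputs survive.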

This appears to be either a bug in the API or documentation (of course PEBCAK is always a possibility).

To Reproduce

Steps to reproduce the behavior:

  1. Install PyTorch
  2. Run the following code
>>> import torch
>>> from torch.nn.functional import max_pool1d
>>> x = torch.tensor([[[-1., -2.]]])
>>> max_pool1d(x, kernel_size=2, padding=1)
tensor([[[-1., -2.]]])
>>> max_pool1d(x.cuda(), kernel_size=2, padding=1)
tensor([[[-1., -2.]]], device='cuda:0')

Expected behavior

If zero-padding were being used, we would expect the output to be tensor([[[0., 0.]]]), since zero is larger than every element of the input tensor.
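The difference between the two padding conventions can be made explicit by padding by hand before pooling (a pure-Python sketch; pool_with_pad is a made-up helper, not a PyTorch API):

```python
def pool_with_pad(x, kernel_size, pad_value, padding=1):
    # Explicitly pad both sides with pad_value, then take the max over
    # non-overlapping windows (stride == kernel_size, PyTorch's default).
    padded = [pad_value] * padding + x + [pad_value] * padding
    return [max(padded[i:i + kernel_size])
            for i in range(0, len(padded) - kernel_size + 1, kernel_size)]

x = [-1.0, -2.0]
pool_with_pad(x, 2, 0.0)            # zero-padding → [0.0, 0.0] (the documented behaviour)
pool_with_pad(x, 2, float("-inf"))  # -inf padding → [-1.0, -2.0] (what max_pool1d actually returns)
```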

Environment

PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Red Hat Enterprise Linux Workstation release 7.7 (Maipo)
GCC version: (GCC) 6.3.0
CMake version: version 3.11.4

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti

Nvidia driver version: 430.40
cuDNN version: 7.6.3

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] Could not collect

Additional context

This issue is based on a question originally asked on StackOverflow by user trsvchn.

cc @jlin27 @mruberry @heitorschueroff

Metadata

Labels

module: docs (Related to our documentation, both in docs/ and docblocks)
module: padding
module: pooling
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
