np.mean breaks when called on a list of cuda tensors #32100

Closed
epignatelli opened this issue Jan 13, 2020 · 3 comments
Labels
module: numpy (Related to numpy support, and also numpy compatibility of our operators)
small (We think this is a small issue to fix. Consider knocking off high priority small issues)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

epignatelli commented Jan 13, 2020

🐛 Bug

When calling np.mean on a list of torch.Tensors whose device is not torch.device(type='cpu'), an error is thrown.

The rationale for considering this a bug is that, if PyTorch supports np.mean at all, it should support it fully.

Copying the tensors back to the CPU removes the error.

To Reproduce

Steps to reproduce the behavior:

np.mean([torch.tensor(1., device=torch.device("cuda")), torch.tensor(2.4, device=torch.device("cuda"))])

Out:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<__array_function__ internals>", line 6, in mean
  File "~/.conda/envs/pytorch/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3257, in mean
    out=out, **kwargs)
  File "~/.conda/envs/pytorch/lib/python3.7/site-packages/numpy/core/_methods.py", line 161, in _mean
    ret = ret.dtype.type(ret / rcount)
AttributeError: 'torch.dtype' object has no attribute 'type'

Expected behavior

The operation should not raise an error. Copying the tensors back to the CPU removes it:

import numpy as np
import torch

tensor_list = [torch.tensor(1., device=torch.device("cuda")), torch.tensor(2.4, device=torch.device("cuda"))]
np.mean([tensor.cpu() for tensor in tensor_list])
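
As an alternative, the mean can be computed entirely in PyTorch so the tensors never leave the GPU. This is only a sketch (not part of the original report), assuming all tensors live on the same CUDA device:

import torch

tensor_list = [torch.tensor(1., device="cuda"), torch.tensor(2.4, device="cuda")]

# Stack into one tensor and reduce on the GPU, keeping NumPy out of the picture.
mean_gpu = torch.stack(tensor_list).mean()

# Convert the final scalar to a Python float only if it is needed on the host.
mean_value = mean_gpu.item()  # ~1.7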

Environment

PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 9.2.148

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: 
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti

Nvidia driver version: 430.50
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4

Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.1
[pip] torchsummary==1.5.1
[pip] torchvision==0.4.2
[conda] blas                      1.0                         mkl  
[conda] mkl                       2019.4                      243  
[conda] mkl-service               2.3.0            py37he904b0f_0  
[conda] mkl_fft                   1.0.15           py37ha843d7b_0  
[conda] mkl_random                1.1.0            py37hd6b4f25_0  
[conda] pytorch                   1.3.1           py3.7_cuda9.2.148_cudnn7.6.3_0    pytorch
[conda] torchsummary              1.5.1                    pypi_0    pypi
[conda] torchvision               0.4.2                 py37_cu92    pytorch

Additional context

zhangguanheng66 added the module: numpy and triaged labels on Jan 14, 2020
zhangguanheng66 (Contributor) commented:

Reproduced the error. It works with CPU tensors, though.

zhangguanheng66 added the triage review label and removed the triaged label on Jan 14, 2020
ezyang added the triaged label and removed the triage review label on Jan 27, 2020
ezyang (Contributor) commented Jan 27, 2020:

NumPy doesn't support CUDA operations, so we definitely don't expect this to work. The error message is kind of weird and may be worth investigating further, though.

ezyang added the small label on Jan 27, 2020
mruberry (Collaborator) commented:
@ezyang is right that you shouldn't expect this to work, and the error is generated by NumPy, not PyTorch. Think of it as if you're sticking a strange Python object into a NumPy function: the NumPy function goes to look for an attribute but discovers it's not there. Fixing this likely requires changes in NumPy.
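
To make that concrete, here is a small illustration (added for clarity, not part of the original comment) of the attribute NumPy's _mean helper expects: a NumPy dtype exposes .type, while a torch.dtype does not, which is exactly the AttributeError in the traceback above.

import numpy as np
import torch

# NumPy dtypes carry a .type attribute (the scalar type) that _mean uses on its result.
print(np.array(1.0).dtype.type)                   # <class 'numpy.float64'>

# Reducing over a list of tensors yields a torch.Tensor instead of an ndarray,
# and torch.dtype has no .type attribute, so ret.dtype.type raises AttributeError.
print(hasattr(torch.tensor(1.0).dtype, "type"))   # False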
