Closed
Labels
module: bc-breaking · module: complex · module: numpy · triaged
Description
🐛 Bug
In NumPy, `std` and `var` of complex arrays return a real result, but `torch.std`/`torch.var` return a complex tensor with zero imaginary component.
To Reproduce
`dim=0` must be passed because of #51127:
>>> import torch
>>> torch.rand(100, dtype=torch.complex128).var(dim=0).dtype
torch.complex128
>>> torch.rand(100, dtype=torch.complex128).std(dim=0).dtype
torch.complex128
Compare to NumPy:
>>> import numpy as np
>>> np.random.rand(100).astype(complex).var().dtype
dtype('float64')
>>> np.random.rand(100).astype(complex).std().dtype
dtype('float64')
Environment
I've confirmed it's worked like this since at least PyTorch 1.6.
PyTorch version: 1.8.1
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Pop!_OS 20.10 (x86_64)
GCC version: (Ubuntu 10.2.0-13ubuntu1) 10.2.0
Clang version: 11.0.0-2
CMake version: version 3.16.3
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: GeForce RTX 2060 SUPER
Nvidia driver version: 460.67
cuDNN version: Probably one of the following:
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.4
/usr/lib/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.2
[pip3] torch==1.8.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.1 h6406543_8 conda-forge
[conda] mkl 2020.2 256
[conda] numpy 1.20.2 py38h9894fe3_0 conda-forge
[conda] pytorch 1.8.1 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
cc @ezyang @gchanan @anjali411 @dylanbespalko @mruberry @rgommers @heitorschueroff
Assignees: mruberry