Labels: module: linear algebra, triaged
Description
🐛 Bug
symeig produces wrong eigenvalues for some matrices in torch.float precision, both on CPU and on CUDA.
To Reproduce
I'm not sure what causes the instability on some matrices.
Steps to reproduce the behavior:
- Clone this repository
- Execute the notebook to reproduce the behaviour
You can see the output of the notebook here, where float precision gives negative eigenvalues.
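Since the actual matrix is only available in the linked notebook, here is a minimal self-contained sketch of the same kind of comparison. The Laplacian-style test matrix below is a hypothetical stand-in, not the matrix from the repository, but it has the same property of a smallest eigenvalue that is exactly zero in exact arithmetic:

```python
import torch

torch.manual_seed(0)

n = 2000
W = torch.rand(n, n, dtype=torch.float64)
W = (W + W.t()) / 2                    # symmetric non-negative weights
L = torch.diag(W.sum(dim=1)) - W       # graph Laplacian: PSD, smallest eigenvalue 0

evals64, _ = torch.symeig(L)           # double precision
evals32, _ = torch.symeig(L.float())   # single precision

print("smallest eigenvalue (float64):", evals64[0].item())  # ~0
print("smallest eigenvalue (float32):", evals32[0].item())  # may come out negative
```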
Expected behavior
Float precision and double precision should give similar results.
Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Manjaro Linux
GCC version: (GCC) 9.1.0
CMake version: version 3.15.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 430.26
cuDNN version: /usr/lib/libcudnn.so.7.6.1
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] torch==1.2.0
[pip3] torch-cluster==1.4.2
[pip3] torch-geometric==1.3.0
[pip3] torch-scatter==1.3.1
[pip3] torch-sparse==0.4.0
[pip3] torch-spline-conv==1.1.0
[pip3] torchvision==0.4.0
[conda] Could not collect
Additional context
The matrix is (theoretically) known to have non-negative eigenvalues, and the smallest one should be (close to) zero.
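A possible workaround, not verified against the notebook's matrix, is to run the decomposition in double precision and cast the results back to the original dtype; the helper below is a sketch:

```python
import torch

def symeig_via_double(A, eigenvectors=False):
    # Hypothetical helper: compute symeig in float64 for accuracy near zero
    # eigenvalues, then cast the results back to the input dtype.
    evals, evecs = torch.symeig(A.double(), eigenvectors=eigenvectors)
    return evals.to(A.dtype), evecs.to(A.dtype)
```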