Symeig unstable in float precision #24466
@lucmos

Description

🐛 Bug

symeig produces wrong eigenvalues on some matrices in torch.float precision, both on CUDA and on CPU.

To Reproduce

I'm not sure what causes the instability on some matrices.

Steps to reproduce the behavior:

  1. Clone this repository
  2. Execute the notebook to reproduce the behavior

You can see the output of the notebook here, where float precision gives negative eigenvalues.

[Screenshot: notebook output showing negative eigenvalues in float precision]
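
In case the linked repository is unavailable, a minimal self-contained sketch along these lines shows the setup. The graph Laplacian here is a hypothetical stand-in for the matrix in the notebook (chosen because the environment lists torch-geometric), and torch.symeig is the API from PyTorch 1.2; current releases use torch.linalg.eigvalsh instead:

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the matrix in the notebook: the Laplacian
# L = D - A of a random undirected graph. L is symmetric positive
# semi-definite, so every eigenvalue should be >= 0 and the smallest
# one should be (close to) zero.
n = 1000
A = torch.triu((torch.rand(n, n) < 0.05).double(), diagonal=1)
A = A + A.t()                      # symmetric 0/1 adjacency matrix
L = torch.diag(A.sum(dim=1)) - A   # graph Laplacian

for dtype in (torch.float32, torch.float64):
    evals, _ = torch.symeig(L.to(dtype))
    print(dtype, "min eigenvalue:", evals.min().item())

# Per the report, float precision yields negative eigenvalues for the
# notebook's matrix; whether this particular random Laplacian triggers
# the bug is not guaranteed.
```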

Expected behavior

Float precision and double precision should give similar results.
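
Concretely, a check along these lines would be expected to pass (the test matrix and tolerances are illustrative, not taken from the notebook):

```python
import torch

torch.manual_seed(0)

# Any symmetric PSD test matrix will do; here M = B @ B.T is
# rank-deficient, so its smallest eigenvalues are exactly zero
# in exact arithmetic.
B = torch.randn(300, 200, dtype=torch.float64)
M = B @ B.t()

evals32, _ = torch.symeig(M.float())
evals64, _ = torch.symeig(M.double())

# Expected: single precision agrees with double precision up to
# float32 round-off (tolerances below are illustrative).
assert torch.allclose(evals32.double(), evals64, rtol=1e-3, atol=1e-2)
```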

Environment

PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: Manjaro Linux
GCC version: (GCC) 9.1.0
CMake version: version 3.15.1

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 430.26
cuDNN version: /usr/lib/libcudnn.so.7.6.1

Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] torch==1.2.0
[pip3] torch-cluster==1.4.2
[pip3] torch-geometric==1.3.0
[pip3] torch-scatter==1.3.1
[pip3] torch-sparse==0.4.0
[pip3] torch-spline-conv==1.1.0
[pip3] torchvision==0.4.0
[conda] Could not collect

Additional context

The matrix is (theoretically) known to have positive eigenvalues, and the first one should be (close to) zero.
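
Until this is resolved, one possible workaround (an assumption on my part, not an official fix) is to run the decomposition in double precision and clamp round-off negatives, which is valid precisely because the matrix is known to be PSD:

```python
import torch

def psd_eigvals(mat):
    # Hypothetical helper, not part of PyTorch: compute the
    # decomposition in float64, where the near-zero eigenvalue is
    # resolved accurately, clamp tiny negative round-off values to
    # zero (safe because the matrix is known to be PSD), and cast
    # the result back to the input dtype.
    evals, _ = torch.symeig(mat.double())
    return evals.clamp(min=0).to(mat.dtype)
```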

cc @vishwakftw @ssnl @jianyuh

Labels

module: linear algebra (Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply, matmul)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
