torch.sgn: Vectorized path returns different output from Non-Vectorized path for `INF` input · Issue #53958 · pytorch/pytorch · GitHub


Closed
kshitij12345 opened this issue Mar 13, 2021 · 0 comments
Assignees
Labels
module: complex Related to complex number support in PyTorch module: NaNs and Infs Problems related to NaN and Inf handling in floating point triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@kshitij12345
Collaborator
kshitij12345 commented Mar 13, 2021

This behavior causes test failures on CPU, where NaNs are compared for equality strictly.

import torch

# Four elements are too few for the vectorized kernel, so the scalar
# (non-vectorized) path handles every element.
t = torch.tensor([complex(0, float('inf'))] * 4)
out_non_vec = torch.sgn(t)
ref = torch.tensor([complex(float('nan'), float('nan'))] * 4)

print(out_non_vec)
are_equal, msg = torch.testing._compare_tensors_internal(out_non_vec, ref, equal_nan=True, rtol=0, atol=0)
print("EQUAL:", are_equal)

# Nine elements are enough to trigger the vectorized kernel; note in the
# output below that only the trailing remainder element (handled by the
# scalar path) matches the reference.
t = torch.tensor([complex(0, float('inf'))] * 9)
out_vec = torch.sgn(t)
ref = torch.tensor([complex(float('nan'), float('nan'))] * 9)

print(out_vec)
are_equal, msg = torch.testing._compare_tensors_internal(out_vec, ref, equal_nan=True, rtol=0, atol=0)
print("EQUAL:", are_equal)

Output:

tensor([nan+nanj, nan+nanj, nan+nanj, nan+nanj])
EQUAL: True
tensor([0.+nanj, 0.+nanj, 0.+nanj, 0.+nanj, 0.+nanj, 0.+nanj, 0.+nanj, 0.+nanj, nan+nanj])
EQUAL: False
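A plausible explanation for the discrepancy (an illustration of the arithmetic, not PyTorch's actual kernels): `sgn(z)` is `z / |z|`. Dividing each component by the magnitude separately, as a SIMD kernel might, yields `0+nanj` for input `0+infj`, while a full complex division (the Smith-style algorithm used by CPython and C++ `std::complex`) propagates NaN into both components:

```python
import math

z = complex(0.0, float('inf'))
mag = abs(z)  # hypot(0, inf) == inf

# Componentwise division by the magnitude, as a vectorized kernel might do:
# 0/inf gives 0.0, inf/inf gives nan -> (0+nanj)
naive = complex(z.real / mag, z.imag / mag)

# Full complex division: the scaled algorithm computes inf*0 internally,
# so NaN ends up in both components -> (nan+nanj)
full = z / mag

print(naive, full)
```

This matches the observed outputs: the scalar path produces `nan+nanj`, while the vectorized path produces `0+nanj`.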

cc @ezyang @anjali411 @dylanbespalko @mruberry

@kshitij12345 kshitij12345 added module: complex Related to complex number support in PyTorch module: NaNs and Infs Problems related to NaN and Inf handling in floating point labels Mar 13, 2021
@mruberry mruberry added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Mar 14, 2021
@lezcano lezcano self-assigned this Apr 20, 2023
lezcano added a commit that referenced this issue Apr 20, 2023
ev-br found in Quansight-Labs/numpy_pytorch_interop#117 (comment) that the precision of `abs()` for large values in the vectorised case is less than good. This PR fixes that issue. While doing so, we are able to comment out a few tests on extremal values.

Fixes #53958 #48486

cc jgong5 mingfeima XiaobingSuper sanchitintel ashokei jingxu10

[ghstack-poisoned]
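For context on the `abs()` precision point in the commit message: one way a complex magnitude goes wrong for large inputs is intermediate overflow. A naive `sqrt(re*re + im*im)` overflows once the components exceed roughly `sqrt(DBL_MAX)` (about `1.3e154` for float64), while a scaled, hypot-style computation stays finite. A minimal sketch (an illustration only, not PyTorch's actual kernel):

```python
import math

re, im = 1e200, 1e200

# Naive magnitude: re*re overflows to inf, so the whole result is inf.
naive = math.sqrt(re * re + im * im)

# Scaled computation avoids the intermediate overflow.
scaled = math.hypot(re, im)  # roughly sqrt(2) * 1e200

print(naive, scaled)
```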