torch.ldexp incorrectly returns infinity if `exp` is larger than log2 of the max representable number #133265
Labels
module: half
Related to float16 half-precision floats
module: type promotion
Related to semantics of type promotion
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Gives
Both the input `2**-10` and the output `2**10` are in the fp16 range. The same happens with other dtypes when `exp` is such that `2**exp` is out of range. Note that
Gives the correct result of `tensor([1024.], device='cuda:0', dtype=torch.float16)`.
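The suspected failure mode can be sketched in pure Python, without PyTorch: if the implementation materializes the scale `2**exp` in the input dtype before multiplying, the scale overflows to infinity even though the final product is representable. The helpers below (`to_fp16`, `naive_ldexp`, `correct_ldexp`) are hypothetical names for illustration, not PyTorch internals; float16 rounding is emulated with `struct`'s half-precision format.

```python
import math
import struct

def to_fp16(x: float) -> float:
    """Round a Python float through IEEE 754 half precision (overflow -> inf)."""
    try:
        return struct.unpack('<e', struct.pack('<e', x))[0]
    except OverflowError:
        return math.copysign(math.inf, x)

def naive_ldexp(x: float, exp: int) -> float:
    # Buggy scheme: compute 2**exp in float16 first. For exp = 20,
    # 2**20 = 1048576 exceeds the fp16 max (~65504) and becomes inf,
    # so the product is inf even though x * 2**exp fits in fp16.
    scale = to_fp16(2.0 ** exp)
    return to_fp16(x * scale)

def correct_ldexp(x: float, exp: int) -> float:
    # Correct scheme: scale in wider precision, round once at the end.
    return to_fp16(math.ldexp(x, exp))

x = to_fp16(2.0 ** -10)          # 1/1024, exactly representable in fp16
print(naive_ldexp(x, 20))        # inf (the reported behavior)
print(correct_ldexp(x, 20))      # 1024.0 (matches the CUDA result above)
```

Widening to a higher-precision intermediate (or scaling in two steps) avoids the spurious overflow while still rounding the final result to the input dtype.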
Versions
cc @nairbv @mruberry