torch.ldexp upcasts 16-bit inputs to 32 bits. #133264
Labels
module: linear algebra
Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply (matmul)
triaged
This issue has been looked at by a team member and triaged and prioritized into an appropriate module
🐛 Describe the bug
torch.ldexp on a torch.float16 input gives torch.float32, but it should preserve the input type torch.float16 and return infinity when the result goes out of range (which is how it behaves when x is float32: there is no upcast to float64).

Versions
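The reported behavior can be reproduced with a short sketch; the concrete tensor values and the exponent 1000 are arbitrary choices to illustrate the dtype handling, not taken from the original report.

```python
import torch

x16 = torch.tensor(1.0, dtype=torch.float16)
exp = torch.tensor(2)

# Reported bug: the float16 input is upcast, and the result comes
# back as torch.float32 instead of preserving torch.float16.
print(torch.ldexp(x16, exp).dtype)

# A float32 input, by contrast, keeps its dtype: there is no upcast
# to float64, and an out-of-range result saturates to infinity.
x32 = torch.tensor(1.0, dtype=torch.float32)
out = torch.ldexp(x32, torch.tensor(1000))
print(out)        # tensor(inf)
print(out.dtype)  # torch.float32
```

The expectation in the report is that the float16 path should behave the same way: keep the 16-bit dtype and overflow to infinity, rather than silently widening.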
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @lezcano