Inconsistent overflow handling in torch.clamp_min
between CPU and CUDA for float16 tensors
#153187
Labels
module: edge cases
Adversarial inputs unlikely to occur in practice
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Description
There is an inconsistency in how torch.clamp_min handles min values that exceed the float16 range: the CPU and CUDA implementations produce different results for the same inputs.

Reproduction
Output
Colab Notebook
A complete reproduction is available in this Colab notebook: https://colab.research.google.com/drive/1Tgb1jxqCO0eAWXDDTufCm5wg9PuhIYkH?usp=sharing
Versions