Maybe I can at least add some context on where this comes from. The problem is not that anyone thought int64 should be categorized as safely castable to float64 — I do not think anyone ever considered that a useful rule on its own. What was probably desired is that:
np.arange(10, dtype=np.int64) + np.ones(10, np.float64) # just to be clear, both are default types usually
does not error out and instead gives a reasonable (floating-point) result.
So the reason for this was probably promotion rather than casting. Of course, that could perhaps be solved in a different way, e.g. by the same_kind casting rule (as much as I would like to disregard that rule entirely).
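To illustrate the promotion behavior being discussed, here is a short sketch (assuming current NumPy semantics, where the "safe" rule currently returns True for int64 → float64):

```python
import numpy as np

# Promotion: mixing the default integer and float types works
# without error and yields float64.
result = np.arange(10, dtype=np.int64) + np.ones(10, np.float64)
print(result.dtype)  # float64

# The alternative suggested above: int64 -> float64 is allowed
# under the looser "same_kind" rule regardless of the "safe" rule.
print(np.can_cast(np.int64, np.float64, casting="same_kind"))  # True
```

If "safe" were tightened, promotion could in principle still be justified via same_kind instead.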
Just noticed this myself. I get that this is a reasonable trade-off for promotion, but it's inconsistent with: np.can_cast(np.int32, np.float32, casting="safe") is False.
In [45]: np.can_cast(np.int64, np.float64, casting="safe")
Out[45]: True
Is that right? Not every int64 can be represented as a float64.
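A quick demonstration of why the cast is not truly safe: float64 has a 53-bit significand, so integers above 2**53 are not all exactly representable.

```python
import numpy as np

# 2**53 + 1 is a valid int64, but rounds when converted to float64.
big = np.int64(2**53 + 1)
as_float = np.float64(big)
print(int(as_float) == int(big))  # False: the value rounds to 2**53
```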