Description
Reproducing code example:
import numpy as np
np.exp(8j * np.arctan(np.array(1, dtype=np.float16))).imag
np.exp(8j * np.arctan(np.array(1, dtype=np.float32))).imag
np.exp(8j * np.arctan(np.array(1, dtype=np.float64))).imag
np.exp(8j * np.arctan(np.array(1, dtype=np.float128))).imag
Output:
-0.0019353059714989746
1.7484556000744883e-07
-2.4492935982947064e-16
1.0033115225336664047e-19
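For reference (not part of the original report, just a sketch reusing the same expression), relating each error to the dtype's machine epsilon makes the comparison concrete:

import numpy as np

# Sketch: compare the spurious imaginary part (the exact value is 0, since
# exp(8j*arctan(1)) = exp(2*pi*i) = 1) with each input dtype's machine epsilon.
for dt in (np.float16, np.float32, np.float64, np.float128):
    err = np.exp(8j * np.arctan(np.array(1, dtype=dt))).imag
    eps = np.finfo(dt).eps
    print(np.dtype(dt).name, "error =", err, " eps =", eps,
          " |error|/eps =", float(abs(err) / eps))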
The last one seems inappropriately inexact, more like what you would expect from a float80 than from a float128. (Yes, the exp then operates on complex256, but all variants I tried give the same accuracy.) Is the quad precision internally just using the Intel built-in 80-bit floats rather than real 128-bit arithmetic? If so, why waste the space and not provide a float80 data type instead? Alignment issues would not be much worse than for float16, which is provided.
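A quick way to check what np.float128 actually is on a given platform (a sketch, assuming an x86 Linux/macOS build; the figures differ elsewhere) is to look at np.finfo:

import numpy as np

# np.finfo reports the mantissa size and machine epsilon of the type.
# On x86 this typically shows 63 explicit mantissa bits and eps ~ 1.08e-19,
# i.e. the x87 80-bit extended format padded to 16 bytes, whereas IEEE
# binary128 would have 112 mantissa bits and eps ~ 1.9e-34.
info = np.finfo(np.float128)
print("mantissa bits:", info.nmant)
print("eps:          ", info.eps)
print("itemsize:     ", np.dtype(np.float128).itemsize, "bytes")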
Or maybe it is only experimental at this time, since I cannot find a reference to it at https://docs.scipy.org/doc/numpy/user/basics.types.html?
It would be good if the documentation could elaborate on the precision of float128/complex256.
I think software libraries should be used to provide true 128-bit precision arithmetic where it is not supported by hardware, so that the behaviour is as transparent and predictable as possible for users, even if slow.
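As an illustration of that idea (a sketch assuming the third-party mpmath package is installed; it is not part of numpy), a software-arithmetic library already gives the expected error level for a binary128-sized significand:

from mpmath import mp, mpc, atan, exp

mp.prec = 113                      # significand size of IEEE binary128
result = exp(mpc(0, 8) * atan(1))  # same expression: exp(8j*arctan(1)) = exp(2*pi*i)
print(result.imag)                 # spurious imaginary part is now at the ~1e-34 level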
Numpy/Python version information:
numpy: 1.17.2
python: 3.7.4