Changeset 8438 breaks ldexp on 64bit/bigendian systems (Trac #1633) #2229
Comments
Attachment added by @nstange on 2010-10-11: 06_fix_ldexp_signature.diff
@charris wrote on 2010-10-11 This looks like my goof in making the commit. I think you are correct here.
@pv wrote on 2010-10-11 I think this patch is problematic -- it seems it will reintroduce #1464. Not accepting the platform default int type (which is long) is asking for trouble. I don't see really good ways around this. I'd perhaps just check the size of the integer, and emulate the HUGE_VAL return that ldexp normally does.
@pv wrote on 2010-10-11 "platform default int" -> "Python default int", of course. |
@nstange wrote on 2010-10-11 The problem of #1464 is not with ldexp, but with the fact that numpy cannot convert np.intc to np.int. In the meantime: if you ask me, getting wrong results is more serious to scientists than getting error messages. My personal opinion is that the attached diff is less problematic than #1464. On the other hand it might break existing Python code... Just out of curiosity: are casts to the bigger types implemented, that is float->double, intc->int?
@pv wrote on 2010-10-11 I'm not saying that it should be left as it is now -- I'm just saying that reintroducing #1464 is not a good trade either. Generally, casts to "bigger" types are done automatically; you can check which casts are allowed.
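A short illustration of those casting rules, using np.can_cast (assumed here as the way to inspect them; the results in the comments are what one would typically see on a 64-bit Linux build):

```python
import numpy as np

# Casts to "bigger" types are allowed automatically ("safe" casting),
# while narrowing casts such as int64 -> intc are not.
print(np.can_cast(np.float32, np.float64))  # True: float -> double is safe
print(np.can_cast(np.intc, np.int_))        # True: C int -> default (long-based) int
print(np.can_cast(np.int64, np.intc))       # False on most platforms: would narrow
```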
@nstange wrote on 2010-10-11 Why is numpy using a long as the default, not an int? There are reasons why the ints are what they are on every platform, I guess (efficiency, storage size, use of SIMD, ...).
@pv wrote on 2010-10-11 Suggested fix: http://github.com/pv/numpy/compare/master...bug/1633-ldexp-long.diff I don't have big-endian machines at hand, so testing is appreciated (although it should work). Thanks!
The easiest solution here seemed to be to extend the ldexp definition to the whole long type. I don't actually know what would be the easiest way to force a downcast + raise an overflow error. This is not the only place where this int vs. long annoyance has come up; it seems like this could be a useful ufunc feature more generally.
I think the reason why Numpy's default integer type is a long is that Python's own int is a C long under the hood.
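A quick way to see this on a given machine (the values in the comments are assumptions for a typical 64-bit Linux box; Windows builds of that era differ):

```python
import numpy as np

# The default integer type matches the platform's C long, which is also
# what Python 2's int uses internally.
print(np.array([1]).dtype)   # int64 on 64-bit Linux, int32 on 32/64-bit Windows
print(np.dtype(np.int_))     # the C long: same as above
print(np.dtype(np.intc))     # the C int, typically int32
```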
@nstange wrote on 2010-10-12
Just one little note:
I know that this is correct, but the other testcases don't output anything to stdout/err...
I think the way to go here is to make the conversion functions in arraytypes.c check for overflows and possibly throw an OverflowError, and to allow that conversion in PyArray_CanCastSafely. But since this is a major design decision (for implicit conversions users would possibly encounter OverflowErrors they don't understand), it doesn't belong into this bug report, I guess.
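A rough Python-level sketch of that checked downcast (purely illustrative, with a hypothetical helper name; the real change would live in the C conversion routines in arraytypes.c):

```python
import numpy as np

def checked_downcast_to_intc(exp_array):
    """Illustrative only: downcast an exponent array to np.intc,
    raising OverflowError instead of silently truncating."""
    info = np.iinfo(np.intc)
    exp_array = np.asarray(exp_array)
    if exp_array.min() < info.min or exp_array.max() > info.max:
        raise OverflowError("exponent does not fit into a C int")
    return exp_array.astype(np.intc)
```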
@rgommers wrote on 2010-10-17 Still an issue on Windows XP 32-bit (not on OS X). With py25:
And with py31:
This is with the current 1.5.x branch.
@rgommers wrote on 2010-10-17 Also, only a warning is printed with 2.6 and 2.7 on Windows. So only 2.5 fails (and probably 2.4, but I can't test that now).
@pv wrote on 2010-10-18 I can reproduce it on Wine with Python 2.5 (Python 2.4 distutils doesn't seem to work with numpy distutils on Windows any more). Damnation, this combinatorial explosion in platforms and Python versions.
@rgommers wrote on 2010-10-19 Actually, under Wine is how I tested it. Sorry for not mentioning that all the time, but I've never found a single numpy/scipy issue where Wine and XP behaved differently. When I do, I'll start being more consistent. Testing on Windows is usually done with binaries built with Wine on OS X anyway...
@pv wrote on 2010-10-19 Something very strange is going on here:
There seem to be exactly two integer values for which it does not work, and this depends on the Python version. I will need to set up a debugger on a real Windows platform to figure it out (I haven't managed to get debuggers to work under Wine, or a Numpy build with debugging symbols to succeed). This may take some time. Moreover, the new code path is not invoked here, since on this setup long and int have the same size.
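One way to hunt for the failing exponent values on an affected build is a brute-force sweep against the libm ldexp (a sketch; the exponent range chosen here is an assumption):

```python
import math
import numpy as np

# Compare np.ldexp against math.ldexp over the useful exponent range and
# report any mismatches; on an affected build this should print the
# handful of exponent values that misbehave.
for exp in range(-1074, 1024):
    expected = math.ldexp(1.0, exp)
    got = float(np.ldexp(1.0, exp))
    if got != expected:
        print("exp=%d: numpy=%r, libm=%r" % (exp, got, expected))
```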
Original ticket http://projects.scipy.org/numpy/ticket/1633 on 2010-10-11 by @nstange, assigned to unknown.
Testsuite fails with:
FAIL: test_ldexp (test_umath.TestLdexp)
Traceback (most recent call last):
File "/pf/m/m222086/xas/solaris10/64/python2/python-2.7-ve0-gcc/lib/python2.7/site-packages/numpy/core/tests/test_umath.py", line 391, in test_ldexp
assert_almost_equal(ncu.ldexp(2., 3), 16.)
File "/pf/m/m222086/xas/solaris10/64/python2/python-2.7-ve0-gcc/lib/python2.7/site-packages/numpy/testing/utils.py", line 463, in assert_almost_equal
raise AssertionError(msg)
AssertionError:
Arrays are not almost equal
ACTUAL: 2.0
DESIRED: 16.0
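On an affected system the failure reduces to a one-liner (the 2.0 vs 16.0 values in the comment are taken from the test output above):

```python
import numpy as np

# On a correct build this prints 16.0; on an affected 64-bit big-endian
# system the exponent is truncated to zero and 2.0 is printed instead.
print(np.ldexp(2., 3))
```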
The issue has been introduced here:
http://projects.scipy.org/numpy/changeset/8438
The reason is in DOUBLE_ldexp (loops.c.src:1122):
const int in2 = *(int *)ip2;
Since ip2 is being cast from a long pointer to an int pointer, in2 (the exponent) only receives the first 4 bytes, that is the 4 most significant bytes on big-endian, which are zero here on a 64-bit system.
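A small Python illustration of that byte layout (this only demonstrates the reinterpretation; it is not the actual NumPy code path):

```python
import numpy as np

# An exponent of 3 stored as a big-endian 64-bit integer: reading only its
# first four bytes as a 32-bit int picks up the most significant half,
# which is zero -- the same effect as the (int *) cast in DOUBLE_ldexp.
exp = np.array([3], dtype=">i8")
print(exp.view(">i4")[0])   # 0: the first (most significant) four bytes
print(exp.view(">i4")[1])   # 3: the value lives in the last four bytes
```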
The system ldexp's exp argument is always an int, and thus in my opinion the correct behaviour is to reject any exp argument we cannot convert to a system int, that is an np.intc, and to adjust the docs and the testsuite rather than silently give invalid results.
The attached diff implements this.