Valgrind error in lowlevel_strided_loops.c.src #8922
How did you trigger this error?
I ran valgrind on the scipy test suite with a numpy version of ...
Just by looking at the stack trace, it seems like something like concatenate([[numpy.int64(3)], [2, 3, 4]]) may trigger this?
Without further information, this could also be caused by the scipy test suite/code using uninitialized arrays; it would be good to get a working reproducing example.
There may be something fishy in concatenate... I recently tried to rebase #7450 and started getting some weird errors, where importing numpy would crash, and adding a couple of ...
This will reproduce the error. Shall I add/file a test case for this? If the arrays being stacked are not created from ...
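A minimal sketch of what such a reproducer might look like, assuming the np.where / concatenate interaction described later in the thread (the shapes and values here are illustrative):

```python
import numpy as np

# Run under valgrind to observe the reported invalid read, e.g.:
#   valgrind python -c "import numpy as np; np.concatenate([np.where(np.zeros(5, dtype=bool))[0], np.arange(3)])"
empty_idx = np.where(np.zeros(5, dtype=bool))[0]   # size-0 index array from an all-False condition
out = np.concatenate([empty_idx, np.arange(3)])
print(out)   # [0 1 2]; the problem is only visible to valgrind, not at the Python level
```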
Thanks for finding an example!
Hm.
The underlying issue is actually in np.where: when it produces empty arrays, the views can point to invalid memory.
When creating the tuple view of np.where, make sure the arrays point to valid memory when they are empty. Numpy assumes that even empty arrays have valid memory, e.g. in the source stride == 0 assignments. Closes numpygh-8922.
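A small illustration of that assumption from the Python side (a sketch only; the actual change is in the C implementation of np.where): even a size-0 result is expected to carry a data pointer into real memory, because consumers such as the low-level strided loops may dereference the source once when the source stride is 0.

```python
import numpy as np

# The empty index array returned for an all-False condition still exposes a
# concrete data pointer; code like the stride == 0 source assignments in the
# strided loops relies on that pointer being dereferenceable.
idx = np.where(np.zeros(3, dtype=bool))[0]
print(idx.size)                                 # 0
print(hex(idx.__array_interface__['data'][0]))  # address of the backing memory
```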
There is no way to guarantee that non-numpy functions do not produce empty arrays with invalid pointers. I suggest that numpy should not read any element from size=0 arrays. Maybe it's time to add a valgrind test suite? For now we can write tests asserting that numpy functions run with no valgrind errors.
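A sketch of what such a test could look like (the test name is hypothetical; it only pays off when the suite is run under valgrind):

```python
import numpy as np

def test_gh8922_concatenate_empty_where_result():
    # Hypothetical regression test: nothing fails at the Python level even
    # when the bug is present; the point is to exercise the code path so
    # that a valgrind run over the test suite reports any invalid read.
    idx = np.where(np.zeros(8, dtype=bool))
    out = np.concatenate([idx[0], np.arange(3)])
    assert out.shape == (3,)
```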
valgrind is too slow for an automated test; even the compiler-based address sanitizers are somewhat on the slow side for numpy.
Could we do the valgrind test on Circle-CI, for example? Or on a buildbot somewhere?
Collecting memory-related tests into a single file for regression testing under valgrind -- it doesn't have to be run automatically on every push/PR.
Pretty much every test is memory-related; there is not much point in trying to separate them from the rest.
That's a fair point (every test is memory-related), but valgrind-targeted tests are slightly different: normal Python test runs won't be able to trigger them (they may not even contain an assertion); they only trigger valgrind errors. The current policy seems to be to only remember memory-related test cases in the issue tracker, which means regressions will be hard to detect. What about extending runtests.py to add a --valgrind mode, which runs a suite of special tests meant for valgrind? The initial suite could be a simple collection of old segfaults from the issue tracker; the suite could then be built up gradually.
It indeed could be useful to mark issues related to segfaults and uninitialized memory in the test suite; this could be done via decorators that could then be run via nosetests test labels.
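A sketch of that labelling idea, assuming nose (numpy's test runner at the time) and its attrib plugin; the 'valgrind' label name is hypothetical:

```python
from nose.plugins.attrib import attr

@attr('valgrind')   # select only these tests with: nosetests -a valgrind
def test_memory_regression_example():
    # The body would be a memory-sensitive regression, e.g. the gh-8922
    # reproducer sketched earlier in this thread.
    pass
```

The proposed --valgrind mode in runtests.py could then simply pass that label through to the test runner.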
I think it'd be fine to include tests whose primary purpose is to exercise otherwise untested code paths and catch regressions when run under valgrind. I don't know that they need any special ceremony beyond a comment explaining the problem that motivated them. Running valgrind or other sanitizers in an automated way is also an interesting idea, but it isn't trivial and should probably be split off into its own issue.
We could try to do the automated thing on tagged tests, but even then, I think that before a release someone will still run the full test suite under valgrind anyway, since a huge number of tests could in principle trigger valgrind errors and there are few tests that are obvious candidates (maybe iterator tests, etc., and regressions). I rather think that if you have something that likely causes an error, we will typically check it with valgrind anyway, and in any case will check with valgrind before a release; so as long as everything is covered in the tests, there is probably little point in trying to decide which parts should also be "valgrind".
The line seems to be a 'swap', but why would it access beyond the limit? It may be a false positive, but I'd raise the alarm here out of caution.