ENH: Add memory leak test #10302


Closed
wants to merge 1 commit into from
Conversation

@twmr twmr commented Dec 30, 2017

Related: #10157

In #10286 @charris suggested to add a new environment variable for (de)activating the valgrind test. I don't know yet how we want to integrate this environment variable into the CI config files.

@ghost
ghost commented Jan 1, 2018

IMO find out how to upgrade the debug Python on Travis-CI to 3.5 and then run pytest-leaks.

@twmr
Author
twmr commented Jan 1, 2018

AFAIU using pytest-leaks does not help to detect the memory leak reported in #10157, since the leak is in the C part of numpy. @xoviat, have you already used pytest-leaks in other projects?

@twmr
Author
twmr commented Jan 1, 2018

Can anyone help me and tell me why the chroot job in https://travis-ci.org/numpy/numpy/jobs/323433566 fails? It seems as if valgrind is not found by subprocess.check_output.

How do you suggest we skip the test when it is run on AppVeyor? I could add a skip decorator which checks whether valgrind is installed and skips the test if not.
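A minimal sketch of the skip decorator described above, assuming the test shells out to a valgrind binary found on the PATH; the marker name and test body are placeholders, not code from this PR:

```python
import shutil

import pytest

# Skip when no valgrind executable is available (e.g. on AppVeyor/Windows).
requires_valgrind = pytest.mark.skipif(
    shutil.which("valgrind") is None,
    reason="valgrind executable not found on PATH",
)


@requires_valgrind
def test_import_does_not_leak():
    # placeholder body: run valgrind on `python -c "import numpy"` via
    # subprocess.check_output and assert on the reported leak summary
    pass
```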

@ghost
ghost commented Jan 1, 2018

As I understand it, pytest-leaks checks the reference counts, which would be affected by gh-10157. A major contributor to the project (@pv) wanted to use pytest-leaks, which is why I suggested it.

If we are going to do valgrind though, this is not how we should do it. Instead, we should create another job that runs valgrind on every single test. This is not difficult to do with pytest and would provide much more comprehensive coverage.

@twmr
Author
twmr commented Jan 1, 2018

If we are going to do valgrind though, this is not how we should do it. Instead, we should create another job that runs valgrind on every single test. This is not difficult to do with pytest and would provide much more comprehensive coverage.

This sounds like a good idea. I need some assistance with that, since this requires that pytest runs each test transparently in a valgrind subprocess.

@ghost
ghost commented Jan 1, 2018

The basic procedure is:

  1. pip install -e .
  2. pytest --collect-only
  3. For each collected test, run valgrind pytest numpy/tests/test_matlib.py::test_empty for example

@ghost
ghost commented Jan 1, 2018

Writing a script to do this should not be difficult.
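A rough sketch of such a script, assuming numpy has already been installed (step 1, pip install -e .) and valgrind is on the PATH; the valgrind flags and the node-id filtering are illustrative, not what this PR implements:

```python
import subprocess
import sys

# Step 2: collect the test node ids without running anything.
out = subprocess.check_output(
    [sys.executable, "-m", "pytest", "--collect-only", "-q", "numpy"],
    universal_newlines=True,
)
node_ids = [line for line in out.splitlines() if "::" in line]

# Step 3: run every collected test in its own valgrind process.
failures = []
for node_id in node_ids:
    cmd = ["valgrind", "--error-exitcode=1", "--leak-check=full",
           sys.executable, "-m", "pytest", "-q", node_id]
    # Nonzero here means either a test failure or a valgrind error;
    # a real script would want to distinguish the two.
    if subprocess.call(cmd) != 0:
        failures.append(node_id)

print("%d test(s) reported valgrind errors" % len(failures))
sys.exit(1 if failures else 0)
```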

@eric-wieser
Member
eric-wieser commented Jan 1, 2018

Writing a script to do this should not be difficult.

This strikes me as in scope for a pytest plugin, which could run as part of the main pytest call
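For illustration, a conftest.py-level sketch of that idea, assuming a hypothetical --under-valgrind option (not provided by any existing plugin); proper per-test reporting (logstart/logreport) is omitted, so this is only a starting point:

```python
import subprocess
import sys

import pytest


def pytest_addoption(parser):
    # Hypothetical flag that turns on the valgrind wrapper.
    parser.addoption("--under-valgrind", action="store_true",
                     help="re-run each collected test in a valgrind subprocess")


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_protocol(item, nextitem):
    if not item.config.getoption("--under-valgrind"):
        return None  # fall back to the normal in-process test protocol
    # Re-run this single test in a fresh interpreter under valgrind.  The
    # inner pytest run does not get --under-valgrind, so it executes normally.
    cmd = ["valgrind", "--error-exitcode=42", "--leak-check=full",
           sys.executable, "-m", "pytest", "-q", item.nodeid]
    if subprocess.call(cmd) == 42:
        print("valgrind reported errors for %s" % item.nodeid, file=sys.stderr)
    return True  # tell pytest this hook handled the test
```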

@twmr
Author
twmr commented Jan 1, 2018

The basic procedure is:

  1. pip install -e .
  2. pytest --collect-only
  3. For each collected test, run valgrind pytest numpy/tests/test_matlib.py::test_empty for example

@nicoddemus, are you aware of a pytest plugin which runs each test in a subprocess (e.g. under gdb, valgrind, ...)?

@nicoddemus
Contributor

Hmm pytest-xdist?

@pv
Member
pv commented Jan 2, 2018 via email

@charris charris changed the title Add memory leak test ENH: Add memory leak test Jan 2, 2018
@mattip
Member
mattip commented Oct 16, 2018

How slowly will this run? See scipy/scipy#7800 where it slows down tests significantly

@mattip
Member
mattip commented Jan 15, 2019

xref #12718, which was not merged

@mattip mattip added the 57 - Close? label (issues which may be closable unless discussion continued) Jan 21, 2019
@mattip
Member
mattip commented Jan 21, 2019

I am leaning toward closing this as "too complicated for current CI". Any opposition?

@seberg
Member
seberg commented Jan 21, 2019

I am ok with closing; I am not sure the import numpy leak check on its own is valuable enough to warrant the added complexity.

I ran the numpy test suite in valgrind with leak checking after every test, which is certainly faster than actually starting a new process for every test (although it appeared not 100% reliable – I wonder if a debug Python might help with that, although I am not sure why it should). That took more than the whole night, much longer than just running it all in valgrind. Of course a leak check after every module or so would be enough. But overall it still seems like a lot of long test runs compared to the gain.

One thing I could imagine trying would be to make my pytest-valgrind thingy more reasonable, e.g. to only check for leaks once per module or so. But considering how slow valgrind runs... Even if we want to see whether it can work, I think it should run only occasionally or on request, and I am not sure there is even a nice way to set that up.

pytest-leaks may be better (certainly faster, if also slow, although it doesn't get the additional benefit of finding other valgrind errors).

@mattip
Member
mattip commented Jan 21, 2019

Closing.

@mattip mattip closed this Jan 21, 2019
@charris
Member
charris commented Jan 21, 2019

@seberg if the job could be made easier with a script, maybe add something to tools.

Labels: 01 - Enhancement, 05 - Testing, 57 - Close? (Issues which may be closable unless discussion continued)
7 participants