Add initial airspeed velocity (asv) framework by jni · Pull Request #3137 · scikit-image/scikit-image · GitHub

Merged: 8 commits merged into scikit-image:master on Jun 14, 2018

Conversation

@jni (Member) commented May 31, 2018

EDIT: Benchmarks at http://pandas.pydata.org/speed/

Description

In order to prevent performance regressions, we should set up asv to benchmark our slowest functions. This branch exists to work on that goal.

Closes #1061
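For context, asv is driven by an `asv.conf.json` at the repository root plus a `benchmarks/` package. A minimal sketch of such a config (illustrative values only, not necessarily what this branch adds):

```json
{
    "version": 1,
    "project": "scikit-image",
    "project_url": "https://scikit-image.org/",
    "repo": "..",
    "branches": ["master"],
    "environment_type": "virtualenv",
    "benchmark_dir": "benchmarks",
    "env_dir": ".asv/env",
    "results_dir": ".asv/results",
    "html_dir": ".asv/html"
}
```

asv then builds each benchmarked commit in its own environment and stores timings under `results_dir`.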

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.

@emmanuelle (Member) commented

So I'm thinking about generating the benchmarks instead of writing each of them. I see several possibilities for that:

  • turning every gallery example into a benchmark. The nice thing is that examples correspond to relevant usecases. The downside is that we probably don't want the plotting stuff in the benchmark.
  • do some inference on input type and run functions on random values, like in the benchmarking scripts I wrote for the sprint.

What do you think?
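For the second option, a rough sketch of what inference-driven generation could look like (everything here is hypothetical, with `np.mean` standing in for a skimage function that takes a single image argument):

```python
import numpy as np

def make_benchmark(func, shape=(512, 512), seed=0):
    """Build an asv-style timing method that calls ``func`` on a random
    image.  Only the simplest case (a single ndarray argument) is
    handled; real generation would inspect the signature for extra
    parameters and pick sensible values for them."""
    image = np.random.default_rng(seed).random(shape)

    def time_func(self):
        func(image)

    # asv discovers benchmarks by the ``time_`` name prefix
    time_func.__name__ = 'time_' + func.__name__
    return time_func

class GeneratedSuite:
    """Suite whose timing methods are generated rather than hand-written."""

GeneratedSuite.time_mean = make_benchmark(np.mean)
```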

@jni (Member, Author) commented May 31, 2018

@emmanuelle yes, this is a very good plan, but the autogenerated benchmarks will need to fit into the asv benchmark format somehow; that's why I pushed things here. =)

@sciunto sciunto changed the title WIP: Add initial asv framework WIP: Add initial airspeed velocity (asv) framework Jun 1, 2018
@codecov-io commented Jun 2, 2018

Codecov Report

Merging #3137 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #3137   +/-   ##
=======================================
  Coverage   86.08%   86.08%           
=======================================
  Files         338      338           
  Lines       27234    27234           
=======================================
  Hits        23445    23445           
  Misses       3789     3789

Powered by Codecov. Last update 6f2a517...c97f2b4. Read the comment docs.


@TomAugspurger (Contributor) commented

Once this is merged, we'll add an entry to https://github.com/TomAugspurger/asv-runner/blob/master/tests/full.yml and it'll start running automatically.

@jni (Member, Author) commented Jun 5, 2018

@scikit-image/core I suggest merging this and allowing (/forcing) subsequent PRs to add benchmarks.

Perhaps we should add a section to the development guide about writing asv benchmarks?

At any rate I'm excited to have this merged! ;)

@jni (Member, Author) commented Jun 5, 2018

@TomAugspurger what happens when benchmarks themselves are added/removed/changed? Can one re-run a whole range of commits on benchmarks committed at the end?

@TomAugspurger (Contributor) commented Jun 5, 2018 via email

@hmaarrfk (Member) commented Jun 6, 2018

@TomAugspurger You are asking for a lot. I'm not sure it is possible to build anything older than 0.14 and have the tests pass ;)

@jni (Member, Author) commented Jun 6, 2018

@hmaarrfk tests don't need to pass, only benchmarks.

@emmanuelle (Member) left a comment

+1 for merging. Where will we see the results of the benchmarks?

@TomAugspurger (Contributor) commented Jun 7, 2018 via email

@sciunto sciunto changed the title WIP: Add initial airspeed velocity (asv) framework Add initial airspeed velocity (asv) framework Jun 8, 2018
@soupault (Member) commented Jun 9, 2018

@jni could you please add a link to the comment above somewhere (including release_dev.rst)? I assume both contributors and users might find it useful. Otherwise, LGTM. Thanks!

@emmanuelle (Member) commented

@jni if you add the comment requested by @soupault I will merge :-)

@jni (Member, Author) commented Jun 12, 2018

@emmanuelle @soupault done! =)

@TomAugspurger how does asv work with PRs? The specific feature that I'm interested in is the ability to run even a new benchmark (provided by the PR) on the master branch and the PR tip (or the merge commit) before merging. Is this easy enough to do?

@TomAugspurger (Contributor) commented Jun 12, 2018 via email

@jni (Member, Author) commented Jun 12, 2018

@TomAugspurger that sounds like a fun challenge. And @Carreau has done the hard work with https://github.com/MeeseeksBox/MeeseeksDev. Can you give me a bit of background about what the bot would need to do? ie if I have a head commit and the master branch it split from, how do I:

  • grab the benchmarks and asv conf from the head commit
  • run the benchmarks on both
  • publish the results somewhere

?

@TomAugspurger (Contributor) commented
@jni As far as running goes, yes I think you've got it. You should be able to clone the PR and run asv continuous -f 1.1 HEAD master and that'll give you a text output that can be published on the issue. The bot could optionally take a -b flag to limit it to a certain set of benchmarks.

I think the main concern is scheduling these on-demand runs so that they don't collide with:

a) each other
b) the nightly benchmarks

The benchmark machine is using Airflow right now to do the scheduling. Airflow supports external triggers, so this should be possible. I can help out with the airflow stuff if you want.
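As a sketch of how such a bot might assemble the `asv continuous` invocation described above (a hypothetical helper, not the actual bot code; the optional `-b` regex limits which benchmarks run):

```python
def build_asv_command(base='master', head='HEAD', factor=1.1, bench=None):
    """Build the ``asv continuous`` command line.  asv benchmarks both
    commits and flags any benchmark that changed by more than ``factor``."""
    cmd = ['asv', 'continuous', '-f', str(factor), head, base]
    if bench is not None:
        cmd += ['-b', bench]  # restrict to benchmarks matching this regex
    return cmd

print(build_asv_command(bench='morphology'))
# ['asv', 'continuous', '-f', '1.1', 'HEAD', 'master', '-b', 'morphology']
```

The bot would run this in a fresh clone of the PR branch and post asv's text report back on the pull request.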

@jni (Member, Author) commented Jun 12, 2018

@emmanuelle if we merge this soon we can beat the 4th anniversary of #1061. =P

@hmaarrfk (Member) commented

@jni [image]

@jni (Member, Author) commented Jun 12, 2018

@hmaarrfk I know 😂 😭 😂

@@ -7,6 +7,8 @@
- [ ] Clean style in [the spirit of PEP8](https://www.python.org/dev/peps/pep-0008/)
- [ ] [Docstrings for all functions](https://github.com/numpy/numpy/blob/master/doc/example.py)
- [ ] Gallery example in `./doc/examples` (new features only)
- [ ] Benchmark in `./benchmarks`, if your changes aren't covered by an existing benchmark
@stefanv (Member) commented on the diff

Do we really want a benchmark for every new piece of functionality?

@jni (Member, Author) replied

@stefanv as with all of these tick marks, they are at the reviewers' discretion. =) The idea should certainly be considered for all PRs. And the idea is to build a suite over time. So anything that touches e.g. dilation, gaussian filters, watershed, etc should add a benchmark, and that benchmark should be run before and after the change, so that skimage moves towards speed as a whole.
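As an illustration of the kind of benchmark meant here, a minimal asv-style suite (numpy array shifts stand in for `skimage.morphology.dilation` so the sketch stays self-contained; a real benchmark would call the skimage function directly):

```python
import numpy as np

class MorphologySuite:
    """asv times methods whose names start with ``time_``; ``setup``
    runs before each benchmark so input creation isn't timed."""

    def setup(self):
        rng = np.random.default_rng(0)
        self.image = rng.random((256, 256))

    def time_dilation(self):
        # 3x3 maximum over shifted copies, emulating grey-scale
        # dilation with a square footprint
        img = self.image
        shifted = [np.roll(np.roll(img, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        np.maximum.reduce(shifted)

    def peakmem_dilation(self):
        # asv also tracks peak memory for ``peakmem_`` methods
        np.maximum.reduce([self.image, np.roll(self.image, 1, 0)])
```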

@stefanv stefanv merged commit 49f964e into scikit-image:master Jun 14, 2018
@stefanv (Member) commented Jun 14, 2018

Thank you, Juan!

@jni jni mentioned this pull request Jun 19, 2018
@hmaarrfk (Member) commented Jul 10, 2018

@jni good job on the asv PR.

For those looking to install asv locally: either wait for 0.2.2 or install from git with pip:

pip install git+https://github.com/airspeed-velocity/asv.git#egg=asv

@jni (Member, Author) commented Aug 16, 2018

@TomAugspurger none of the benchmarks added after this PR appear on the ASV server! Any ideas why? I looked at benchmarks/__init__.py and asv.conf.json but there's nothing there to suggest that new benchmarks shouldn't be run. Is there some asv cache that needs clearing when new benchmarks are added?

@jni jni deleted the asv branch August 16, 2018 00:18
@jni (Member, Author) commented Aug 16, 2018

@TomAugspurger (Contributor) commented

I think my 2 year old hit the power button on the benchmark machine, and I didn't notice till last night 😆 It should be catching up now, and we've had a talk about touching Daddy's stuff.

@jni (Member, Author) commented Aug 17, 2018

@TomAugspurger I feel like this is a longer-term problem of not pulling from master. New benchmarks were added weeks ago...

@jni (Member, Author) commented Aug 17, 2018

Oh although I just reread your statement that you think it's a longer term problem on your end also. =P Ok let's see how we go... =)

@TomAugspurger (Contributor) commented

Found the problem. When we initially set this up, I manually made the change adding a benchmarks/__init__.py on the ASV machine. But I never cleared that change, so the git pull wasn't working for scikit-image. Fixed that, and am re-running.

@TomAugspurger (Contributor) commented Aug 17, 2018 via email

@jni (Member, Author) commented Aug 23, 2018

🎉 thanks @TomAugspurger!

8 participants