Replace morphology.local_maxima with faster flood-fill based Cython version by lagru · Pull Request #3022 · scikit-image/scikit-image · GitHub

Replace morphology.local_maxima with faster flood-fill based Cython version #3022


Merged · 35 commits · Jun 19, 2018

Conversation

@lagru (Member) commented Apr 22, 2018

This is a follow-up to #3016 that proposes an alternative flood-fill based algorithm in Cython for maxima detection in images (and ND arrays). You can find a demo and benchmarks here.

Description

This PR is meant to be a place for discussion first and is not necessarily meant to be merged.

The current state is work in progress and I'm sure there is room for improvement. However, I feel I have reached a point where it would be useful to start a discussion and review process. There are some open points that will or could have a large impact on the function's performance and implementation, which I list below or as review comments. Furthermore, I felt a little out of my depth concerning some C/Cython related matters, especially while implementing the re-sizable buffer. So if you see any bad practices, please point them out! I'm eager to improve.

For a description of the algorithm itself please have a look at the docstrings and code comments. Including it here wouldn't make much sense as these are still WIP.

Oh, and just because it isn't said enough: Thank you guys for your work and time. scikit-image is a great tool and library!

Checklist

I'll try to add talking points and arguments you bring up here.

General decision points

  • (A) Can plateaus that border the image edge be maxima? What if the entire image has only one value?
    --> I'll just make this toggleable. For now I'll set the default to exclude maxima on borders.

Intuitively I would assume that local maxima are detected according to the "mathematical" definition which demands smaller values on all sides of a maximum. This would mean that maxima can't border the image edge and a constant image has no maximum.

[...] don't return maxima on the edges. However, it would be easy to add an option to include them by using padding.

  • (B) Should the function return index positions (like feature.peak_local_max) or a "boolean" array with the same shape as the image (like morphology.local_maxima)?
    --> Return an array of 1's and 0's. Optionally add a parameter to change to indices (see the conversion sketch after this list).

[...] we should return whatever is more natural for the algorithm (in this case, a boolean array), and provide utility functions to convert between them.

  • (C) Which variant of the algorithm should be used? Benchmarks here...
    --> It seems that algorithm A (prefiltering the array in one direction) performs better in all test cases, so I'll prefer it for now.

  • (D) Is this a candidate for SciPy's ndimage module or for scikit-image? If the latter, which module?
    --> I'll post and ask on the SciPy mailing list when I'm confident with the progress.

Even if it is integrated into SciPy, it makes sense (IMO) to integrate it temporarily into scikit-image, to avoid bumping the dependency to the latest SciPy version and waiting for a SciPy release before it can be used internally in scikit-image.

  • (E) Do you actually want this as an addition to the respective library? 😉
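As an aside on (B): converting between the two conventions is cheap either way. A minimal sketch in plain NumPy (the variable names are illustrative only, not part of this PR):

import numpy as np

# Boolean mask -> coordinates: one row of (row, col, ...) indices per maximum.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 3] = True
coords = np.transpose(np.nonzero(mask))

# Coordinates -> boolean mask of the original shape.
mask2 = np.zeros(mask.shape, dtype=bool)
mask2[tuple(coords.T)] = True
assert np.array_equal(mask, mask2)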

Final tasks

References

For reviewers

(Don't remove the checklist below.)

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.

@jni (Member) commented Apr 23, 2018

Thanks @lagru! Fantastic to get identical results to local_maxima with a 10-20x speed advantage!

To answer your decision points, from my perspective:

  • A. This is my view: don't return maxima on the edges. However, it would be easy to add an option to include them by using padding.
  • B. I think we should return whatever is more natural for the algorithm (in this case, a boolean array), and provide utility functions to convert between them.
  • C. This I don't have a strong view on. Algo B is simpler, so I lean towards it, but algo A does appear to have a speed advantage in all cases. But, see my comments later.
  • D. This is a good fit in scikit-image, either in feature or morphology. It might also make sense in scipy.ndimage, but I can't comment on that.
  • E. Erm... Yes. =P

A major point in this review is that this function and these algorithms would make complete sense in n dimensions, rather than restricting to 2D, and also with varying levels of connectivity (4- or 8-connected in 2D, 6-, 18-, or 26-connected in 3D, though I prefer the terms 1-connected, 2-connected, 3-connected for the same). This would be easier for algo B than A, I think.

I've been meaning to write a guide about working in nD vs 2D, but in the meantime, I suggest looking at the morphology.watershed code for an example. The key is to think in indices and neighbours instead of rows/columns and -1/+1 on each.

One potential limiting factor is that, if I remember correctly, we have flood fill in 2D and 3D but not nD. But just 2D/3D is a big improvement over 2D-only.

@lagru (Member Author) commented Apr 23, 2018

A major point in this review is that this function and these algorithms would make complete sense in n dimensions, rather than restricting to 2D, and also with varying levels of connectivity (4- or 8-connected in 2D, 6-, 18-, or 26-connected in 3D, though I prefer the terms 1-connected, 2-connected, 3-connected for the same). This would be easier for algo B than A, I think.

I would generally agree. However, I fear that this would incur noticeable speed penalties for the 2D case.

I've been meaning to write a guide about working in nD vs 2D, but in the meantime, I suggest looking at the morphology.watershed code for an example. The key is to think in indices and neighbours instead of rows/columns and -1/+1 on each.

That seems well documented but a little bit overwhelming to grasp at first glance. I think I get that you actually iterate over each pixel of the flattened array. The neighbors are found by using offsets from the current index. I don't yet understand how these offsets are computed without using too much memory. E.g. for a connectivity of 2 in the 2D case, 8 offsets are required for each pixel (except at the edges). So I would need memory at least 8 times the size of the provided image. This gets even worse for higher dimensions...

@sciunto (Member) commented Apr 23, 2018

Awesome work. Many thanks! I'll reply to A, B and D.

A. I agree with your intuition :)
B. +1 with @jni
D. Even if it is integrated into SciPy, it makes sense (IMO) to integrate it temporarily into scikit-image, to avoid bumping the dependency to the latest SciPy version and waiting for a SciPy release before it can be used internally in scikit-image.

@ThomasWalter (Contributor) commented:

Amazing. That's really cool.

What I do not understand is: why do you get the same results as extrema.local_maxima? You assume that a plateau that is higher than any pixel inside the image but touches the border is not a maximum, whereas in extrema.local_maxima it would be a maximum. So, in principle, you should see some differences, no?

Another question I am asking myself: where does this enormous speed gain come from? Is this algorithmic or implementation?

@lagru (Member Author) commented Apr 25, 2018

What I do not understand is: why do you get the same results as extrema.local_maxima? You assume that a plateau that is higher than any pixel inside the image but touches the border is not a maximum, whereas in extrema.local_maxima it would be a maximum. So, in principle, you should see some differences, no?

(Assuming you mean morphology.local_maxima) In the current implementation (b500723) touching the border is not a disqualifying criterion, thus the results are identical to local_maxima. Following the discussion around (A), I already have a modification in my local repo that prohibits maxima from touching the border. The performance is about the same.

Another question I am asking myself: where does this enormous speed gain come from? Is this algorithmic or implementation?

I think the Cython implementation definitely plays a role, and local_maxima has a lot more Python around its core function _greyreconstruction.reconstruction_loop. Profiling would perhaps give more insight into how much of the difference is due to Python overhead.
At least for algorithm A there are also some algorithmic advantages, because the first loop, which considers only one dimension, is relatively efficient and reduces the number of pixels evaluated by the following queue-based part. For some images this can be really noticeable.
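A sketch of what such a one-dimensional prefilter might look like (illustrative plain NumPy only, not the PR's actual Cython code):

import numpy as np

def candidates_1d(image):
    # Keep only pixels that are >= both neighbours along the last axis;
    # everything else cannot belong to a local maximum and can be
    # skipped by the queue-based second pass.
    cand = np.ones(image.shape, dtype=bool)
    cand[..., 1:] &= image[..., 1:] >= image[..., :-1]
    cand[..., :-1] &= image[..., :-1] >= image[..., 1:]
    return cand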

@ThomasWalter (Contributor) commented Apr 25, 2018

OK cool, thanks a lot for your comment. My point is that if the reconstruction is really suboptimal, it might be worth investigating an alternative implementation irrespective of the detection of local maxima, which you have solved already, simply because reconstruction can be used for many things.

By the way, I think it is better to have at least the option to also include maxima that touch the border. Actually, there is no mathematically sound reason for excluding or including them in the first place (it is just a choice), but for some morphological operators it is important to have all maxima to start with.

@lagru (Member Author) commented Apr 25, 2018

@ThomasWalter I should have added that excluding maxima at borders simplifies the implementation of algorithm A and, more importantly, allows the first loop to exclude more pixels. Especially for "unnatural" images with large plateaus that may get excluded early, this can reduce the evaluation burden for the second, queue-based part!

@jni (Member) commented Apr 26, 2018

@ThomasWalter I'm going from memory/intuition here rather than a full overview of the code, but if I remember correctly, the morphological reconstruction will run a dilation on the full image multiple times, enough to cover any plateaus. On the other hand, this algorithm only needs one pass over the image + some flood filling, so it can potentially be much faster.

Regarding the borders, we can reduce the borderless case to the border-ful one by padding with the minimum value. (Though this may not be the fastest or most efficient approach.)
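A minimal sketch of that padding trick (assuming image is the input array and local_maxima excludes border maxima):

import numpy as np

# Pad with the global minimum so no original plateau touches the new
# border, find maxima, then crop the result back to the input shape.
padded = np.pad(image, 1, mode="constant", constant_values=image.min())
maxima = local_maxima(padded)[(slice(1, -1),) * image.ndim]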

@lagru

for a connectivity of 2 in the 2D case, 8 offsets are required for each pixel (except at the edges). So I would need memory at least 8 times the size of the provided image.

No, you compute the offsets "just in time", when you are observing each pixel/voxel. So you compute an offset array (a 4-, 8-, 6-, 18-, or 26-element array of signed ints) and when you check a pixel you compute its neighbors by adding the pixel coordinate to that array. You do not broadcast all pixel coordinates against the offset array! That would be crazy. =P (In my defense, I wrote this code a very long time ago. =D)
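A rough Python sketch of that idea (raveled_offsets is a hypothetical helper, not the actual watershed code):

import numpy as np
from itertools import product

def raveled_offsets(shape, connectivity=1):
    # Offsets to a pixel's neighbours in the flattened (raveled) array,
    # computed once per image instead of once per pixel.
    ndim = len(shape)
    deltas = [d for d in product((-1, 0, 1), repeat=ndim)
              if d != (0,) * ndim
              and sum(x != 0 for x in d) <= connectivity]
    # Stride (in elements) of each axis in the raveled array.
    strides = [int(np.prod(shape[i + 1:])) for i in range(ndim)]
    return np.array([np.dot(d, strides) for d in deltas])

# For a 2D image of width 7, the 1-connected neighbours are at ±1 and ±7:
print(raveled_offsets((5, 7), connectivity=1))  # [-7 -1  1  7]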

@lagru (Member Author) commented Apr 26, 2018

Alright, I got an ND version of algorithm A to work and it seems like it's even slightly faster than the 2D version for the 2D case. :D I updated the benchmark notebook as well so you can see for yourself.

@lagru changed the title from "WIP / Demo: New maxima finding algorithm(s) for 2D" to "WIP: New function to find local maxima in N dimensions" Apr 26, 2018
# Set edge values of flag-array to 3
without_borders = (slice(1, -1) for _ in range(flags.ndim))
flags = np.pad(flags[tuple(without_borders)], pad_width=1,
               mode="constant", constant_values=3)
@lagru (Member Author) commented:

There must be a better / faster way to do this, right? But I can't think of a way to get a view or slice of only the "edge" values (the inverse of without_borders).
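One in-place alternative would be to assign to the two border hyperplanes along each axis directly, avoiding the copy that np.pad makes (a sketch; set_edges is a hypothetical helper, not part of this PR):

def set_edges(flags, value=3):
    # For each axis, overwrite the first and last hyperplane in place.
    for axis in range(flags.ndim):
        idx = [slice(None)] * flags.ndim
        idx[axis] = 0
        flags[tuple(idx)] = value
        idx[axis] = -1
        flags[tuple(idx)] = value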


# TODO prevent integer overflow!
self._buffer_size *= 2
new_buffer_ptr = <QueueItem*>realloc(
@lagru (Member Author) commented:

I expect that there is a constant or variable that stores the maximum possible value of Py_ssize_t, but I can't find it.
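For what it's worth, CPython exposes this constant as PY_SSIZE_T_MAX in Python.h; from Python the same value is available as sys.maxsize. A sketch of an overflow guard under that assumption:

import sys

def can_double(buffer_size):
    # Doubling is safe only while the current size fits twice into a
    # Py_ssize_t; sys.maxsize equals PY_SSIZE_T_MAX.
    return buffer_size <= sys.maxsize // 2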

@lagru (Member Author) commented Apr 27, 2018

Some additional questions:

  • Is the morphology module the right place for the ND-version?
  • Should this replace the existing function local_maxima or be an alternative? There are several consequences:
    • If we replace the old function we would have to accept that peaks can border the array edge by default, otherwise we would break the API.
    • If we don't replace the function, how do we name the new one distinctly? Ideally there should be a clear recommendation for which function to prefer.
  • The queue and many of the changes to make the algorithm n-dimensional were inspired by the watershed algorithm, so there is some overlap in functionality.
    • E.g. do you want me to merge my queue with the functionality in heap_general.pxi? However, there are some important differences between the two "data structures" (the heap considers insertion time and my queue can be restored), so I'm hesitant to call this a good idea.
  • peak_local_max features several arguments that can be used to restrict the set of found maxima. Should I keep my new function pure (do only one thing and do it well) or should I think about integrating arguments like a height condition, area, etc. into the function?
    • I'm undecided on this. E.g. a height condition could be used to speed up the algorithm, which is a plus in my book. But I think other things should be provided by separate, well-defined functions if so desired.

@soupault added this to the 0.15 milestone Apr 27, 2018
@lagru mentioned this pull request Apr 27, 2018


cdef:
    struct Queue:
A member commented:

Any chance we could re-use a data structure already present somewhere else in order to limit the complexity of the code?

The same member added:

Oh I see that you've mentioned the Queue of the watershed. A common codebase would be great if it makes sense.

@lagru (Member Author) replied:

I agree, a common codebase would be ideal (see this comment as well: #2876 (comment)). However, as already mentioned, there are some problematic differences between the two queue implementations and their requirements, which I will try to summarize below:

Heap (general_heap.pxi) is a priority queue used by watershed, Queue is used in my code:

  • Multiple iterations: Queue allows multiple iterations via q_restore and preserves a consumed (popped) item internally until cleared explicitly. Heap may overwrite consumed data.
  • Order: Queue uses insertion order while Heap sorts (using the array of pointers) by the return value of a function smaller.
  • As a consequence, the internal data representation differs as well: Queue uses only one array to store its items while Heap uses an additional array to store pointers to its items.

@emmanuelle You seem to be the original contributor of Heap so please point out any misunderstandings on my part.

I don't see much overlap here concerning the internal machinery of these "basic" containers. But it may be possible to implement a data container that fulfills the requirements of both Heap and Queue. Would that be wise, though? I think my code would be slower due to overhead introduced by the sorting machinery, and watershed would have to suffer a larger memory footprint due to the internal preservation of popped items.

Having said that, if you still think it might be worth it I'll try to make a quick draft of a queue that can be used as a replacement for both. That might be more useful to reach a more informed decision.

A member replied:

No no, a FIFO Queue and a Heap are two different data structures, even if the latter is used to implement Priority Queues. I don't think there is much code to be shared between them and it makes sense to keep them separate.
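A tiny plain-Python illustration of the difference, just for intuition:

from collections import deque
import heapq

q = deque((3, 1, 2))                         # FIFO queue
print([q.popleft() for _ in range(3)])       # [3, 1, 2] -- insertion order

h = [3, 1, 2]
heapq.heapify(h)                             # heap / priority queue
print([heapq.heappop(h) for _ in range(3)])  # [1, 2, 3] -- sorted order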

@stefanv (Member) commented Apr 30, 2018

I don't have anything to add at this point, but want to express my excitement at this conversation happening. Thanks!

@lagru (Member Author) commented May 4, 2018

Okay, the new algorithm seems to pass the unit tests of the old local_maxima and local_minima.
Before I start writing tests and proper docstrings I'd like some decisions to be made regarding the API:

  • (F) Should the new algorithm replace the old one in local_maxima? The output seems to be the same for all scenarios I tested (benchmarking and unit tests). If not, how should this new function be named?
  • (G) What's the scope of this function to be? Should it just find maxima, or should additional options like those of peak_local_max be integrated?
  • (H) How do we handle code that shares functionality with other code? E.g. the new _offset_to_raveled_neighbours was derived from _compute_neighbors in watershed.py.

To F: I would prefer to replace the old, slower one, mainly to avoid ambiguity between 3 maxima-finding functions and because I can't think of a more fitting name than local_maxima. This wouldn't need a deprecation treatment because the new one supports all the features of the old one while being faster in all cases. I would also recommend deprecating local_minima. It seems trivial to me to invert an image and use local_maxima.
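The inversion argument as a sketch (local_minima_via_maxima is illustrative only; unsigned dtypes need image.max() - image because unary minus would wrap around):

def local_minima_via_maxima(image):
    # Minima of `image` are exactly the maxima of the inverted image.
    inverted = image.max() - image if image.dtype.kind == "u" else -image
    return local_maxima(inverted)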

To G: I like the current scope: find local maxima depending on connectivity and the definition at the edges. Convenient filter options could be provided by another function, e.g. select_maxima, that wraps this one and provides the means to select the desired maxima out of all found ones. At least that is what I think I should have done with find_peaks in SciPy.
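A sketch of what such a select_maxima wrapper could look like (entirely hypothetical, not part of this PR):

def select_maxima(image, maxima, min_height=None):
    # Filter an existing boolean maxima mask by extra criteria; more
    # conditions (area, distance, ...) could be added the same way.
    selected = maxima.copy()
    if min_height is not None:
        selected &= image >= min_height
    return selected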

To H: Honestly, I would prefer to worry about that in another PR after the API and scope are clear.

@lagru mentioned this pull request May 6, 2018
@jni (Member) commented May 8, 2018

@lagru my 2c:

  • F: this function should replace the existing local_maxima.
  • G: long term I would like to see this replace peak_local_max (this would require some processing of plateaus but otherwise is almost a straightforward np.nonzero!) But I agree with you that this PR is not the place to worry about this.
  • H: that question is out of scope for this PR.

I want to keep local_minima. Many functions are trivial to obtain by composition (e.g. rgb2lab is a composition of rgb2xyz and xyz2lab), but they are there because they are common enough that people find them convenient.

@lagru (Member Author) commented Jun 2, 2018

I just noticed while updating the profiling notebook that between commit 5ca5a17 and the current state (14e164a) the function got noticeably slower, in some cases by up to 30%! I'm currently trying to find the reason and the offending commit...

I'm having difficulties finding the offending commit. It seems like declaring the flag values as module variables in 08da763 caused a noticeable slowdown, which I don't understand; I'd have thought that the compiler would optimize that change away.
Furthermore, several other commits (e.g. 4452775) might have introduced small slowdowns. However, it's hard to pinpoint these because I can no longer reproduce the qualitative results in the current notebook, and I have no idea why.

These two commits serve only the purpose of making the code more readable; are they worth the performance hit?

The current implementation, however, is still orders of magnitude faster than the old version and still in the same neighborhood as peak_local_max. So it might be enough to leave it at the current state unless somebody has further insights.

@jni (Member) commented Jun 3, 2018

@lagru I'm running benchmarks on every commit in that range using the machinery in #3137. Generally readability counts a lot in this library, and anyway I think there are probably Cython optimisations to make sure that those variable assignments don't actually cost anything. Once I figure out exactly what broke, I'll report back to see whether there are any possible fixes.

@jni (Member) commented Jun 3, 2018

@lagru which benchmark specifically are you concerned about? For the random image (1000,1000) it's actually a bit faster after "MAINT: Merge _compute_neighbors and _offsets_to_raveled_neighbors". Here's the results:

· Running 10 total benchmarks (10 commits * 1 environments * 1 benchmarks)
[  0.00%] · For scikit-image commit hash e55bec90:
[  0.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1................................................................................
[  0.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 10.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         26.3±0.1ms
[ 10.00%] · For scikit-image commit hash 260efb8b:
[ 10.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1..............................................................................
[ 10.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 20.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         26.2±0.1ms
[ 20.00%] · For scikit-image commit hash f0f40308:
[ 20.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1...........................................................................
[ 20.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 30.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         26.2±0.1ms
[ 30.00%] · For scikit-image commit hash b8b29ed4:
[ 30.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1.............................................................................
[ 30.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 40.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                        26.4±0.09ms
[ 40.00%] · For scikit-image commit hash 0f750342:
[ 40.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1............................................................................
[ 40.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 50.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         26.7±0.2ms
[ 50.00%] · For scikit-image commit hash 509b41bd:
[ 50.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1...........................................................................
[ 50.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 60.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                        20.2±0.09ms
[ 60.00%] · For scikit-image commit hash 119c8d7b:
[ 60.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1...........................................................................
[ 60.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 70.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                        20.2±0.08ms
[ 70.00%] · For scikit-image commit hash fecd7db3:
[ 70.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1.........................................................................
[ 70.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 80.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         20.0±0.2ms
[ 80.00%] · For scikit-image commit hash 0b746e94:
[ 80.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1.........................................................................
[ 80.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[ 90.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                        20.3±0.04ms
[ 90.00%] · For scikit-image commit hash d0b7c80e:
[ 90.00%] ·· Building for conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1...........................................................................
[ 90.00%] ·· Benchmarking conda-py3.6-cython0.28-matplotlib2.2-numpy1.14-scipy1.1
[100.00%] ··· Running benchmarks.SegmentationSuite.time_local_maxima                                         20.0±0.1ms

This is on my branch jni/maxima2d-bench, which required a rebase so the commits don't match, but they start at the equivalent of 5ca5a17.

@jni (Member) commented Jun 7, 2018

ping @lagru

@lagru (Member Author) commented Jun 7, 2018

@jni Sorry that I took so long and thank you for investigating yourself!

@lagru which benchmark specifically are you concerned about? For the random image (1000,1000) it's actually a bit faster after "MAINT: Merge _compute_neighbors and _offsets_to_raveled_neighbors". Here's the results:
[code snippet]

When rerunning the notebook with the benchmarks I get a qualitative difference between the new local_maxima and peak_local_max. This difference was previously negligible but is now very noticeable: e.g. local_maxima got slower relative to peak_local_max for the first 3 benchmarks.

BUT I can't even reproduce the original results of the notebook any longer. My only explanation is that something in the background or an update to a related library changed how the benchmarks perform.

Considering that weird behavior, I'd leave it as it is. If I solve this mystery I can always come back and apply the fix.

@jni (Member) commented Jun 7, 2018

Ok, I agree, this is not worth worrying about at this moment. @scikit-image/core anyone care to review/merge?

@ThomasWalter maybe you would like to review also?

@ThomasWalter (Contributor) commented:

@jni: please go ahead, I did not really follow. (Actually, I did not know that I could review, as I do not belong to the core team.)

@jni (Member) commented Jun 12, 2018

@ThomasWalter well now you know. =)

@soupault @stefanv @emmanuelle an extra ping for good measure. ;) This is a big PR but the code is very clean after several revisions, so it should not be too tricky to review. Have a look!

Note: I intend to backport to 0.14 so let's keep the Py2 compatibility stuff and take it out in a later PR.

lagru added 2 commits June 16, 2018 16:49:
  • Don't use NOT_MAXIMUM for the boolean flag `is_maximum`; this might break if the value is changed and no longer 0.
  • Failed due to mismatching display formats on different platforms; this should alleviate the issue.
@stefanv merged commit 912d9d2 into scikit-image:master Jun 19, 2018
@stefanv (Member) commented Jun 19, 2018

@lagru This is an exceptional piece of work; I wish all PRs were of such quality!

@stefanv (Member) commented Jun 19, 2018

@meeseeksdev backport to v0.14.x

@lumberbot-app (bot) commented Jun 19, 2018

There seems to be a conflict; please backport manually.

@lumberbot-app added the "Still Needs Manual Backport" (MrMeeseeks-managed) label Jun 19, 2018
@lagru (Member Author) commented Jun 20, 2018

@stefanv That is a really nice and certainly motivating thing to hear.

And in return I want to mention that without your helpful and constructive feedback (special thanks to @jni) this wouldn't have been possible. Thank you.

@lagru deleted the maxima2d branch June 20, 2018 06:39
@lagru mentioned this pull request Jun 27, 2018
@JDWarner mentioned this pull request Jul 5, 2018
lagru added a commit to lagru/scikit-image that referenced this pull request Nov 14, 2018
jni added a commit that referenced this pull request Nov 14, 2018
Backport #3022 & #3447: Rewrite of local_maxima with flood-fill & Cython + bugfix
Labels: ⏩ type: Enhancement (Improve existing features); Still Needs Manual Backport (MrMeeseeks-managed)
9 participants