MAINT: Make the refactor suggested in prepare_index by eric-wieser · Pull Request #8278 · numpy/numpy

MAINT: Make the refactor suggested in prepare_index #8278


Merged
merged 11 commits from refactor-prepare-index-only into numpy:master on Jul 18, 2017

Conversation

Member
@eric-wieser eric-wieser commented Nov 15, 2016

Split off from #8276

Extracting the tuple-conversion logic into its own function.

Things I'd like feedback on:

  • This might worsen performance slightly for scalar indices, as it now forces the tuple conversion. Is there an easy way to tell if this is significant? Is the clarity vs efficiency tradeoff acceptable here?
  • Have I messed up my reference counting?
  • Are these functions in the right places?

Update: new approach that:

  • Avoids calling __getitem__ more than once on the provided sequence, if it is converted to a tuple:
    • More performant than master for a[[0, None]]
  • Avoids allocating any new Python objects, instead using a tuple-like buffer on the stack (a sketch of this approach follows below)
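
For readers skimming the discussion, here is a minimal sketch of the stack-buffer idea, written with the owned-reference convention the PR eventually settled on; the function name, signature and error message are illustrative, not quoted from the diff:

#include <Python.h>

static Py_ssize_t
unpack_indices(PyObject *index, PyObject **result, Py_ssize_t result_n)
{
    Py_ssize_t i, n;

    /* A plain tuple is unpacked item by item into the caller's buffer. */
    if (PyTuple_CheckExact(index)) {
        n = PyTuple_GET_SIZE(index);
        if (n > result_n) {
            PyErr_SetString(PyExc_IndexError, "too many indices for array");
            return -1;
        }
        for (i = 0; i < n; i++) {
            result[i] = PyTuple_GET_ITEM(index, i);  /* borrowed reference */
            Py_INCREF(result[i]);                    /* the buffer owns its entries */
        }
        return n;
    }

    /* Anything else is treated as a single index expression. */
    Py_INCREF(index);
    result[0] = index;
    return 1;
}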

@eric-wieser
Member Author
eric-wieser commented Nov 15, 2016

If this is merged, #4434 will need a rebase

* The index might be a multi-dimensional index, but not yet a tuple
* this makes it a tuple in that case.
*
* TODO: Refactor into its own function.
Member Author

As instructed here

@charris
Member
charris commented Dec 3, 2016

@seberg Comments?

if (index == NULL) {
return -1;
index_as_tuple = PySequence_Tuple(index);
if(index_as_tuple == NULL) {
Member

Space before (

Member

(also a few more times)

int ellipsis_pos = -1;

index = prepare_index_tuple(index);
if(index == NULL)
Member

space, plus please add the curly brackets, we always put them in numpy

else {
n = PyTuple_GET_SIZE(index);
n = PyTuple_GET_SIZE(index_as_tuple);
if (n > NPY_MAXDIMS * 2) {
Member

I am a bit curious, this block can probably be removed? Seems to be just an early error out, and I don't see a reason to optimize error speed ;).

Member Author

It's not immediately clear to me where the late error-out is that this is protecting, but I guess I could remove it and see what breaks?

Member

Well if you pass in a finished tuple you have to get the same error at some point.

Member Author

This line is the part that handles finished tuples. The only way you can avoid this check is by passing a tuple subclass, which is probably not tested

@seberg
Member
seberg commented Dec 4, 2016

Could be good to see whether it makes a real speed difference to build that tuple (likely not, but then I may actually have timed it at the time and thought it might be nice to have). Then again, I am not sure whether arr[0] being even 10-20% slower is a huge deal.

@seberg
Member
seberg commented Dec 4, 2016

Ref counting looks good on first sight, and yes, definitely all in the right place, the only issue may be to check whether doing a bit weirder code for speed may be worth it.

@eric-wieser eric-wieser force-pushed the refactor-prepare-index-only branch from fd4a920 to b32815f on December 4, 2016 17:54
@eric-wieser
Member Author

Ok, all the style things are fixed

@eric-wieser
Member Author
eric-wieser commented Dec 4, 2016

Could be good to see whether it makes a real speed difference to build that tuple (likely not, but then I may actually have timed it at the time and thought it might be nice to have). Then again, I am not sure whether arr[0] being even 10-20% slower is a huge deal.

I think this is basically testable right now (I cannot build my numpy locally from master or my branch) as

>>> x = np.arange(1000)
>>> i = 100
>>> %timeit x[i]
The slowest run took 26.21 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 214 ns per loop

>>> %timeit x[(i,)]
The slowest run took 20.53 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 273 ns per loop

So as high as 30%, assuming this test is valid.

@seberg
Member
seberg commented Dec 4, 2016

Hehe, honestly not sure it's worth troubling over. If you replace index_as_tuple = Py_BuildValue("(O)", index); with lower-level calls, you probably already get back a lot of the difference.

@eric-wieser
Member Author

PyTuple_Pack(1, index) would probably be faster, then, and not really any less clear?
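
For context, both calls under discussion create a new 1-tuple holding the index and return a new reference; PyTuple_Pack just skips the format-string parsing that Py_BuildValue does. A sketch, shown as alternatives:

PyObject *index_as_tuple;

#if 0   /* what the branch does here: goes through the "(O)" format parser */
index_as_tuple = Py_BuildValue("(O)", index);
#else   /* the suggestion: build the 1-tuple directly */
index_as_tuple = PyTuple_Pack(1, index);
#endif
if (index_as_tuple == NULL) {
    return -1;
}
/* ... either way, index_as_tuple must eventually be Py_DECREF'd ... */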

@seberg
Member
seberg commented Dec 4, 2016

Probably, I am not sure, but somewhat thought those pack functions are not the quickest when it comes to micro optimization. Might also be that it does not make a difference at all....

EDIT: Those buildvalue functions

@seberg
Member
seberg commented Dec 5, 2016 via email

@eric-wieser
Member Author

There's a weird uncommented check on line 190 that seems to decide that lists of length MAX_DIM are not promoted to tuples, despite the fact that doing so would be valid up to 2*MAX_DIM. Is this a backwards compatibility thing? Is this documented somewhere, so that I can add a comment accordingly?

@seberg
Member
seberg commented Dec 5, 2016

Well the check is only there for the tuple conversion trick, which I am not about to modify, since I think it's a crappy hack in any case. Most of the time, it is MAX_DIM anyway, but since you can have things like None which can be there more often, I put 2*MAX_DIM as basically "for sure enough".

@eric-wieser
Member Author

Any more thoughts on this?

@seberg
Member
seberg commented Jan 5, 2017

Ok, just trying to pass through a few things. Not really, do you know if the slowdown got better now, or do you think we should just not worry about it too much?

@eric-wieser
Member Author
eric-wieser commented Jan 5, 2017

I wasn't really able to profile this patch, being unable to build locally - the closest I could get was running on the released version, comparing:

  • i = 2; %timeit x[i]
  • ti = (2,); %timeit x[ti] - profiling of the extra work to iterate the tuple in get_item, if there actually is any
  • i = 2; %timeit x[i,] - including the worst case tuple constructor cost. A C function call should be less expensive than an opcode though.

These all seemed pretty similar, and the timing overhead seems to make the results pretty hard to compare. So I'd say that we shouldn't worry about it. %timeit x[i + i] seems to give worse results than either of the above, and that's going to be very common in code.

@seberg
Member
seberg commented Jan 5, 2017

Ohh, you have problems building locally? Don't python runtests.py and python runtests.py --ipython work for you?

OK, just ran it on my computer (python3, on python2 things might be different, because it goes through the C getitem function which directly gets the integer, would have to look up the path myself...).

The new times first:

In [1]: x = np.arange(1000)
In [2]: i = 100
In [3]: %timeit x[i]
10000000 loops, best of 3: 149 ns per loop / 116 ns
In [4]: x = np.arange(1000, dtype=object)
In [5]: %timeit x[i]
10000000 loops, best of 3: 119 ns per loop / 78.2 ns

So, hmmmmmm ;). Can't make up my mind, you are right that the difference is lower than doing something like i+i.

@eric-wieser
Member Author
eric-wieser commented Jan 5, 2017

Wait, why is dtype=object faster in both cases?

@seberg
Member
seberg commented Jan 5, 2017

Simple, it only has to do an incref and not create the scalar Python object from what's inside the array.

@seberg
Member
seberg commented Jan 11, 2017

I suppose this: I think we can get away with it, but would prefer to check whether it is not too ugly if we try to avoid it.

@eric-wieser
Member Author

Ok, timing is now much better, and actually an improvement when passing tuples

Before this PR:

In [1]: a = np.array([1, 2, 3, 4])

In [2]: %timeit a[0]
The slowest run took 29.59 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 158 ns per loop

In [3]: %timeit a[0,]
The slowest run took 28.01 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 167 ns per loop

After the latest commit:

In [1]: a = np.array([1, 2, 3, 4])

In [2]: %timeit a[0]
The slowest run took 32.66 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 157 ns per loop

In [2]: %timeit a[(0,)]
The slowest run took 26.58 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 158 ns per loop

}
}
}
PyObject **raw_indices[NPY_MAXDIMS*2];
Member Author

We could also allocate these to the length we need, but there doesn't seem to be much precedent for doing that

@eric-wieser
Member Author
eric-wieser commented Apr 8, 2017

However, the reference counting seems odd to me, where are the incref's/decref's

GET_ITEM returns a borrowed reference, so no reference counting is needed. We just unpack the tuple, and either return a borrowed reference to the tuple itself, or return a borrowed reference to each of its items.

I don't think the reference handling code is any different to what was here originally - in both cases, we borrow everything and increment nothing.

The previous iteration allocated a tuple, so had to decref it when it was done - but here, we don't bother allocating a new sequence object, and just cache the result of GetItem

Member
@seberg seberg left a comment

No, GET_ITEM (the tuple macro) does return a borrowed reference; the sequence function does not (and cannot) do that. Before, a new tuple was created which would do the reference counting for us (and increment the reference counts). You have to hold on to the reference, since a custom sequence could return new references (say you got an (x)range(10**3, 10**3+6); then the numbers returned will only have a single reference, at least for all you know).

So no, you will need to do reference counting on your "manual tuple", and I suppose we should/could add a test for it.
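
To spell the difference out: the PyTuple_GET_ITEM macro hands back borrowed references, but the general PySequence_GetItem path returns a new reference for every item, so a stack buffer filled that way owns its entries and has to release them on every exit path. A sketch (variable names are illustrative):

PyObject *raw_indices[NPY_MAXDIMS * 2];
Py_ssize_t i, n;

n = PySequence_Size(index);
if (n < 0) {
    return -1;
}
if (n > NPY_MAXDIMS * 2) {
    PyErr_SetString(PyExc_IndexError, "too many indices for array");
    return -1;
}
for (i = 0; i < n; i++) {
    raw_indices[i] = PySequence_GetItem(index, i);   /* new reference, unlike PyTuple_GET_ITEM */
    if (raw_indices[i] == NULL) {
        /* release everything fetched so far before bailing out */
        while (i > 0) {
            Py_DECREF(raw_indices[--i]);
        }
        return -1;
    }
}
/* ... use raw_indices[0..n-1], then Py_DECREF every entry once finished ... */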

if (commit_to_unpack) {
return n;
}

Member

Just what I think right now (don't worry about it), but I would remove the blank line at least to make the else stick to the if block.

Member Author

Done, since this was as much accidental as anything else

@eric-wieser
Member Author

Damn, you're very right, and that makes this "manual tuple" a much less reasonable thing to work with

@eric-wieser
Member Author
eric-wieser commented Apr 9, 2017

Ok, added reference counting. This is still more performant than master, and definitely has the edge over the same code using normal PyTuple calls

@eric-wieser eric-wieser force-pushed the refactor-prepare-index-only branch from 3c41f62 to 8c4e556 on April 9, 2017 11:05
}

/* Passing a tuple subclass - needs to handle errors */
if (PyTuple_Check(index)) {
Member Author
@eric-wieser eric-wieser Apr 9, 2017

The code here is subtly different to what we had before.

Calling PySequence_Tuple(index) invokes __iter__, whereas this invokes __getitem__.

So tuple subclasses that implement those methods inconsistently now behave differently. For instance:

class Plus1Tuple(tuple):
    def __getitem__(self, i):
        return super().__getitem__(i) + 1
    # oops, we forgot `__iter__`, and inherited `tuple.__iter__`, which
    # does not fall back on __getitem__

gives:

  • Before: a[Plus1Tuple([1, 2, 3])] behaves like a[1, 2, 3] (!)
  • After: a[Plus1Tuple([1, 2, 3])] behaves like a[2, 3, 4]

Member

Can't you just remove this whole block and replace it with commit_to_unpack=1?

Member

OK, plus a check for too many indices.

Member

I am fine with the changed behaviour I think, a tuple subclass should really be OK even with just using PyTuple_GET_ITEM to be honest, otherwise it should be considered badly broken.

Member Author
@eric-wieser eric-wieser Jul 16, 2017

Perhaps we should just bite the bullet here and call tuple(tuplesubclass), since efficiency isn't important for this weird case

Member

Yea, I suppose we might just as well put whatever was there before, it won't make the code any uglier and speed really does not matter. But no need to add a test or so (or if you do, put a comment that it is fine to break it). This is too strange to really worry about.

Member Author

Ok, changed to call tuple(tuplesubclass), which makes our life a lot easier
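
Roughly, the changed handling looks like the following sketch; unpack_tuple stands in for whatever helper unpacks a real tuple into the buffer and is an assumed name, not quoted from the diff:

if (PyTuple_Check(index)) {
    /* Tuple subclass: fall back to a real tuple() call. This allocates,
     * but it is a rare case and does not need to be a fast path. */
    PyObject *tup = PySequence_Tuple(index);   /* new reference; goes through __iter__ */
    if (tup == NULL) {
        return -1;
    }
    n = unpack_tuple((PyTupleObject *)tup, result, result_n);
    Py_DECREF(tup);
    return n;
}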

Contributor
@mhvk mhvk left a comment

Looked at this more out of curiosity than anything else, so only some clarification requests.

* borrowed reference.
* @param result An empty buffer of 2*NPY_MAXDIMS PyObject* to write each
* index component to. The references written are new..
* This function will in some cases clobber the array beyond
Contributor

The comments are very helpful, but here maybe be even more explicit and say "beyond the number of items returned".

Member Author

Will do - I was struggling to phrase this

Member Author

Ok, I've removed that remark entirely, as it made more sense under the description of the return value:

 * @returns        The number of items in `result`, or -1 if an error occured.
 *                 The entries in `result` at and beyond this index should be
 *                 assumed to contain garbage, even if they were initialized
 *                 to NULL, so are not safe to Py_XDECREF.
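
In other words, a hypothetical caller following that contract only ever touches, and later releases, the first n entries (unpack_indices is the assumed helper name from the sketch earlier; multi_DECREF is the small helper added by the PR, shown in a hunk further down):

PyObject *raw_indices[NPY_MAXDIMS * 2];
npy_intp i, n;

n = unpack_indices(index, raw_indices, NPY_MAXDIMS * 2);
if (n < 0) {
    return -1;                    /* error already set; buffer contents are garbage */
}
for (i = 0; i < n; i++) {
    /* ... inspect raw_indices[i] ... */
}
multi_DECREF(raw_indices, n);     /* release exactly the n owned references */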

}

/*
* For some reason, anything that's long but not too long is turned into
Contributor

This comment confuses me: is it totally unclear why this is done at all? I'd guess that here the point is that it is known that unpacking will fail (well, modulo, the factor of 2), but that one should not preclude a very long list of, e.g., integers. If that is not the case, what would fail if one removed this? Should it be deprecated?

Member Author

is it totally unclear why this is done at all?

It's totally unclear to me why the 2 is missing. As a result, x[[None] * 20] and x[(None,) * 20] mean the same thing, yet x[[None] * 40] and x[(None,) * 40] mean different things (yet neither errors).

Of course, someone might be relying on x[[None] * 40] meaning x[[None] * 40,], so it's too late to fix it.

The rationale behind the 2*NPY_MAXDIMS limit elsewhere is that the result is limited to this many dimensions - at best, you can use an int to remove every existing dimension, and then None to add them back again - so the longest valid index is assumed to be (0,)*NPY_MAXDIMS + (None,)*NPY_MAXDIMS.

That's not really true either, as it ought to be legal to add an Ellipsis in there too...

Should it be deprecated?

Arguably everything from this point down should be deprecated, as in #4434

Contributor

Hmm, a bit of a mess. But I think for someone reading the code later, it may be useful to explicitly mention your last point, i.e., that everything below here should arguably be deprecated (and refer to 4434).

Member Author
@eric-wieser eric-wieser Apr 9, 2017

Also done (but higher up than this line)

Member

x[[None] * 40] should error? But yes, there are some awful examples, such as using lists of lists as an index. I would prefer the things like "for some reason" to be replaced with things like "As described previously, for backwards compatibility" in general; it is after all the implementation of the comment just a bit further up (All of this comes down to that Numeric compat thing! Except of course the 2*N, which I did because I thought "why not allow a bit longer indices just in case someone is crazy, no harm").

Member Author

I'll improve that wording

@eric-wieser eric-wieser force-pushed the refactor-prepare-index-only branch from 6d21f5a to 9832f05 on April 9, 2017 20:47
@eric-wieser
Member Author

@seberg: Let me know if the refcounting now looks good, then I'll squash together the 5 most recent commits

* that the longest possible index that is allowed to produce a result is
* `(0,)*np.MAXDIMS + (None,)*np.MAXDIMS`. This assumption turns out to be
* wrong (we could add one more item, an Ellipsis), but we keep it for
* compatibility.
Member

You know, the 2* is pretty arbitrary, so you can increment it by one if you like, I just set it as a "high enough" value and yeah, forgot that in principle you can go one higher and still get a valid index.

Member Author

Actually, there is no limit to the number of valid indices. You can index with True or False as many times as you like, and the dimensionality will only ever increase by one

Member Author

(although in practice, indexing with more than 32 causes problems elsewhere)

Member

Sorry, just thought I would start on this again, don't have much time now so might forget again though, if I do and you want to come back to this, please don't hesitate to ping.

Member

@eric-wieser no, maxdims*2+1 is actually the maximum, if you do None/True you add one so you end up with at least that many dims ;).

Member Author
@eric-wieser eric-wieser Jul 15, 2017

>>> a = np.arange(6).reshape(2, 3)
>>> a[True]
array([[[0, 1, 2],
        [3, 4, 5]]])
>>> a[(True,)*32]
array([[[0, 1, 2],
        [3, 4, 5]]])
>>> a[(True,)*33]
ValueError: Cannot construct an iterator with more than 32 operands (33 were requested)

Member

OK, comment seems fine to me, could make it "is based ... longest reasonable index" or so.

Member Author

Updated

@seberg
Member
seberg commented Jul 15, 2017 via email

Member
@seberg seberg left a comment

OK, I think this code is pretty well tested nowadays, so I am fine with merging this. Eric, maybe you can have a glance over yourself one more time and then ping me and I will merge?

* that the longest possible index that is allowed to produce a result is
* `(0,)*np.MAXDIMS + (None,)*np.MAXDIMS`. This assumption turns out to be
* wrong (we could add one more item, an Ellipsis), but we keep it for
* compatibility.
Member

OK, comment seems fine to me, could make it "is based ... longest reasonable index" or so.

@@ -139,6 +139,187 @@ PyArray_MapIterSwapAxes(PyArrayMapIterObject *mit, PyArrayObject **ret, int getm
*ret = (PyArrayObject *)new;
}

NPY_NO_EXPORT NPY_INLINE void
multi_DECREF(PyObject **objects, npy_intp n)
Member

First wasn't sure I like this, but it seems harmless :).
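
The helper's body is not shown in the hunk above; presumably the smallest implementation consistent with how it is used is just a loop over the owned entries:

NPY_NO_EXPORT NPY_INLINE void
multi_DECREF(PyObject **objects, npy_intp n)
{
    npy_intp i;

    for (i = 0; i < n; i++) {
        Py_DECREF(objects[i]);   /* each entry holds exactly one owned reference */
    }
}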

}

/* Obvious single-entry cases */
if (0
Member

OK, with those #ifs, formatting it without the 0 is ugly I suppose

Member Author
@eric-wieser eric-wieser Jul 16, 2017

Should optimize out anyway. Could be if (0 /* to make macros below easier */

Member

Nah, it's fine, obvious enough, just tried to style nitpick and it didn't work too well ;)
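
For reference, the shape of the check being nitpicked here is roughly as follows (a reconstruction, not the exact diff); the leading 0 lets every real condition start with ||, so the #if/#else branches line up without special-casing whichever condition comes first:

if (0
#if !defined(NPY_PY3K)
        || PyInt_CheckExact(index)      /* Python 2 int */
#else
        || PyLong_CheckExact(index)     /* Python 3 int */
#endif
        || index == Py_None
        || PySlice_Check(index)
        || PyArray_Check(index)
        || !PySequence_Check(index)) {
    /* obvious single-entry case: treat the whole index as one index expression */
}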

#else
|| PyLong_CheckExact(index)
#endif
|| index == Py_None
Member

brackets?

}

/* Passing a tuple subclass - needs to handle errors */
if (PyTuple_Check(index)) {
Member

I am fine with the changed behaviour I think, a tuple subclass should really be OK even with just using PyTuple_GET_ITEM to be honest, otherwise it should be considered badly broken.

}

/*
* For some reason, anything that's long but not too long is turned into
Member

x[[None] * 40] should error? But yes, there are some awful examples, such as using lists of lists as an index. I would prefer the things like "for some reason" to be replaced with things like "As described previously, for backwards compatibility" in general; it is after all the implementation of the comment just a bit further up (All of this comes down to that Numeric compat thing! Except of course the 2*N, which I did because I thought "why not allow a bit longer indices just in case someone is crazy, no harm").

|| PySequence_Check(tmp_obj)
|| PySlice_Check(tmp_obj)
|| tmp_obj == Py_Ellipsis
|| tmp_obj == Py_None) {
Member

Again, I think we usually put brackets, but no big deal

Member Author

Don't agree - we use brackets to make precedence of || and && obvious, but a quick grep shows it fairly uncommon to use them to aid reading precedence of ||, && and ==, >=, ...

Member

Ok, frankly don't care much, it's not *a++ or something...

@eric-wieser eric-wieser force-pushed the refactor-prepare-index-only branch from 7ae8b44 to 68bad6a on July 16, 2017 14:11
* allocation, but doesn't need to be a fast path anyway
*/
if (PyTuple_Check(index)) {
PyTupleObject *tup = PySequence_Tuple(index);
Member

I think you miss the Py_DECREF for tup here, could have done a recursive call as well (first thought you did) instead of refactoring it out.

Member Author

Thought refactoring it out was more transparent, but yes, I could have.

Member Author

Fixed

Member

Also apparently requires a cast because PySequence_Tuple returns a PyObject *.

@eric-wieser eric-wieser force-pushed the refactor-prepare-index-only branch from bf7a0c4 to c587963 on July 16, 2017 14:29
@seberg
Member
seberg commented Jul 17, 2017

Should I just squash it some time?

@eric-wieser
Member Author

Yep, I think squashing via GitHub into one commit is the best plan. The git history is just clutter, but that means if people really care they can check this PR in its unmodified messy state

@seberg seberg merged commit 08904d7 into numpy:master Jul 18, 2017
@seberg
Member
seberg commented Jul 18, 2017

Thanks.
