MAINT: Make the refactor suggested in prepare_index by eric-wieser · Pull Request #8278 · numpy/numpy · GitHub

MAINT: Make the refactor suggested in prepare_index #8278


Merged: 11 commits, merged on Jul 18, 2017
Changes from 1 commit
fixup! MAINT: Overhaul function to try and increase speed
Improve comments

[ci skip]
eric-wieser committed Apr 9, 2017
commit 9832f05474dc7d45f79fefd04af7a39c1f62a880
17 changes: 13 additions & 4 deletions numpy/core/src/multiarray/mapping.c
@@ -164,14 +164,21 @@ multi_DECREF(PyObject **objects, npy_intp n)
  * > or the newaxis object, but not for integer arrays or other embedded
  * > sequences.
  *
+ * The rationale for only unpacking `2*NPY_MAXDIMS` items is the assumption
+ * that the longest possible index that is allowed to produce a result is
+ * `(0,)*np.MAXDIMS + (None,)*np.MAXDIMS`. This assumption turns out to be
+ * wrong (we could add one more item, an Ellipsis), but we keep it for
+ * compatibility.
Member:

You know, the 2* is pretty arbitrary, so you can increment it by one if you like. I just set it as a "high enough" value and, yeah, I forgot that in principle you can go one higher and still get a valid index.

Member Author:

Actually, there is no limit to the number of valid indices: you can index with True or False as many times as you like, and the dimensionality will only ever increase by one.

Member Author:

(Although in practice, indexing with more than 32 causes problems elsewhere.)

Member:

Sorry, just thought I would start on this again. I don't have much time now, so I might forget again; if I do and you want to come back to this, please don't hesitate to ping.

Member:

@eric-wieser no, maxdims*2+1 is actually the maximum; if you use None/True you add one, so you end up with at least that many dims ;).

eric-wieser (Member Author), Jul 15, 2017:
>>> a = np.arange(6).reshape(2, 3)
>>> a[True]
array([[[0, 1, 2],
        [3, 4, 5]]])
>>> a[(True,)*32]
array([[[0, 1, 2],
        [3, 4, 5]]])
>>> a[(True,)*33]
ValueError: Cannot construct an iterator with more than 32 operands (33 were requested)

Member:

OK, the comment seems fine to me; you could make it "is based ... longest reasonable index" or so.

Member Author:

Updated.
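To make the rationale concrete, here is a small sketch in the same REPL style as above, using a 2-d array rather than an NPY_MAXDIMS-dimensional one (the pattern generalizes): each integer consumes one axis and each None inserts one, so the longest index of this form on an N-d array has 2*N items, and in principle an Ellipsis could be appended as a (2*N+1)-th item.

>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> idx = (0,) * a.ndim + (None,) * a.ndim   # (0, 0, None, None)
>>> a[idx].shape
(1, 1)
>>> a[idx + (Ellipsis,)].shape   # the "one more item" mentioned above
(1, 1)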


  *
  * @param index The index object, which may or may not be a tuple. This is a
  *              borrowed reference.
  * @param result An empty buffer of 2*NPY_MAXDIMS PyObject* to write each
- *              index component to. The references written are new..
- *              This function will in some cases clobber the array beyond
- *              the last item unpacked.
+ *              index component to. The references written are new.
  *
+ * @returns The number of items in `result`, or -1 if an error occurred.
+ *          The entries in `result` at and beyond this index should be
+ *          assumed to contain garbage, even if they were initialized
+ *          to NULL, so are not safe to Py_XDECREF.
  */
 NPY_NO_EXPORT npy_intp
 unpack_indices(PyObject *index, PyObject *result[2*NPY_MAXDIMS])
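For context on what this function receives: Python passes comma-separated subscripts to __getitem__ as a single tuple object, and unpack_indices is what splits that object back into individual index components. A minimal illustration of that standard Python/NumPy behaviour (not code from this PR):

>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> a[0, 1] == a[(0, 1)]     # comma-separated indices arrive as one tuple
True
>>> a.__getitem__((0, 1))    # the tuple object this C function unpacks
1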
@@ -234,7 +241,9 @@ unpack_indices(PyObject *index, PyObject *result[2*NPY_MAXDIMS])

 /*
  * At this point, we're left with a non-tuple, non-array, sequence:
- * typically, a list
+ * typically, a list. We use some somewhat-arbitrary heuristics from here
+ * onwards to decide whether to treat that list as a single index, or a
+ * list of indices. It might be worth deprecating this behaviour (gh-4434).
  *
  * Sequences < NPY_MAXDIMS with any slice objects
  * or newaxis, Ellipsis or other arrays or sequences
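A hedged illustration of the heuristic described in the added comment, as it behaved around the time of this PR (later NumPy releases warn about and eventually reject the second form; see gh-4434):

>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> a[[0, 1]]                  # plain list of integers: one fancy-index array
array([[0, 1, 2],
       [3, 4, 5]])
>>> a[[0, slice(None)]]        # list containing a slice: treated like a[0, :]
array([0, 1, 2])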