@@ -7,7 +7,10 @@ This release supports Python 2.7 and 3.4 - 3.6.
Highlights
==========

- * Operations like `a + b + c` will create less temporaries on some platforms
+ * Reuse of temporaries, operations like ``a + b + c`` will create fewer
+   temporaries on some platforms.
+ * Inplace operations check if inputs overlap outputs and create temporaries
+   to avoid problems.


Dropped Support
@@ -29,9 +32,8 @@ Future Changes
Compatibility notes
===================

-
Error type changes
- ~~~~~~~~~~~~~~~~~~
+ ------------------

* ``numpy.hstack()`` now throws ``ValueError`` instead of ``IndexError`` when
  input is empty.
@@ -41,7 +43,7 @@ Error type changes
  these.

Tuple object dtypes
- ~~~~~~~~~~~~~~~~~~~
+ -------------------

Support has been removed for certain obscure dtypes that were unintentionally
allowed, of the form ``(old_dtype, new_dtype)``, where either of the dtypes
@@ -50,27 +52,27 @@ is or contains the ``object`` dtype. As an exception, dtypes of the form
existing use.

DeprecationWarning to error
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ---------------------------
See Changes section for more detail.

* ``partition``, TypeError when non-integer partition index is used.
- * ``NpyIter_AdvancedNew``, ValueError when `oa_ndim == 0` and `op_axes` is NULL
+ * ``NpyIter_AdvancedNew``, ValueError when ``oa_ndim == 0`` and ``op_axes`` is NULL
* ``negative(bool_)``, TypeError when negative applied to booleans.
* ``subtract(bool_, bool_)``, TypeError when subtracting boolean from boolean.
* ``np.equal, np.not_equal``, object identity doesn't override failed comparison.
* ``np.equal, np.not_equal``, object identity doesn't override non-boolean comparison.
- * Boolean indexing deprecated behavior dropped. See Changes below for details.
+ * Deprecated boolean indexing behavior dropped. See Changes below for details.

FutureWarning to changed behavior
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ---------------------------------
See Changes section for more detail.

* ``numpy.average`` preserves subclasses
* ``array == None`` and ``array != None`` do element-wise comparison.
* ``np.equal, np.not_equal``, object identity doesn't override comparison result.

dtypes are now always true
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------

Previously ``bool(dtype)`` would fall back to the default python
implementation, which checked if ``len(dtype) > 0``. Since ``dtype`` objects
@@ -79,7 +81,7 @@ would evaluate to ``False``, which was unintuitive. Now ``bool(dtype) == True``
for all dtypes.
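
A minimal illustration (editor's sketch, not part of the diff)::

    >>> bool(np.dtype(np.float64))
    True
    >>> bool(np.dtype([('f0', np.int32)]))
    True
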
``__getslice__`` and ``__setslice__`` have been removed from ``ndarray``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------------------------------------
When subclassing np.ndarray in Python 2.7, it is no longer _necessary_ to
implement ``__*slice__`` on the derived class, as ``__*item__`` will intercept
these calls correctly.
@@ -90,40 +92,44 @@ obvious exception of any code that tries to directly call
this case, ``.__getitem__(slice(start, end))`` will act as a replacement.


- C API
- ~~~~~
-
-
New Features
============

+ ``PyArray_MapIterArrayCopyIfOverlap`` added to NumPy C-API
+ ----------------------------------------------------------
+ Similar to ``PyArray_MapIterArray`` but with an additional ``copy_if_overlap``
+ argument. If ``copy_if_overlap != 0``, it checks whether the input has memory
+ overlap with any of the other arrays and makes copies as appropriate to avoid
+ problems if the input is modified during the iteration. See the C-API
+ documentation for details.
+
Temporary elision
-----------------
- On platforms providing the `backtrace` function NumPy will now not create
+ On platforms providing the ``backtrace`` function NumPy will now not create
temporaries in expressions when possible.
- For example `d = a + b + c` is transformed to `d = a + b; d += c` which can
+ For example ``d = a + b + c`` is transformed to ``d = a + b; d += c`` which can
improve performance for large arrays as less memory bandwidth is required to
perform the operation.
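
An illustrative sketch of what elision enables (editor's example; whether a
temporary is actually elided depends on the platform and array sizes)::

    >>> import numpy as np
    >>> a, b, c = (np.ones(1000000) for _ in range(3))
    >>> d = a + b + c   # may be evaluated as d = a + b; d += c internally
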
``axis`` argument for ``unique``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------------
In an N-dimensional array, the user can now choose the axis along which to look
for duplicate N-1-dimensional elements using ``numpy.unique``. The original
behaviour is recovered if ``axis=None`` (default).
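
A small sketch of the new keyword (editor's illustration)::

    >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]])
    >>> np.unique(a, axis=0)
    array([[1, 0, 0],
           [2, 3, 4]])
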
``np.gradient`` now supports unevenly spaced data
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------------
Users can now specify non-constant spacing for data.
In particular ``np.gradient`` can now take:

1. A single scalar to specify a sample distance for all dimensions.
2. N scalars to specify a constant sample distance for each dimension.
-    i.e. `dx`, `dy`, `dz`, ...
+    i.e. ``dx``, ``dy``, ``dz``, ...
3. N arrays to specify the coordinates of the values along each dimension of F.
   The length of the array must match the size of the corresponding dimension
4. Any combination of N scalars/arrays with the meaning of 2. and 3.

- This means that, e.g., it is now possible to do the following:
+ This means that, e.g., it is now possible to do the following::

    >>> f = np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float)
    >>> dx = 2.
@@ -135,18 +141,30 @@ This means that, e.g., it is now possible to do the following:
``np.heaviside`` computes the Heaviside function
------------------------------------------------
The new function ``np.heaviside(x, h0)`` (a ufunc) computes the Heaviside
- function::
-
+ function:
+ .. code::
                        { 0   if x < 0,
   heaviside(x, h0) =   { h0  if x == 0,
                        { 1   if x > 0.

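A couple of example values (editor's sketch)::

    >>> np.heaviside([-1.5, 0., 2.], 0.5)
    array([ 0. ,  0.5,  1. ])
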
+ Support for returning arrays of arbitrary dimensions in ``apply_along_axis``
+ -----------------------------------------------------------------------------
+ Previously, only scalars or 1D arrays could be returned by the function passed
+ to ``apply_along_axis``. Now, it can return an array of any dimensionality
+ (including 0D), and the shape of this array replaces the axis of the array
+ being iterated over.
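
A short sketch of the new flexibility (editor's illustration, not part of the
diff above)::

    >>> a = np.array([[1, 2], [3, 4]])
    >>> np.apply_along_axis(np.diag, -1, a).shape
    (2, 2, 2)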
+
+ ``.ndim`` property added to ``dtype`` to complement ``.shape``
+ ---------------------------------------------------------------
+ For consistency with ``ndarray`` and ``broadcast``, ``d.ndim`` is a shorthand
+ for ``len(d.shape)``.
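
For instance (editor's sketch)::

    >>> np.dtype(np.float64).ndim
    0
    >>> np.dtype((np.float64, (2, 3))).ndim
    2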
+

Improvements
============

Partial support for 64-bit f2py extensions with MinGW
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ -----------------------------------------------------
Extensions that incorporate Fortran libraries can now be built using the free
MinGW_ toolset, also under Python 3.5. This works best for extensions that only
do calculations and use the runtime modestly (reading and writing from files,
@@ -163,45 +181,34 @@ programs with a PySide1/Qt4 front-end.
.. _issues: https://mingwpy.github.io/issues.html

Performance improvements for ``packbits`` and ``unpackbits``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------------------------
The functions ``numpy.packbits`` with boolean input and ``numpy.unpackbits`` have
been optimized to be significantly faster for contiguous data.

Fix for PPC long double floating point information
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------------------------------
In previous versions of numpy, the ``finfo`` function returned invalid
information about the `double double`_ format of the ``longdouble`` float type
on Power PC (PPC). The invalid values resulted from the failure of the numpy
- algorithm to deal with the `variable number of digits in the significand
- <https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.genprogc/128bit_long_double_floating-point_datatype.htm>`_
- that are a feature of PPC long doubles. This release by-passes the failing
+ algorithm to deal with the variable number of digits in the significand
+ that are a feature of `PPC long doubles`_. This release by-passes the failing
algorithm by using heuristics to detect the presence of the PPC double double
format. A side-effect of using these heuristics is that the ``finfo``
function is faster than previous releases.

+ .. _PPC long doubles: https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.genprogc/128bit_long_double_floating-point_datatype.htm
+
.. _issues: https://github.com/numpy/numpy/issues/2669

.. _double double: https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

- Support for returning arrays of arbitrary dimensionality in `apply_along_axis`
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Previously, only scalars or 1D arrays could be returned by the function passed
- to `apply_along_axis`. Now, it can return an array of any dimensionality
- (including 0D), and the shape of this array replaces the axis of the array
- being iterated over.
-
- ``.ndim`` property added to ``dtype`` to complement ``.shape``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- For consistency with ``ndarray`` and ``broadcast``, ``d.ndim`` is a shorthand
- for ``len(d.shape)``.
-
Better default repr for ``ndarray`` subclasses
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ----------------------------------------------
Subclasses of ndarray with no ``repr`` specialization now correctly indent
their data and type lines.

More reliable comparisons of masked arrays
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------
Comparisons of masked arrays were buggy for masked scalars and failed for
structured arrays with dimension higher than one. Both problems are now
solved. In the process, it was ensured that in getting the result for a
@@ -211,26 +218,26 @@ identical to what one gets by comparing an unstructured masked array and then
doing ``.all()`` over some axis.

np.matrix with boolean elements can now be created using the string syntax
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ---------------------------------------------------------------------------
``np.matrix`` failed whenever one attempted to use it with booleans, e.g.,
``np.matrix('True')``. Now, this works as expected.

More ``linalg`` operations now accept empty vectors and matrices
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ----------------------------------------------------------------
All of the following functions in ``np.linalg`` now work when given input
- arrays with a 0 in the last two dimensions: `det``, ``slogdet``, ``pinv``,
+ arrays with a 0 in the last two dimensions: ``det``, ``slogdet``, ``pinv``,
``eigvals``, ``eigvalsh``, ``eig``, ``eigh``.
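
For instance (editor's sketch)::

    >>> np.linalg.det(np.zeros((0, 0)))
    1.0
    >>> np.linalg.eigvals(np.zeros((0, 0)))
    array([], dtype=float64)
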
``argsort`` on masked arrays takes the same default arguments as ``sort``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ -------------------------------------------------------------------------
By default, ``argsort`` now places the masked values at the end of the sorted
array, in the same way that ``sort`` already did. Additionally, the
``end_with`` argument is added to ``argsort``, for consistency with ``sort``.
Note that this argument is not added at the end, so it breaks any code that
passed ``fill_value`` as a positional argument.

Bundled version of LAPACK is now 3.2.2
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------------------
NumPy comes bundled with a minimal implementation of lapack for systems without
a lapack library installed, under the name of ``lapack_lite``. This has been
upgraded from LAPACK 3.0.0 (June 30, 1999) to LAPACK 3.2.2 (June 30, 2010). See
@@ -245,7 +252,7 @@ Changes
=======

Ufunc behavior for overlapping inputs
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ -------------------------------------

Operations where ufunc input and output operands have memory overlap
produced undefined results in previous Numpy versions, due to data
@@ -262,51 +269,51 @@ not be made even if the arrays overlap, if it can be deduced copies
are not necessary. As an example, ``np.add(a, b, out=a)`` will not
involve copies.

- To illustrate a previously undefined operation
+ To illustrate a previously undefined operation::

- >>> x = np.arange(16).astype(float)
- >>> np.add(x[1:], x[:-1], out=x[1:])
+     >>> x = np.arange(16).astype(float)
+     >>> np.add(x[1:], x[:-1], out=x[1:])

- In Numpy 1.13.0 the last line is guaranteed to be equivalent to
+ In Numpy 1.13.0 the last line is guaranteed to be equivalent to::

- >>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])
+     >>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])

- A similar operation with simple non-problematic data dependence is
+ A similar operation with simple non-problematic data dependence is::

- >>> x = np.arange(16).astype(float)
- >>> np.add(x[1:], x[:-1], out=x[:-1])
+     >>> x = np.arange(16).astype(float)
+     >>> np.add(x[1:], x[:-1], out=x[:-1])

It will continue to produce the same results as in previous Numpy
versions, and will not involve unnecessary temporary copies.

- The change applies also to in-place binary operations, for example:
+ The change applies also to in-place binary operations, for example::

- >>> x = np.random.rand(500, 500)
- >>> x += x.T
+     >>> x = np.random.rand(500, 500)
+     >>> x += x.T

This statement is now guaranteed to be equivalent to ``x[...] = x + x.T``,
whereas in previous Numpy versions the results were undefined.

``average`` now preserves subclasses
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------
For ndarray subclasses, ``numpy.average`` will now return an instance of the
subclass, matching the behavior of most other numpy functions such as ``mean``.
As a consequence, calls that previously returned a scalar may now return a
subclass array scalar.
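
A hedged illustration using ``np.matrix`` (editor's sketch)::

    >>> m = np.matrix([[1., 2.], [3., 4.]])
    >>> np.average(m, axis=0)
    matrix([[ 2.,  3.]])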

``array == None`` and ``array != None`` do element-wise comparison
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------------------------------
Previously these operations returned scalars ``False`` and ``True`` respectively.
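
For example (editor's sketch)::

    >>> np.arange(3) == None
    array([False, False, False], dtype=bool)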

``np.equal, np.not_equal`` for object arrays ignores object identity
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------------------------------------------------
Previously, these functions always treated identical objects as equal. This had
the effect of overriding comparison failures, comparison of objects that did
not return booleans, such as np.arrays, and comparison of objects where the
results differed from object identity, such as NaNs.
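
A hedged example of the new behavior (editor's sketch)::

    >>> a = np.array([float('nan')], dtype=object)
    >>> bool(np.equal(a, a)[0])   # identity no longer forces equality
    False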

Boolean indexing changes
- ~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------
* Boolean array-likes (such as lists of python bools) are always treated as
  boolean indexes.

@@ -322,7 +329,7 @@ Boolean indexing changes
  ``array(1)[array(True)]`` gives ``array([1])`` and not the original array.

``np.random.multivariate_normal`` behavior with bad covariance matrix
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ---------------------------------------------------------------------

It is now possible to adjust the behavior of the function when dealing
with the covariance matrix by using two new keyword arguments:
@@ -336,14 +343,14 @@ with the covariance matrix by using two new keyword arguments:
  the behavior used on previous releases.

``assert_array_less`` compares ``np.inf`` and ``-np.inf`` now
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ -------------------------------------------------------------
Previously, ``np.testing.assert_array_less`` ignored all infinite values. This
was not the expected behavior, either according to the documentation or
intuitively. Now, -inf < x < inf is considered ``True`` for any real number x,
and all other cases fail.
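
For instance (editor's sketch)::

    >>> np.testing.assert_array_less(-np.inf, 1.0)       # now passes silently
    >>> np.testing.assert_array_less(np.inf, np.inf)     # now raises AssertionError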

``offset`` attribute value in ``memmap`` objects
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ------------------------------------------------
The ``offset`` attribute in a ``memmap`` object is now set to the
offset into the file. This is a behaviour change only for offsets
greater than ``mmap.ALLOCATIONGRANULARITY``.