Merge pull request #8847 from charris/update-1.13.0-notes · juliantaylor/numpy@1e8143c · GitHub

Commit 1e8143c

Merge pull request numpy#8847 from charris/update-1.13.0-notes
DOC: Preliminary edit of 1.13.0 release notes.
2 parents c25db39 + 5b01421 commit 1e8143c

File tree

1 file changed: +73 -66 lines changed


doc/release/1.13.0-notes.rst

Lines changed: 73 additions & 66 deletions
@@ -7,7 +7,10 @@ This release supports Python 2.7 and 3.4 - 3.6.
 Highlights
 ==========

-* Operations like `a + b + c` will create less temporaries on some platforms
+* Reuse of temporaries, operations like ``a + b + c`` will create fewer
+  temporaries on some platforms.
+* Inplace operations check if inputs overlap outputs and create temporaries
+  to avoid problems.


 Dropped Support
@@ -29,9 +32,8 @@ Future Changes
 Compatibility notes
 ===================

-
 Error type changes
-~~~~~~~~~~~~~~~~~~
+------------------

 * ``numpy.hstack()`` now throws ``ValueError`` instead of ``IndexError`` when
   input is empty.
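A minimal sketch of the ``numpy.hstack`` change noted above, assuming NumPy 1.13 semantics (the empty-tuple input is just an illustration):

    import numpy as np

    try:
        np.hstack(())            # an empty sequence of arrays to stack
    except ValueError as exc:    # earlier releases raised IndexError here
        print("ValueError:", exc)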
@@ -41,7 +43,7 @@ Error type changes
   these.

 Tuple object dtypes
-~~~~~~~~~~~~~~~~~~~
+-------------------

 Support has been removed for certain obscure dtypes that were unintentionally
 allowed, of the form ``(old_dtype, new_dtype)``, where either of the dtypes
@@ -50,27 +52,27 @@ is or contains the ``object`` dtype. As an exception, dtypes of the form
 existing use.

 DeprecationWarning to error
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------
 See Changes section for more detail.

 * ``partition``, TypeError when non-integer partition index is used.
-* ``NpyIter_AdvancedNew``, ValueError when `oa_ndim == 0` and `op_axes` is NULL
+* ``NpyIter_AdvancedNew``, ValueError when ``oa_ndim == 0`` and ``op_axes`` is NULL
 * ``negative(bool_)``, TypeError when negative applied to booleans.
 * ``subtract(bool_, bool_)``, TypeError when subtracting boolean from boolean.
 * ``np.equal, np.not_equal``, object identity doesn't override failed comparison.
 * ``np.equal, np.not_equal``, object identity doesn't override non-boolean comparison.
-* Boolean indexing deprecated behavior dropped. See Changes below for details.
+* Deprecated boolean indexing behavior dropped. See Changes below for details.

 FutureWarning to changed behavior
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
 See Changes section for more detail.

 * ``numpy.average`` preserves subclasses
 * ``array == None`` and ``array != None`` do element-wise comparison.
 * ``np.equal, np.not_equal``, object identity doesn't override comparison result.

 dtypes are now always true
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------

 Previously ``bool(dtype)`` would fall back to the default python
 implementation, which checked if ``len(dtype) > 0``. Since ``dtype`` objects
@@ -79,7 +81,7 @@ would evaluate to ``False``, which was unintuitive. Now ``bool(dtype) == True``
 for all dtypes.

 ``__getslice__`` and ``__setslice__`` have been removed from ``ndarray``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------------------------------
 When subclassing np.ndarray in Python 2.7, it is no longer _necessary_ to
 implement ``__*slice__`` on the derived class, as ``__*item__`` will intercept
 these calls correctly.
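A short illustration of the ``bool(dtype)`` change described in this hunk (a sketch assuming NumPy 1.13 semantics; the dtypes chosen are arbitrary):

    import numpy as np

    # Unstructured dtypes have len(dtype) == 0, so bool() used to evaluate to False.
    print(bool(np.dtype(np.float64)))          # True under the new rule
    print(bool(np.dtype([('a', np.int32)])))   # True; structured dtypes were already truthy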
@@ -90,40 +92,44 @@ obvious exception of any code that tries to directly call
 this case, ``.__getitem__(slice(start, end))`` will act as a replacement.


-C API
-~~~~~
-
-
 New Features
 ============

+``PyArray_MapIterArrayCopyIfOverlap`` added to NumPy C-API
+----------------------------------------------------------
+Similar to ``PyArray_MapIterArray`` but with an additional ``copy_if_overlap``
+argument. If ``copy_if_overlap != 0``, checks if input has memory overlap with
+any of the other arrays and make copies as appropriate to avoid problems if the
+input is modified during the iteration. See the documentation for more complete
+documentation.
+
 Temporary elision
 -----------------
-On platforms providing the `backtrace` function NumPy will now not create
+On platforms providing the ``backtrace`` function NumPy will now not create
 temporaries in expression when possible.
-For example `d = a + b + c` is transformed to `d = a + b; d += c` which can
+For example ``d = a + b + c`` is transformed to ``d = a + b; d += c`` which can
 improve performance for large arrays as less memory bandwidth is required to
 perform the operation.

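A small sketch of the kind of expression the elision described above targets (illustrative only; the array names and sizes are made up):

    import numpy as np

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    c = np.random.rand(1000000)

    # Where elision applies, this is evaluated roughly as ``d = a + b; d += c``,
    # so one temporary buffer is reused instead of allocating a new intermediate
    # for every binary operation in the chain.
    d = a + b + c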
 ``axes`` argument for ``unique``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------
 In an N-dimensional array, the user can now choose the axis along which to look
 for duplicate N-1-dimensional elements using ``numpy.unique``. The original
 behaviour is recovered if ``axis=None`` (default).

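A brief usage sketch of the new keyword (assuming NumPy 1.13 behavior; the sample array is invented for illustration):

    import numpy as np

    a = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [2, 3, 4]])
    print(np.unique(a, axis=0))   # the duplicate row appears only once: [[1 0 0], [2 3 4]]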
 ``np.gradient`` now supports unevenly spaced data
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------
 Users can now specify a not-constant spacing for data.
 In particular ``np.gradient`` can now take:

 1. A single scalar to specify a sample distance for all dimensions.
 2. N scalars to specify a constant sample distance for each dimension.
-   i.e. `dx`, `dy`, `dz`, ...
+   i.e. ``dx``, ``dy``, ``dz``, ...
 3. N arrays to specify the coordinates of the values along each dimension of F.
    The length of the array must match the size of the corresponding dimension
 4. Any combination of N scalars/arrays with the meaning of 2. and 3.

-This means that, e.g., it is now possible to do the following:
+This means that, e.g., it is now possible to do the following::

     >>> f = np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float)
     >>> dx = 2.
@@ -135,18 +141,30 @@ This means that, e.g., it is now possible to do the following:
 ``np.heaviside`` computes the Heaviside function
 ------------------------------------------------
 The new function ``np.heaviside(x, h0)`` (a ufunc) computes the Heaviside
-function::
-
+function:
+.. code::
                        { 0   if x < 0,
     heaviside(x, h0) = { h0  if x == 0,
                        { 1   if x > 0.

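A one-line check of the new ufunc, as a sketch (input values picked arbitrarily):

    import numpy as np

    print(np.heaviside([-1.5, 0.0, 2.0], 0.5))   # -> [0., 0.5, 1.]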
+Support for returning arrays of arbitrary dimensions in ``apply_along_axis``
+----------------------------------------------------------------------------
+Previously, only scalars or 1D arrays could be returned by the function passed
+to ``apply_along_axis``. Now, it can return an array of any dimensionality
+(including 0D), and the shape of this array replaces the axis of the array
+being iterated over.
+
+``.ndim`` property added to ``dtype`` to complement ``.shape``
+--------------------------------------------------------------
+For consistency with ``ndarray`` and ``broadcast``, ``d.ndim`` is a shorthand
+for ``len(d.shape)``.
+
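A hedged sketch exercising the two additions above; ``outer_with_self`` is a made-up helper, and the printed shapes assume the 1.13 behavior described in the notes:

    import numpy as np

    def outer_with_self(row):
        # Returns a 2-D array for each 1-D slice it receives.
        return np.outer(row, row)

    a = np.arange(6).reshape(2, 3)
    result = np.apply_along_axis(outer_with_self, 1, a)
    print(result.shape)            # (2, 3, 3): axis 1 replaced by the (3, 3) result

    dt = np.dtype(('f8', (2, 3)))  # a subarray dtype
    print(dt.shape, dt.ndim)       # (2, 3) 2, i.e. dt.ndim == len(dt.shape)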

 Improvements
 ============

 Partial support for 64-bit f2py extensions with MinGW
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------------------
 Extensions that incorporate Fortran libraries can now be built using the free
 MinGW_ toolset, also under Python 3.5. This works best for extensions that only
 do calculations and uses the runtime modestly (reading and writing from files,
@@ -163,45 +181,34 @@ programs with a PySide1/Qt4 front-end.
 .. _issues: https://mingwpy.github.io/issues.html

 Performance improvements for ``packbits`` and ``unpackbits``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------------------
 The functions ``numpy.packbits`` with boolean input and ``numpy.unpackbits`` have
 been optimized to be a significantly faster for contiguous data.

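A small round-trip sketch of the functions mentioned above (the array size is arbitrary; contiguous boolean input is what the optimization targets):

    import numpy as np

    bits = np.random.rand(1 << 20) > 0.5           # large, contiguous boolean array
    packed = np.packbits(bits)                     # eight input booleans per output byte
    restored = np.unpackbits(packed).astype(bool)  # unpack back to 0/1 values
    assert np.array_equal(bits, restored)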
 Fix for PPC long double floating point information
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------
 In previous versions of numpy, the ``finfo`` function returned invalid
 information about the `double double`_ format of the ``longdouble`` float type
 on Power PC (PPC). The invalid values resulted from the failure of the numpy
-algorithm to deal with the `variable number of digits in the significand
-<https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.genprogc/128bit_long_double_floating-point_datatype.htm>`_
-that are a feature of PPC long doubles. This release by-passes the failing
+algorithm to deal with the variable number of digits in the significand
+that are a feature of `PPC long doubles`. This release by-passes the failing
 algorithm by using heuristics to detect the presence of the PPC double double
 format. A side-effect of using these heuristics is that the ``finfo``
 function is faster than previous releases.

+.. _PPC long doubles: https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.genprogc/128bit_long_double_floating-point_datatype.htm
+
 .. _issues: https://github.com/numpy/numpy/issues/2669

 .. _double double: https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

-Support for returning arrays of arbitrary dimensionality in `apply_along_axis`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Previously, only scalars or 1D arrays could be returned by the function passed
-to `apply_along_axis`. Now, it can return an array of any dimensionality
-(including 0D), and the shape of this array replaces the axis of the array
-being iterated over.
-
-``.ndim`` property added to ``dtype`` to complement ``.shape``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-For consistency with ``ndarray`` and ``broadcast``, ``d.ndim`` is a shorthand
-for ``len(d.shape)``.
-
 Better default repr for ``ndarray`` subclasses
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------
 Subclasses of ndarray with no ``repr`` specialization now correctly indent
 their data and type lines.

 More reliable comparisons of masked arrays
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------
 Comparisons of masked arrays were buggy for masked scalars and failed for
 structured arrays with dimension higher than one. Both problems are now
 solved. In the process, it was ensured that in getting the result for a
@@ -211,26 +218,26 @@ identical to what one gets by comparing an unstructured masked array and then
 doing ``.all()`` over some axis.

 np.matrix with booleans elements can now be created using the string syntax
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------------------------------------
 ``np.matrix`` failed whenever one attempts to use it with booleans, e.g.,
 ``np.matrix('True')``. Now, this works as expected.

 More ``linalg`` operations now accept empty vectors and matrices
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------------------------
 All of the following functions in ``np.linalg`` now work when given input
-arrays with a 0 in the last two dimensions: `det``, ``slogdet``, ``pinv``,
+arrays with a 0 in the last two dimensions: ``det``, ``slogdet``, ``pinv``,
 ``eigvals``, ``eigvalsh``, ``eig``, ``eigh``.

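A few hedged examples of the empty-input support listed above (the shapes are invented; the commented results follow the usual conventions for empty matrices):

    import numpy as np

    empty = np.zeros((0, 0))
    print(np.linalg.det(empty))                    # 1.0 by convention (empty product)
    print(np.linalg.eigvals(empty))                # an empty array of eigenvalues
    print(np.linalg.pinv(np.zeros((3, 0))).shape)  # (0, 3), the transposed shape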
 ``argsort`` on masked arrays takes the same default arguments as ``sort``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------------------------------
 By default, ``argsort`` now places the masked values at the end of the sorted
 array, in the same way that ``sort`` already did. Additionally, the
 ``end_with`` argument is added to ``argsort``, for consistency with ``sort``.
 Note that this argument is not added at the end, so breaks any code that
 passed ``fill_value`` as a positional argument.

 Bundled version of LAPACK is now 3.2.2
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------
 NumPy comes bundled with a minimal implementation of lapack for systems without
 a lapack library installed, under the name of ``lapack_lite``. This has been
 upgraded from LAPACK 3.0.0 (June 30, 1999) to LAPACK 3.2.2 (June 30, 2010). See
@@ -245,7 +252,7 @@ Changes
 =======

 Ufunc behavior for overlapping inputs
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------

 Operations where ufunc input and output operands have memory overlap
 produced undefined results in previous Numpy versions, due to data
@@ -262,51 +269,51 @@ not be made even if the arrays overlap, if it can be deduced copies
 are not necessary. As an example,``np.add(a, b, out=a)`` will not
 involve copies.

-To illustrate a previously undefined operation
+To illustrate a previously undefined operation::

->>> x = np.arange(16).astype(float)
->>> np.add(x[1:], x[:-1], out=x[1:])
+    >>> x = np.arange(16).astype(float)
+    >>> np.add(x[1:], x[:-1], out=x[1:])

-In Numpy 1.13.0 the last line is guaranteed to be equivalent to
+In Numpy 1.13.0 the last line is guaranteed to be equivalent to::

->>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])
+    >>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])

-A similar operation with simple non-problematic data dependence is
+A similar operation with simple non-problematic data dependence is::

->>> x = np.arange(16).astype(float)
->>> np.add(x[1:], x[:-1], out=x[:-1])
+    >>> x = np.arange(16).astype(float)
+    >>> np.add(x[1:], x[:-1], out=x[:-1])

 It will continue to produce the same results as in previous Numpy
 versions, and will not involve unnecessary temporary copies.

-The change applies also to in-place binary operations, for example:
+The change applies also to in-place binary operations, for example::

->>> x = np.random.rand(500, 500)
->>> x += x.T
+    >>> x = np.random.rand(500, 500)
+    >>> x += x.T

 This statement is now guaranteed to be equivalent to ``x[...] = x + x.T``,
 whereas in previous Numpy versions the results were undefined.

 ``average`` now preserves subclasses
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------
 For ndarray subclasses, ``numpy.average`` will now return an instance of the
 subclass, matching the behavior of most other numpy functions such as ``mean``.
 As a consequence, also calls that returned a scalar may now return a subclass
 array scalar.

 ``array == None`` and ``array != None`` do element-wise comparison
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------------------------
 Previously these operations returned scalars ``False`` and ``True`` respectively.

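A short sketch of the element-wise ``None`` comparison (the object array here is made up for illustration):

    import numpy as np

    a = np.array([1, None, 3], dtype=object)
    print(a == None)    # [False  True False] -- element-wise, no longer a bare scalar False
    print(a != None)    # [ True False  True]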
 ``np.equal, np.not_equal`` for object arrays ignores object identity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------------------------
 Previously, these functions always treated identical objects as equal. This had
 the effect of overriding comparison failures, comparison of objects that did
 not return booleans, such as np.arrays, and comparison of objects where the
 results differed from object identity, such as NaNs.

 Boolean indexing changes
-~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------
 * Boolean array-likes (such as lists of python bools) are always treated as
   boolean indexes.

@@ -322,7 +329,7 @@ Boolean indexing changes
   ``array(1)[array(True)]`` gives ``array([1])`` and not the original array.

 ``np.random.multivariate_normal`` behavior with bad covariance matrix
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------------------------------

 It is now possible to adjust the behavior the function will have when dealing
 with the covariance matrix by using two new keyword arguments:
@@ -336,14 +343,14 @@ with the covariance matrix by using two new keyword arguments:
   the behavior used on previous releases.

 ``assert_array_less`` compares ``np.inf`` and ``-np.inf`` now
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------------------
 Previously, ``np.testing.assert_array_less`` ignored all infinite values. This
 is not the expected behavior both according to documentation and intuitively.
 Now, -inf < x < inf is considered ``True`` for any real number x and all
 other cases fail.

 ``offset`` attribute value in ``memmap`` objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------
 The ``offset`` attribute in a ``memmap`` object is now set to the
 offset into the file. This is a behaviour change only for offsets
 greater than ``mmap.ALLOCATIONGRANULARITY``.
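To close, a hedged sketch of the ``offset`` change just described; the file name and sizes are placeholders:

    import mmap
    import numpy as np

    # Create a small file that is larger than one allocation granule.
    with open("example.dat", "wb") as f:
        f.write(b"\x00" * (mmap.ALLOCATIONGRANULARITY + 64))

    m = np.memmap("example.dat", dtype=np.uint8, mode="r",
                  offset=mmap.ALLOCATIONGRANULARITY + 16)
    print(m.offset)   # reports the requested file offset, even past ALLOCATIONGRANULARITY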
