8000 Better Manual Broadcasting for ufuncs · Issue #12900 · numpy/numpy · GitHub
Open
@randolf-scholz

Description


Standard ufuncs like np.add, np.multiply, and so on should allow for manual broadcasting. In particular, I suggest adding support for the ufunc `axes` keyword to the standard math functions listed here. (My apologies if this is already in the works; as far as I can tell it doesn't work in 1.15.4.)

The main issue with the status quo is that it is unnecessarily complicated and awkward to write routines that do not know the exact shapes of incoming tensors a priori (but may know them partially; for example, it could be known that an incoming tensor has shape (?, 3, 3), where the ? stands for any number of additional axes).

Reproducing code example:

For example, a task as simple as "add vector v to tensor T along the third axis" is really awkward to express:

import numpy as np
T = np.ones((2,2,3,3))
v = np.arange(3)
# z = T + v  # adds along the last axis, not what we want!
# z = T + v[None, None, : , None]  # does work, but incredibly ugly / hard to read
# z = np.add(T, v, axes=([2],[0]))  # suggested "pythonic" solution
# z = np.add(T, v, broadcasting= 'ijkl,k -> ijkl')  # alternative  einsum-like solution
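Until something like the suggested `axes` keyword exists, the `[None, None, :, None]` indexing can at least be hidden behind a small helper. The function below (`broadcast_to_axes` is a hypothetical name, not a NumPy API) reshapes an operand so that its axes land at the requested positions and then lets ordinary broadcasting do the rest; a minimal sketch:

```python
import numpy as np

def broadcast_to_axes(x, axes, ndim):
    """Reshape x so its dimensions sit at the given axis positions of an
    ndim-dimensional operand, with size-1 dimensions everywhere else.
    (Hypothetical helper, not part of NumPy.)"""
    shape = [1] * ndim
    for ax, size in zip(axes, x.shape):
        shape[ax] = size
    return x.reshape(shape)

T = np.ones((2, 2, 3, 3))
v = np.arange(3)

# Align v with axis 2 of T, then rely on ordinary broadcasting.
z = T + broadcast_to_axes(v, [2], T.ndim)
print(z.shape)  # (2, 2, 3, 3)
```

This reads closer to the proposed `np.add(T, v, axes=([2], [0]))`, but it is still an extra reshape rather than first-class ufunc support.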

Of course, once this works, even far more complicated statements can be written with ease. For example, say we want to scale the first and second axes of a tensor T pointwise by values provided by the transpose of a matrix M:

import numpy as np
T = np.ones((3,3,2,2))
M = np.ones((3,3))*[1,2,3]
# z = T * M  # does not work
# z = T * M.T[:, :, None , None]  # does work, but incredibly ugly / hard to read
# z = np.einsum('ijkl, ji -> ijkl', T, M)  # usable workaround exclusively for `np.multiply`
# z = np.multiply(T, M, axes=([0,1],[1,0]))  # suggested "pythonic" solution
# z = np.multiply(T, M, broadcasting= 'ijkl,ji -> ijkl')  # alternative einsum-like solution
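For multiplication specifically, the einsum workaround mentioned above does already work today; a quick check that it agrees with the explicit `None`-indexing form:

```python
import numpy as np

T = np.ones((3, 3, 2, 2))
M = np.ones((3, 3)) * [1, 2, 3]

# einsum already supports this broadcasting pattern, but only for products:
z_einsum = np.einsum('ijkl,ji->ijkl', T, M)

# Equivalent explicit-indexing form for comparison.
z_manual = T * M.T[:, :, None, None]

print(np.allclose(z_einsum, z_manual))  # True
```

The limitation is exactly the one raised in the issue: einsum covers `np.multiply`, but there is no analogous spelling for `np.add` or the other ufuncs.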

In essence, it would be great for other standard functions to support Einstein-notation-like broadcasting, so that one could easily implement even complicated tensor statements like T_ijkl = A_ij + C_il * exp(D_km E_ml).
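For comparison, that statement can be written with today's NumPy, but only by mixing einsum with explicit `None`-indexing. The sketch below reads D_km E_ml as a contraction over m (one plausible interpretation of the formula); the array shapes are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))   # A_ij
C = rng.standard_normal((4, 7))   # C_il
D = rng.standard_normal((6, 3))   # D_km
E = rng.standard_normal((3, 7))   # E_ml

# Contract over m, then exponentiate elementwise: X_kl = exp(sum_m D_km E_ml)
X = np.exp(np.einsum('km,ml->kl', D, E))

# T_ijkl = A_ij + C_il * X_kl, spelled out with None-indexing:
T = A[:, :, None, None] + C[:, None, None, :] * X[None, None, :, :]
print(T.shape)  # (4, 5, 6, 7)
```

With the proposed `axes` (or einsum-like `broadcasting`) keyword on ufuncs, the last line could instead name the axes directly, without the index gymnastics.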
