ENH enable __imatmul__ for ArrayAPI compatibility. #21912
Micky774 wants to merge 5 commits into numpy:main

Conversation
I am still hesitant about this, since it seems very unintuitive. I just tried this in torch:

```
In [23]: t = torch.Tensor([[1, 2], [3, 4]])
In [24]: a = t
In [25]: t += t
In [26]: a
Out[26]:
tensor([[2., 4.],
        [6., 8.]])
In [27]: a is t
Out[27]: True
In [28]: t @= t
In [29]: a is t
Out[29]: False
In [30]: t
Out[30]:
tensor([[28., 40.],
        [60., 88.]])
In [31]: a
Out[31]:
tensor([[2., 4.],
        [6., 8.]])
```

IMO, that is not great, since torch fails to implement `@=` in-place (maybe for good reasons, since it is often not possible for …)
OTOH, maybe the speed difference is small and we can optimize the stacked-array case at least? Might be nice to confirm, and I wonder if we should tie this to fixing the stacked-array version (which is not quite trivial!).
We discussed this today a bit and were unsure that this is actually a good match, so I thought I would bring it up again with the data-api: data-apis/array-api#509
Closing this one because #21120 is much further along. It could be picked up as a new PR of course.
Enables `__imatmul__`, albeit by inefficiently performing the out-of-place operation and copying, to bring `ndarray` closer to ArrayAPI compatibility.
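The out-of-place-then-copy fallback described here can be sketched as follows. This is a hypothetical helper for illustration only (the name `imatmul` and the shape check are my assumptions, not the PR's actual code):

```python
import numpy as np

def imatmul(a, b):
    """Sketch of the fallback described above (illustrative only):
    perform the out-of-place matmul, then copy the result back into
    `a`'s buffer so that `a @= b` preserves object identity."""
    result = np.matmul(a, b)
    if result.shape != a.shape:
        # In-place matmul is only well-defined when the result has the
        # same shape as the target, e.g. (stacks of) square matrices.
        raise ValueError("in-place matmul: result shape must match operand")
    a[...] = result
    return a

x = np.array([[1.0, 2.0], [3.0, 4.0]])
ref = x
imatmul(x, np.eye(2))     # multiply by identity: values unchanged
assert ref is x           # object identity preserved
```

Note that the intermediate `result` allocation is exactly the inefficiency the description mentions; avoiding it for the stacked square-matrix case is the non-trivial optimization discussed in the comments.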