Implement np.diff for single order differences #50569
@@ -4106,7 +4106,10 @@ def logcumsumexp(a, axis):
                 'expected scalar_type Double but found Float'):
             torch.logcumsumexp(b, axis, out=inplace_out)

-    def test_diff(self, device):
+    @dtypes(*torch.testing.get_all_dtypes(include_bfloat16=False, include_half=False))
+    @dtypesIfCPU(*torch.testing.get_all_dtypes(include_bfloat16=False, include_half=True))
+    @dtypesIfCUDA(*torch.testing.get_all_dtypes(include_bfloat16=True, include_half=True))
+    def test_diff(self, device, dtype):
         def _test_diff_numpy(t, dims=None):
             for dim in dims if dims else range(t.dim()):
                 actual = torch.diff(t, dim=dim, prepend=t, append=t)
@@ -4120,43 +4123,29 @@ def _test_diff_numpy(t, dims=None):
                 expected = torch.from_numpy(np.diff(np_t, axis=dim, prepend=np_t, append=np_t))
                 self.assertEqual(actual, expected.to(t.dtype))

                 scalar = t[(0,) * t.dim()]
                 scalar_np = np_t[(0,) * t.dim()]

                 # test when both prepend and append are scalars
                 actual = torch.diff(t, dim=dim, prepend=scalar, append=scalar)
                 expected = torch.from_numpy(np.diff(np_t, axis=dim, prepend=scalar_np, append=scalar_np))
                 self.assertEqual(actual, expected.to(t.dtype))

                 # test when prepend is a scalar and append is a tensor
                 actual = torch.diff(t, dim=dim, prepend=scalar, append=t)
                 expected = torch.from_numpy(np.diff(np_t, axis=dim, prepend=scalar_np, append=np_t))
                 self.assertEqual(actual, expected.to(t.dtype))

         shapes = (
             (1,),
             (1, 5),
             (3, 5),
             (1, 5, 1),
             (2, 3, 5))

-        for dt in torch.testing.get_all_dtypes():
-            for shape in shapes:
-                contig = _make_tensor(shape, dt, device)
-                _test_diff_numpy(contig)
+        for shape in shapes:
+            contig = _make_tensor(shape, dtype, device)
+            _test_diff_numpy(contig)

-                non_contig = torch.empty(shape + (2, 2), dtype=dt)[..., 0]
-                non_contig = non_contig.select(-1, -1)
-                non_contig.copy_(contig)
-                self.assertTrue(not non_contig.is_contiguous() or shape == (1,))
+            non_contig = torch.empty(shape + (2, 2), dtype=dtype)[..., 0]
Review comment: Use make_tensor instead of _make_tensor (pytorch/torch/testing/_internal/common_utils.py, line 1482 in 4a2aa0f), both here on 4152 and on 4149.

Author reply: I've replaced _make_tensor with make_tensor on line 4149, but I don't think we can use make_tensor to produce torch.empty for line 4152.
+            non_contig = non_contig.select(-1, -1)
+            non_contig.copy_(contig)
+            self.assertTrue(not non_contig.is_contiguous() or shape == (1,))

Review comment: Cool testing of both contiguous and discontiguous use cases.
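As an aside, here is a small standalone sketch (my own, outside the test harness) of how the construction above produces a discontiguous tensor with the same shape and values: allocate extra trailing dimensions, then index/select them away so the resulting view keeps the strides of the larger buffer.

import torch

shape = (3, 5)
contig = torch.arange(15, dtype=torch.float).reshape(shape)

# Allocate a larger buffer, then slice/select the trailing dims away:
# the view has the original shape but the larger buffer's strides,
# so it is not contiguous.
non_contig = torch.empty(shape + (2, 2))[..., 0].select(-1, -1)
non_contig.copy_(contig)

print(non_contig.shape)                 # torch.Size([3, 5])
print(non_contig.is_contiguous())       # False
print(torch.equal(non_contig, contig))  # True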
-                _test_diff_numpy(non_contig)
+            _test_diff_numpy(non_contig)

         t = torch.ones(2, 3)

         # Size of prepend/append needs to be same as t except along given dim
         with self.assertRaises(RuntimeError):
-            invalid_prepend = torch.tensor([[0, 1]])
+            invalid_prepend = torch.tensor([[0, 1]], device=device, dtype=dtype)
             t.diff(dim=0, prepend=invalid_prepend)

         with self.assertRaisesRegex(
@@ -4165,7 +4154,7 @@ def _test_diff_numpy(t, dims=None):

         with self.assertRaisesRegex(
                 RuntimeError, 'diff requires input that is at least one-dimensional'):
-            scalar = torch.tensor(2)
+            scalar = torch.tensor(2, device=device, dtype=dtype)
             torch.diff(scalar)

     def _test_large_cum_fn_helper(self, x, fn):
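For reference, a minimal usage sketch (mine, not part of the PR) of the operator the test above exercises; it mirrors np.diff, with prepend and append given as tensors, as in the error checks in the diff:

import torch

t = torch.tensor([1, 3, 6, 10])

print(torch.diff(t))                             # tensor([2, 3, 4])
print(torch.diff(t, prepend=torch.tensor([0])))  # tensor([1, 2, 3, 4])
print(torch.diff(t, append=torch.tensor([15])))  # tensor([2, 3, 4, 5])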
cc @rgommers for a possible issue with NumPy's signature
np.diff has both prepend and append default to "no value", which is represented by:
https://github.com/numpy/numpy/blob/1de46fe2feee9d3c500a83331ac9b75af5aef947/numpy/_globals.py#L57
The comment indicates it's intended to be used with "deprecated keywords." However, it appears these defaults come from the PR that added the kwargs: numpy/numpy#8206. @rgommers, it seems like the NumPy signature would ideally have prepend=None and append=None?
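For context, a quick illustrative sketch (mine, not from either PR) of what prepend and append do in np.diff; the _NoValue defaults behave exactly as if the arguments had not been passed:

import numpy as np

t = np.array([1, 3, 6, 10])

print(np.diff(t))             # [2 3 4]   adjacent differences
print(np.diff(t, prepend=0))  # [1 2 3 4] as if 0 were inserted at the front
print(np.diff(t, append=15))  # [2 3 4 5] as if 15 were appended at the end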
That comment is a little misleading, or at least incomplete. _NoValue is mainly used when keywords are added to a function and the function forwards its keywords somewhere. See for example np.sum, which forwards to the .sum method of its input (which can be a torch.Tensor). Adding keywords to the NumPy function that are not yet present in all other/downstream libraries implementing the same functionality would then break previously working code. I'll go fix the code comment. Wherever you see _NoValue, just read None.

The PR gives an example with object arrays that would not work with prepend=None: it's a little far-fetched, but prepending None to an object array unfortunately works, so None cannot serve as an "argument omitted" sentinel there. Object arrays are very annoying.
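A minimal illustration (my own sketch, not the example from the NumPy PR): with dtype=object, None can be a legitimate value to prepend, so a prepend=None default could not be distinguished from "no prepend passed at all", which is exactly what _NoValue avoids.

import numpy as np

class Token:
    # Toy object whose subtraction is defined even against None,
    # so that prepending None to an object array actually works.
    def __init__(self, value):
        self.value = value
    def __sub__(self, other):
        other_value = 0 if other is None else other.value
        return self.value - other_value

arr = np.array([Token(1), Token(4), Token(9)], dtype=object)

# None is treated as an element to prepend: the first difference is
# computed as Token(1) - None, not skipped.
print(np.diff(arr, prepend=None))   # [1 3 5] (object array)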