OpInfo: fmod and remainder
#57941
Changes from 3 commits
@@ -3274,12 +3274,11 @@ def merge_dicts(*dicts):
 r"""
 fmod(input, other, *, out=None) -> Tensor
 
-Computes the element-wise remainder of division with the remainder having the same
-sign as the dividend :attr:`input`.
-
-.. math::
-    \text{{out}}_i = \text{{input}}_i - trunc(\frac{\text{{input}}_i}{\text{{other}}_i}) * \text{{other}}_i
+Applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
+for floating point tensors, and the modulus operation for integer tensors. The result
+has the same sign as the dividend :attr:`input` and its absolute value
+is less than that of :attr:`other`.
 """ + r"""
 Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`,
 :ref:`type promotion <type-promotion-doc>`, and integer and float inputs.
 
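As a quick illustration of the sign rule in the new wording (our snippet, not part of the diff): the result takes the sign of the dividend for both float and integer inputs.

>>> import torch
>>> torch.fmod(torch.tensor([-3., 3.]), 2)
tensor([-1.,  1.])
>>> torch.fmod(torch.tensor([-3, 3]), 2)
tensor([-1,  1])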
@@ -3294,13 +3293,6 @@ def merge_dicts(*dicts):
 Complex inputs are not supported. In some cases, it is not mathematically
 possible to satisfy the definition of a modulo operation with complex numbers.

Collaborator
Cool note
Collaborator
Do we need to explicitly mention this? If yes, then shouldn't we also add it for remainder? (Though AFAIK, not many operators mention whether they support complex dtypes.)

>>> import torch
>>> t = torch.tensor(1+0j)
>>> torch.fmod(t, torch.tensor(0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "fmod_cpu" not implemented for 'ComplexFloat'
>>> torch.remainder(t, torch.tensor(0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "remainder_cpu" not implemented for 'ComplexFloat'

Contributor (Author)
I think Ruberry mentioned in his comment here (#57941 (comment)) that we should add a note on the lack of support for complex tensors.

Collaborator
I think this note is already covered by the statement above, which clarifies which inputs are supported.

Collaborator
I think it would be better to confirm this with @mruberry.

Collaborator
It's true that the "supports statement" explains complex tensors are not supported as inputs, but the note explains why they're not supported, which is often helpful. Otherwise users might request complex support or try to add it themselves! @anjali411, what do you think?
-
-.. seealso::
-
-    :func:`torch.fmod` truncates (rounded towards zero) the quotient with the
-    output having same sign as the dividend :attr:`input` while
-    :func:`torch.remainder` rounds (towards the nearest even integer) the quotient
-    with the output having same sign as the divisor :attr:`other`.
 
 Args:
     input (Tensor): the dividend
     other (Tensor or Scalar): the divisor
 
@@ -3315,6 +3307,11 @@ def merge_dicts(*dicts):
     >>> torch.fmod(torch.tensor([1, 2, 3, 4, 5]), -1.5)
     tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
 
+.. seealso::
+
+    :func:`torch.remainder` which is similar to :func:`torch.fmod` except that if the sign
+    of the modulus is different than the sign of the divisor :attr:`other` then the divisor
+    is added to the modulus.
 """.format(**common_args))
 
 add_docstr(torch.frac,
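For contrast with the fmod example in the hunk above (our snippet, not part of the diff), torch.remainder on the same inputs yields results with the divisor's sign, which is exactly the "divisor is added to the modulus" behavior the new note describes:

>>> import torch
>>> torch.remainder(torch.tensor([1., 2., 3., 4., 5.]), -1.5)
tensor([-0.5000, -1.0000,  0.0000, -0.5000, -1.0000])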
@@ -7640,12 +7637,11 @@ def merge_dicts(*dicts):
 r"""
 remainder(input, other, *, out=None) -> Tensor
 
-Computes the element-wise remainder of division with the remainder having the same
-sign as the divisor :attr:`other`.
-
-.. math::
-    \text{{out}}_i = \text{{input}}_i - round(\frac{\text{{input}}_i}{\text{{other}}_i}) * \text{{other}}_i
+Like :func:`torch.fmod` this applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
+for floating point tensors and the modulus operation for integer tensors.
+Unlike :func:`torch.fmod`, however, if the sign of the modulus is different
+than the sign of the divisor :attr:`other` then the divisor is added to the modulus.
 """ + r"""
 Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`,
 :ref:`type promotion <type-promotion-doc>`, and integer and float inputs.
 
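The two quotient conventions can be sketched with plain Python scalars (our illustration; math.fmod truncates the quotient like torch.fmod, while Python's % operator floors it, matching torch.remainder's divisor-sign behavior):

>>> import math
>>> math.fmod(-3.0, 2.0)   # -3 - trunc(-1.5) * 2 = -3 - (-1) * 2
-1.0
>>> -3.0 % 2.0             # -3 - floor(-1.5) * 2 = -3 - (-2) * 2
1.0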
@@ -7654,13 +7650,6 @@ def merge_dicts(*dicts):
 possible to satisfy the definition of a modulo operation with complex numbers.
 See :func:`torch.fmod` for how division by zero is handled.
 
-.. seealso::
-
-    :func:`torch.fmod` truncates (rounded towards zero) the quotient with the
-    output having same sign as the dividend :attr:`input` while
-    :func:`torch.remainder` rounds (towards the nearest even integer) the quotient
-    with the output having same sign as the divisor :attr:`other`.
-
 Args:
     input (Tensor): the dividend
     other (Tensor or Scalar): the divisor
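On the division-by-zero behavior that sentence defers to (our snippet, reflecting our understanding of the behavior at the time — worth double-checking against the fmod note itself): a floating point divisor of zero produces NaN, while integer division by zero raises a RuntimeError on CPU.

>>> import torch
>>> torch.fmod(torch.tensor([1., 2.]), 0.)
tensor([nan, nan])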
@@ -7677,9 +7666,9 @@ def merge_dicts(*dicts):
 
 .. seealso::
 
-    :func:`torch.fmod`, which computes the element-wise remainder of
-    division equivalently to the C library function ``fmod()``.
-
+    :func:`torch.fmod` which just computes the modulus for integer inputs and
+    applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
+    for floating point inputs.
 """.format(**common_args))
 
 add_docstr(torch.renorm,
@@ -1724,7 +1724,7 @@ def f_retry(*args, **kwargs):
 def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, high=None,
                 requires_grad: bool = False, noncontiguous: bool = False,
-                exclude_values: list = None) -> torch.Tensor:
+                exclude_zero: bool = False) -> torch.Tensor:
     """ Creates a random tensor with the given size, device and dtype.
 
     By default, the tensor's values are in the range [-9, 9] for most dtypes. If low
@@ -1737,9 +1737,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
     specifies a tensor with a 1 or 0 elements in which case the noncontiguous parameter is ignored because
     it is not possible to create a noncontiguous Tensor with a single element.
 
-    If exclude_values is passed with a list of values (default is empty list), all the matching values
-    in the created tensor are replaced with an epsilon value if floating type, [`eps` + `eps`.j] if complex type
-    and 1 if integer/boolean type.
+    If exclude_zero is passed with True (default is False), all the zero values in the created tensor are replaced with an epsilon value if floating type, [`eps` + `eps`.j] if complex type and 1 if integer/boolean type.
     """
 
     assert low is None or low < 9, "low value too high!"
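A usage sketch of the new flag (our example, assuming the test-internal make_tensor at this revision of the PR):

>>> import torch
>>> from torch.testing._internal.common_utils import make_tensor
>>> t = make_tensor((4,), device='cpu', dtype=torch.float32, exclude_zero=True)
>>> bool((t == 0).any())
False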
@@ -1778,7 +1776,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
     result = torch.repeat_interleave(result, 2, dim=-1)
     result = result[..., ::2]
 
-    if exclude_values:
+    if exclude_zero:
         if dtype in integral_types() or dtype is torch.bool:
             replace_with = torch.tensor(1, device=device, dtype=dtype)
         elif dtype in floating_types_and(torch.half, torch.bfloat16):
@@ -1789,11 +1787,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
             real = torch.tensor(torch.finfo(float_dtype).eps, device=device, dtype=dtype)
             imag = torch.tensor(torch.finfo(float_dtype).eps, device=device, dtype=dtype)
Collaborator
This will need the same logic as above, so it probably wants to refactor the exclusion code as a helper and apply it to the real and imaginary parts separately.
Edit: unless we're going with the simpler "exclude_zero", then it's probably not necessary.
             replace_with = torch.complex(real, imag)
-        for exclude_val in exclude_values:
-            if dtype is torch.bool:
-                result[result == exclude_val] = replace_with
-            else:
-                result[result == exclude_val] = exclude_val + replace_with
+        result[result == 0] = replace_with
 
     if dtype in floating_types_and(torch.half, torch.bfloat16) or\
             dtype in complex_types():
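For reference, a sketch of the helper floated in the review comment above — hypothetical, the name _replacement_value is ours and nothing like it appears in the PR — centralizing the per-dtype choice of replacement value:

import torch

def _replacement_value(dtype: torch.dtype, device) -> torch.Tensor:
    # Hypothetical helper (our sketch): the nonzero value used to
    # overwrite zeros, chosen per dtype family.
    if not dtype.is_floating_point and not dtype.is_complex:
        # bool and integer dtypes: replace zeros with 1
        return torch.tensor(1, device=device, dtype=dtype)
    if dtype.is_complex:
        # complex: eps + eps*j built from the matching real dtype
        float_dtype = torch.float32 if dtype == torch.complex64 else torch.float64
        eps = torch.finfo(float_dtype).eps
        return torch.tensor(complex(eps, eps), device=device, dtype=dtype)
    # floating point (incl. half/bfloat16): replace zeros with eps
    return torch.tensor(torch.finfo(dtype).eps, device=device, dtype=dtype)

# Applied the same way the diff's new line does it:
result = torch.tensor([0.0, 1.0, -2.0, 0.0])
result[result == 0] = _replacement_value(result.dtype, result.device)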
This is a great intro