OpInfo: `fmod` and `remainder` by krshrimali · Pull Request #57941 · pytorch/pytorch · GitHub
Closed
Changes from 3 commits
60 commits
4d1a080
Adding OpInfo for fmod, still some failures
krshrimali May 10, 2021
b40e11f
fmod: skipping variant_eager and jit tests for now, needs debugging
krshrimali May 10, 2021
2daee8f
Merging with upstream
krshrimali May 10, 2021
2a8772f
No extra lines please
krshrimali May 10, 2021
e8637f1
Tests for remainder, check variant tests (TODO)
krshrimali May 10, 2021
d804adc
Reorganize sample inputs, add comments
krshrimali May 11, 2021
9604f48
Scalar tensors
krshrimali May 11, 2021
c22d387
Add include_zero param to make_tensor, merge sample_inputs func
krshrimali May 12, 2021
0bb2482
Add include_zero in make_tensor documentation
krshrimali May 12, 2021
7cf0861
Changing the docs, WIP
krshrimali May 12, 2021
a001581
Merge remote-tracking branch 'upstream/master' into origin/fmod-remai…
krshrimali May 12, 2021
abdfe1f
Merge remote-tracking branch 'upstream/master' into origin/fmod-remai…
krshrimali May 12, 2021
c445b0b
Bring back removed lines of codes, minor fixes
krshrimali May 12, 2021
bd446ab
Trailing whitespace removed
krshrimali May 12, 2021
4e34135
use torch.finfo, add support for complex tensors
krshrimali May 12, 2021
6cf48ac
Merge
krshrimali May 12, 2021
f95cba6
Remove fmod and remainder from method_tests
krshrimali May 12, 2021
a5bccee
Docs update for fmod and remainder, add formula + comparison
krshrimali May 13, 2021
726b1eb
Merge branch 'master' into origin/fmod-remainder-opinfo
krshrimali May 13, 2021
8703bfa
bring torch.rand back
krshrimali May 13, 2021
5d0d9ce
Docs fix for tquot and rquot
krshrimali May 13, 2021
b4c38e4
use tuples instead of lists
krshrimali May 13, 2021
0174dd2
Merge conflict resolved
krshrimali May 13, 2021
70385c9
Fixing docs errors
krshrimali May 13, 2021
42d49d2
Flake8 errors fixes
krshrimali May 13, 2021
e074075
Add complex inputs support, exclude_values arg, skip jit variant test…
krshrimali May 13, 2021
b25de8c
Minor fix in if condition for exclude_values
krshrimali May 13, 2021
125ec77
nit
krshrimali May 13, 2021
5856422
Minor change in the line for complex inputs
krshrimali May 13, 2021
a26f041
Don't use mutable datastructure as default arg
krshrimali May 13, 2021
7684fc6
Merge
krshrimali May 13, 2021
c40c6fc
Attempt to solve mypy error
krshrimali May 13, 2021
92899ba
typo fixed, as per review from Kshitij12345
krshrimali May 13, 2021
17480af
Split OpInfos for autodiffed samples, addressed review
krshrimali May 13, 2021
572082d
Remove add_other, add tensor scalar input, add torch.bool as dtype fo…
krshrimali May 13, 2021
1c26704
Address reviews on doc
krshrimali May 17, 2021
6797f4e
autodiffed from autodiffed_args
krshrimali May 17, 2021
6c54bdf
Minor changes in the doc
krshrimali May 17, 2021
020424b
Merge remote-tracking branch 'upstream/master' into origin/fmod-remai…
krshrimali May 17, 2021
6db6536
Remove THC impl for fmod, unused
krshrimali May 18, 2021
cff930a
bring it back
krshrimali May 18, 2021
70506ff
Minor fix in the docs
krshrimali May 18, 2021
2e59a1b
Split doc, as suggested
krshrimali May 18, 2021
2825087
Minor fix, thanks Kshiteej
krshrimali May 19, 2021
b6ab98c
Fix mypy error, source: mypy docs
krshrimali May 19, 2021
ff00042
mypy: ignore assignment added
krshrimali May 19, 2021
6cb309f
Remove THC definition, unused
krshrimali May 19, 2021
f6bb0b4
Docs update
krshrimali May 21, 2021
174ad54
Bring back exclude_zero, as per review
krshrimali May 21, 2021
8076a57
Merge remote-tracking branch 'upstream/master' into origin/fmod-remai…
krshrimali May 21, 2021
0832347
Merge branch 'origin/fmod-remainder-opinfo' of https://github.com/kr…
krshrimali May 21, 2021
e170a35
Use exclude_zero, minor typo fix
krshrimali May 21, 2021
eea02fc
Merging conflicts
krshrimali May 24, 2021
80c8dab
Doc update and add fmod, remainder to works_list
krshrimali May 26, 2021
2c3837d
Merge remote-tracking branch 'upstream/master' into origin/fmod-remai…
krshrimali May 26, 2021
1dea0e8
Add fmod and remainder to works_list
krshrimali May 26, 2021
3ac46f7
Complete merge
krshrimali May 26, 2021
2a99968
Testing
krshrimali May 26, 2021
0f095c9
Add remainder.autodiffed and fmod.autodiffed to the list, rollback er…
krshrimali May 26, 2021
c3273f7
minor typo fixed
krshrimali May 26, 2021
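
Taken together, these commits describe the approach: a single sample-inputs helper shared by fmod and remainder whose divisors never contain zero (via the new exclude_zero flag on make_tensor), plus separate fmod.autodiffed / remainder.autodiffed OpInfo variants for the tensor-scalar samples. A rough sketch of what such a helper can look like — the function name, shapes, and sample choices below are illustrative assumptions, not the PR's actual diff:

import torch
from torch.testing._internal.common_utils import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

def sample_inputs_fmod_remainder(op_info, device, dtype, requires_grad,
                                 *, autodiffed=False, **kwargs):
    # Shared helper for the tensor arguments of both ops.
    def make_arg(shape, **kw):
        return make_tensor(shape, device=device, dtype=dtype,
                           requires_grad=requires_grad, **kw)

    if autodiffed:
        # Tensor-scalar samples, used by the *.autodiffed OpInfo variants.
        return [SampleInput(make_arg((3, 4)), args=(2.0,))]

    # Tensor-tensor samples; divisors exclude zero so neither op divides by zero.
    other = make_tensor((3, 4), device=device, dtype=dtype, exclude_zero=True)
    return [SampleInput(make_arg((3, 4)), args=(other,))]

Each of the fmod, fmod.autodiffed, remainder, and remainder.autodiffed entries in the op database would then presumably reference this helper through its sample_inputs_func argument.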
43 changes: 16 additions & 27 deletions torch/_torch_docs.py
@@ -3274,12 +3274,11 @@ def merge_dicts(*dicts):
r"""
fmod(input, other, *, out=None) -> Tensor

Computes the element-wise remainder of division with the remainder having the same
sign as the dividend :attr:`input`.
Applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
Collaborator:
This is a great intro

for floating point tensors, and the modulus operation for integer tensors. The result
has the same sign as the dividend :attr:`input` and its absolute value
is less than that of :attr:`other`.

.. math::
\text{{out}}_i = \text{{input}}_i - trunc(\frac{\text{{input}}_i}{\text{{other}}_i}) * \text{{other}}_i
""" + r"""
Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`,
:ref:`type promotion <type-promotion-doc>`, and integer and float inputs.

@@ -3294,13 +3293,6 @@ def merge_dicts(*dicts):
Complex inputs are not supported. In some cases, it is not mathematically
Collaborator:
Cool note

possible to satisfy the definition of a modulo operation with complex numbers.

Collaborator:
Do we need to explicitly mention this? If yes, in that case we should also add it for remainder? (Though AFAIK, not many operators mention if they support complex dtypes)

>>> import torch
>>> t = torch.tensor(1+0j)
>>> torch.fmod(t, torch.tensor(0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "fmod_cpu" not implemented for 'ComplexFloat'
>>> torch.remainder(t, torch.tensor(0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "remainder_cpu" not implemented for 'ComplexFloat'

Contributor Author:
I think Ruberry mentioned in his comment here (#57941 (comment)) for adding a note on lack of support for complex tensors. remainder already has it: https://pytorch.org/docs/stable/generated/torch.remainder.html. I've mimicked the comment for lack of support of complex tensors from remainder to fmod in a very recent commit.

Collaborator:
I think this note is already covered above

Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`,
:ref:`type promotion <type-promotion-doc>`, and integer and float inputs.

which clarifies which inputs are supported.

Collaborator:
I think it would be better to confirm this with @mruberry.

Collaborator:
It's true that the "supports statement" explains complex tensors are not supported as inputs, but the note explains why they're not supported, which is often helpful. Otherwise users might request complex support or try to add it themselves!

@anjali411, what do you think?

.. seealso::

:func:`torch.fmod` truncates (rounded towards zero) the quotient with the
output having same sign as the dividend :attr:`input` while
:func:`torch.remainder` rounds (towards the nearest even integer) the quotient
with the output having same sign as the divisor :attr:`other`.

Args:
input (Tensor): the dividend
other (Tensor or Scalar): the divisor
@@ -3315,6 +3307,11 @@ def merge_dicts(*dicts):
>>> torch.fmod(torch.tensor([1, 2, 3, 4, 5]), -1.5)
tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])

.. seealso::

:func:`torch.remainder` which is similar to :func:`torch.fmod` except that if the sign
of the modulus is different than the sign of the divisor :attr:`other` then the divisor
is added to the modulus.
""".format(**common_args))

add_docstr(torch.frac,
@@ -7640,12 +7637,11 @@ def merge_dicts(*dicts):
r"""
remainder(input, other, *, out=None) -> Tensor

Computes the element-wise remainder of division with the remainder having the same
sign as the divisor :attr:`other`.
Like :func:`torch.fmod` this applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
for floating point tensors and the modulus operation for integer tensors.
Unlike :func:`torch.fmod`, however, if the sign of the modulus is different
than the sign of the divisor :attr:`other` then the divisor is added to the modulus.

.. math::
\text{{out}}_i = \text{{input}}_i - round(\frac{\text{{input}}_i}{\text{{other}}_i}) * \text{{other}}_i
""" + r"""
Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`,
:ref:`type promotion <type-promotion-doc>`, and integer and float inputs.

@@ -7654,13 +7650,6 @@ def merge_dicts(*dicts):
possible to satisfy the definition of a modulo operation with complex numbers.
See :func:`torch.fmod` for how division by zero is handled.

.. seealso::

:func:`torch.fmod` truncates (rounded towards zero) the quotient with the
output having same sign as the dividend :attr:`input` while
:func:`torch.remainder` rounds (towards the nearest even integer) the quotient
with the output having same sign as the divisor :attr:`other`.

Args:
input (Tensor): the dividend
other (Tensor or Scalar): the divisor
@@ -7677,9 +7666,9 @@ def merge_dicts(*dicts):

.. seealso::

:func:`torch.fmod`, which computes the element-wise remainder of
division equivalently to the C library function ``fmod()``.

:func:`torch.fmod` which just computes the modulus for integer inputs and
applies C++'s `std::fmod <https://en.cppreference.com/w/cpp/numeric/math/fmod>`_
for floating point inputs.
""".format(**common_args))

add_docstr(torch.renorm,
14 changes: 4 additions & 10 deletions torch/testing/_internal/common_utils.py
@@ -1724,7 +1724,7 @@ def f_retry(*args, **kwargs):

def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, high=None,
requires_grad: bool = False, noncontiguous: bool = False,
exclude_values: list = None) -> torch.Tensor:
exclude_zero: bool = False) -> torch.Tensor:
""" Creates a random tensor with the given size, device and dtype.

By default, the tensor's values are in the range [-9, 9] for most dtypes. If low
@@ -1737,9 +1737,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
specifies a tensor with 1 or 0 elements, in which case the noncontiguous parameter is ignored because
it is not possible to create a noncontiguous Tensor with a single element.

If exclude_values is passed with a list of values (default is empty list), all the matching values
in the created tensor are replaced with an epsilon value if floating type, [`eps` + `eps`.j] if complex type
and 1 if integer/boolean type.
If exclude_zero is True (default is False), all zero values in the created tensor are replaced with an epsilon value for floating point types, `eps + eps*j` for complex types, and 1 for integer/boolean types.
"""

assert low is None or low < 9, "low value too high!"
@@ -1778,7 +1776,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
result = torch.repeat_interleave(result, 2, dim=-1)
result = result[..., ::2]

if exclude_values:
if exclude_zero:
if dtype in integral_types() or dtype is torch.bool:
replace_with = torch.tensor(1, device=device, dtype=dtype)
elif dtype in floating_types_and(torch.half, torch.bfloat16):
@@ -1789,11 +1787,7 @@ def make_tensor(size, device: torch.device, dtype: torch.dtype, *, low=None, hig
real = torch.tensor(torch.finfo(float_dtype).eps, device=device, dtype=dtype)
imag = torch.tensor(torch.finfo(float_dtype).eps, device=device, dtype=dtype)
Collaborator @mruberry, May 17, 2021:

This will need the same logic as above, so it probably wants to refactor the exclusion code as a helper and apply it to the real and imaginary parts separately

Edit: unless we're going with the simpler "exclude_zero," then it's probably not necessary

replace_with = torch.complex(real, imag)
for exclude_val in exclude_values:
if dtype is torch.bool:
result[result == exclude_val] = replace_with
else:
result[result == exclude_val] = exclude_val + replace_with
result[result == 0] = replace_with

if dtype in floating_types_and(torch.half, torch.bfloat16) or\
dtype in complex_types():
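
Finally, a small usage sketch of the exclude_zero flag added to make_tensor above; the shapes and dtype are arbitrary, and this assumes a checkout that contains the change:

import torch
from torch.testing._internal.common_utils import make_tensor

# A divisor tensor for fmod/remainder samples: any zeros drawn by the RNG are
# replaced (eps for floats, eps + eps*j for complex, 1 for ints/bools), so the
# resulting sample inputs never divide by zero.
divisor = make_tensor((3, 4), device='cpu', dtype=torch.float32,
                      low=-9, high=9, exclude_zero=True)
assert not (divisor == 0).any()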