8000 Add typing on `Tensor` dunder methods for binary arithmetic ops by lkct · Pull Request #103394 · pytorch/pytorch · GitHub
Add typing on Tensor dunder methods for binary arithmetic ops #103394


Closed
wants to merge 2 commits into from

Conversation

lkct
Contributor
@lkct lkct commented Jun 11, 2023

Fixes #103393

Example

import torch
a = torch.Tensor()
reveal_type(1 - a)
reveal_type(a ** 2)

Previously:

a.py:3:13: note: Revealed type is "Any"
a.py:4:13: note: Revealed type is "Any"

Now:

a.py:3:13: note: Revealed type is "torch._tensor.Tensor"
a.py:4:13: note: Revealed type is "torch._tensor.Tensor"
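As a hedged sketch (with simplified names; the real annotations live in PyTorch's generated stubs and torch/_tensor.py), the PR's effect corresponds to annotating the binary dunder methods so that reflected ops like `1 - a` resolve to `Tensor` instead of `Any`:

```python
# Hypothetical minimal Tensor with the kind of dunder annotations this PR adds.
# `__rsub__` handles the reflected case `1 - a`; `__pow__` handles `a ** 2`.
class Tensor:
    def __sub__(self, other: "int | float | Tensor") -> "Tensor":
        return Tensor()

    def __rsub__(self, other: "int | float | Tensor") -> "Tensor":
        # int.__sub__(1, tensor) returns NotImplemented, so Python falls
        # back to this method; mypy now infers `Tensor` for `1 - a`.
        return Tensor()

    def __pow__(self, other: "int | float | Tensor") -> "Tensor":
        return Tensor()

a = Tensor()
assert isinstance(1 - a, Tensor)   # dispatches to __rsub__
assert isinstance(a ** 2, Tensor)  # dispatches to __pow__
```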

@pytorch-bot
pytorch-bot bot commented Jun 11, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/103394

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 4d11e4f:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@lkct lkct marked this pull request as draft June 11, 2023 17:48
@lkct lkct marked this pull request as ready for review June 11, 2023 20:45
@@ -70,7 +70,7 @@ def philox_rand_offset(
     device_property = torch.cuda.get_device_properties(torch.cuda.current_device())
     blocks_per_sm = device_property.max_threads_per_multi_processor // block_size
     grid_size = (numel + block_size - 1) // block_size
-    grid_size = min(grid_size, device_property.multi_processor_count * blocks_per_sm)
+    grid_size = min(grid_size, device_property.multi_processor_count * blocks_per_sm)  # type: ignore[call-overload]
Contributor Author
@lkct lkct Jun 11, 2023


I think this is out of the scope of this PR as I'm not sure why we use Tensor (instead of int) for grid_size here.

The error is that there is no matching overload for min(Tensor, int). This is because Tensor.__lt__() returns Tensor, which is incompatible with typeshed's annotation on min, which requires:
https://github.com/python/typeshed/blob/052d2b9f3a9afaa56b86f0c84bc9d8769458d96a/stdlib/_typeshed/__init__.pyi#L53-L54

class SupportsDunderLT(Protocol[_T_contra]):
    def __lt__(self, __other: _T_contra) -> bool: ...

This requirement was introduced in python/typeshed#7093, which demands that __lt__ return bool rather than any other bool-compatible type.
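A minimal runtime illustration of this mismatch, using a hypothetical `FakeTensor` class: at runtime, `min()` only needs the result of `__lt__` to be truthy or falsy, so a tensor-like `__lt__` works fine dynamically even though typeshed's `SupportsDunderLT` protocol rejects it statically:

```python
# Hypothetical stand-in for a tensor whose comparisons return tensors,
# not bools (as Tensor.__lt__ does in PyTorch).
class FakeTensor:
    def __init__(self, value: int) -> None:
        self.value = value

    def __lt__(self, other: "FakeTensor") -> "FakeTensor":
        # Returns FakeTensor, not bool -> fails typeshed's SupportsDunderLT,
        # so mypy reports no overload for min() on such arguments.
        return FakeTensor(int(self.value < other.value))

    def __bool__(self) -> bool:
        # Runtime truthiness is all that min() actually needs.
        return bool(self.value)

# Works at runtime despite the static type error mypy would report:
result = min(FakeTensor(7), FakeTensor(3))
assert result.value == 3
```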

@lkct
Contributor Author
lkct commented Jun 11, 2023

@pytorchbot label "topic: not user facing"

...

__ipow__ = (  # noqa: F811
    _handle_torch_function_and_wrap_type_error_to_not_implemented(  # pass UFMT
Contributor


Why is # pass UFMT needed here but not on line 879?

Suggested change:
-    _handle_torch_function_and_wrap_type_error_to_not_implemented(  # pass UFMT
+    _handle_torch_function_and_wrap_type_error_to_not_implemented(

Contributor Author


Otherwise, UFMT moves the # noqa: F811 onto line 888, whereas flake8 and ruff need the noqa on line 887.
So the comment is needed to pass UFMT.
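A minimal sketch of the pattern under discussion (with a hypothetical `_wrap` standing in for `_handle_torch_function_and_wrap_type_error_to_not_implemented`): the dunder is first defined with `def` and then reassigned at class scope, which triggers flake8's F811 (redefinition) unless a `noqa` stays on the reassignment line:

```python
def _wrap(f):
    """Hypothetical stand-in for the long wrapper-decorator name."""
    return f

class T:
    def __ipow__(self, other):
        """First definition (e.g. for documentation or typing purposes)."""
        return NotImplemented

    # Reassignment shadows the def above, so flake8/ruff flag F811 here
    # unless the noqa sits on this exact line.
    __ipow__ = (  # noqa: F811
        _wrap(
            lambda self, other: NotImplemented
        )
    )

assert T().__ipow__(T()) is NotImplemented
```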

Contributor Author


Another option is to change _handle_torch_function_and_wrap_type_error_to_not_implemented to a shorter name. Any suggestions?

Contributor Author


Why is # pass UFMT needed here but not on line 879?

Sorry, it seems I didn't directly answer this question. The two lines differ in that line 889 has a # type: ignore[arg-type] but line 880 does not. Therefore the comment is added on line 888 but not on line 879.

However, I tried a little more and found that we can remove the type: ignore after #103376 lands.

@malfet malfet added the release notes: python_frontend, topic: improvements, and topic: bug fixes labels and removed the topic: not user facing label Jun 12, 2023
@malfet
Contributor
malfet commented Jun 12, 2023

@lkct if the typing changes are not really user facing, why make them at all?

@lkct
Contributor Author
lkct commented Jun 12, 2023

@lkct if the typing changes are not really user facing, why make them at all?

Sorry, I'm just trying to follow previous examples, e.g. #101464

@lkct lkct requested a review from malfet June 12, 2023 21:15
@lkct
Contributor Author
lkct commented Jun 13, 2023

@malfet
Firstly, I'd like to ask whether this PR also needs tests, as in #103376? If so, let's resolve the problems in that PR first.

Secondly, I added some comments to your review. It turns out that the two PRs are not fully decoupled, and things will be easier if that one lands first.

Finally, based on the above, I've decided to mark this PR as draft for now and reactivate it after #103376 lands.
Sorry for the trouble!

@lkct lkct marked this pull request as draft June 13, 2023 00:01
@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Aug 12, 2023
@github-actions github-actions bot closed this Sep 11, 2023
Labels: open source, release notes: python_frontend, Stale, topic: bug fixes, topic: improvements
Linked issue: Typing missing on arithmetic ops on Tensor (#103393)
3 participants