Add typing on Tensor dunder methods for binary arithmetic ops #103394
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/103394
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit 4d11e4f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
inplace ops should return `Self`
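A minimal sketch of what that suggestion means (hypothetical class and signature, not the code in this PR): annotating in-place dunders as returning `Self` keeps the static type precise for subclasses.

```python
from typing_extensions import Self

class MyTensor:
    def __iadd__(self, other: "MyTensor | int | float") -> Self:
        # An in-place op mutates and returns the same object, so Self is the
        # most precise return type; a subclass instance stays typed as the subclass.
        return self
```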
@@ -70,7 +70,7 @@ def philox_rand_offset(
     device_property = torch.cuda.get_device_properties(torch.cuda.current_device())
     blocks_per_sm = device_property.max_threads_per_multi_processor // block_size
     grid_size = (numel + block_size - 1) // block_size
-    grid_size = min(grid_size, device_property.multi_processor_count * blocks_per_sm)
+    grid_size = min(grid_size, device_property.multi_processor_count * blocks_per_sm)  # type: ignore[call-overload]
I think this is out of the scope of this PR, as I'm not sure why we use `Tensor` (instead of `int`) for `grid_size` here.

The error is that there is no overload of `min` accepting `(Tensor, int)`. This is because `Tensor.__lt__()` returns `Tensor`, which is incompatible with typeshed's annotation on `min`, which requires:

https://github.com/python/typeshed/blob/052d2b9f3a9afaa56b86f0c84bc9d8769458d96a/stdlib/_typeshed/__init__.pyi#L53-L54

class SupportsDunderLT(Protocol[_T_contra]):
    def __lt__(self, __other: _T_contra) -> bool: ...

This requirement was introduced in python/typeshed#7093, which requires that `__lt__` return `bool` rather than any other bool-compatible type.
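A self-contained sketch (my own illustration, not code from this PR or from typeshed beyond the protocol quoted above) of how a `Tensor`-style `__lt__` fails that protocol under mypy:

```python
from typing import Protocol, TypeVar

_T_contra = TypeVar("_T_contra", contravariant=True)

class SupportsDunderLT(Protocol[_T_contra]):
    def __lt__(self, __other: _T_contra) -> bool: ...

class FakeTensor:
    # Like torch.Tensor, comparison returns another tensor-like object
    # (an element-wise result) rather than a plain bool.
    def __lt__(self, other: object) -> "FakeTensor":
        return self

# mypy error: __lt__ returns "FakeTensor", which is not compatible with "bool",
# so FakeTensor does not satisfy the protocol; min(FakeTensor(), 0) fails the
# same structural check against min()'s overloads.
t: SupportsDunderLT[object] = FakeTensor()
```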
@pytorchbot label "topic: not user facing"
...
__ipow__ = (  # noqa: F811
    _handle_torch_function_and_wrap_type_error_to_not_implemented(  # pass UFMT
Why is `# pass UFMT` needed here but not on line 879?
Suggested change:
-    _handle_torch_function_and_wrap_type_error_to_not_implemented(  # pass UFMT
+    _handle_torch_function_and_wrap_type_error_to_not_implemented(
Otherwise, UFMT requires the `# noqa: F811` to be moved onto line 888. However, FLAKE8 and RUFF need this noqa on line 887. So the comment is needed to pass UFMT.
Another option is to change `_handle_torch_function_and_wrap_type_error_to_not_implemented` to a shorter name. Any suggestions?
> Why is `# pass UFMT` needed here but not on line 879?
Sorry, it seems I didn't fully answer this question. The two places differ in that line 889 has a `# type: ignore[arg-type]` but line 880 does not; that is why the comment is added on line 888 but not on 879. However, I tried a little more and found out that we can remove the `type: ignore` after #103376 is landed.
@lkct if typing changes are not really user facing, why make them at all?
@malfet Secondly, I added some comments to your review. It turns out that the two PRs are not fully decoupled, and things will be easier if that one lands first. Finally, based on the above, I've decided to mark this PR as a draft for now and reactivate it after #103376 lands.
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
Fixes #103393
Example
Previously:
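The snippet that originally followed "Previously:" in the PR description is not preserved in this extract. As a hedged reconstruction (assuming the problem fixed here is that the arithmetic dunders defined in `torch/_tensor.py` carried no annotations), a type checker would previously have fallen back to `Any`:

```python
import torch
from typing_extensions import reveal_type

x = torch.ones(3)
# Illustrative probe, not the PR's exact example: with the dunder unannotated,
# mypy reports the result as Any.
reveal_type(x.__rsub__(1))  # mypy: Revealed type is "Any"
```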
Now:
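Again a hedged sketch rather than the PR's own snippet: with the annotations added in this PR, the binary ops should be reported as returning `Tensor`, and (per the review note above) the in-place variants as returning `Self`:

```python
import torch
from typing_extensions import reveal_type

x = torch.ones(3)
reveal_type(x.__rsub__(1))  # expected: "Tensor" instead of "Any"
reveal_type(x.__ipow__(2))  # expected: the type of x itself (Self), not Any
```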