# Make `fx.node.map_arg()` and `.map_aggregate()` generic #146248
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146248
Note: Links to docs will display an error until the docs builds have been completed.
⏳ 5 Pending, 3 Unrelated Failures as of commit d0d7327 with merge base f95bdf5.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch/fx/node.py
Outdated
    @@ -915,10 +916,10 @@ def map_aggregate(a: Argument, fn: Callable[[Argument], Argument]) -> Argument:
                 dict.__setitem__(rv, k, map_aggregate(v, fn))
             return rv
         elif isinstance(a, slice):
    -        return slice(
    +        return slice(  # type: ignore[return-value]
Will this work, or will we need to do a `typing.overload` to handle it properly?
Okay, `slice` is now generic, but it may require a typing_extensions / typeshed / mypy update: python/typeshed#8647, python/typeshed#11637
@Skylion007 Well, I'm not entirely sure this will work, but it seems to represent what is actually going on, gives better typing at the point of use, and running some tests locally seemed to work, FWIW.
(This actually emerged from tweaking some typing elsewhere...)
I'm seeing some tests fail due to backwards-compatibility checks, which need human approval to tweak. I'll see how the whole thing shakes out before I actually contact a human and ask for lenience... 😬
As for generic slice - well, it seems very hard to deduce what type is actually being sliced on line 919 there.
Actually, I was referring to the typing of `Argument`: there was a comment that the `slice` in that TypeAlias Union should be typed, but it wasn't, for lack of type-checker support. That should be fixed now; it's `slice[Argument, Argument, Argument]` :) Link for that comment is here:
torch/fx/node.py, line 48 in f97307f:
    Argument = Optional[
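For illustration, a minimal sketch of how a parameterized `slice` can sit inside such a recursive alias. The member list here is abridged and hypothetical, not the real `Argument` definition, and the `slice[...]` subscript assumes a typeshed/mypy recent enough to treat `slice` as generic:

```python
from typing import Optional, Union

# Abridged, hypothetical sketch of a recursive TypeAlias with a typed
# slice member; the real Argument in torch/fx/node.py has many more
# members. The string form defers evaluation, so this runs on any
# Python >= 3.9 while letting a new enough type checker see the
# element types instead of a bare `slice`.
Argument = Optional[
    Union[
        tuple["Argument", ...],
        list["Argument"],
        dict[str, "Argument"],
        "slice[Argument, Argument, Argument]",  # previously just `slice`
        str,
        int,
    ]
]
```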
@Skylion007: of course, that makes sense! I've changed the pull request to be sleeker and also found a neat fix for the backward compatibility test; if it passes the tests I'll take it out of draft.
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
## What's the problem?

The popular `fx.node.map_arg()` and `fx.node.map_aggregate()` apply operations recursively on `dict`s, `tuple`s, `list`s, etc., and return a new collection of the same type.

Unfortunately, their base input type is `Argument`, which is very unspecific indeed: most type information is just thrown away at the call site of either of these functions, as far as the type checker goes.

As `torch` moves to a more typed code base, this would force innocent, unsuspecting developers to add logically unnecessary casts or `# type: ignore` statements.
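A hypothetical snippet (the visitor name is invented for illustration) showing that pain at a call site before the change:

```python
from typing import cast

from torch.fx.node import Argument, Node, map_arg

def retarget(n: Node) -> Node:
    """Hypothetical visitor: maps each Node to a replacement Node."""
    return n

args: tuple[Argument, ...] = ()
# Before this PR, map_arg's return type was the catch-all Argument, so
# the tuple-ness of `args` was erased and a cast (or `# type: ignore`)
# was needed to recover the concrete type:
new_args = cast(tuple[Argument, ...], map_arg(args, retarget))
```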
## What's the solution?

Making these two `node.map_*` functions generic on the first argument and return type means that type information is preserved for the type checker. (The signature of the other parameter, the function that visits the nodes and subnodes, has not changed, nor should it.)
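Roughly, the shape of the new signatures looks like this (a sketch, not the verbatim diff; `ArgumentT` is an illustrative TypeVar name):

```python
from typing import Callable, TypeVar

from torch.fx.node import Argument, Node

# A TypeVar bound to Argument: whatever concrete subtype goes in
# (tuple, list, dict, Node, ...) is what the checker sees coming out.
ArgumentT = TypeVar("ArgumentT", bound=Argument)

def map_arg(a: ArgumentT, fn: Callable[[Node], Argument]) -> ArgumentT:
    ...

def map_aggregate(a: ArgumentT, fn: Callable[[Argument], Argument]) -> ArgumentT:
    ...
```

With this, the `cast` in the earlier snippet becomes unnecessary: `map_arg(args, retarget)` is inferred as `tuple[Argument, ...]`.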
## Won't it break everything?

It doesn't break the type checker - one place needed an extra hint.

There have been code breakages: one resolved, at least one new one... we'll see!
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @malfet @xuzhao9 @gramster @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi