[dynamo] Use the new `get_unique_name_wrt` helper when applicable #146950
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146950
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit bae3c97 with merge base 6061664.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Review comment on the changed lines:

    -    name = candidate_name
    -    break
    +    name = get_unique_name_wrt(name, self.input_name_to_proxy)

is there a way to get this to always append a number, like before? I'm not sure that it matters, but it's nice for the invariant to be "the name is foobar_i" where i is a number
Yeah lemme try that, otherwise there's too many tests to update lol.
Sad -- either way we'll need to update a bunch of tests, so I added a `requires_suffix=False` flag to preserve the existing behavior.
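For context, here is a minimal sketch of what a unique-name helper with a `requires_suffix` switch could look like. It is illustrative only: the function name, signature, and behavior are assumptions based on the discussion above, not the actual `torch._dynamo` implementation of `get_unique_name_wrt`.

```python
# Minimal sketch of a unique-name helper with a `requires_suffix` switch.
# Illustrative only; signature and behavior are assumptions, not the
# actual torch._dynamo implementation of get_unique_name_wrt.
import itertools


def get_unique_name_wrt_sketch(prefix, *containers, requires_suffix=False):
    """Return `prefix`, or `prefix_i`, such that the result is not a
    member (or key) of any of the given containers."""
    if not requires_suffix and all(prefix not in c for c in containers):
        return prefix
    # Always fall back to the numbered form, preserving the reviewer's
    # preferred invariant that generated names look like "foobar_i".
    for i in itertools.count():
        candidate = f"{prefix}_{i}"
        if all(candidate not in c for c in containers):
            return candidate


# Example: "stack" is already taken, so the helper returns "stack_0";
# with requires_suffix=True a numbered name is produced even when the
# bare prefix is free.
taken = {"stack": object()}
assert get_unique_name_wrt_sketch("stack", taken) == "stack_0"
assert get_unique_name_wrt_sketch("x", taken, requires_suffix=True) == "x_0"
```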
Starting merge as part of PR stack under #147572
Referenced commit message (#147571):

As title. Also see:
1. the new test `test_nonstrict_trace_on_method` for an example;
2. the newly added comments for why we need special treatment on methods.

Pull Request resolved: #147571
Approved by: https://github.com/zou3519
ghstack dependencies: #146714, #146367, #146950
Referenced commit message (…`-ed function, #147572):

As title. Without this patch we get the following error:

Tweaking the `allow_non_fake_inputs` flag on tensor mode doesn't quite work for AOTAutograd, which also needs to fake-tensor-propagate the `nonstrict_trace`-ed function, but that's _after_ Dynamo has handled the `nonstrict_trace` processing and put the `flat_apply(...)` node into the graph. So we can't easily enable the `allow_non_fake_inputs` flag on the current fake mode just for the time when AOTAutograd processes a `flat_apply` node from Dynamo's `nonstrict_trace` handling.

After discussing with zou3519, I decided to add a global `FakeTensorTLS` that contains an `allow_non_fake_inputs_override` flag, and to patch the `nonstrict_trace`-ed function to temporarily tweak this flag during its execution.

Pull Request resolved: #147572
Approved by: https://github.com/zou3519
ghstack dependencies: #146714, #146367, #146950, #147571
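As a rough illustration of the thread-local override pattern the commit message describes, here is a minimal sketch. The names mirror the message (`FakeTensorTLS`, `allow_non_fake_inputs_override`), but the code is an assumption for illustration, not the actual FakeTensor or AOTAutograd implementation.

```python
# Minimal sketch of a thread-local override flag plus a context manager
# that temporarily flips it. Illustrative only; not the actual
# FakeTensor / AOTAutograd code.
import contextlib
import threading
from typing import Optional


class FakeTensorTLS(threading.local):
    # None means "no override": defer to the fake mode's own flag.
    allow_non_fake_inputs_override: Optional[bool] = None


fake_tensor_tls = FakeTensorTLS()


@contextlib.contextmanager
def allow_non_fake_inputs_temporarily():
    """Wrap a nonstrict_trace-ed function so real (non-fake) tensor
    inputs are tolerated while it runs under a fake mode."""
    prev = fake_tensor_tls.allow_non_fake_inputs_override
    fake_tensor_tls.allow_non_fake_inputs_override = True
    try:
        yield
    finally:
        fake_tensor_tls.allow_non_fake_inputs_override = prev


def allow_non_fake_inputs(mode_default: bool) -> bool:
    """What a fake mode would consult: the TLS override wins if set."""
    override = fake_tensor_tls.allow_non_fake_inputs_override
    return mode_default if override is None else override


# Example: the per-mode default says "no", but inside the context the
# thread-local override takes precedence.
assert allow_non_fake_inputs(False) is False
with allow_non_fake_inputs_temporarily():
    assert allow_non_fake_inputs(False) is True
assert allow_non_fake_inputs(False) is False
```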
Commit message for #146950:

This patch removes some duplicated name generation logic in Dynamo.

Pull Request resolved: #146950
Approved by: https://github.com/zou3519
ghstack dependencies: #146714, #146367
Stack from ghstack (oldest at bottom):
- `nonstrict_trace`-ed function #147572
- `nonstrict_trace` on class method #147571
- `get_unique_name_wrt` helper when applicable #146950 (this PR)
- `nonstrict_trace` #146367
- `flat_apply` #146714

This patch removes some duplicated name generation logic in Dynamo.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
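To make the consolidation concrete, here is a small, self-contained before/after demo of the call-site pattern, reconstructed from the diff excerpt in the review thread above. All names, signatures, and the exact loop shape are assumptions, not the actual Dynamo code.

```python
# Before/after demo of the kind of duplication this PR removes.
# Illustrative only: names and loop shape are assumptions.
import itertools


def unique_name_inline(prefix, taken):
    # "Before": each call site carried its own suffix-search loop.
    for i in itertools.count():
        candidate_name = f"{prefix}_{i}"
        if candidate_name not in taken:
            return candidate_name


def unique_name_helper(prefix, taken, requires_suffix=True):
    # "After": the same logic lives in one shared helper that every
    # call site invokes (mirroring get_unique_name_wrt in spirit).
    if not requires_suffix and prefix not in taken:
        return prefix
    return unique_name_inline(prefix, taken)


if __name__ == "__main__":
    taken = {"stack_0", "stack_1"}
    assert unique_name_inline("stack", taken) == "stack_2"
    assert unique_name_helper("stack", taken) == "stack_2"
    print("ok")
```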