[dynamo][fx] Don't emit `call_function` node to construct `NamedTuple` instances for Dynamo and `make_fx` tracing #147145
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/147145
Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 7be4215 with merge base f95bdf5:

- NEW FAILURE - the following job has failed.
- BROKEN TRUNK - the following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
[dynamo][fx] Don't emit `call_function` node to construct `NamedTuple` instances for Dynamo and `make_fx` tracing

As title. This effectively undoes #49553, for Dynamo and `make_fx` tracing only (for `symbolic_trace` backward compatibility reasons). It helps enforce the invariant that Dynamo and `make_fx` graphs would always contain tensor ops -- so rather than having these `call_function` nodes to construct `NamedTuple`, we inline them directly as instance arguments to the user nodes.

ghstack-source-id: d87a554
Pull Request resolved: #147145
torch/fx/proxy.py (Outdated)

```python
# Decides whether we represent `NamedTuple` instances as `call_function`
# nodes in the IR, or inline them directly as instance arguments to the user
# nodes.
_proxy_named_tuple: bool

def __init__(self):
    self._proxy_named_tuple = True
```
@zou3519 not sure what's the best way to do it here, so this is just to get the proof-of-concept working.
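For context, here is a hypothetical sketch (not the PR's verbatim code) of how a `torch.fx.Tracer` subclass might consult this flag in `create_arg`; the NamedTuple detection via `_fields` follows the existing `torch.fx` convention, but the class name and exact branch are illustrative:

```python
import torch.fx


class NamedTupleAwareTracer(torch.fx.Tracer):  # hypothetical name
    def __init__(self):
        super().__init__()
        # Mirrors the flag added in this diff.
        self._proxy_named_tuple = True

    def create_arg(self, a):
        # NamedTuples are plain tuples carrying a `_fields` attribute.
        if isinstance(a, tuple) and hasattr(a, "_fields"):
            args = tuple(self.create_arg(elem) for elem in a)
            if self._proxy_named_tuple:
                # Old behavior (#49553): materialize the construction as a
                # `call_function` node in the graph.
                return self.create_node("call_function", type(a), args, {})
            # New behavior for Dynamo/make_fx tracing: keep the instance
            # itself, inlined directly into the args of its user nodes.
            return type(a)(*args)
        return super().create_arg(a)
```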
""" | ||
Given the output arguments, generates the return statement of the FX function. | ||
Note: The returned statement should not be indented. | ||
""" | ||
return f"return {repr(output_args)}" | ||
return f"return {arg_repr(output_args)}" |
All the `arg_repr` plumbing is to get this to apply.
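For illustration, a minimal sketch of what such an `arg_repr` helper could do differently from plain `repr` -- emit an explicit constructor call for NamedTuple instances that now live inline inside node args, so the generated Python rebuilds them at runtime (the PR's actual implementation may differ):

```python
def arg_repr(arg):
    # NamedTuple: emit a constructor call (e.g. `Point(x=add, y=mul)`)
    # instead of repr(), so the generated code reconstructs the instance.
    if isinstance(arg, tuple) and hasattr(arg, "_fields"):
        fields = ", ".join(f"{f}={arg_repr(v)}" for f, v in zip(arg._fields, arg))
        return f"{type(arg).__name__}({fields})"
    # Plain containers recurse elementwise.
    if isinstance(arg, list):
        return "[" + ", ".join(arg_repr(a) for a in arg) + "]"
    if isinstance(arg, tuple):
        inner = ", ".join(arg_repr(a) for a in arg)
        return f"({inner},)" if len(arg) == 1 else f"({inner})"
    # FX Nodes already repr() to their generated variable names.
    return repr(arg)
```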
Abandoning this stack, see #147530.
Stack from ghstack (oldest at bottom):
- `get_unique_name_wrt` helper when applicable #146950
- `nonstrict_trace` #146367
- `flat_apply` #146714
- `init=False` #146713
- [dynamo][fx] Don't emit `call_function` node to construct dataclass instances for Dynamo and `make_fx` tracing #147152
- [dynamo][fx] Don't emit `call_function` node to construct `NamedTuple` instances for Dynamo and `make_fx` tracing #147145 (this PR)

As title. This effectively undoes #49553, for Dynamo and `make_fx` tracing only (for `symbolic_trace` backward compatibility reasons).

It helps enforce the invariant that Dynamo and `make_fx` graphs would always contain tensor ops -- so rather than having these `call_function` nodes to construct `NamedTuple`, we inline them directly as instance arguments to the user nodes.
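To make the change concrete, here is a minimal sketch (the `Point` and `fn` names are made up for illustration) of a trace where a NamedTuple flows to the graph output:

```python
from typing import NamedTuple

import torch
from torch.fx.experimental.proxy_tensor import make_fx


class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor


def fn(a):
    # The returned NamedTuple previously showed up in the traced graph as a
    # `call_function` node with `target=Point`.
    return Point(a + 1, a * 2)


gm = make_fx(fn)(torch.randn(3))
print(gm.graph)
# Before this PR (roughly):
#   %point = call_function[target=Point](args = (%add, %mul))
#   return point
# After: the `Point` instance is inlined into the output node's args,
# so the graph itself contains only tensor ops.
```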
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames