Fix evaluate_expr to include suppress_guards_tls in cache key #152661
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152661

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 unrelated failures) As of commit 77f7f4d with merge base f393ee5:

- BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
- UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
…key"

ShapeEnv.evaluate_expr() behaves differently based on the (TLS) global "suppress_guards", so its cache key needs to include that value. This came up because #152662 triggered it in the test `test/dynamo/test_exc.py::ExcTests::test_trigger_bisect_on_error`; fixing this caused that test to work again.

cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv

[ghstack-poisoned]
)

@lru_cache(256)
@record_shapeenv_event(save_tracked_fakes=True, name="evaluate_expr")
I mean, is it bad if it stayed `_inner_evaluate_expr`? What's the motivation for the renaming?
"""

# Add extra state that evaluate_expr() depends on.
suppress_guards_tls = ShapeEnv._suppress_guards_tls()
So the outermost API does take the shape env as input; I wonder if we should have just made this an instance field instead of a global state field.
`suppress_guards` and `suppress_guards_stack`:
@contextmanager
def _suppress_guards(shape_env: ShapeEnv) -> Iterator[None]:
shape_env._suppress_guards_enter()
try:
yield
finally:
shape_env._suppress_guards_exit()
@record_shapeenv_event()
def _suppress_guards_enter(self) -> None:
if not hasattr(TLS, "suppress_guards_stack"):
TLS.suppress_guards_stack = []
old = self._suppress_guards_tls()
TLS.suppress_guards_stack.append(old)
TLS.suppress_guards = True
@record_shapeenv_event()
def _suppress_guards_exit(self) -> None:
old = (
TLS.suppress_guards_stack.pop()
if len(TLS.suppress_guards_stack) > 0
else False
)
TLS.suppress_guards = old
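The enter/exit pair above implements nestable suppression: each enter pushes the previous flag value onto a per-thread stack, and each exit restores it. A toy reproduction of that push/pop pattern (using a plain `threading.local` holder; names are illustrative) shows that leaving an inner context restores the outer value rather than unconditionally clearing the flag:

```python
import threading
from contextlib import contextmanager

TLS = threading.local()

def suppressed() -> bool:
    return getattr(TLS, "suppress_guards", False)

@contextmanager
def suppress_guards():
    stack = getattr(TLS, "suppress_guards_stack", None)
    if stack is None:
        stack = TLS.suppress_guards_stack = []
    stack.append(suppressed())   # save the previous value
    TLS.suppress_guards = True
    try:
        yield
    finally:
        # Restore whatever was in effect before this context was entered.
        TLS.suppress_guards = stack.pop() if stack else False

assert not suppressed()
with suppress_guards():
    assert suppressed()
    with suppress_guards():
        assert suppressed()
    # Inner exit restored the outer True, not the global default False.
    assert suppressed()
assert not suppressed()
```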
This is used in only one place:
# Needed to make sure we don't accidentally specialize any symbols
assert self.fake_tensor_mode.shape_env is not None
env = self.fake_tensor_mode.shape_env
self.stack.enter_context(
torch.fx.experimental.symbolic_shapes._suppress_guards(env)
)
return (
str(CompileContext.current_compile_id()),
inputs,
sizes,
scalars,
)
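The snippet above uses `ExitStack.enter_context` to enter the suppression context immediately and defer its teardown until the stack is closed. A minimal sketch of that pattern (with an illustrative `guard` context manager, not PyTorch's):

```python
from contextlib import ExitStack, contextmanager

events = []

@contextmanager
def guard():
    events.append("enter")
    try:
        yield
    finally:
        events.append("exit")

stack = ExitStack()
stack.enter_context(guard())   # __enter__ runs now
events.append("work")          # guard is still active here
stack.close()                  # the deferred __exit__ runs here
assert events == ["enter", "work", "exit"]
```

This is why the suppression stays in effect for the whole compilation, not just the lexical scope where it was entered.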
Making it an instance var is an orthogonal issue - you'd still need to tell the cache that the variable should participate in the cache key (lru_cache doesn't automatically include instance vars in the cache key)
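This point is easy to demonstrate: `functools.lru_cache` keys on the call arguments (including `self`, by hash/equality), not on instance attributes, so mutating a field does not invalidate cached results. A small illustration with hypothetical names:

```python
from functools import lru_cache

class Env:
    def __init__(self) -> None:
        self.flag = False

    @lru_cache(maxsize=None)
    def evaluate(self, expr: str) -> str:
        # The result depends on self.flag, but self.flag is not in the key.
        return f"{expr}:{self.flag}"

env = Env()
assert env.evaluate("x") == "x:False"
env.flag = True
# Stale: the cache key (env, "x") is unchanged, so the old result returns.
assert env.evaluate("x") == "x:False"
```

So even as an instance field, the flag would still have to be passed through as an argument (as this PR does with the TLS value) to participate in the cache key.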
test_modules_can_be_imported is broken on trunk
Merge started: your change will be merged while ignoring the following 5 checks: pull / linux-jammy-py3.9-gcc11-mobile-lightweight-dispatch-build / build, pull / linux-jammy-py3.9-gcc11 / test (backwards_compat, 1, 1, ephemeral.linux.2xlarge), inductor / cuda12.6-py3.10-gcc9-sm86 / test (inductor_torchbench, 1, 2, lf.ephemeral.linux.g5.4xlarge.nvidia.gpu), inductor / cuda12.6-py3.10-gcc9-sm86 / test (inductor_torchbench, 2, 2, lf.ephemeral.linux.g5.4xlarge.nvidia.gpu), inductor / unit-test / cuda12.6-py3.10-gcc9-sm86 / test (inductor_cpp_wrapper, 1, 2, ephemeral.linux.g5.4xlarge.nvidia.gpu). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: Command
Details for Dev Infra team: raised by workflow job
@pytorchbot merge
@pytorchbot merge
@pytorchbot help
❌ 🤖 pytorchbot command failed:
Try
@pytorchbot merge
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
ShapeEnv.evaluate_expr() behaves differently based on the (TLS) global "suppress_guards", so its cache key needs to include that value.

This came up because #152662 triggered it in the test
test/dynamo/test_exc.py::ExcTests::test_trigger_bisect_on_error
- fixing this caused that test to work again.

Stack from ghstack (oldest at bottom):
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames