FakeTensorMode dispatch shouldn't include bypass in exception context · pytorch/pytorch@b4bcc38 · GitHub

Commit b4bcc38

FakeTensorMode dispatch shouldn't include bypass in exception context
In the FakeTensor cache, when we get a bypass exception while computing the cache key (call this exc_1), we need to dispatch to the original operation. It's possible for that dispatch to raise its own exception, which we want to bubble up to the caller (call this exc_2). If we dispatch directly from within the handler for exc_1, then exc_2 will have a `__context__` of exc_1, which can cause deviations between cached and non-cached behavior, so we need to be careful about where we call the dispatch.

Testing: TestAOTExport.test_aot_export_predispatch_outdtype fails before this change and passes after.

[ghstack-poisoned]
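The behavior being fixed is Python's implicit exception chaining: an exception raised while another is being handled gets that first exception as its `__context__`. A minimal sketch of the difference, with hypothetical stand-ins (`BypassDispatchCache`, `dispatch`, `cached_dispatch_*`) for the real PyTorch names:

```python
class BypassDispatchCache(Exception):
    """Hypothetical stand-in for torch's _BypassDispatchCache."""

def dispatch(fail):
    # Stand-in for _dispatch_impl(); may raise its own exception.
    if fail:
        raise RuntimeError("error from the real dispatch")
    return "result"

def cached_dispatch_bad(fail):
    # Dispatching inside the handler: any exception raised by dispatch()
    # implicitly chains the bypass exception as its __context__.
    try:
        raise BypassDispatchCache("unhashable arg")
    except BypassDispatchCache:
        return dispatch(fail)

def cached_dispatch_good(fail):
    # Dispatching after the handler: the bypass exception is fully
    # handled, so a later exception has no __context__ from caching.
    key = None
    try:
        raise BypassDispatchCache("unhashable arg")
    except BypassDispatchCache:
        pass  # record cache-bypass stats here
    if key is None:
        return dispatch(fail)

try:
    cached_dispatch_bad(fail=True)
except RuntimeError as e:
    assert isinstance(e.__context__, BypassDispatchCache)

try:
    cached_dispatch_good(fail=True)
except RuntimeError as e:
    assert e.__context__ is None
```

This is why the patch below moves the `_dispatch_impl` call out of the `except` block and behind an `if key is None:` check instead.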
1 parent a6a794e commit b4bcc38

File tree

1 file changed: +3 −7 lines changed


torch/_subclasses/fake_tensor.py

Lines changed: 3 additions & 7 deletions
@@ -1436,13 +1436,9 @@ def _cached_dispatch_impl(
             FakeTensorMode.cache_bypasses[e.reason] += 1

         if key is None:
-            # In theory this could be done within the above `except` but then
-            # the exception generated within _dispatch_impl() will have a
-            # context related to the _BypassDispatchCache - which we don't
-            # really want.
-            #
-            # (Without this TestAOTExport.test_aot_export_predispatch_outdtype
-            # fails.)
+            # Do this dispatch outside the above except handler so if it
+            # generates its own exception there won't be a __context__ caused by
+            # the caching mechanism.
             return self._dispatch_impl(func, types, args, kwargs)

         assert state is not None

0 commit comments