[dynamic shapes] rewrite slice_forward decomp with guard_or_false #150474
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/150474
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures) As of commit 32c5c36 with merge base b70d105.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -760,6 +760,25 @@ def test_avoid_unbacked_substitution(self):
        self.assertTrue(expect_true(i0 == 10 - i1))
        self.assertExpectedInline(str(i0), """u0""")

    def test_unbacked_slice(self):
        @torch.compile(backend="eager", fullgraph=True)
Does testing with inductor work now, or is it still broken? I have fixed a couple of issues.
        @torch.compile(backend="eager", fullgraph=True)
        def fn(xs):
            u0, u1, u2 = xs.tolist()
            x = torch.empty(u0)
Can we add more testing? I can think of the following cases (a sketch of what such tests might look like follows the list):
(1) sizes[dim] = u0, start = u1, end = u2
(2) sizes[dim] = u0, start = u0 - 1, end = u0 - 4
(3) sizes[dim] = 100, start = u0, end = u0 + 19
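A minimal sketch of what a test for case (1) might look like, assuming capture_scalar_outputs is enabled and explicit torch._checks state the in-bounds assumptions (illustrative only, not the PR's actual test; cases (2) and (3) would follow the same pattern):

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True  # lets .tolist() produce unbacked ints

# Case (1): sizes[dim] = u0, start = u1, end = u2, all unbacked.
@torch.compile(backend="eager", fullgraph=True)
def fn(xs):
    u0, u1, u2 = xs.tolist()
    torch._check_is_size(u0)
    torch._check(u1 >= 0)   # assume start is non-negative
    torch._check(u2 >= u1)  # assume end >= start
    torch._check(u2 <= u0)  # assume end is in-bounds
    x = torch.empty(u0)
    return x[u1:u2]

out = fn(torch.tensor([10, 2, 7]))
assert out.size(0) == 5  # 7 - 2
```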
Add a test where end is None and start is unbacked; make sure the output is not always empty and that we won't hit torch._check(end_val >= start_val).
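For example, something along these lines (hypothetical, not the PR's actual test):

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True  # lets .item() produce an unbacked int

@torch.compile(backend="eager", fullgraph=True)
def fn(xs):
    u0 = xs.item()
    torch._check(u0 >= 0)  # assume the unbacked start is non-negative
    x = torch.empty(100)
    return x[u0:]          # end is None; output size should be 100 - u0, not 0

out = fn(torch.tensor(40))
assert out.size(0) == 60
```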
@@ -728,23 +728,37 @@ def slice_forward(
    start_val = start if start is not None else 0
    end_val = end if end is not None else sys.maxsize  # 2^63 - 1

    if guard_size_oblivious(start_val < 0):
    # negative indexing case
    if guard_or_false(start_val < 0):
Can we write this as below? The checks here verify different things under different conditions: one checks start_val when it is unbacked, and the other checks start_val + sizes[dim] when start_val is, for example, -10 and sizes[dim] is unbacked.

```python
if guard_or_false(start_val < 0):
    start_val += sizes[dim]
    # sizes[dim] is always non-negative
    torch._check(
        start_val >= 0,
        lambda: f"the offset start_val {start_val} is assumed to be non-negative; "
        "if the assumption is not true please add a torch._check()",
    )
else:
    torch._check(
        start_val >= 0,
        lambda: f"start_val {start_val} is assumed to be non-negative; "
        "if the assumption is not true please add a torch._check()",
    )
```
        start_val += sizes[dim]

    if guard_size_oblivious(end_val < 0):
    if guard_or_false(end_val < 0):
Do we need a torch._check on the assumption here?
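For what it's worth, a hedged guess at what such a check could look like, mirroring the start_val suggestion above and wrapped in a hypothetical standalone helper for illustration (not the PR's code):

```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false

def normalize_end(end_val, dim_size):
    # Hypothetical helper: resolve a possibly-negative end index against dim_size.
    if guard_or_false(end_val < 0):
        end_val += dim_size
        # dim_size is non-negative; assert at runtime in case the guard_or_false
        # assumption (end_val was negative) did not actually hold.
        torch._check(
            end_val >= 0,
            lambda: f"offset end_val {end_val} is assumed to be non-negative",
        )
    else:
        torch._check(
            end_val >= 0,
            lambda: f"end_val {end_val} is assumed to be non-negative",
        )
    return end_val
```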
Looking good overall, left some comments.
@cgufb has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@@ -728,23 +728,37 @@ def slice_forward(
    start_val = start if start is not None else 0
    end_val = end if end is not None else sys.maxsize  # 2^63 - 1

    if guard_size_oblivious(start_val < 0):
    # negative indexing case
We need more testing for when end is None. For example, if end is None and start is unbacked, what shall we do? We can assume start <= sys.maxsize (i.e. end_val >= start_val) and check on it; we should not always fail!
Uses guard_or_false in place of size-oblivious reasoning to assume, when not already known, that the start/end indices are in-bounds. Adds torch._checks for this, checking start_val >= 0, end_val <= sizes[dim], and end_val >= start_val, which helps guarantee that the output size at runtime matches the symbolic expression end_val - start_val. Without these checks the reported symbolic size might not match: e.g. if end_val < start_val, eager returns a size-0 tensor but the symbolic size is negative.
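A rough, hedged sketch of the resulting logic under those assumptions (simplified: it treats end=None as slicing to the end of the dimension rather than defaulting to sys.maxsize, and omits dim/step validation, so read it as an illustration of the guard_or_false plus torch._check pattern rather than the PR's exact code):

```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false

def slice_forward_sketch(self, dim=0, start=None, end=None, step=1):
    sizes = list(self.size())
    strides = list(self.stride())

    start_val = start if start is not None else 0
    # Simplification: treat end=None as "slice to the end of the dimension".
    end_val = end if end is not None else sizes[dim]

    # Negative indexing: these branches are taken only when start/end are known
    # (or, for unbacked values, assumed) to be negative.
    if guard_or_false(start_val < 0):
        start_val += sizes[dim]
    if guard_or_false(end_val < 0):
        end_val += sizes[dim]

    # Clamp an end that is known (or assumed) to run past the dimension.
    if guard_or_false(end_val > sizes[dim]):
        end_val = sizes[dim]

    # Assume in-bounds indices when not statically known, and assert the
    # assumptions at runtime so the traced output size end_val - start_val
    # matches what eager would produce.
    torch._check(start_val >= 0)
    torch._check(end_val <= sizes[dim])
    torch._check(end_val >= start_val)

    storage_offset = self.storage_offset() + start_val * strides[dim]
    sizes[dim] = (end_val - start_val + step - 1) // step
    strides[dim] *= step
    return self.as_strided(sizes, strides, storage_offset)
```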