[dynamic shapes] rewrite slice_forward decomp with guard_or_false by pianpwk · Pull Request #150474 · pytorch/pytorch · GitHub

[dynamic shapes] rewrite slice_forward decomp with guard_or_false #150474


Open
pianpwk wants to merge 2 commits into main

Conversation

@pianpwk (Contributor) commented Apr 1, 2025

Uses guard_or_false in place of size-oblivious reasoning to assume, when it isn't already known, that the start/end indices are in-bounds.

Adds torch._checks for this, checking start_val >= 0, end_val <= sizes[dim], and end_val >= start_val, which helps guarantee that the output size at runtime matches the symbolic expression end_val - start_val.

Without these checks the reported symbolic size might not match the runtime result, e.g. if end_val < start_val, eager returns a size-0 tensor but the symbolic size is negative.
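For illustration, a minimal sketch of the scheme described above. This is a paraphrase, not the exact diff in this PR: the final narrow() call is a simplification, step handling is omitted, and guard_or_false is the real helper from torch.fx.experimental.symbolic_shapes.

import sys

import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false


def slice_forward_sketch(self, dim=0, start=None, end=None, step=1):
    # Sketch only: step handling and storage-offset math are omitted.
    sizes = self.size()
    start_val = start if start is not None else 0
    end_val = end if end is not None else sys.maxsize  # 2^63 - 1

    # Only offset by sizes[dim] when the index is provably negative;
    # otherwise assume it is already non-negative.
    if guard_or_false(start_val < 0):
        start_val += sizes[dim]
    if guard_or_false(end_val < 0):
        end_val += sizes[dim]

    # Only clamp when provably out of range; otherwise assume in-bounds.
    if guard_or_false(start_val > sizes[dim]):
        start_val = sizes[dim]
    if guard_or_false(end_val > sizes[dim]):
        end_val = sizes[dim]
    if guard_or_false(end_val < start_val):
        end_val = start_val

    # Runtime assertions backing the assumptions above, so the symbolic
    # output size (end_val - start_val) matches the eager result.
    torch._check(start_val >= 0)
    torch._check(end_val <= sizes[dim])
    torch._check(end_val >= start_val)

    return self.narrow(dim, start_val, end_val - start_val)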

pytorch-bot bot commented Apr 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/150474

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit 32c5c36 with merge base b70d105:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pianpwk pianpwk marked this pull request as ready for review April 2, 2025 21:00
@@ -760,6 +760,25 @@ def test_avoid_unbacked_substitution(self):
        self.assertTrue(expect_true(i0 == 10 - i1))
        self.assertExpectedInline(str(i0), """u0""")

    def test_unbacked_slice(self):
        @torch.compile(backend="eager", fullgraph=True)
Contributor

does testing with inductor work now, or is it still broken? I have fixed a couple of issues.

        @torch.compile(backend="eager", fullgraph=True)
        def fn(xs):
            u0, u1, u2 = xs.tolist()
            x = torch.empty(u0)
@laithsakka (Contributor) May 14, 2025

can we add more testing? I can think of the following cases:

(1) sizes[dim] = u0, start = u1, end = u2

(2) sizes[dim] = u0, start = u0 - 1, end = u0 - 4

(3) sizes[dim] = 100, start = u0, end = u0 + 19
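For reference, the three cases might translate into a test roughly like the following. This is a hedged sketch, not code from this PR; the function name, inputs, and assertions are illustrative.

import torch

# needed so .tolist() produces unbacked symints instead of graph-breaking
torch._dynamo.config.capture_scalar_outputs = True


@torch.compile(backend="eager", fullgraph=True)
def fn(xs, t):
    u0, u1, u2 = xs.tolist()
    x = torch.empty(u0)
    a = x[u1:u2]          # (1) sizes[dim] = u0, start = u1, end = u2
    b = x[u0 - 1:u0 - 4]  # (2) start = u0 - 1, end = u0 - 4 (end < start, empty slice)
    c = t[u0:u0 + 19]     # (3) sizes[dim] = 100, start = u0, end = u0 + 19
    return a, b, c


a, b, c = fn(torch.tensor([32, 4, 20]), torch.ones(100))
assert a.numel() == 16 and b.numel() == 0 and c.numel() == 19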

@laithsakka (Contributor) May 15, 2025

add a test where end is None and start is unbacked, and make sure the output is not always empty and that we won't hit torch._check(end_val >= start_val)
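A sketch of such a test (names and inputs are illustrative, not from this PR), assuming end=None falls back to sys.maxsize and gets clamped to sizes[dim] rather than tripping the check:

import torch

torch._dynamo.config.capture_scalar_outputs = True


@torch.compile(backend="eager", fullgraph=True)
def tail(x, idx):
    (u0,) = idx.tolist()
    return x[u0:]  # end is None, start is unbacked


out = tail(torch.arange(10), torch.tensor([3]))
assert out.numel() == 7  # output is not always empty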

@@ -728,23 +728,37 @@ def slice_forward(
    start_val = start if start is not None else 0
    end_val = end if end is not None else sys.maxsize  # 2^63 - 1

-    if guard_size_oblivious(start_val < 0):
+    # negative indexing case
+    if guard_or_false(start_val < 0):
@laithsakka (Contributor) May 14, 2025

can we write this as below? The checks are verifying different things under different conditions: one checks start_val when it's unbacked, and the other checks start_val + sizes[dim] when start_val is, for example, -10 and sizes[dim] is unbacked.

if guard_or_false(start_val < 0):
    start_val += sizes[dim]
    # sizes[dim] is always non-negative
    torch._check(start_val >= 0, lambda: f"the offset start_val {start_val} is assumed to be non-negative; if the assumption does not hold, please add a torch._check()")
else:
    torch._check(start_val >= 0, lambda: f"start_val {start_val} is assumed to be non-negative; if the assumption does not hold, please add a torch._check()")


        start_val += sizes[dim]

-    if guard_size_oblivious(end_val < 0):
+    if guard_or_false(end_val < 0):
Contributor

do we need a torch._check on the assumption here?
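If a check is wanted, a sketch mirroring the start_val suggestion above (message text is illustrative) could be:

if guard_or_false(end_val < 0):
    end_val += sizes[dim]
# backs the assumption that end_val is non-negative after (possibly) offsetting
torch._check(end_val >= 0, lambda: f"end_val {end_val} is assumed to be non-negative")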

@laithsakka (Contributor) left a comment

looking good overall, left some comments

@facebook-github-bot (Contributor)

@cgufb has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@pytorch-bot (bot) added the ciflow/trunk label (Trigger trunk jobs on your pull request) May 15, 2025
@@ -728,23 +728,37 @@ def slice_forward(
    start_val = start if start is not None else 0
    end_val = end if end is not None else sys.maxsize  # 2^63 - 1

-    if guard_size_oblivious(start_val < 0):
+    # negative indexing case
Contributor

we need more testing for when end is None. For example, if end is None and start is unbacked, what shall we do? We can assume sys.maxsize < start is false, add a check on it, and we should not always fail!

Labels: ciflow/inductor, ciflow/trunk (Trigger trunk jobs on your pull request), release notes: export
3 participants