[dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims by pianpwk · Pull Request #150127 · pytorch/pytorch · GitHub

[dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims #150127
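
For context, guard_or_false lives in torch.fx.experimental.symbolic_shapes: it evaluates a symbolic boolean when the answer is statically known, and otherwise returns False without installing a guard or raising a data-dependent error. A minimal usage sketch of the pattern this PR applies (plain ints stand in for SymInts here, so the call sites are illustrative only):

from torch.fx.experimental.symbolic_shapes import guard_or_false

def classify_dim(d):
    # If d == -1 can be decided statically, this behaves like a plain check.
    # If d is an unbacked SymInt and the comparison is undecidable,
    # guard_or_false returns False, so we fall through to the concrete path
    # instead of raising a data-dependent error.
    if guard_or_false(d == -1):
        return "wildcard"   # dimension to be inferred
    return "concrete"       # assumed to be an explicit size

print(classify_dim(-1))  # wildcard
print(classify_dim(4))   # concrete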


Closed · wants to merge 35 commits

Changes from 1 commit (35 commits total)
7510a77
init
pianpwk Mar 27, 2025
4630374
assume >= 0
pianpwk Mar 27, 2025
e076ca4
test
pianpwk Mar 27, 2025
ed8ed35
assume numel = prod(shape)
pianpwk Mar 27, 2025
fafaa47
test
pianpwk Mar 27, 2025
fe734e8
workaround
pianpwk Mar 27, 2025
11a25a7
simple case
pianpwk Mar 27, 2025
8a48fad
weird test
pianpwk Mar 27, 2025
ac40d4c
lint
pianpwk Mar 27, 2025
207d3fb
switch to guard_or_true
pianpwk Apr 2, 2025
1790f35
Update __init__.py
pianpwk Apr 2, 2025
1dd4c2b
Update __init__.py
pianpwk Apr 2, 2025
9d35a27
lint
pianpwk Apr 2, 2025
cab5f4a
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 3, 2025
f5b10c4
stash
pianpwk Apr 11, 2025
2eb56d0
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 11, 2025
9604307
reduce to only changing fast path
pianpwk Apr 11, 2025
00031a1
Update fx.experimental.rst
pianpwk Apr 12, 2025
983174e
Update fx.experimental.rst
pianpwk Apr 12, 2025
7c8d113
lint
pianpwk Apr 14, 2025
20749a3
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 14, 2025
5885633
add loop back
pianpwk Apr 14, 2025
18496a9
Update __init__.py
pianpwk Apr 16, 2025
af74f34
Update test_export.py
pianpwk Apr 16, 2025
69c1abe
try
pianpwk Apr 16, 2025
2ac1303
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 17, 2025
ffe4273
comments
pianpwk Apr 17, 2025
fa11fa1
Update __init__.py
pianpwk Apr 17, 2025
85fd59e
Update __init__.py
pianpwk Apr 17, 2025
6d322e6
Update __init__.py
pianpwk Apr 18, 2025
07c6ac9
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 18, 2025
5fa8ce9
test masked linear
pianpwk Apr 19, 2025
c85b5ce
lint
pianpwk Apr 21, 2025
b0db707
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 22, 2025
2bca726
Merge branch 'main' of https://github.com/pytorch/pytorch into pianpw…
pianpwk Apr 22, 2025
Commit 4630374d3d9a229cfdb8dfd1480fcdd30a581002: assume >= 0
pianpwk committed Mar 27, 2025
torch/_prims_common/__init__.py (5 changes: 2 additions & 3 deletions)

@@ -932,10 +932,9 @@ def infer_size(shape: ShapeType, numel: int) -> tuple[int, ...]:
         if guard_or_false(d == -1):
             torch._check(dim is None, lambda: "only one dimension can be inferred")
             dim = i
-        elif d >= 0:
-            newsize *= d
         else:
-            torch._check(False, lambda: f"invalid shape dimension {d}")
+            torch._check(d >= 0, lambda: f"invalid shape dimension {d}")
Contributor

Can you add to the error message before landing that if d is unbacked, we assumed it's not -1?
I see you use a lambda here; I wonder if we should print the symbolic d.
Contributor Author
@pianpwk pianpwk Apr 14, 2025

Ah, the lambda can only access the symbolic d anyway; we can't access the actual runtime value.
Contributor

So you do materialize the lambda during compile, when the deferred runtime assertion is created?
+            newsize *= d
         if dim is None:
             torch._check(
                 numel == newsize,
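
To see the whole helper in one place, here is a hedged, self-contained sketch of the wildcard inference this hunk modifies (simplified from infer_size in torch/_prims_common/__init__.py; plain ints stand in for SymInts, and the trailing checks and error messages are abbreviated):

import torch

def infer_size_sketch(shape, numel):
    # Resolve at most one -1 wildcard in `shape` so the total matches `numel`.
    dim, newsize = None, 1
    for i, d in enumerate(shape):
        if d == -1:
            torch._check(dim is None, lambda: "only one dimension can be inferred")
            dim = i
        else:
            # After this commit: non-wildcard dims are asserted non-negative
            # via torch._check instead of an `elif d >= 0` branch, so an
            # unbacked d is assumed >= 0 rather than guarded on.
            torch._check(d >= 0, lambda: f"invalid shape dimension {d}")
            newsize *= d
    if dim is None:
        torch._check(numel == newsize, lambda: f"shape {shape} is invalid for input of size {numel}")
        return tuple(shape)
    torch._check(newsize != 0, lambda: "cannot infer the wildcard when the other dims multiply to 0")
    out = list(shape)
    out[dim] = numel // newsize
    return tuple(out)

print(infer_size_sketch((-1, 4), 12))  # (3, 4)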
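
On the message-lambda question in the review thread: torch._check takes the message as a callable, so it is only materialized when the condition fails (or when a deferred runtime assertion is emitted under compilation), and with symbolic shapes the lambda can only format the symbolic d. A small eager-mode illustration:

import torch

d = -5
try:
    # The lambda runs only if the condition is False; under symbolic shapes
    # it would format the symbolic d, since no runtime value is available.
    torch._check(d >= 0, lambda: f"invalid shape dimension {d}")
except RuntimeError as err:
    print(err)  # invalid shape dimension -5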