Avoid data-dependent errors in NJT tests via capture_scalar_outputs=True by jbschlosser · Pull Request #144588 · pytorch/pytorch

Closed · jbschlosser wants to merge 6 commits

Conversation

@jbschlosser (Contributor) commented Jan 10, 2025

Stack from ghstack (oldest at bottom):

Part of my BE project addressing NJT bugs surfaced via OpInfo tests.

There are several xfails related to data-dependent errors in `torch.compile`. This PR sets `torch._dynamo.config.capture_scalar_outputs=True` to avoid these, which tends to exercise unbacked SymInt logic and will require `torch._check()`-related fixes.

pytorch-bot bot commented Jan 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/144588

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit abb5e73 with merge base 803017f:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@soulitzer (Contributor) left a comment:

Nice cleanup!

@ops(
[op for op in njt_op_db if op.supports_njt],
allowed_dtypes=(torch.float32,),
)
@torch._dynamo.config.patch(capture_dynamic_output_shape_ops=True)
# needed to avoid "data dependent operator: aten._local_scalar_dense.default"
@torch._dynamo.config.patch(capture_scalar_outputs=True)
@soulitzer (Contributor) commented Jan 11, 2025

Yep, saw these changes as part of the narrow PR as well haha

Though one reason for not doing it by default for all ops is that we might want signal on which ops need which flags to work with compile, but it's probably not worth the hassle.

@jbschlosser (Contributor, Author) commented Jan 17, 2025

I think what I want to do is have a section of the upcoming NJT docs dedicated to torch.compile-based stuff, and mention the trick to add this flag as needed for data-dependent operators.

> Yep saw these changes as part of the narrow PR as well haha

ah yeah, sadly that's part of a different stack. I shouldn't have done that :x

@jbschlosser (Contributor, Author) commented:

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jan 24, 2025
@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

nWEIdia pushed a commit to nWEIdia/pytorch that referenced this pull request Jan 27, 2025
Avoid data-dependent errors in NJT tests via capture_scalar_outputs=True (pytorch#144588)
Pull Request resolved: pytorch#144588
Approved by: https://github.com/soulitzer
ghstack dependencies: pytorch#144586, pytorch#144587
@github-actions github-actions bot deleted the gh/jbschlosser/223/head branch February 24, 2025 02:06
Labels: ciflow/trunk, Merged, topic: not user facing

3 participants