torch.cuda.manual_seed ignored #149621
Labels
module: random
Related to random number generation in PyTorch (rng generator)
oncall: pt2
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Comments
The manual_seed op is present in the Dynamo graph, but it is missing from the AOT and Inductor graphs. @bdhirsh @eellison Is it expected that we do not have random ops in AOT and Inductor?

cc @anijain2305

Yea, we should be graph breaking, and I thought we did. See #109109
🐛 Describe the bug
When using torch.compile, torch.cuda.manual_seed/torch.cuda.manual_seed_all/torch.cuda.random.manual_seed do not seem to properly enforce reproducibility across multiple calls to a compiled function.
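The repro code for each case is elided, but the reported pattern can be sketched as follows. This is a hypothetical illustration, not the reporter's code: the function `fn` and the use of dropout as the RNG-consuming op are assumptions. The eager-mode half shows the expected behavior (re-seeding restores identical outputs); the issue is that the analogous compiled CUDA calls do not match.

```python
import torch

def fn(x):
    # dropout draws from the RNG, so its output depends on the current seed
    return torch.nn.functional.dropout(x, p=0.5)

x = torch.ones(8)

# Eager mode: re-seeding resets the RNG state, so outputs match.
torch.manual_seed(0)
out1 = fn(x)
torch.manual_seed(0)
out2 = fn(x)
print(torch.equal(out1, out2))  # True

# The issue reports that the analogous CUDA pattern,
#   compiled = torch.compile(fn)
#   torch.cuda.manual_seed(0); compiled(x.cuda())
# does NOT yield matching outputs across calls, because the
# manual_seed op is dropped after the Dynamo graph is traced.
```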
torch.cuda.manual_seed
Code:
Output:
torch.cuda.manual_seed_all
Code:
Output:
torch.cuda.random.manual_seed
Code
Output:
torch.xpu.random.set_rng_state_all
Code:
Output:
torch.xpu.random.manual_seed_all
Code:
Output:
torch.xpu.manual_seed_all
Code:
Output:
Versions
torch 2.6.0
cc @pbelevich @chauhang @penguinwu