[torch][cuda] fix race condition in cuda initialization by suo · Pull Request #143238 · pytorch/pytorch · GitHub

[torch][cuda] fix race condition in cuda initialization #143238


Closed
wants to merge 1 commit into from

Conversation

suo
Member
@suo suo commented Dec 13, 2024

The access to lazy init callbacks (`_lazy_seed_tracker` and `_queued_calls`) is not synchronized with the initialization lock.

This exposes us to the following race:

  1. start `_lazy_init`
  2. take `_initialization_lock`
  3. flush `_queued_calls` and run them all
  4. another thread comes in and uses `_lazy_call` to put something on the queue (in our case, the `manual_seed`)
  5. original thread finishes initializing, but never runs that call

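For illustration, here is a minimal sketch of the pattern and of the fix described above, using simplified stand-ins for torch.cuda's `_lazy_call`/`_lazy_init` (the bodies are an assumption for clarity, not the literal code in `torch/cuda/__init__.py`): guarding the queue access in `_lazy_call` with the same `_initialization_lock` that `_lazy_init` holds while flushing closes the window opened in step 4.

```python
import threading

# Simplified stand-ins for torch.cuda internals (names match the PR
# description; the bodies are illustrative, not the actual implementation).
_initialization_lock = threading.Lock()
_initialized = False
_queued_calls = []  # callables to run once CUDA finishes initializing


def _lazy_call(fn):
    # Fix sketched here: take the same lock that _lazy_init holds while it
    # flushes the queue. Without it, another thread can append to
    # _queued_calls between step 3 (flush) and step 5 (init marked done),
    # and that callback is silently dropped.
    with _initialization_lock:
        if _initialized:
            fn()
        else:
            _queued_calls.append(fn)


def _lazy_init():
    global _initialized
    with _initialization_lock:
        if _initialized:
            return
        # ... initialize the CUDA runtime here ...
        for queued_call in _queued_calls:
            queued_call()
        _queued_calls.clear()
        _initialized = True
```

The real helpers also track per-device seed state (`_lazy_seed_tracker`) and need care around re-entrant initialization (a queued call must not re-enter `_lazy_call` while the lock is held); the sketch ignores those details.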
@suo suo requested review from eqy and syed-ahmed as code owners December 13, 2024 23:46
pytorch-bot bot commented Dec 13, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/143238

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ You can merge normally! (1 Unrelated Failure)

As of commit 3b2a844 with merge base 8621b9f:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@suo suo requested review from ngimel and albanD December 13, 2024 23:47
@suo suo added the release notes: cuda release notes category label Dec 13, 2024
@suo
Member Author
suo commented Dec 14, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Dec 14, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

aditew01 pushed a commit to aditew01/pytorch that referenced this pull request Dec 18, 2024

Pull Request resolved: pytorch#143238
Approved by: https://github.com/ngimel
@github-actions github-actions bot deleted the cuda-init-fix branch January 14, 2025 02:05
@awgu
Collaborator
awgu commented Mar 5, 2025

It looks like this PR did not make it into the 2.6 release? Could it be considered as a backport candidate for 2.6.1 or something?

Labels
ciflow/trunk (Trigger trunk jobs on your pull request) · Merged · release notes: cuda (release notes category)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants