[3/N] Set correct device to CUDA guards by kwen2501 · Pull Request #134357 · pytorch/pytorch


Closed
wants to merge 8 commits
19 changes: 7 additions & 12 deletions torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
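
The recurring change in this diff is to bind the CUDA guard to its target device at construction time instead of default-constructing the guard and calling set_index() later. Below is a minimal before/after sketch of that pattern; it is illustrative only, not code from the PR, and uses the guard under its c10 name (ProcessGroupNCCL.cpp spells it at::cuda::OptionalCUDAGuard).

```cpp
// Illustrative sketch of the pattern this PR applies; not code from the PR.
#include <c10/core/Device.h>
#include <c10/cuda/CUDAGuard.h>

void runOnDevice(const c10::Device& device) {
  // `device` is expected to be a CUDA device.
  //
  // Before (old pattern): the guard starts unset, and the current device is
  // only switched once set_index() is eventually reached:
  //
  //   c10::cuda::OptionalCUDAGuard gpuGuard;
  //   ...
  //   gpuGuard.set_index(device.index());
  //
  // After (this PR's pattern): the guard binds to `device` immediately, so
  // everything below already runs on the intended CUDA device.
  c10::cuda::OptionalCUDAGuard gpuGuard(device);

  // ... NCCL communicator setup / collective work would go here ...
}
```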
@@ -1103,11 +1103,8 @@ void ProcessGroupNCCL::abortCommsFromMap(
   for (auto& it : ncclCommsMap) {
     auto& devName = it.first;
     auto& ncclComm = it.second;
-    at::cuda::OptionalCUDAGuard gpuGuard;
     at::DeviceIndex deviceIndex = getIndexFromDeviceKey(devName);
shuqiangzhang (Contributor) commented on Aug 26, 2024:

For P2P comms, the deviceIndex could be -1 (invalid), since the keys in the map may not be device indices but rank-to-rank numbers. So we do indeed need to check that deviceIndex >= 0.

kwen2501 (Contributor, Author) replied:

Thanks for the information. Let me revert this change here and add the above comment into the code.
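
For context, here is a small sketch of the conditional-guard pattern being restored. It is illustrative only; getIndexFromDeviceKeySketch is a hypothetical stand-in for the real getIndexFromDeviceKey helper, written to match the behavior described above.

```cpp
#include <string>
#include <c10/core/Device.h>
#include <c10/cuda/CUDAGuard.h>

// Hypothetical stand-in for getIndexFromDeviceKey: treat the key as a device
// index only when the whole string is numeric; otherwise (e.g. a P2P key
// built from the two ranks) report "no single device" with -1.
c10::DeviceIndex getIndexFromDeviceKeySketch(const std::string& devName) {
  if (devName.empty() ||
      devName.find_first_not_of("0123456789") != std::string::npos) {
    return -1;
  }
  return static_cast<c10::DeviceIndex>(std::stoi(devName));
}

void abortCommSketch(const std::string& devName) {
  c10::DeviceIndex deviceIndex = getIndexFromDeviceKeySketch(devName);
  c10::cuda::OptionalCUDAGuard gpuGuard;
  if (deviceIndex >= 0) {
    // Only switch devices when the key actually resolved to one; a -1 index
    // should not be handed to the guard.
    gpuGuard.set_index(deviceIndex);
  }
  // ... abort the NCCL communicator under the (possibly unset) guard ...
}
```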

-    if (deviceIndex >= 0) {
-      gpuGuard.set_index(deviceIndex);
-    }
+    at::cuda::OptionalCUDAGuard gpuGuard(deviceIndex);
     LOG(INFO) << logPrefix() << "ProcessGroupNCCL destroying ncclComm_ "
               << ncclComm->ncclComm_ << " on CUDA device: " << devName;
     ncclComm->ncclCommAbort(abortReason);
@@ -2132,7 +2129,9 @@ std::shared_ptr<NCCLComm> ProcessGroupNCCL::getNCCLComm(
               << timerDeltaMs << " ms";
   }
 
-  at::cuda::OptionalCUDAGuard gpuGuard;
+  // Get the device index
+  auto deviceIndex = device.index();
A reviewer (Contributor) commented:

Nit: do we need the deviceIndex var? It looks like we just pass device to the guard directly.

OK, probably we are using it somewhere; you moved the declaration from below to above.
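
For reference, a minimal sketch of the two guard constructions being compared; illustrative only, assuming the OptionalCUDAGuard overloads that take a Device or a DeviceIndex.

```cpp
#include <c10/core/Device.h>
#include <c10/cuda/CUDAGuard.h>

void guardConstructionSketch(const c10::Device& device) {
  // The guard can be armed directly from the Device...
  c10::cuda::OptionalCUDAGuard byDevice(device);

  // ...or from a bare index, so the separate deviceIndex variable in the
  // hunk above is only needed for its other uses, not for the guard itself.
  auto deviceIndex = device.index();
  c10::cuda::OptionalCUDAGuard byIndex(deviceIndex);
}
```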

+  at::cuda::OptionalCUDAGuard gpuGuard(device);
 
   // [Group Start/End Note] This is used to ensure that nccl communicator will
   // be created before communication primitives are called. Let's look at this
@@ -2172,10 +2171,6 @@ std::shared_ptr<NCCLComm> ProcessGroupNCCL::getNCCLComm(
     rank = p2pRank;
   }
 
-  // Get the device index
-  auto deviceIndex = device.index();
-  gpuGuard.set_index(deviceIndex);
-
 #ifdef NCCL_HAS_COMM_SPLIT
   if (options_->split_from) {
     TORCH_CHECK(
@@ -2665,7 +2660,7 @@ c10::intrusive_ptr<Work> ProcessGroupNCCL::collective(
     work->stashed_for_allocator_safety_->push_back(input);
   }
 
-  at::cuda::OptionalCUDAGuard gpuGuard;
+  at::cuda::OptionalCUDAGuard gpuGuard(device);
 
   if (nanCheck) {
     checkForNan(input, ncclStream);
@@ -2830,7 +2825,7 @@ c10::intrusive_ptr<Work> ProcessGroupNCCL::collectiveCoalesced(
         std::make_shared<std::vector<at::Tensor>>(inputs);
   }
 
-  at::cuda::OptionalCUDAGuard gpuGuard;
+  at::cuda::OptionalCUDAGuard gpuGuard(device);
 
   // Start event should only be recorded before the ncclGroupStart() (which
   // happens inside AutoNcclGroup guard below)
@@ -3098,7 +3093,7 @@ c10::intrusive_ptr<Work> ProcessGroupNCCL::pointToPoint(
   }
 
   // is gpuGuard needed for the if block below, or can i swap them
-  at::cuda::OptionalCUDAGuard gpuGuard;
+  at::cuda::OptionalCUDAGuard gpuGuard(device);
 
   // Only check for NaN for send ops, for recv ops `tensor` can be a random
   // placeholder