don't set CUDA_MODULE_LOADING (#158712) · pytorch/pytorch@4869f71 · GitHub

Commit 4869f71

ngimel authored and pytorchmergebot committed
don't set CUDA_MODULE_LOADING (#158712)
If needed, it'll be set in `_C._cuda_init()`. setenv is not thread-safe, so this can cause segfaults due to getenv/setenv races.

Pull Request resolved: #158712
Approved by: https://github.com/eqy
1 parent b4abf41 commit 4869f71
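
As background on the race the message describes: assigning to os.environ in CPython goes through the C library's putenv/setenv, which can modify the process environment while native threads (such as the CUDA driver's) call getenv concurrently. The sketch below is a minimal, POSIX-only illustration of that access pattern, not code from this commit; the thread name and loop counts are illustrative, and it demonstrates the racing reads and writes rather than a guaranteed crash.

import ctypes
import os
import threading

# POSIX-only: resolve getenv from the C library linked into this process.
libc = ctypes.CDLL(None)
libc.getenv.restype = ctypes.c_char_p
libc.getenv.argtypes = [ctypes.c_char_p]

def native_reader() -> None:
    # Stands in for native code (e.g. a driver thread) polling an env var.
    for _ in range(200_000):
        libc.getenv(b"CUDA_MODULE_LOADING")

reader = threading.Thread(target=native_reader)
reader.start()
for i in range(200_000):
    # os.environ assignment calls the C library's putenv/setenv, mutating the
    # environment block while the other thread is reading it: the
    # getenv/setenv race the commit message refers to.
    os.environ["CUDA_MODULE_LOADING"] = "LAZY" if i % 2 else "EAGER"
reader.join()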

2 files changed: 0 additions, 7 deletions

test/test_cuda.py

Lines changed: 0 additions & 5 deletions
@@ -6484,11 +6484,6 @@ def test_cuda_autocast_deprecated_warning(self):
         with torch.cuda.amp.autocast():
             _ = torch.ones(10)
 
-    def test_cuda_module_loading_env(self):
-        torch.cuda.init()
-        val = os.environ.get("CUDA_MODULE_LOADING", "")
-        self.assertEqual(val, "LAZY")
-
 
 @unittest.skipIf(
     os.environ.get("USE_LEGACY_DRIVER", None) == "1", "Doesn't work with older driver"

torch/cuda/__init__.py

Lines changed: 0 additions & 2 deletions
@@ -379,8 +379,6 @@ def _lazy_init():
             )
         # This function throws if there's a driver initialization error, no GPUs
         # are found or any other error occurs
-        if "CUDA_MODULE_LOADING" not in os.environ:
-            os.environ["CUDA_MODULE_LOADING"] = "LAZY"
         torch._C._cuda_init()
         # Some of the queued calls may reentrantly call _lazy_init();
        # we need to just return without initializing in that case.
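
A usage note, as a minimal sketch rather than anything from the commit itself: since the Python side no longer supplies a default, a process that wants a specific CUDA module-loading mode can still export CUDA_MODULE_LOADING before CUDA is initialized, either from the shell or very early in the program, before native threads that might call getenv exist.

import os

# Set the driver's module-loading mode before torch is imported and before any
# CUDA initialization; doing it this early avoids the setenv/getenv race
# because no native reader threads are running yet.
os.environ.setdefault("CUDA_MODULE_LOADING", "LAZY")

import torch  # noqa: E402

if torch.cuda.is_available():
    # The CUDA driver reads CUDA_MODULE_LOADING during initialization.
    torch.cuda.init()

Equivalently, the variable can be set from the shell, e.g. CUDA_MODULE_LOADING=LAZY python train.py, where train.py is a placeholder for the user's entry point.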

0 commit comments
