remove TORCH_NCCL_AVOID_RECORD_STREAMS, use stashed_for_allocator_safety_ to save the input ref · pytorch/pytorch@45ff1f8 · GitHub

Commit 45ff1f8

remove TORCH_NCCL_AVOID_RECORD_STREAMS,use stashed_for_allocator_safety_ to save the input ref
1 parent e0ea593 commit 45ff1f8

File tree: 3 files changed, +34 −250 lines


docs/source/cuda_environment_variables.rst

Lines changed: 0 additions & 2 deletions
@@ -25,8 +25,6 @@ For more information on CUDA runtime environment variables, see `CUDA Environmen
   - If set to ``1``, forces TF32 enablement, overrides ``set_float32_matmul_precision`` setting.
 * - ``TORCH_NCCL_USE_COMM_NONBLOCKING``
   - If set to ``1``, enables non-blocking error handling in NCCL.
-* - ``TORCH_NCCL_AVOID_RECORD_STREAMS``
-  - If set to ``0``, enables fallback to record streams-based synchronization behavior in NCCL.
 * - ``TORCH_CUDNN_V8_API_DEBUG``
   - If set to ``1``, sanity check whether cuDNN V8 is being used.
 
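The commit message describes the pattern replacing record-stream synchronization: instead of calling ``Tensor.record_stream`` on each collective input, the in-flight work handle stashes a reference to the inputs (``stashed_for_allocator_safety_``) so the caching allocator cannot recycle their memory while the asynchronous collective may still be reading it, and drops the reference once the work completes. A minimal Python sketch of that idea, with hypothetical names modeled on the commit message (this is not the actual PyTorch C++ implementation):

```python
class Work:
    """Models an async collective handle that owns its inputs until done.

    Hypothetical sketch: ``stashed_for_allocator_safety_`` mirrors the member
    named in the commit message; the real handle lives in ProcessGroupNCCL.
    """

    def __init__(self, inputs):
        # Stash input refs so the allocator cannot reuse their memory
        # while the (asynchronous) collective may still be reading them.
        self.stashed_for_allocator_safety_ = list(inputs)

    def wait(self):
        # In real code this would synchronize with the collective's stream.
        # Once the kernel has finished with the inputs, releasing the refs
        # lets the caching allocator reclaim the blocks safely.
        self.stashed_for_allocator_safety_.clear()


buf = bytearray(b"gradient-bucket")   # stand-in for a CUDA tensor
work = Work([buf])
assert work.stashed_for_allocator_safety_        # held while in flight
work.wait()
assert not work.stashed_for_allocator_safety_    # released after completion
```

Compared with ``record_stream``, which defers a block's reuse until all recorded streams drain (and can over-hold memory), stashing a reference ties the block's lifetime exactly to the work object, which is why the fallback env var could be removed.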

0 commit comments