test_graph_cudnn_dropout now passes too · pytorch/pytorch@bdc6d30

Commit bdc6d30

test_graph_cudnn_dropout now passes too

1 parent 2293a94

2 files changed: 3 additions & 1 deletion

c10/cuda/CUDAMallocAsyncAllocator.cpp

Lines changed: 1 addition & 1 deletion
@@ -270,7 +270,7 @@ void free(void* ptr) {
 "This may be benign, for example, a Python tensor in the capture "
 "might happen to shadow (use the same name as) an unrelated temporary "
 "tensor from somewhere before capture, pushing the earlier tensor "
-"out of scope.\n"
+"out of scope. "
 "However, if the tensor we're freeing here IS used by the capture, "
 "freeing it is an error, and may cause illegal memory accesses or "
 "memory corruption during graph replay.");

test/test_cuda.py

Lines changed: 2 additions & 0 deletions
@@ -3711,6 +3711,8 @@ def test_graph_cudnn_dropout(self):
         g.capture_end()
         torch.cuda.current_stream().wait_stream(s)
 
+        g.replay()
+
         y = model(x)
 
     @unittest.skipIf((not TEST_CUDA) or
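
For context, the capture/replay pattern this test exercises looks roughly like the sketch below. It is a minimal illustration rather than the exact test body: the names model, x, s, g, and y mirror the test, but the LSTM sizes and the warm-up call are assumptions made for the example.

import torch

# A small multi-layer LSTM with dropout so the cuDNN dropout path is used
# (the layer sizes here are placeholders chosen for the example).
model = torch.nn.LSTM(64, 64, num_layers=2, dropout=0.5).cuda()
x = torch.randn(16, 8, 64, device="cuda")

# Warm-up call outside the graph so cuDNN finishes any lazy setup/allocation.
y, _ = model(x)

g = torch.cuda.CUDAGraph()
s = torch.cuda.Stream()

# Capture must happen on a non-default stream.
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    g.capture_begin()
    y, _ = model(x)   # one forward pass recorded into the graph
    g.capture_end()
torch.cuda.current_stream().wait_stream(s)

# Replay the captured kernels, then run the model eagerly again; the
# g.replay() line added in the diff above exercises exactly this replay step.
g.replay()
y, _ = model(x)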
