[Intel GPU] allow_tf32 for oneDNN backend - XPU part #137570
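For background: TF32 is a reduced-precision format that keeps float32's 8-bit exponent but only 10 explicit mantissa bits, so an `allow_tf32` switch trades matmul/convolution accuracy for speed on hardware that supports it. The sketch below is illustrative only (it is not PyTorch or oneDNN code, and it truncates instead of rounding to nearest as real hardware does); it shows which low-order bits a TF32 path discards:

```python
import struct

def round_to_tf32(x: float) -> float:
    """Approximate TF32 precision by zeroing the 13 low mantissa bits
    of the float32 encoding (TF32 keeps 10 of float32's 23 mantissa bits).

    Illustration only: truncation here, round-to-nearest on real hardware.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits &= ~((1 << 13) - 1)  # clear mantissa bits 0..12
    (y,) = struct.unpack("<f", struct.pack("<I", bits))
    return y
```

For example, a perturbation of `2**-12` on 1.0 vanishes under TF32 truncation, while a perturbation of `2**-10` survives, which is exactly the accuracy/speed trade-off the `allow_tf32` flag gates.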
Changes from 6 commits
The test diff adds a get/set round-trip for the new flag to the XPU convolution tests:

```diff
@@ -1264,6 +1264,17 @@ def test_channels_last_ouput_stride(self, device, dtype):
         # input NHWC, output NHWC
         assert_size_stride(out, (2, 512, 7, 7), (25088, 1, 3584, 512))

+    @onlyXPU
+    def test_mkldnn_allow_tf32_get_set(self, device):
+        with torch.backends.mkldnn.flags(
+            enabled=None, deterministic=None, allow_tf32=False
+        ):
+            self.assertFalse(torch.backends.mkldnn.allow_tf32)
+        with torch.backends.mkldnn.flags(
+            enabled=None, deterministic=None, allow_tf32=True
+        ):
+            self.assertTrue(torch.backends.mkldnn.allow_tf32)
+
 instantiate_device_type_tests(
     TestConvolutionNNDeviceType, globals(), only_for="xpu", allow_xpu=True
 )
```

A review thread on this hunk (the reviewer's original comment is not preserved in this capture) was answered: "Yes, it is now removed in newest commit, thanks."
The C++ diff adds the getter/setter bindings next to the existing deterministic-algorithms handlers:

```diff
@@ -888,6 +888,25 @@ PyObject* THPModule_setDeterministicAlgorithms(
   END_HANDLE_TH_ERRORS
 }

+PyObject* THPModule_setAllowTF32Mkldnn(PyObject* _unsued, PyObject* arg) {
+  HANDLE_TH_ERRORS
+  TORCH_CHECK(
+      PyBool_Check(arg),
+      "set_allow_tf32_cublas expects a bool, "
+      "but got ",
+      THPUtils_typename(arg));
+  at::globalContext().setAllowTF32Mkldnn(arg == Py_True);
+  Py_RETURN_NONE;
+  END_HANDLE_TH_ERRORS
+}
+
+PyObject* THPModule_allowTF32Mkldnn(PyObject* _unused, PyObject* noargs) {
+  if(at::globalContext().allowTF32Mkldnn())
+    Py_RETURN_TRUE;
+  else
+    Py_RETURN_FALSE;
+}
+
 PyObject* THPModule_deterministicAlgorithms(
     PyObject* _unused,
     PyObject* noargs) {
```

A review thread on the `TORCH_CHECK` error message (the reviewer's original comment is not preserved in this capture) was answered: "Thanks for the reminding, it has been modified."

The new functions are then registered in the Python method table:

```diff
@@ -1410,6 +1429,8 @@ static PyMethodDef TorchMethods[] = { // NOLINT
     {"_set_mkldnn_enabled", THPModule_setUserEnabledMkldnn, METH_O, nullptr},
     {"_get_cudnn_allow_tf32", THPModule_allowTF32CuDNN, METH_NOARGS, nullptr},
     {"_set_cudnn_allow_tf32", THPModule_setAllowTF32CuDNN, METH_O, nullptr},
+    {"_get_mkldnn_allow_tf32", THPModule_allowTF32Mkldnn, METH_NOARGS, nullptr},
+    {"_set_mkldnn_allow_tf32", THPModule_setAllowTF32Mkldnn, METH_O, nullptr},
     {"_get_cudnn_benchmark", THPModule_benchmarkCuDNN, METH_NOARGS, nullptr},
     {"_set_cudnn_benchmark", THPModule_setBenchmarkCuDNN, METH_O, nullptr},
     {"_get_cudnn_deterministic",
```
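The Python-visible contract of the two new bindings — a getter that reports the flag, and a setter that rejects anything except a real `bool` (mirroring the `TORCH_CHECK(PyBool_Check(arg), ...)` guard) — can be sketched in plain Python. The names below are illustrative stand-ins, not the actual `torch._C` internals:

```python
_allow_tf32_mkldnn = False  # stands in for the flag in at::globalContext()

def set_allow_tf32_mkldnn(value):
    """Reject non-bool input, as the binding's PyBool_Check guard does.

    Note that even 0/1 integers are rejected: PyBool_Check only accepts
    the True and False singletons, which isinstance(value, bool) mirrors.
    """
    global _allow_tf32_mkldnn
    if not isinstance(value, bool):
        raise TypeError(
            f"set_allow_tf32_mkldnn expects a bool, but got {type(value).__name__}"
        )
    _allow_tf32_mkldnn = value

def get_allow_tf32_mkldnn():
    """Return the stored flag, as THPModule_allowTF32Mkldnn does."""
    return _allow_tf32_mkldnn
```

Type-checking strictly at the binding boundary keeps truthy-but-wrong values (e.g. `1` or `"true"`) from silently toggling a global numerical-precision setting.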
A review discussion on the naming of the new bindings followed:

- "@yanbing-j, should we use 'onednn' rather than 'mkldnn'? Because I noticed you maintain a huge ghstack to replace mkldnn with onednn."
- "@EikanWang Is this a new API which contains 'mkldnn'? If so, you should use 'onednn' directly. Otherwise, I will change it to 'onednn' in my ghstack when this PR is merged. For this newly added API, you can use 'onednn' directly. For other existing API usage, you can use 'mkldnn', and I will rebase for that."
- "Yes, this is a new API. @ZhiweiYan-96, let's refine the API name by replacing `mkldnn` with `onednn`. Meanwhile, please link @yanbing-j's PR to help reviewers get the background."
- "The ghstack starts from #133289."
- "Sure, I will change the word, and thanks for the kind clarification."
- "The word 'mkldnn' has been changed in the PR, thanks for your comments."