[BE] Use C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED
#148354
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148354. Note: links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit b103835 with merge base aade4fb. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Instead of `#pragma GCC diagnostic ignored "-Wignored-qualifiers"`.

Also limit the scope to just `Vectorized::map`, which has to be declared that way because the Sleef function signature definitions return `const __m256` for AVX2 methods.

Also delete `#pragma GCC diagnostic pop` from vec256_half and vec256_bfloat16, as it results in an unbalanced-pop warning for a push that is defined in vec256_16bit_float, which will be included only once:

```
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec.h:7:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256.h:15:
/Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256_half.h:232:27: warning: pragma diagnostic pop could not pop, no matching push [-Wunknown-pragmas]
  232 | #pragma GCC diagnostic pop
      |                           ^
1 warning generated.
```

ghstack-source-id: 8495465
Pull Request resolved: #148354
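For readers less familiar with the c10 diagnostic macros, the change amounts to the pattern sketched below. This is only an illustration, not the code from this PR: `VecSketch` and `sleef_style_exp` are hypothetical stand-ins for the real `Vectorized` class and Sleef functions, an AVX2 build is assumed for `__m256`, and the quoted-flag argument plus the matching `C10_DIAGNOSTIC_POP()` spelling are assumed from typical c10 usage.

```cpp
#include <immintrin.h>
#include <c10/macros/Macros.h>

// Hypothetical stand-in for a Sleef AVX2 function whose declared return type
// is const-qualified (e.g. "const __m256"), which is what forces map() to take
// a function pointer with a const-qualified return type and thereby trips
// -Wignored-qualifiers.
const __m256 sleef_style_exp(__m256 x);

struct VecSketch {
  __m256 values;

  // Suppress the warning only around this one member instead of for the whole
  // translation unit; the push is balanced by the pop immediately below.
  C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wignored-qualifiers")
  VecSketch map(const __m256 (*const f)(__m256)) const {
    return VecSketch{f(values)};
  }
  C10_DIAGNOSTIC_POP()
};
```

Keeping the push and its pop next to the one declaration they guard, in the same header, is also what avoids the unbalanced-pop warning quoted above.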
As someone unfamiliar with these defines, what makes `C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED` better than `#pragma GCC diagnostic ignored "-Wignored-qualifiers"`?
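(For context, and only as a sketch of the general technique rather than the actual c10 definition, which lives in c10/macros/Macros.h: wrapping the raw pragmas in a macro keeps the ignore paired with a push, and lets compilers that lack GCC-style diagnostic pragmas see nothing at all, so the suppression is both scoped and portable. All `SKETCH_*` names below are made up for illustration.)

```cpp
#if defined(__GNUC__) || defined(__clang__)
// _Pragma needs a string literal; stringizing the macro argument lets the
// caller write the warning name once, as an ordinary quoted flag.
#define SKETCH_PRAGMA(x) _Pragma(#x)
#define SKETCH_DIAG_PUSH_AND_IGNORE(flag) \
  SKETCH_PRAGMA(GCC diagnostic push)      \
  SKETCH_PRAGMA(GCC diagnostic ignored flag)
#define SKETCH_DIAG_POP() SKETCH_PRAGMA(GCC diagnostic pop)
#else
// Compilers without GCC-style diagnostic pragmas: expand to nothing.
#define SKETCH_DIAG_PUSH_AND_IGNORE(flag)
#define SKETCH_DIAG_POP()
#endif

// Usage: the suppression is scoped to exactly this declaration, and the
// push/pop pair stays balanced on every compiler.
SKETCH_DIAG_PUSH_AND_IGNORE("-Wignored-qualifiers")
const float sketch_identity(float x);  // const on a scalar return type would otherwise warn
SKETCH_DIAG_POP()
```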
@pytorchbot merge -f "Claude believes this PR is good"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Yet another regression caused by #146596 that breaks builds if PyTorch is compiled for Android or for NVIDIA Grace Hopper systems. Not sure why the author was trying to change the condition to begin with.

Pull Request resolved: #148362
Approved by: https://github.com/izaitsevfb
ghstack dependencies: #148354
Stack from ghstack (oldest at bottom):
- CONVERT_NON_VECTORIZED_INIT invocation #148362
- C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED #148354
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10