Parallelize sort #149505
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/149505
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 3 Unrelated Failures
As of commit 9378a79 with merge base 790f93d:
NEW FAILURES - The following jobs have failed:
UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "module: cpu"
@pytorchbot label "ciflow/linux-aarch64"
Well spotted, LGTM!
@pytorchbot label "topic: not user facing"
@nikhil-arm Could you please review?
Also a good backport candidate
🚢 it
All failures seem unrelated.
@pytorchbot merge -f "Failures are unrelated"
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot cherry-pick --onto release/2.7 -c "critical"
PR #142391 erroneously used `USE_OMP` instead of `USE_OPENMP`.
Pull Request resolved: #149505
Approved by: https://github.com/fadara01, https://github.com/Skylion007
(cherry picked from commit 842d515)
Cherry picking #149505
The cherry pick PR is at #149765 and it is recommended to link a critical cherry-pick PR with an issue. The following tracker issues are updated:
Details for Dev Infra team: Raised by workflow job
I think critical is a strong statement, but whatever...
@pytorchbot revert -c nosignal -m "Reverting since this is breaking inductor builds on trunk. More details GH job link HUD commit link"
Note that this failure is a bit flaky, but you can see that it started when this PR was merged. Also reverting the cherry-pick.
@pytorchbot successfully started a revert job. Check the current status here. |
@annop-w your PR has been successfully reverted. |
This reverts commit 842d515. Reverted #149505 on behalf of https://github.com/ZainRizvi due to Reverting since this is breaking inductor builds on trunk. More details [GH job link](https://github.com/pytorch/pytorch/actions/runs/14000726218/job/39207447863) [HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/842d51500be144d53f4d046d31169e8f46c063f6) ([comment](#149505 (comment)))
This PR was reopened (likely due to being reverted), so your approval was removed. Please request another review.
Resolves pytorch#149977, pytorch#149979, pytorch#150094. Previously, pytorch#149505 used libstdc++ parallel mode by enabling `-D_GLIBCXX_PARALLEL`. However, mixing source files compiled with and without parallel mode can lead to undefined behavior (see https://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode_using.html). We switch to the specific parallel sort from `<parallel/algorithm>` when compiling with GCC. Note that the `std::execution` policies depend on libtbb, so we avoid them.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10