# OpenMP failure: Assertion failure at kmp_affinity.cpp(3523) #137136
Just to add: I have also tried playing with OMP and KMP flags to avoid this assertion somehow (e.g. affinity-related environment variables along the lines of the sketch below), and some other combinations, to no avail.
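For reference, a minimal sketch of the kind of affinity-related environment variables libomp reads at startup; these values are illustrative assumptions, not necessarily the exact combinations tried here:

```sh
# Illustrative only: affinity-related knobs the OpenMP runtime honors.
# None of these are confirmed to avoid the assertion in this report.
export KMP_AFFINITY=disabled   # turn off libomp's affinity machinery entirely
export OMP_PROC_BIND=false     # do not bind threads to places
export OMP_PLACES=cores        # or: threads, sockets
export OMP_NUM_THREADS=4       # pin the thread count explicitly

clang++-19 -fopenmp=libomp openmp.cpp -o openmp && ./openmp
```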
@llvm/issue-subscribers-openmp Author: Jure Bajic (jbajic)
## Error
Assertion failure at kmp_affinity.cpp(3523): num_avail == (unsigned)__kmp_avail_proc.
OMP: Error #13: Assertion failure at kmp_affinity.cpp(3523).
OMP: Hint Please submit a bug report with this message, compile and run commands used, and machine configuration info including native compiler and operating system versions. Faster response will be obtained by including all program sources. For information on submitting this issue, please see https://github.com/llvm/llvm-project/issues/.
## Sources

```cpp
#include <iostream>
#include <omp.h>

int main() {
    std::cout << "Program starting..." << std::endl;

    // Optional: Set the number of threads programmatically
    // omp_set_num_threads(4); // Or use OMP_NUM_THREADS environment variable

    // This pragma marks the start of a parallel region.
    // The code inside the {} block will be executed by multiple threads.
    #pragma omp parallel
    {
        // Get the unique ID of the current thread
        int thread_id = omp_get_thread_num();
        // Get the total number of threads executing in this parallel region
        int num_threads = omp_get_num_threads();

        // Each thread will print its own message.
        // std::cout needs careful synchronization in more complex scenarios,
        // but for simple prints like this it is often okay, though output might interleave.
        // printf might be slightly safer for interleaved output in simple cases.
        #pragma omp critical // Ensures only one thread prints at a time to avoid garbled output
        {
            std::cout << "Hello from thread " << thread_id
                      << " out of " << num_threads << " threads." << std::endl;
        }

        // Example of work done by each thread (optional)
        // #pragma omp for // Could add a parallel loop here if needed
        // for (int i = 0; i < 5; ++i) {
        //     printf("Thread %d processing item %d\n", thread_id, i);
        // }
    }

    return 0;
}
```
Command:

```sh
clang++-19 -fopenmp=libomp openmp.cpp -o openmp
./openmp
```

The same program works with GCC (g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0).

## Environment

Now this is a bit more complex. This started happening when CircleCI migrated their Docker containers to cgroupv2, but only on ARM machines. I am not sure what the cause might be, but here is additional information:

- Machine:
- Docker container: cimg/base:2024.0
- Clang version:

## Additional info

Output from `cat /proc/cpuinfo`:

Output from `lscpu -e`:

Output from all cgroupv2 files related to the current process:
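To make the cgroup data above easier to interpret, here is a minimal sketch of commands for comparing the container's cgroupv2 cpuset with the CPUs visible to user space; the /sys/fs/cgroup paths are an assumption and may differ depending on how the container mounts the cgroup hierarchy:

```sh
# Sketch only: the failing assertion compares two counts of available processors,
# so the interesting data is how the cpuset differs from the full CPU list.
stat -fc %T /sys/fs/cgroup                  # prints "cgroup2fs" on a cgroupv2 setup
cat /proc/self/cgroup                       # cgroup this process belongs to
cat /sys/fs/cgroup/cpuset.cpus.effective    # CPUs the cpuset actually allows (assumed path)
grep Cpus_allowed_list /proc/self/status    # affinity mask as seen by the kernel
nproc                                       # CPUs visible to user space
```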