Concurrency and Synchronization in Operating Systems
1.1 Threads
A thread consists of a program counter, a stack, a set of registers, and a thread ID. Unlike
processes, threads within the same process share the same address space, code, and data, but
each has its own stack for function calls and local variables.
Types of Threads:
o User-Level Threads: Managed by a thread library (e.g., POSIX threads)
without kernel involvement. Faster to create and switch, but the kernel
schedules the containing process as a single unit, so one blocking call can
stall all of its threads. (Thread creation with the pthreads API is sketched
after this section.)
o Kernel-Level Threads: Managed by the operating system. Slower due to
kernel overhead but better for utilizing multiple CPUs.
Advantages:
o Faster creation and context switching compared to processes.
o Efficient resource sharing within a process.
o Improved responsiveness (e.g., one thread handles I/O while another
computes).
Disadvantages:
o Shared memory can lead to data inconsistency without proper synchronization.
o A single thread crash may affect the entire process.
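To make this concrete, here is a minimal sketch of thread creation with the POSIX
threads (pthreads) API. The worker function, thread count, and printed messages are
illustrative choices, not taken from the text above.

    /* Compile with: gcc demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function; arg carries a per-thread value. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d: own stack, shared process address space\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[2];
        int ids[2] = {1, 2};

        for (int i = 0; i < 2; i++)
            pthread_create(&threads[i], NULL, worker, &ids[i]);

        /* Wait for both threads to finish before the process exits. */
        for (int i = 0; i < 2; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }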
2.1 Semaphore
A semaphore is a synchronization primitive that maintains an integer counter,
manipulated atomically through two operations, wait (P) and signal (V).
Types:
o Binary Semaphore: Counter is 0 or 1, used for mutual exclusion (similar to a
mutex).
o Counting Semaphore: Counter can take any non-negative value, used to
control access to a finite number of resources.
Operations:
o Wait (P): Decrements the counter; if the counter is 0, the thread blocks.
o Signal (V): Increments the counter, unblocking a waiting thread if any.
Use Case: Managing a pool of resources (e.g., limiting the number of threads
accessing a database), as sketched after this list.
Advantages: Flexible for controlling multiple resource instances.
Disadvantages: Improper use can lead to deadlocks or starvation.
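The resource-pool use case can be sketched with POSIX counting semaphores. The pool
size, the thread count, and the notion of "database connections" are assumptions for
illustration only.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define POOL_SIZE 3   /* assumed number of connections, for illustration */
    #define NTHREADS  8

    static sem_t pool;    /* counting semaphore, initialized to POOL_SIZE */

    static void *query(void *arg) {
        long id = (long)arg;
        sem_wait(&pool);  /* P: decrement; blocks while all connections are busy */
        printf("thread %ld: holding one of %d connections\n", id, POOL_SIZE);
        sem_post(&pool);  /* V: increment; wakes one waiter, if any */
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        sem_init(&pool, 0, POOL_SIZE);  /* 0: shared among this process's threads */
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, query, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&pool);
        return 0;
    }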
2.2 Mutex
A mutex is a binary lock that ensures only one thread can access a critical section
(a code segment accessing shared resources) at a time.
Operations:
o Lock: Acquires the mutex; if already locked, the thread blocks.
o Unlock: Releases the mutex, allowing another thread to acquire it.
Properties:
o Only the thread that locks the mutex can unlock it.
o Recursive mutexes allow the same thread to lock multiple times without
deadlocking.
Use Case: Protecting a shared data structure (e.g., a linked list) from concurrent
modifications; a sketch follows this list.
Advantages: Simple and effective for mutual exclusion.
Disadvantages: Limited to single-resource access; prone to priority inversion in real-
time systems.
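Here is a minimal sketch of that use case, reduced to a shared counter so it stays
short: two threads increment under a pthread mutex, and removing the lock/unlock pair
would reintroduce the race. The counter and iteration count are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;   /* shared data guarded by the critical section */

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;                    /* read-modify-write is now safe */
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (always 200000 with the mutex)\n", counter);
        return 0;
    }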
2.3 Locks
Locks are a general category of synchronization mechanisms that includes mutexes and
variants such as read-write locks, which admit many concurrent readers but give a
writer exclusive access (see the sketch below).
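A minimal sketch of a POSIX read-write lock, under the assumption that reads dominate:
readers share the lock, while a writer excludes everyone. The thread roles and the
shared integer are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;

    static void *reader(void *arg) {
        (void)arg;
        pthread_rwlock_rdlock(&rw);   /* shared: other readers may enter too */
        printf("read %d\n", shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg) {
        (void)arg;
        pthread_rwlock_wrlock(&rw);   /* exclusive: blocks readers and writers */
        shared_value++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&w,  NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }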
Concurrent access to shared state raises several recurring issues:
Race Conditions: Occur when multiple threads access shared data without proper
synchronization, leading to unpredictable results.
Priority Inversion: A low-priority thread holding a lock delays a high-priority thread,
mitigated by priority inheritance.
Overhead: Synchronization mechanisms introduce performance costs, requiring
careful design.
A deadlock is a state in which a set of threads is blocked indefinitely, each waiting
for a resource held by another. Four conditions must hold simultaneously for a
deadlock to occur (Coffman conditions):
1. Mutual Exclusion: Resources involved are held in a non-shareable mode (only one
thread can use a resource at a time).
2. Hold and Wait: A thread holding at least one resource is waiting to acquire
additional resources held by others.
3. No Preemption: Resources cannot be forcibly taken from a thread; they must be
released voluntarily.
4. Circular Wait: A set of threads form a circular chain, where each waits for a resource
held by the next.
Deadlock prevention negates at least one of the Coffman conditions:
Mutual Exclusion: Use shareable resources (e.g., read-only data), though this is not
always feasible.
Hold and Wait: Require threads to request all resources at once or release held
resources before requesting new ones. This can reduce resource utilization.
No Preemption: Allow the operating system to preempt resources, forcing threads to
restart their requests. Complex to implement for all resources.
Circular Wait: Impose a total ordering on resources (e.g., assign numbers to
resources and request them in ascending order). Widely used and effective; see
the sketch after this list.
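A minimal sketch of resource ordering with two pthread mutexes: as long as every code
path acquires lock_a before lock_b, no circular wait can form. The function names and
the pair of locks are hypothetical.

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* ordered 1st */
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* ordered 2nd */

    /* Both threads acquire the locks in the same global order: A, then B. */
    static void *transfer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    /* Even a task that "logically" wants B first still takes A first;
       this discipline makes a circular wait impossible. */
    static void *audit(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        /* ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, transfer, NULL);
        pthread_create(&t2, NULL, audit, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }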
Deadlock avoidance dynamically analyzes resource allocation to keep the system in a
safe state (one in which there exists some order in which every thread can finish).
The classic technique is the Banker's algorithm, which grants a resource request only
if the resulting state remains safe.
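Below is a minimal sketch of the safety check at the heart of the Banker's algorithm,
assuming small fixed matrices for maximum demand and current allocation; a real system
would take these from live bookkeeping.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define T 3  /* threads */
    #define R 2  /* resource types */

    bool is_safe(int available[R], int max[T][R], int alloc[T][R]) {
        int work[R];
        bool finished[T] = {false};
        memcpy(work, available, sizeof(work));

        for (int done = 0; done < T; ) {
            bool progress = false;
            for (int t = 0; t < T; t++) {
                if (finished[t]) continue;
                /* Can thread t's remaining need be met from `work`? */
                bool can_run = true;
                for (int r = 0; r < R; r++)
                    if (max[t][r] - alloc[t][r] > work[r]) { can_run = false; break; }
                if (can_run) {
                    /* Assume t runs to completion and releases its allocation. */
                    for (int r = 0; r < R; r++) work[r] += alloc[t][r];
                    finished[t] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;   /* no thread can finish: unsafe */
        }
        return true;                       /* a safe completion order exists */
    }

    int main(void) {
        /* Illustrative state: 2 resource types, 3 threads. */
        int available[R] = {1, 1};
        int max[T][R]    = {{3, 2}, {2, 2}, {1, 1}};
        int alloc[T][R]  = {{1, 0}, {1, 1}, {0, 1}};
        printf("state is %s\n", is_safe(available, max, alloc) ? "safe" : "unsafe");
        return 0;
    }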
If prevention and avoidance are impractical, the system can detect and recover from
deadlocks:
Detection:
o Maintain a resource allocation graph (or wait-for graph) and check it for
cycles periodically; a cycle-search sketch follows this list.
o Use algorithms to analyze resource usage and identify deadlocked threads.
Recovery:
o Process Termination: Kill one or more threads to break the deadlock. Choose
based on priority or resource usage.
o Resource Preemption: Forcibly take resources from a thread and allocate
them to another, rolling back the preempted thread.
Challenges: Detection introduces overhead; recovery may cause data loss or require
complex rollback mechanisms.
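A minimal sketch of cycle detection in a wait-for graph via depth-first search; an
edge t -> u records that thread t is waiting for a resource held by thread u. The
graph size and adjacency-matrix representation are illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    #define N 4   /* number of threads in this toy graph */

    /* waits_for[t][u] is true iff thread t waits on thread u. */
    static bool waits_for[N][N];

    static bool dfs(int t, bool visited[N], bool on_stack[N]) {
        visited[t] = on_stack[t] = true;
        for (int u = 0; u < N; u++) {
            if (!waits_for[t][u]) continue;
            if (on_stack[u]) return true;   /* back edge: a cycle exists */
            if (!visited[u] && dfs(u, visited, on_stack)) return true;
        }
        on_stack[t] = false;
        return false;
    }

    /* Returns true if any cycle (deadlock) exists in the wait-for graph. */
    bool deadlock_detected(void) {
        bool visited[N] = {false}, on_stack[N] = {false};
        for (int t = 0; t < N; t++)
            if (!visited[t] && dfs(t, visited, on_stack))
                return true;
        return false;
    }

    int main(void) {
        waits_for[0][1] = waits_for[1][2] = waits_for[2][0] = true;  /* a cycle */
        printf("%s\n", deadlock_detected() ? "deadlock" : "no deadlock");
        return 0;
    }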
Timeouts: Use timeouts to avoid indefinite waiting (sketched after this list),
though careless retrying may lead to livelock (threads repeatedly retry without
making progress).
Design Practices: Minimize shared resources, use high-level synchronization
primitives (e.g., monitors), and test for concurrency bugs.
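A minimal sketch of the timeout strategy using the POSIX pthread_mutex_timedlock
call, which takes an absolute CLOCK_REALTIME deadline. The one-second bound and the
back-off message are illustrative.

    #define _POSIX_C_SOURCE 200112L   /* for pthread_mutex_timedlock on strict libcs */
    #include <pthread.h>
    #include <errno.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *try_with_timeout(void *arg) {
        (void)arg;
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);  /* timedlock wants absolute time */
        deadline.tv_sec += 1;                      /* give up after ~1 second */

        if (pthread_mutex_timedlock(&lock, &deadline) == ETIMEDOUT) {
            /* Back off; if every thread just retries immediately, livelock looms. */
            printf("timed out; backing off\n");
            return NULL;
        }
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_mutex_lock(&lock);     /* hold the lock so the thread must wait */
        pthread_create(&t, NULL, try_with_timeout, NULL);
        pthread_join(t, NULL);         /* joins once the 1 s timeout fires */
        pthread_mutex_unlock(&lock);
        return 0;
    }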
Conclusion
Concurrency and synchronization are essential for leveraging modern multicore systems and
improving application performance. Threads and Multithreading enable concurrent
execution within a process, enhancing responsiveness and parallelism. Semaphores,
Mutexes, and Locks provide mechanisms to coordinate access to shared resources,
preventing race conditions. Deadlock and its Prevention/Detection address the challenges
of resource contention, ensuring system reliability. Mastering these concepts is crucial for
developing efficient, scalable, and robust concurrent applications. Future topics may include
advanced synchronization techniques, parallel programming models, and concurrency in
distributed systems.