
Lecture Notes: Concurrency and Synchronization in Operating Systems


Introduction
Concurrency and synchronization are critical concepts in operating systems, enabling
multiple processes or threads to execute simultaneously while sharing resources efficiently.
Concurrency improves system performance by utilizing multiple CPU cores and overlapping
I/O operations, but it introduces challenges in coordinating access to shared resources. These
lecture notes cover three key aspects: Threads and Multithreading; Semaphore, Mutex,
and Locks; and Deadlock and its Prevention/Detection. These topics are essential for
designing robust, efficient, and deadlock-free concurrent systems.

1. Threads and Multithreading


Threads are lightweight units of execution within a process, allowing multiple tasks to run
concurrently while sharing the same memory space. Multithreading is the ability of a process
to manage multiple threads, improving performance and responsiveness.

1.1 Threads

A thread consists of a program counter, a stack, a set of registers, and a thread ID. Unlike
processes, threads within the same process share the same address space, code, and data, but
each has its own stack for function calls and local variables.

 Types of Threads:
o User-Level Threads: Managed by a thread library (e.g., POSIX threads)
without kernel involvement. Faster to create and switch, but the kernel
schedules the whole process as one unit, so a blocking call in one thread can
block every thread in the process.
o Kernel-Level Threads: Managed by the operating system. Slower due to
kernel overhead but better for utilizing multiple CPUs.
 Advantages:
o Faster creation and context switching compared to processes.
o Efficient resource sharing within a process.
o Improved responsiveness (e.g., one thread handles I/O while another
computes).
 Disadvantages:
o Shared memory can lead to data inconsistency without proper synchronization.
o A single thread crash may affect the entire process.
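
To make these properties concrete, here is a minimal sketch in plain C with POSIX
threads (the worker function, thread IDs, and printed message are illustrative choices,
not part of the notes). Both threads read the same global variable, showing the shared
address space, while each keeps its local variables on its own stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_value = 42;              /* shared: visible to all threads */

    void *worker(void *arg) {
        int local = *(int *)arg;        /* local: lives on this thread's stack */
        printf("thread %d sees shared_value = %d\n", local, shared_value);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);         /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }

Compile with the -pthread flag (e.g., gcc -pthread threads.c); the same applies to the
later sketches in these notes.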

1.2 Multithreading Models

Multithreading models define how user-level threads map to kernel-level threads:


 Many-to-One: Multiple user threads map to one kernel thread. Simple but cannot
utilize multiple CPUs.
 One-to-One: Each user thread maps to a kernel thread. Supports parallelism but
incurs kernel overhead.
 Many-to-Many: Multiple user threads map to multiple kernel threads, balancing
flexibility and performance.

1.3 Use Cases

 Web Servers: Handle multiple client requests concurrently using threads.
 Parallel Computing: Divide tasks (e.g., matrix multiplication) across threads for
faster execution (a sketch follows this list).
 GUI Applications: Separate threads for user interface and background tasks to
maintain responsiveness.
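
As one hedged illustration of the parallel-computing use case, the sketch below splits
an array sum across threads; the array size, thread count, and slice struct are
illustrative choices. Each thread works on a disjoint range, so no locking is needed.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static long data[N];

    struct slice { int start, end; long sum; };

    void *sum_slice(void *arg) {
        struct slice *s = arg;
        s->sum = 0;
        for (int i = s->start; i < s->end; i++)
            s->sum += data[i];          /* each thread touches a disjoint range */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) data[i] = 1;

        pthread_t tid[NTHREADS];
        struct slice parts[NTHREADS];
        int chunk = N / NTHREADS;

        for (int t = 0; t < NTHREADS; t++) {
            parts[t].start = t * chunk;
            parts[t].end = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &parts[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL); /* read each sum only after its thread ends */
            total += parts[t].sum;
        }
        printf("total = %ld\n", total); /* expect 1000000 */
        return 0;
    }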

2. Semaphore, Mutex, and Locks


Synchronization mechanisms ensure that concurrent threads or processes access shared
resources (e.g., memory, files) safely, preventing data corruption and race conditions.
Semaphores, mutexes, and locks are common tools for achieving synchronization.

2.1 Semaphore

A semaphore is a synchronization primitive that maintains a counter to control access to a
shared resource.

 Types:
o Binary Semaphore: Counter is 0 or 1, used for mutual exclusion (similar to a
mutex).
o Counting Semaphore: Counter can take any non-negative value, used to
control access to a finite number of resources.
 Operations:
o Wait (P): Decrements the counter; if the counter is 0, the thread blocks.
o Signal (V): Increments the counter, unblocking a waiting thread if any.
 Use Case: Managing a pool of resources (e.g., limiting the number of threads
accessing a database); a sketch of this pattern follows this list.
 Advantages: Flexible for controlling multiple resource instances.
 Disadvantages: Improper use can lead to deadlocks or starvation.
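
The sketch below applies the wait (P) and signal (V) operations above using a POSIX
counting semaphore to cap concurrent access to a resource pool at three threads; the
pool size, thread count, and sleep-based "work" are illustrative choices.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NTHREADS 6
    #define POOL_SIZE 3

    sem_t pool;                          /* counting semaphore: free resource slots */

    void *use_resource(void *arg) {
        long id = (long)arg;
        sem_wait(&pool);                 /* P: blocks when all slots are taken */
        printf("thread %ld acquired a slot\n", id);
        sleep(1);                        /* simulate work while holding the slot */
        printf("thread %ld released a slot\n", id);
        sem_post(&pool);                 /* V: frees a slot, waking a waiter */
        return NULL;
    }

    int main(void) {
        sem_init(&pool, 0, POOL_SIZE);   /* counter starts at POOL_SIZE */
        pthread_t tid[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, use_resource, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        sem_destroy(&pool);
        return 0;
    }

At most three threads hold a slot at any moment; the other three block in sem_wait
until a slot is posted back.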

2.2 Mutex (Mutual Exclusion)

A mutex is a binary lock that ensures only one thread can access a critical section (a code
segment accessing shared resources) at a time.

 Operations:
o Lock: Acquires the mutex; if already locked, the thread blocks.
o Unlock: Releases the mutex, allowing another thread to acquire it.
 Properties:
o Only the thread that locks the mutex can unlock it.
o Recursive mutexes allow the same thread to lock multiple times without
deadlocking.
 Use Case: Protecting a shared data structure (e.g., a linked list) from concurrent
modifications (see the sketch after this list).
 Advantages: Simple and effective for mutual exclusion.
 Disadvantages: Limited to single-resource access; prone to priority inversion in real-
time systems.
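
A minimal sketch of the lock/unlock operations above; a shared counter stands in here
for any shared data structure, and the iteration count is an illustrative choice.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* enter the critical section */
            counter++;                   /* only one thread mutates at a time */
            pthread_mutex_unlock(&lock); /* leave, letting a waiter proceed */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter); /* always 200000 with the mutex */
        return 0;
    }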

2.3 Locks

Locks are a general category of synchronization mechanisms, including mutexes and other
variants like read-write locks.

 Read-Write Locks: Allow multiple readers or one writer to access a resource,
improving concurrency for read-heavy workloads.
 Spinlocks: Threads busy-wait (spin) instead of blocking, suitable for short critical
sections on multicore systems.
 Use Case: Read-write locks for a shared cache (a sketch follows this list); spinlocks
for low-latency systems.
 Advantages: Tailored to specific scenarios (e.g., read-heavy or low-latency).
 Disadvantages: Complexity increases with advanced lock types; spinlocks waste
CPU cycles.
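
A hedged sketch of a read-write lock around a shared cache value, using the POSIX
pthread_rwlock_t type; the cache variable and the reader/writer counts are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    int cache_value = 0;
    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    void *reader(void *arg) {
        pthread_rwlock_rdlock(&rw);      /* many readers may hold this at once */
        printf("read %d\n", cache_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    void *writer(void *arg) {
        pthread_rwlock_wrlock(&rw);      /* writer waits for exclusive access */
        cache_value++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r[4], w;
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 4; i++)
            pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(r[i], NULL);
        return 0;
    }

The four readers can run concurrently with each other but never overlap with the writer.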

2.4 Synchronization Challenges

 Race Conditions: Occur when multiple threads access shared data without proper
synchronization, leading to unpredictable results (demonstrated in the sketch after
this list).
 Priority Inversion: A low-priority thread holding a lock delays a high-priority thread,
mitigated by priority inheritance.
 Overhead: Synchronization mechanisms introduce performance costs, requiring
careful design.
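
To demonstrate a race condition, the sketch below repeats the counter experiment from
Section 2.2 but omits the mutex. Because counter++ is a non-atomic read-modify-write,
increments from the two threads can interleave and overwrite each other, so the final
value is usually less than 200000 and varies between runs.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared, but unprotected this time */

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }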

3. Deadlock and its Prevention/Detection


A deadlock occurs when a set of processes or threads are unable to proceed because each
holds a resource and waits for another resource held by another process/thread in the set.
Deadlocks are a significant challenge in concurrent systems.

3.1 Conditions for Deadlock

Four conditions must hold simultaneously for a deadlock to occur (Coffman conditions):

1. Mutual Exclusion: Resources involved are held in a non-shareable mode (only one
thread can use a resource at a time).
2. Hold and Wait: A thread holding at least one resource is waiting to acquire
additional resources held by others.
3. No Preemption: Resources cannot be forcibly taken from a thread; they must be
released voluntarily.
4. Circular Wait: A set of threads forms a circular chain, where each waits for a
resource held by the next (see the sketch after this list).
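
The sketch below makes the four conditions concrete: two threads take the same two
mutexes in opposite orders, so each can end up holding one resource (mutual exclusion,
hold and wait) that cannot be taken from it (no preemption) while waiting on the other
(circular wait). The sleep call merely makes the bad interleaving likely; the mutex
names are illustrative.

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    void *thread1(void *arg) {
        pthread_mutex_lock(&A);          /* holds A ... */
        sleep(1);                        /* give thread2 time to grab B */
        pthread_mutex_lock(&B);          /* ... and waits for B: deadlock */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    void *thread2(void *arg) {
        pthread_mutex_lock(&B);          /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&A);          /* ... and waits for A: deadlock */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);          /* never returns: all four conditions hold */
        pthread_join(t2, NULL);
        return 0;
    }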

3.2 Deadlock Prevention

Preventing deadlocks involves breaking one of the four conditions:

 Mutual Exclusion: Use shareable resources (e.g., read-only data), though not always
feasible.
 Hold and Wait: Require threads to request all resources at once or release held
resources before requesting new ones. This can reduce resource utilization.
 No Preemption: Allow the operating system to preempt resources, forcing threads to
restart their requests. Complex to implement for all resources.
 Circular Wait: Impose a total ordering on resources (e.g., assign numbers to
resources and request them in ascending order). Widely used and effective; a
sketch follows this list.
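
Applying the resource-ordering rule to the deadlock sketch from Section 3.1: every
thread now acquires A before B, so no circular chain can form. This is a hedged sketch;
the lock names and printed message are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;   /* resource #1 */
    pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;   /* resource #2 */

    /* Every thread follows the same total order: A (lower number) before B. */
    void *worker(void *arg) {
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        printf("thread %ld holds both resources\n", (long)arg);
        pthread_mutex_unlock(&B);        /* release in reverse order */
        pthread_mutex_unlock(&A);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);          /* always completes: no circular wait */
        pthread_join(t2, NULL);
        return 0;
    }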

3.3 Deadlock Avoidance

Deadlock avoidance dynamically analyzes resource allocation to keep the system in a safe
state (one from which every thread can still run to completion in some order). Techniques
include:

 Banker’s Algorithm: Tracks available and allocated resources, granting requests
only if the system remains in a safe state (a sketch of its safety check follows this
list).
 Limitations: Requires advance knowledge of resource needs and can be
computationally expensive.
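
Below is a compact, hedged sketch of the safety check at the heart of the Banker's
Algorithm; the 5-process, 3-resource instance is a standard textbook example, and Need
is assumed to equal Max minus Allocation.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define P 5   /* processes */
    #define R 3   /* resource types */

    /* Safety check: returns true and prints a safe completion order if one exists. */
    bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
        int work[R];
        bool finished[P] = { false };
        memcpy(work, avail, sizeof work);    /* start with what is free now */

        int done = 0;
        while (done < P) {
            bool progress = false;
            for (int p = 0; p < P; p++) {
                if (finished[p]) continue;
                bool can_run = true;
                for (int r = 0; r < R; r++)
                    if (need[p][r] > work[r]) { can_run = false; break; }
                if (can_run) {                   /* p can finish with what is free */
                    for (int r = 0; r < R; r++)
                        work[r] += alloc[p][r];  /* reclaim its resources */
                    finished[p] = true;
                    printf("P%d ", p);
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;         /* no one can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        int avail[R] = { 3, 3, 2 };
        int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
        int need[P][R]  = { {7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1} };
        if (is_safe(avail, alloc, need))
            printf("<- safe sequence exists, the request may be granted\n");
        else
            printf("\nunsafe: the request should be denied\n");
        return 0;
    }

For this instance the check finds the safe sequence P1 P3 P4 P0 P2. The full algorithm
runs this check on the hypothetical state that would result from granting each request.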

3.4 Deadlock Detection and Recovery

If prevention and avoidance are impractical, the system can detect and recover from
deadlocks:

 Detection:
o Maintain a resource allocation graph and check for cycles periodically (a
wait-for-graph sketch follows this list).
o Use algorithms to analyze resource usage and identify deadlocked threads.
 Recovery:
o Process Termination: Kill one or more threads to break the deadlock. Choose
based on priority or resource usage.
o Resource Preemption: Forcibly take resources from a thread and allocate
them to another, rolling back the preempted thread.
 Challenges: Detection introduces overhead; recovery may cause data loss or require
complex rollback mechanisms.
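
For single-instance resources, the resource allocation graph reduces to a wait-for graph
between threads, and detection becomes cycle detection. A minimal depth-first-search
sketch follows; the 4-thread adjacency matrix is a made-up example.

    #include <stdbool.h>
    #include <stdio.h>

    #define T 4                          /* threads (nodes in the wait-for graph) */

    /* waits_for[i][j] = true means thread i waits on a resource held by j. */
    bool waits_for[T][T] = {
        { 0, 1, 0, 0 },                  /* T0 -> T1 */
        { 0, 0, 1, 0 },                  /* T1 -> T2 */
        { 1, 0, 0, 0 },                  /* T2 -> T0 : closes a cycle */
        { 0, 0, 0, 0 },                  /* T3 waits on nobody */
    };

    /* DFS colouring: 0 = unvisited, 1 = on the current path, 2 = fully explored. */
    int state[T];

    bool dfs(int u) {
        state[u] = 1;
        for (int v = 0; v < T; v++) {
            if (!waits_for[u][v]) continue;
            if (state[v] == 1) return true;      /* back edge: cycle = deadlock */
            if (state[v] == 0 && dfs(v)) return true;
        }
        state[u] = 2;
        return false;
    }

    int main(void) {
        for (int u = 0; u < T; u++)
            if (state[u] == 0 && dfs(u)) {
                printf("deadlock: cycle reachable from T%d\n", u);
                return 0;
            }
        printf("no deadlock\n");
        return 0;
    }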

3.5 Practical Considerations

 Timeouts: Use timeouts to avoid indefinite waiting, though this may lead to livelock
(threads repeatedly retry without progress); a sketch follows below.
 Design Practices: Minimize shared resources, use high-level synchronization
primitives (e.g., monitors), and test for concurrency bugs.
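
One hedged way to implement the timeout idea is pthread_mutex_timedlock, a POSIX call
that gives up after an absolute deadline instead of blocking forever (it is not
available on every platform). The one-second deadline and the helper thread that hogs
the lock are illustrative choices.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *holder(void *arg) {
        pthread_mutex_lock(&lock);       /* grab the lock ... */
        sleep(3);                        /* ... and hold it longer than the timeout */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, holder, NULL);
        sleep(1);                        /* let the holder acquire the lock first */

        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;            /* be willing to wait one second at most */

        int rc = pthread_mutex_timedlock(&lock, &deadline);
        if (rc == ETIMEDOUT)
            printf("timed out: back off and retry, rather than wait forever\n");
        else if (rc == 0)
            pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
    }

If every thread backs off and retries on the same schedule, they can livelock; adding
randomized backoff is a common mitigation.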

Conclusion
Concurrency and synchronization are essential for leveraging modern multicore systems and
improving application performance. Threads and Multithreading enable concurrent
execution within a process, enhancing responsiveness and parallelism. Semaphores,
Mutexes, and Locks provide mechanisms to coordinate access to shared resources,
preventing race conditions. Deadlock and its Prevention/Detection address the challenges
of resource contention, ensuring system reliability. Mastering these concepts is crucial for
developing efficient, scalable, and robust concurrent applications. Future topics may include
advanced synchronization techniques, parallel programming models, and concurrency in
distributed systems.
