
Process Synchronization in Operating Systems

Understanding how concurrent processes access shared resources is fundamental to building robust and reliable software systems. This presentation explores the essential concepts and mechanisms that ensure data consistency and system stability in multi-process environments.
Chapter 1

The Challenge of Concurrency


Operating systems frequently manage multiple processes that need to share resources. Without proper
coordination, this concurrent access can lead to unpredictable results and data corruption. Process synchronization
is the solution, ensuring orderly and controlled access to shared data.

1. Shared Resources: Multiple processes often need to access the same memory, files, or I/O devices.

2. Concurrent Access: Processes run at the same time and interact with shared resources.

3. Potential Inconsistencies: Uncontrolled access can lead to incorrect data or system crashes.
Key Concept

Race Conditions
A race condition occurs when two or more processes access shared
resources concurrently, and the final outcome depends on the specific
order in which the processes execute. This leads to unpredictable and
timing-dependent results, often causing incorrect data or system errors.

Concurrent Access: Multiple processes attempt to read from or write to a shared resource at the same time.

Execution Order Dependency: The final state of the shared resource is determined by which process finishes its operation first.

Unpredictable Results: The output is not consistent and varies with each run, making debugging difficult.
Illustrative Scenario

Race Condition: Counter Example


Consider two processes, P1 and P2, both attempting to increment a shared counter variable initialized to 0. The operation count = count + 1 is not atomic; it involves three steps: read, increment, and write.

Time | Process P1             | Process P2             | count (initial = 0)
-----|------------------------|------------------------|--------------------
T1   | Read count (0) into R1 |                        | 0
T2   |                        | Read count (0) into R2 | 0
T3   | R1 becomes 1           |                        | 0
T4   |                        | R2 becomes 1           | 0
T5   | Write R1 (1) to count  |                        | 1
T6   |                        | Write R2 (1) to count  | 1

Result: The final value of count is 1 instead of the expected 2, due to the interleaved execution.
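The lost-update interleaving above, and the fix, can be sketched in Python (Python and its threading module are illustrative choices here, not something the slides prescribe):

```python
import threading

count = 0
lock = threading.Lock()

def unsafe_increment(n):
    """count = count + 1 is a read-modify-write sequence, not atomic."""
    global count
    for _ in range(n):
        count = count + 1  # a thread switch here can lose updates

def safe_increment(n):
    """Holding the lock makes each increment effectively atomic."""
    global count
    for _ in range(n):
        with lock:
            count = count + 1

def run(worker, n=100_000):
    """Run two threads of the given worker and return the final count."""
    global count
    count = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count

# With the lock, two threads of 100,000 increments always total 200,000.
# Without it, the total may fall short, depending on thread interleaving.
print(run(safe_increment))  # 200000
```

Whether the unsynchronized version actually loses updates on a given run depends on how the runtime interleaves the threads, which is exactly what makes race conditions hard to debug.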
Solution Segment

Critical Section

A critical section is a segment of code where a process accesses shared resources. The fundamental principle is that only one process should be allowed to execute its critical section at any given time, ensuring atomic operations on shared data. This solves the "critical section problem" by controlling access and preventing race conditions.

Entry: Prepare to request access
Critical Section: Exclusive access to the resource
Exit: Release the resource
Remainder: Noncritical work continues

"The critical section is the heart of concurrent programming, where shared data receives exclusive access."
Ensuring Reliability

Critical Section Requirements


To effectively solve the critical section problem, any proposed solution must satisfy three essential requirements:

Mutual Exclusion: Only one process can be inside its critical section at any given time. This is the most fundamental requirement.

Progress: If no process is in its critical section and some processes want to enter, only those processes that are not in their remainder section can participate in deciding which process will enter next. This prevents indefinite postponement.

Bounded Waiting: There must be a limit on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This prevents starvation.
Coordination Techniques

Process Synchronization Mechanisms


Various mechanisms have been developed to coordinate concurrent processes, ensure proper execution order, and prevent race
conditions, thereby maintaining data consistency.

Semaphores: Integer variables used for signaling and controlling access to resources.

Mutex Locks: Simple locking mechanisms to ensure exclusive access to critical sections.

Monitors: High-level constructs that encapsulate shared data and operations, with automatic mutual exclusion.

Condition Variables: Used with monitors or mutexes to allow processes to wait for specific conditions.

Spinlocks: A type of lock where a process continuously checks for lock availability (busy-waiting).
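As a minimal sketch of the spinlock idea in Python: real spinlocks rely on an atomic test-and-set instruction, so a non-blocking Lock.acquire() stands in for that atomic operation here (the SpinLock class is an illustration invented for this sketch, not a standard API):

```python
import threading

class SpinLock:
    """Illustrative spinlock: busy-waits (spins) until the lock is free."""

    def __init__(self):
        # A threading.Lock stands in for an atomic test-and-set flag.
        self._flag = threading.Lock()

    def acquire(self):
        # Spin: repeatedly attempt the non-blocking "test-and-set".
        while not self._flag.acquire(blocking=False):
            pass  # busy-wait; burns CPU instead of sleeping

    def release(self):
        self._flag.release()

counter = 0
spin = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        spin.acquire()   # entry section
        counter += 1     # critical section
        spin.release()   # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Busy-waiting only pays off when critical sections are very short; otherwise a blocking lock, which lets the waiting thread sleep, is the better choice.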
Deep Dive

Semaphores and Mutexes


Semaphores

Semaphores are integer variables used to control access to shared resources. They are operated on by two atomic functions: wait() (also known as P() or down()) and signal() (also known as V() or up()).

Counting Semaphore: Can take any non-negative integer value, useful for managing resources with multiple instances (e.g., a pool of database connections).

Binary Semaphore (Mutex): Can only be 0 or 1, primarily used to implement mutual exclusion, acting as a simple lock.

Mutex Locks

Mutex (mutual exclusion) locks are simpler synchronization primitives often used for protecting critical sections. Only one process or thread can hold the lock at a time.

Operations: acquire() to lock before entering a critical section, and release() to unlock after exiting.

Exclusive Access: Guarantees that only one process can execute the critical section guarded by the mutex.

Lightweight: Often preferred for their simplicity and efficiency in multithreaded environments.
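The database-connection-pool example can be sketched with a counting semaphore. In Python's threading module, Semaphore.acquire() plays the role of wait()/P() and Semaphore.release() the role of signal()/V(); the pool size and tracking variables below are assumptions for the demonstration:

```python
import threading
import time

POOL_SIZE = 3
pool = threading.Semaphore(POOL_SIZE)  # counting semaphore, initialized to 3

active = 0   # how many threads currently "hold a connection"
peak = 0     # the highest concurrency observed
state_lock = threading.Lock()          # protects the two counters above

def use_connection():
    global active, peak
    pool.acquire()              # wait()/P(): blocks once all 3 slots are taken
    try:
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # simulate doing work with the connection
        with state_lock:
            active -= 1
    finally:
        pool.release()          # signal()/V(): frees a slot for a waiter

threads = [threading.Thread(target=use_connection) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds POOL_SIZE
```

Ten threads contend for three slots, but the semaphore's counter guarantees that at most three are ever inside the guarded region at once; a binary semaphore is simply the POOL_SIZE = 1 case.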
Advanced Constructs

Monitors and Condition Variables

Monitors

Monitors are high-level synchronization constructs that encapsulate shared data along with the procedures (operations) that operate on that data. They ensure that only one process is active within the monitor's procedures at any given time, providing automatic mutual exclusion.

Encapsulation: Binds data and methods, preventing direct access to shared variables.

Simplified Synchronization: Programmers don't need to manually manage locks for shared data within the monitor.

Condition Variables

Condition variables are used in conjunction with monitors or mutexes to enable processes to wait for specific conditions to become true, avoiding busy-waiting.

wait(): Releases the lock and blocks the process until another process signals the condition.

signal(): Wakes up one waiting thread.

broadcast(): Wakes up all threads waiting on the condition.
Conclusion

Ensuring System Integrity


Effective process synchronization is paramount for maintaining data consistency, ensuring system stability, and enabling correct
behavior in concurrent programming environments. The choice of the right mechanism depends on the specific use case and the
underlying system architecture.

1. Reliable Software
2. Correct Data
3. Stable Systems
4. Synchronization
5. Shared Resources

By understanding and applying these synchronization concepts, software engineers can effectively prevent race conditions and build
robust, reliable concurrent applications.
