OPERATING SYSTEMS
PROCESS SYNCHRONIZATION
What Is In This Chapter?
• This is about getting processes to coordinate with each
other.
• How do processes work with resources that must be
shared between them?
• How do we go about acquiring locks to protect regions
of memory?
• How is synchronization really used?
Background
• Concurrent access to shared data may result
in data inconsistency
• Maintaining data consistency requires
mechanisms to ensure the orderly execution
of cooperating processes
• Example: producer-consumer type of
problems, bounded-buffer scheme
The Producer - Consumer Problem
A producer process "produces" information "consumed" by a consumer
process. Here are the variables needed to define the problem (bounded
buffer scheme):
#define BUFFER_SIZE 10
typedef struct {
DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0; // Location of next input to buffer
int out = 0; // Location of next removal from buffer
int counter = 0; // Number of buffers currently full
Consider the code segments on the next page:
• Do they work?
• Are all buffers utilized?
The Producer - Consumer Problem
PRODUCER
item nextProduced;
while (TRUE) {
    while (counter == BUFFER_SIZE);
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

CONSUMER
item nextConsumed;
while (TRUE) {
    while (counter == 0);
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

(Both processes use the shared declarations from the previous page: BUFFER_SIZE,
item, buffer[], in, out, and counter.)
The Producer - Consumer Problem
Both routines are correct separately, but they may not function correctly when
executed concurrently (the PRODUCER and CONSUMER code is as on the previous page).

Note that the variable “counter” is accessed by both the consumer and the producer.
The statements counter++ and counter-- must therefore be performed atomically.
An atomic operation is one that completes in its entirety without interruption.

The statement “counter++” may be implemented in machine language as:
    register1 = counter
    register1 = register1 + 1
    counter = register1

The statement “counter--” may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2
The Producer - Consumer Problem
PRODUCER: “counter++” may be implemented as:
    register1 = counter
    register1 = register1 + 1
    counter = register1

CONSUMER: “counter--” may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2

If both the producer and consumer attempt to update counter concurrently, the
assembly language statements may get interleaved.
There is no guarantee that the two C statements are executed atomically.
The interleaving depends upon how the producer and consumer processes are scheduled.

Assume counter is initially 5. One interleaving of statements is:
    T0: Producer executes register1 = counter        (register1 = 5)
    T1: Producer executes register1 = register1 + 1  (register1 = 6)
    T2: Consumer executes register2 = counter        (register2 = 5)
    T3: Consumer executes register2 = register2 - 1  (register2 = 4)
    T4: Producer executes counter = register1        (counter = 6)
    T5: Consumer executes counter = register2        (counter = 4)

The value of counter may end up as either 4 or 6, whereas the correct result is 5.
This situation is known as a RACE CONDITION.
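A minimal compilable sketch of this race with POSIX threads follows (it is not part
of the original slides; the thread names and iteration counts are illustrative). Two
threads repeatedly increment and decrement an unprotected shared counter; because
each ++ and -- compiles to a load/modify/store sequence, the updates interleave and
the final value is usually not the expected 0:

/* Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;          /* shared, unprotected */

static void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;               /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter--;               /* races with the producer's update */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d\n", counter);  /* expected 0, usually nonzero */
    return 0;
}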
Race Condition
• Race condition: a situation where several processes access and
manipulate shared data concurrently, and the final value of the
shared data depends on the order in which the accesses take place
(i.e., on which process finishes last).
• To prevent race conditions, concurrent
processes must be synchronized.
The Critical-Section Problem
• n processes all competing to use some
shared data
• Each process has a code segment, called
critical section, in which the shared data is
accessed.
• Problem – ensure that when one process is
executing in its critical section, no other
process is allowed to execute in its critical
section.
Critical Sections
A section of code, common to ‘n’ cooperating processes, in which
the processes may be accessing common variables.
A Critical Section Environment contains:
Entry Section: code requesting entry into the critical section.
Critical Section: code in which only one process can execute at any one time.
Exit Section: the end of the critical section, releasing or allowing others in.
Remainder Section: the rest of the code AFTER the critical section.
CRITICAL SECTION
n processes all competing to use some shared data
Each process has a code segment, called critical section,
in which the shared data is accessed.
A set of instructions that must be controlled so as to allow
exclusive access to one process: execution of the critical section
by processes is mutually exclusive in time.
CRITICAL SECTION (Cont…)
General structure of a typical process Pi:

do {
    entry section
    critical section
    exit section
    remainder section
} while (1);

Each process must request permission to enter its critical section (the ENTRY SECTION).
The critical section may be followed by an EXIT SECTION.
The remaining code is the REMAINDER SECTION.
Solution to the Critical Section Problem
The critical section must ENFORCE ALL THREE of the following rules:
Mutual Exclusion: No more than one process can execute in its critical section
at one time.
Progress: If no one is in the critical section and someone wants in,
then those processes not in their remainder section must
be able to decide in a finite time who should go in.
Bounded Wait: All requesters must eventually be let into the critical section
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress. If no process is executing in its critical section and there exist some processes that wish
to enter their critical section, then the selection of the processes that will
enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes.
TWO-PROCESS SOLUTION
Algorithm is only for 2 processes at a time
Processes are
P0
P1
Or, more generally, as Pi and Pj, where j = 1 - i.
TWO-PROCESS SOLUTION
-Algorithm 1
Let the processes share a common integer variable turn, initialized to 0 (or 1).
If turn == i, then Pi is allowed to execute in its critical section.

For process Pi:
do {
    while (turn != i);
    critical section
    turn = j;
    remainder section
} while (1);

Guarantees mutual exclusion, but does not guarantee progress: it enforces strict
alternation of processes entering their critical sections (if Pj decides not to
re-enter, or crashes outside its CS, then Pi can never get in).
TWO-PROCESS SOLUTION
-Algorithm 1
Satisfies mutual exclusion, but not progress: because of the strict alternation,
a process cannot enter its critical section again until the other process has
taken its turn.
TWO-PROCESS SOLUTION
-Algorithm 2
Shared variables:
    boolean flag[2];                    // “interest” bits
    initially flag[0] = flag[1] = false
    flag[i] = true means Pi declares interest in entering its critical section

For process Pi (where the “other” process is Pj):
do {
    flag[i] = true;        // declare your own interest
    while (flag[j]);       // wait while the other process is interested
    critical section
    flag[i] = false;       // declare that you lost interest
    remainder section      // allows the other process to enter
} while (1);
TWO-PROCESS SOLUTION
-Algorithm 2
Satisfies mutual exclusion, but not the progress requirement.
Both flags could be set to true concurrently, in which case both processes loop
forever: if flag[i] == flag[j] == true, we have deadlock and no progress.
Barring this event, however, a process outside its CS cannot block the other
from entering.
A process can make consecutive re-entries to its CS if the other is not interested.
TWO-PROCESS SOLUTION
-Algorithm 3
Combines the shared variables and approaches of algorithms 1 & 2.

For process Pi:
do {
    flag[i] = true;     // declare your interest to enter
    turn = j;           // assume it is the other’s turn: give Pj a chance
    while (flag[j] and turn == j);
    critical section
    flag[i] = false;
    remainder section
} while (1);
TWO-PROCESS SOLUTION
-Algorithm 3
Meets all three requirements; solves the critical-section
problem for two processes (the best of all worlds -
almost!).
The turn variable breaks any deadlock possibility of the previous
algorithm, AND prevents “hogging”: Pi setting turn to j gives Pj a
chance after each pass through Pi’s CS.
The flag[] variable prevents a process from being locked out if the
other never re-enters or crashes outside its CS, and it allows
consecutive CS access when the other is not interested in entering.
PETERSON’S SOLUTION (Algorithm 3)
FLAG TO REQUEST ENTRY:
• Each process sets a flag to request entry. Then each process toggles a bit to
allow the other in first.
• This code is executed for each process i.

Shared variables:
    boolean flag[2];
    initially flag[0] = flag[1] = false
    flag[i] = true means Pi is ready to enter its critical section

do {
    flag[i] = true;
    turn = j;
    while (flag[j] and turn == j);
    critical section
    flag[i] = false;
    remainder section
} while (1);

This is Peterson’s Solution. Are the three Critical Section requirements met?
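To make the idea concrete, here is a minimal compilable sketch (not from the slides)
of Peterson’s solution for two threads using C11 atomics; the default sequentially
consistent loads and stores keep the flag/turn accesses in order on modern hardware,
which plain variables would not guarantee. The names enter_region and leave_region
are illustrative.

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool flag[2];
static _Atomic int  turn;

void enter_region(int i) {           /* i is 0 or 1; j is the other thread */
    int j = 1 - i;
    atomic_store(&flag[i], true);    /* declare interest */
    atomic_store(&turn, j);          /* give the other thread priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            /* busy-wait */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);   /* withdraw interest */
}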
Synchronization Hardware
• Any solution to the Critical Section problem requires a
simple TOOL or LOCK
• Modern systems provide special hardware instructions
that allow us to test & modify the contents of a word
atomically, i.e., as one uninterruptible unit
• TestAndSet and Swap are two such hardware instructions; a lock built on one of
them is sketched after this list (semaphores, discussed next, are a higher-level
software tool)
• Uniprocessors – could disable interrupts
• Currently running code would execute without preemption
• Generally too inefficient on multiprocessor systems
• Operating systems using this not broadly scalable
• Either test memory word and set value
• Or swap contents of two memory words
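As an illustration, here is a minimal sketch (not from the slides) of a spinlock
built on a TestAndSet-style instruction, using C11’s atomic_flag; the operation
atomic_flag_test_and_set atomically sets the flag and returns its previous value,
which is exactly the “test memory word and set value” behaviour described above.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                        /* spin while another process holds the lock */
}

void release(void) {
    atomic_flag_clear(&lock);    /* open the lock */
}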
Better Solution - Semaphores
• The various hardware-based solutions to the CS problem
(using the TestAndSet() and Swap() instructions) are
complicated for application programmers to use.
• To overcome this difficulty, we can use a synchronization
tool called a semaphore.
• A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait() and signal()
Semaphores
A semaphore is an object that consists of a
counter, a waiting list of processes and two
methods (e.g., functions): signal and
wait.
[Diagram: a semaphore object, consisting of a counter, a waiting list,
and the two methods wait and signal.]
Semaphore Method: wait
void wait(sem S)
{
S.count--;
if (S.count < 0) {
add the caller to the waiting list;
block();
}
}
After decreasing the counter by 1: if the counter value becomes
negative, the caller is added to the waiting list and then blocks.
Semaphore Method: signal
void signal(sem S)
{
S.count++;
if (S.count <= 0) {
remove a process P from the waiting list;
resume(P);
}
}
After increasing the counter by 1, if the
new counter value is not positive, then
remove a process P from the waiting list,
resume the execution of process P, and return
Important Note
wait(S):                     signal(S):
    S.count--;                   S.count++;
    if (S.count < 0) {           if (S.count <= 0) {
        add to list;                 remove P;
        block();                     resume(P);
    }                            }
If S.count < 0, abs(S.count) is the number of waiting processes.
This is because a process is added to the waiting list exactly when a wait()
drives the counter negative, and one is removed exactly when a signal() is
issued while the counter is still at or below zero.
wait() and signal() must be executed
atomically (i.e., as one uninterruptible unit).
Otherwise, race conditions may occur.
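One way to get that atomicity in user space is to protect the counter with a mutex
and use a condition variable plus a wakeup count in place of the waiting list. The
following is a sketch under those assumptions (the type csem and the functions
csem_wait/csem_signal are illustrative names, not a standard API); it keeps the
slide’s convention that a negative count means abs(count) processes are waiting.

#include <pthread.h>

typedef struct {
    int count;                 /* may go negative: abs(count) = no. of waiters */
    int wakeups;               /* pending wakeups for blocked callers */
    pthread_mutex_t m;
    pthread_cond_t  cv;        /* stands in for the waiting list */
} csem;

/* e.g.  csem s = { 1, 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }; */

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->m);            /* makes the whole update atomic */
    s->count--;
    if (s->count < 0) {
        do {
            pthread_cond_wait(&s->cv, &s->m);
        } while (s->wakeups < 1);         /* re-check: guards against spurious wakeups */
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    if (s->count <= 0) {                  /* someone is waiting */
        s->wakeups++;
        pthread_cond_signal(&s->cv);      /* wake one waiter */
    }
    pthread_mutex_unlock(&s->m);
}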
Three Typical Uses of Semaphores
There are three typical uses of semaphores:
mutual exclusion:
Mutex (i.e., Mutual Exclusion) locks
count-down lock:
Keep in mind that semaphores have a counter.
notification:
Indicate an event has occurred.
Solution for Producer-Consumer Problem using
Semaphore based Mutual Exclusion (Lock)
Initialization is important:
    semaphore S = 1;
    int count = 0;

Process 1:                          Process 2:
while (1) {                         while (1) {
    // do something                     // do something
    S.wait();      // entry             S.wait();      // entry
    count++;       // critical          count--;       // critical
    S.signal();    // exit              S.signal();    // exit
    // do something                     // do something
}                                   }

What if the initial value of S is zero?
S is a binary semaphore (similar to a lock).
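For reference, the same mutual-exclusion pattern written against the real POSIX
semaphore API (a sketch assuming Linux and <semaphore.h>; the thread names and
iteration counts are illustrative). sem_init with an initial value of 1 makes S
behave like the binary semaphore above, and sem_post is POSIX’s name for signal.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t S;                 /* binary semaphore, initialized to 1 */
static int count = 0;

static void *proc1(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);           /* entry section */
        count++;                /* critical section */
        sem_post(&S);           /* exit section */
    }
    return NULL;
}

static void *proc2(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);
        count--;
        sem_post(&S);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S, 0, 1);         /* initial value 1: mutual exclusion */
    pthread_create(&t1, NULL, proc1, NULL);
    pthread_create(&t2, NULL, proc2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d\n", count);   /* now reliably 0 */
    sem_destroy(&S);
    return 0;
}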
What if Semaphore is initialized to 3?
(Used as a Counting Semaphore)
semaphore S = 3;

Process 1:                          Process 2:
while (1) {                         while (1) {
    // do something                     // do something
    S.wait();                           S.wait();
    // at most 3 processes              // at most 3 processes
    // can be here!!!                   // can be here!!!
    S.signal();                         S.signal();
    // do something                     // do something
}                                   }

After three processes pass through wait(), this section is locked until a
process calls signal().
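With POSIX semaphores the count-down use is just a different initial value. A small
sketch (the function name limited_worker is illustrative, and the semaphore is
assumed to be initialized elsewhere with sem_init(&S, 0, 3)):

#include <semaphore.h>

static sem_t S;                  /* initialized elsewhere with value 3 */

void *limited_worker(void *arg) {
    sem_wait(&S);                /* the 4th concurrent caller blocks here */
    /* ... at most 3 threads execute this region at the same time ... */
    sem_post(&S);
    return NULL;
}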
Semaphore used for Notification
semaphore S1 = 1, S2 = 0;
Process 1:                          Process 2:
while (1) {                         while (1) {
    // do something                     // do something
    S1.wait();                          S2.wait();
    cout << “1”;                        cout << “2”;
    S2.signal();   // notify            S1.signal();   // notify
    // do something                     // do something
}                                   }

Process 1 uses S2.signal() to notify process 2, indicating “I am done.
Please go ahead.”
The output is 1 2 1 2 1 2 ……
What if S1 = 0 and S2 = 1?
What if S1 and S2 are both 0, or both 1?
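The same alternation pattern with POSIX semaphores (a sketch; printf stands in for
the slide’s cout, the thread bodies would be started with pthread_create, and the
semaphores are assumed to be initialized with sem_init(&S1, 0, 1) and
sem_init(&S2, 0, 0)):

#include <semaphore.h>
#include <stdio.h>

static sem_t S1, S2;             /* S1 = 1, S2 = 0 (initialized elsewhere) */

void *proc1(void *arg) {
    for (;;) {
        sem_wait(&S1);           /* wait for process 2's notification */
        printf("1 ");
        sem_post(&S2);           /* notify process 2 */
    }
}

void *proc2(void *arg) {
    for (;;) {
        sem_wait(&S2);           /* wait for process 1's notification */
        printf("2 ");
        sem_post(&S1);           /* notify process 1 */
    }
}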
PROCESS SYNCHRONIZATION: Semaphores
DEADLOCKS:
• May occur when two or more processes try to acquire the same set of resources
at the same time, but in different orders (as in the example below).
P1: P2:
wait(S); wait(Q);
wait(Q); wait(S);
..... .....
signal(S); signal(Q);
signal(Q); signal(S);
• How can this be fixed? (One common fix, acquiring the semaphores in a single
agreed order, is sketched below.)
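A minimal sketch of that fix with POSIX semaphores (the function names p1/p2 are
illustrative, and both semaphores are assumed to be initialized to 1): impose a
global order, S before Q, so a circular wait cannot arise.

#include <semaphore.h>

static sem_t S, Q;              /* both initialized to 1 elsewhere */

void *p1(void *arg) {
    sem_wait(&S);               /* always S first ... */
    sem_wait(&Q);               /* ... then Q */
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&S);               /* same order as p1, so no circular wait */
    sem_wait(&Q);
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}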
PROCESS SYNCHRONIZATION: Some Interesting Problems
Problem # 1. THE BOUNDED BUFFER ( PRODUCER / CONSUMER ) PROBLEM:
This is the same producer / consumer problem as before. But now we'll do it with signals and
waits. Remember: a wait decreases its argument and a signal increases its argument.
BINARY_SEMAPHORE mutex = 1; // Can only be 0 or 1
COUNTING_SEMAPHORE empty = n; full = 0; // Can take on any integer value
producer:
do {
    /* produce an item in nextp */
    wait(empty);
    wait(mutex);               /* buffer guard */
    /* add nextp to buffer */
    signal(mutex);
    signal(full);
} while (TRUE);

consumer:
do {
    wait(full);
    wait(mutex);               /* buffer guard */
    /* remove an item from buffer to nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (TRUE);
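A compilable POSIX sketch of the same scheme (the int items, buffer size, and the
produce/consume stand-ins are illustrative assumptions; the semaphores would be
initialized elsewhere with sem_init(&empty, 0, N), sem_init(&full, 0, 0), and
sem_init(&mutex, 0, 1)):

#include <semaphore.h>

#define N 10
static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;   /* init: empty = N, full = 0, mutex = 1 */

void *producer(void *arg) {
    for (int i = 0; ; i++) {
        int nextp = i;             /* produce an item */
        sem_wait(&empty);          /* wait for a free slot */
        sem_wait(&mutex);          /* buffer guard */
        buffer[in] = nextp;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);           /* announce a filled slot */
    }
}

void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full);           /* wait for a filled slot */
        sem_wait(&mutex);          /* buffer guard */
        int nextc = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);          /* announce a free slot */
        (void)nextc;               /* consume the item */
    }
}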
PROCESS SYNCHRONIZATION: Some Interesting Problems
Problem # 2. THE READERS/WRITERS PROBLEM:
This is similar to the Producer/Consumer problem, except that we can now have many
concurrent readers and one exclusive writer.
Locks: are shared (for the readers) and exclusive (for the writer).
Two possible ( contradictory ) guidelines can be used:
• No reader is kept waiting unless a writer holds the lock (the readers have precedence).
• If a writer is waiting for access, no new reader gains access (writer has precedence).
( NOTE: starvation can occur on either of these rules if they are followed rigorously.)
PROCESS SYNCHRONIZATION: Some Interesting Problems
Problem # 2. THE READERS-WRITERS PROBLEM:

BINARY_SEMAPHORE wrt = 1;
BINARY_SEMAPHORE mutex = 1;
int readcount = 0;

Writer:
do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (TRUE);

Reader:
do {
    wait(mutex);                        /* allow 1 reader in entry */
    readcount = readcount + 1;
    if (readcount == 1) wait(wrt);      /* 1st reader locks writer */
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount = readcount - 1;
    if (readcount == 0) signal(wrt);    /* last reader frees writer */
    signal(mutex);
} while (TRUE);

Reminder:
    WAIT(S):   while (S <= 0); S = S - 1;
    SIGNAL(S): S = S + 1;
PROCESS SYNCHRONIZATION: Some Interesting Problems
Problem # 3. THE DINING PHILOSOPHERS PROBLEM:
5 philosophers with 5 chopsticks sit around a circular table. They each want to eat at
random times and must pick up the chopsticks on their right and on their left.
Clearly deadlock is rampant (and starvation is possible).
Several solutions are possible:
• Allow only 4 philosophers to be hungry at a time.
• Allow pickup only if both chopsticks are
available. ( Done in critical section )
• Odd # philosopher always picks up left chopstick
1st, even # philosopher always picks up right
chopstick 1st.
THE DINING PHILOSOPHERS PROBLEM
(Semaphore Based)
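The code shown as a figure on this slide is not reproduced here. As a stand-in, the
following is a minimal POSIX sketch of the usual semaphore formulation, one
semaphore per chopstick, using the odd/even pick-up order from the previous slide so
that a circular wait (deadlock) cannot form. Treating chopstick i as philosopher i's
left chopstick is an assumed convention, and each philosopher would run this
function in its own thread or process.

#include <semaphore.h>

#define N 5
static sem_t chopstick[N];          /* each initialized to 1 elsewhere */

void philosopher(int i) {
    int first  = (i % 2 == 1) ? i : (i + 1) % N;   /* odd: left (i) first */
    int second = (i % 2 == 1) ? (i + 1) % N : i;   /* even: right first */
    for (;;) {
        /* think */
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
}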
PROCESS SYNCHRONIZATION: Monitors
High-level synchronization construct that allows the safe sharing of an abstract
data type among concurrent processes.
monitor monitor-name
{
    shared variable declarations

    procedure body P1 (…) {
        ...
    }
    procedure body P2 (…) {
        ...
    }
    procedure body Pn (…) {
        ...
    }

    {
        initialization code
    }
}
PROCESS SYNCHRONIZATION: Monitors
• To allow a process to wait within the monitor, a condition variable
must be declared, as
condition x, y;
• A condition variable can only be used with the operations wait and
signal.
• The operation
x.wait();
means that the process invoking this operation is suspended
until another process invokes
x.signal();
• The x.signal operation resumes exactly one suspended
process. If no process is suspended, then the signal operation
has no effect.
PROCESS SYNCHRONIZATION: Monitors
[Figure: Schematic view of a monitor.]
PROCESS SYNCHRONIZATION: Monitors
[Figure: Monitor with condition variables.]
PROCESS SYNCHRONIZATION: Monitors (Dining Philosophers Example)

monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        // test left & right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != eating) &&
            (state[i] == hungry) &&
            (state[(i + 1) % 5] != eating)) {
            state[i] = eating;
            self[i].signal();
        }
    }

    initializationCode() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
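For context, each philosopher i would use the monitor like this (a sketch in the
same pseudocode style as the slide):

do {
    dp.pickup(i);     // blocks inside the monitor until it is safe to eat
    /* eat */
    dp.putdown(i);    // may wake a hungry neighbour via test()
    /* think */
} while (TRUE);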