Module III
Process Synchronization
Race Condition:
“A race condition occurs when multiple processes access and manipulate the same data concurrently, so that the final result depends on the particular order in which the accesses take place.”
Example:
Consider two processes P1 and P2 that share global variables b and c, with initial values b = 1 and c = 2. At some point in its execution P1 executes the statement b = b + c, and at some point in its execution P2 executes the statement c = b + c.
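With b = 1 and c = 2, the final values depend on which process's statement runs first. A small sketch of the two possible interleavings:

```python
# Order 1: P1's statement (b = b + c) executes first
b, c = 1, 2
b = b + c              # P1: b becomes 3
c = b + c              # P2: c becomes 3 + 2 = 5
order1 = (b, c)        # (3, 5)

# Order 2: P2's statement (c = b + c) executes first
b, c = 1, 2
c = b + c              # P2: c becomes 3
b = b + c              # P1: b becomes 1 + 3 = 4
order2 = (b, c)        # (4, 3)

print(order1, order2)  # (3, 5) (4, 3): the outcome depends on the order
```

Because the two orders give different final values, the processes are in a race condition.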
Critical Section:
“When more than one process tries to access the same code segment, that segment is known as the critical section.”
The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section: no two processes execute in their critical sections at the same time.
• In the above diagram, the entry section handles the entry into the critical section.
• It acquires the resources needed by the process for its execution.
• The exit section handles the exit from the critical section.
• It releases the resources and also informs the other processes that the critical section is free.
• When a process wants to enter a critical section, it must first request access to the semaphore or mutex associated with that critical section.
• If the resource is available, the process can proceed to execute the critical section.
• If the resource is not available, the process must wait until it is released by the process currently executing in the critical section.
• The remainder section is optional.
Example:
In the push and pop operations on a stack, access to the shared variable top is a critical section.
• It is called a critical section because, if push is accessing the top variable, the pop operation must not access top at the same time.
• Only one stack operation at a time may access the variable top.
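A minimal sketch of this idea in Python (the names push and pop are just illustrative): a lock around every access to top makes the two operations mutually exclusive.

```python
import threading

stack = [None] * 100
top = -1                  # shared variable: any access to it is the critical section
lock = threading.Lock()   # only one stack operation may hold it at a time

def push(item):
    global top
    with lock:            # entry section: acquire exclusive access to top
        top += 1
        stack[top] = item
                          # exit section: lock released on leaving the with-block

def pop():
    global top
    with lock:
        item = stack[top]
        top -= 1
        return item

push(10)
push(20)
print(pop(), pop())  # 20 10
```

With the lock in place, concurrent push and pop calls from different threads can never corrupt top.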
By: Prof. Gourambika D. Page 2
• That is, access to the critical section must be mutually exclusive, i.e. the critical section may contain only one process at a time.
Any solution to the critical-section problem must satisfy three requirements: mutual exclusion, progress, and bounded waiting. We have two solutions for the critical-section problem:
1. Peterson’s Solution:
The algorithm uses two main variables: turn and flag. turn is an integer variable that indicates whose turn it is to enter the critical section. flag is an array of Boolean variables; each element represents the intention of a process to enter the critical section.
Structure of Process Pi:
do
{
flag[i] = TRUE;
turn = j;
while (flag[j] == TRUE && turn == j);
/* critical section */
flag[i] = FALSE;
/* remainder section */
} while (TRUE);
1. Initially, both processes set their flag variables to indicate that they do not currently want to enter the critical section. The turn variable is initialized to the ID of one of the processes (0 or 1), indicating that it is initially that process's turn to enter the critical section.
2. When a process wants to enter, it sets its flag variable to indicate its intention to enter the critical section.
3. It then sets the turn variable to the other process's ID (turn = j), politely giving the other process priority. For example, Process 0 sets turn = 1, and Process 1 sets turn = 0; whichever assignment happens last decides who waits.
4. Both processes enter a loop where they will check the flag of the other process
and wait if necessary. The loop continues as long as the following two
conditions are met:
a) The other process has expressed its intention to enter the critical
section (flag[1 - processID] == true for process processID).
b) It is currently the turn of the other process (turn == 1 - processID).
If both conditions are satisfied, the process waits and yields the CPU
to the other process. This ensures that the other process has an
opportunity to enter the critical section.
5. Once a process successfully exits the waiting loop, then it can enter the critical
section. It can also access the shared resource without interference from the
other process. It can perform any necessary operations or modifications within
this section.
6. After completing its work in the critical section, the process resets its flag
variable. Resetting is required to indicate that this process no longer wants to
enter the critical section (flag[processID] = false). This step ensures that the
process can enter the waiting loop again correctly if needed.
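The steps above can be sketched as a runnable demonstration. This is only illustrative: CPython's global interpreter lock happens to give the sequentially consistent memory ordering Peterson's algorithm assumes, whereas on real hardware memory barriers would be needed.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the critical section
N = 5000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # step 2: announce intention
        turn = j                    # step 3: give the other process priority
        while flag[j] and turn == j:
            pass                    # step 4: busy-wait while the other may enter
        counter += 1                # step 5: critical section
        flag[i] = False             # step 6: exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000: every increment was mutually exclusive
```

Without the entry/exit protocol, two threads incrementing counter concurrently could lose updates; with it, the final count is exact.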
2. Synchronization Hardware:
• The solution to the critical-section problem requires a simple tool called a lock.
• Race conditions are prevented by locking the critical section.
• A process must acquire a lock before entering a critical section, and it releases the lock when it exits the critical section.
The solution to the critical-section problem using locks is shown below:
do
{
acquire lock
critical section
release lock
remainder section
} while (TRUE);
• Mutual exclusion can also be implemented with the hardware instruction test_and_set().
The swap() instruction operates on the contents of two words; it is defined as follows:
void swap(boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
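The test_and_set() instruction atomically returns the old value of a lock variable and sets it to TRUE. As a hedged sketch (Python has no such hardware instruction, so its atomicity is simulated here with an internal Lock), a spinlock built on it looks like this:

```python
import threading

class TestAndSet:
    """Simulates the atomic test_and_set hardware instruction."""
    def __init__(self):
        self._target = False
        self._guard = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        with self._guard:
            old = self._target   # atomically: read the old value...
            self._target = True  # ...and set the variable to TRUE
            return old

    def clear(self):
        self._target = False     # release the lock

lock = TestAndSet()
counter = 0

def worker():
    global counter
    for _ in range(5000):
        while lock.test_and_set():   # spin until the old value was FALSE
            pass
        counter += 1                 # critical section
        lock.clear()                 # exit section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000
```

The key property is that reading the old value and writing TRUE happen as one indivisible step, so only one spinner can ever see FALSE and enter.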
Semaphores
Semaphores are integer variables used to solve the critical-section problem by means of two atomic operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows:
Wait
The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, the process busy-waits until S becomes positive and only then decrements it.
wait(S)
{
while (S<=0);
S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores. Details about these are given as follows:
1. Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When resources are added the count is incremented, and when resources are removed the count is decremented.
2. Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation proceeds only when the semaphore is 1, and the signal operation sets it back to 1 when it is 0. It is sometimes easier to implement binary semaphores than counting semaphores.
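A counting semaphore in action can be sketched with Python's threading.Semaphore (the resource pool and timings here are made up): ten threads compete for three identical resources, and the semaphore guarantees that no more than three hold one at once.

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore: 3 identical resources
active = 0                      # how many threads currently hold a resource
peak = 0                        # highest value `active` ever reached
guard = threading.Lock()        # protects the two counters above

def use_resource():
    global active, peak
    pool.acquire()              # wait(): take one resource (blocks at 0)
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)            # simulate work with the resource
    with guard:
        active -= 1
    pool.release()              # signal(): return the resource

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(1 <= peak <= 3)  # True: concurrency never exceeded the count of 3
```

Initializing the same class with Semaphore(1) yields a binary semaphore, usable as a mutex.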
The Bounded-Buffer (Producer-Consumer) Problem:
• Producers produce items and consumers consume them, but both use one of a fixed number of buffer slots each time.
Structure of Producer
do
{
// produce an item in nextp
wait(empty);
wait(mutex);
// add nextp to buffer
signal(mutex);
signal(full);
} while (TRUE);
Structure of Consumer
do
{
wait(full);
wait(mutex);
// remove an item from buffer into nextc
signal(mutex);
signal(empty);
// consume the item in nextc
} while (TRUE);
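The two structures above translate directly into a runnable sketch (buffer size and item count are arbitrary choices): empty counts free slots, full counts filled slots, and mutex protects the buffer itself.

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # protects the buffer
consumed = []

def producer():
    for item in range(20):                # "produce an item in nextp"
        empty.acquire()                   # wait(empty)
        with mutex:                       # wait(mutex) ... signal(mutex)
            buffer.append(item)           # add nextp to buffer
        full.release()                    # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()                    # wait(full)
        with mutex:
            item = buffer.popleft()       # remove an item into nextc
        empty.release()                   # signal(empty)
        consumed.append(item)             # consume the item in nextc

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True
```

The producer blocks when all five slots are full, and the consumer blocks when the buffer is empty, so neither overruns the other.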
The Dining Philosophers Problem:
• K philosophers are seated around a circular table with one chopstick between each pair of philosophers.
• A philosopher may eat only if he can pick up the two chopsticks adjacent to him.
• A chopstick may be picked up by either of its adjacent philosophers, but not by both.
• One solution uses two kinds of semaphores: a mutex and a semaphore array for the philosophers.
• The mutex ensures that no two philosophers pick up or put down chopsticks at the same time.
• The array is used to control the behaviour of each philosopher.
• However, semaphore-based solutions can result in deadlock due to programming errors.
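A runnable sketch of the problem follows. Instead of the mutex-plus-array scheme described above, it uses a different standard remedy (resource ordering: always pick up the lower-numbered chopstick first), which breaks the circular wait and so cannot deadlock.

```python
import threading

N = 5                                           # K = 5 philosophers
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Resource ordering: acquire the lower-numbered chopstick first.
    # This breaks the circular-wait condition, so no deadlock can form.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(3):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1                   # eat

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(meals))  # 15: every philosopher ate 3 times and nobody deadlocked
```

Had every philosopher grabbed the left chopstick first, all five could hold one chopstick and wait forever for the other, which is exactly the deadlock the problem illustrates.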
Deadlocks
In a Resource Allocation Graph, if a process is using a resource, an arrow is drawn from the resource node to the process node. If a process is requesting a resource, an arrow is drawn from the process node to the resource node.
If there is a cycle in the Resource Allocation Graph and each resource in the
cycle provides only one instance, then the processes will be in deadlock. For
example, if process P1 holds resource R1, process P2 holds resource R2 and process
P1 is waiting for R2 and process P2 is waiting for R1, then process P1 and process
P2 will be in deadlock.
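That cycle test can be sketched in code. The graph below encodes the P1/P2 example from the text as a plain adjacency map (node names are illustrative), and a depth-first search looks for a cycle:

```python
# Single-instance Resource Allocation Graph for the example above:
# P1 requests R2, R2 is held by P2, P2 requests R1, R1 is held by P1.
edges = {
    "P1": ["R2"],
    "R2": ["P2"],
    "P2": ["R1"],
    "R1": ["P1"],
}

def has_cycle(graph):
    # DFS with three colors: WHITE = unvisited, GRAY = on the current
    # path, BLACK = fully explored. Reaching a GRAY node means a cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Same graph with the cycle broken: P2 no longer requests R1.
acyclic = {"P1": ["R2"], "R2": ["P2"], "P2": [], "R1": ["P1"]}

print(has_cycle(edges), has_cycle(acyclic))  # True False
```

Since every resource here has a single instance, the detected cycle is sufficient evidence of deadlock.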
1. Deadlock Prevention
In deadlock prevention the aim is to ensure that at least one of the necessary conditions for deadlock can never hold. This can be done as follows:
(i) Mutual Exclusion
We use a lock only for non-sharable resources; if a resource is sharable (like a read-only file), we do not use a lock. This ensures that in the case of a sharable resource, multiple processes can access it at the same time. Problem: this works only for sharable resources; for non-sharable resources such as a printer, we still have to use mutual exclusion.
(ii) Hold and Wait
To ensure that hold-and-wait never occurs in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources. Two protocols are possible:
• Provide all the resources a process requires for its execution before it starts executing. Problem: suppose a process requires three resources and we allocate all of them before it starts, but initially it needs only two and will want the third only after an hour. During that hour, other processes that want the third resource starve, even though it could have been allocated to them and their execution completed in that waiting time.
• Ensure that when a process requests any resources, it holds no other resources. Example: let there be three resources, a DVD drive, a file, and a printer. The process first requests the DVD drive and the file to copy data into the file (suppose this takes one hour); it then frees all its resources, and afterwards requests the file and the printer to print that file.
(iii) No Preemption
If a process is holding some resources and requests other resources that cannot be allocated to it immediately, then all the resources the process currently holds are preempted. After some time the process requests the old resources together with the other required resources in order to restart.
For example, process P1 holds resource R1 and requests R2, which is held by process P2. P1's resource R1 is then preempted, and after some time P1 tries to restart by requesting both R1 and R2.
Problem: this can cause the livelock problem.
Livelock: livelock is a situation where two or more processes continuously change their state in response to each other without making any real progress.
Example:
• Suppose there are two processes P1 and P2 and two resources R1 and R2.
• P1 has acquired R1 and needs R2, while P2 has acquired R2 and needs R1.
• Following the method above, both P1 and P2 detect that they cannot acquire the second resource, so each releases the resource it holds and then tries again.
• This cycle repeats continuously: P1 again acquires R1 and requests R2, and P2 again acquires R2 and requests R1. There is no overall progress, yet the processes keep changing state as their resources are preempted and then held again. This is the situation of livelock.
(iv) Circular Wait:
To remove circular wait from the system, we can impose an ordering on the resources that every process must follow when acquiring them.
Example: if there are processes P1 and P2 and resources R1 and R2, we can fix the acquisition order so that a process must first acquire resource R1 and only then resource R2. The process that has acquired R1 is allowed to acquire R2; the other process must wait until R1 is free.
These are the deadlock-prevention methods, but in practice only the fourth method is used, as removing each of the other three conditions has significant disadvantages.
2. Deadlock Avoidance:
During OS design, designers take the necessary precautions so that a deadlock can never occur in the system: each resource request is examined at run time and granted only if the resulting state remains safe.
Banker’s Algorithm
• The Banker’s Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then performs a safe-state check before deciding whether the allocation should be allowed to continue.
• The Banker’s Algorithm is a smart way for computer systems to manage how programs use resources, like memory or CPU time. It helps prevent situations where programs get stuck and cannot finish their tasks.
• This stuck condition is known as deadlock. By keeping track of what resources each program needs and what is available, the Banker’s Algorithm makes sure that programs only get what they need in a safe order. This helps computers run smoothly and efficiently, especially when many programs are running at the same time.
• We need the following data structures to implement it (for n processes and m resource types):
1. Available: a vector of length m indicating the number of available resources of each type.
2. Max: an n*m matrix defining the maximum demand of each process.
3. Allocation: an n*m matrix defining the number of resources of each type currently allocated to each process.
4. Need: an n*m matrix indicating the remaining resource need of each process (Need = Max - Allocation).
There are two categories of algorithms in banker’s algorithm.
1. Safety Algorithm
2. Resource request algorithm
1. Safety Algorithm
a) Step 1: Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
b) Step 2: Find an index i such that both
Finish[i] == false
Needi ≤ Work
If no such i exists, go to Step 4.
c) Step 3: Work = Work + Allocationi;
Finish[i] = true;
Go to Step 2.
d) Step 4: If Finish[i] == true for all i, then the system is in a safe state.
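The four steps above can be sketched directly in code. The numbers below are a made-up example (5 processes, 3 resource types), chosen only to illustrate the algorithm:

```python
# Banker's safety algorithm on an illustrative example
available = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
n, m = len(allocation), len(available)
# Need = Max - Allocation
need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
        for i in range(n)]

def is_safe():
    work = available[:]              # Step 1: Work = Available
    finish = [False] * n             #         Finish[i] = false
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):           # Step 2: find i with Finish[i] == false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):   # Step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence     # Step 4: safe iff all Finish[i] true

safe, seq = is_safe()
print(safe)  # True: a safe sequence such as [1, 3, 4, 0, 2] exists
```

Each time a process's Need fits within Work, the algorithm pretends the process runs to completion and reclaims its Allocation; if every process can finish this way, the state is safe.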
2. Resource-Request Algorithm
Assume that process Pi has requested some additional resources, Requesti. Then:
• Step 1: If Requesti ≤ Needi, go to Step 2; otherwise raise an error, since the process has exceeded its maximum claim.
• Step 2: If Requesti ≤ Available, go to Step 3; otherwise Pi must wait, since the resources are not available.
• Step 3: Pretend to allocate the requested resources:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
Then run the safety algorithm; if the resulting state is safe, the request is granted, otherwise the allocation is rolled back and Pi must wait.
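The request checks can be sketched in isolation (the vectors below are made-up values for a single process Pi; a full implementation would follow the pretend-allocation with the safety algorithm):

```python
available = [3, 3, 2]
allocation = [2, 0, 0]   # Pi's current allocation
need = [1, 2, 2]         # Pi's remaining need
request = [1, 0, 2]      # Pi's new request

def try_request(request, need, available, allocation):
    m = len(request)
    # Step 1: the request must not exceed the declared remaining need
    if any(request[j] > need[j] for j in range(m)):
        return "error: process exceeded its maximum claim"
    # Step 2: the request must not exceed what is currently available
    if any(request[j] > available[j] for j in range(m)):
        return "wait"
    # Step 3: pretend to allocate; a full implementation would now run
    # the safety algorithm and roll these changes back if unsafe
    for j in range(m):
        available[j] -= request[j]
        allocation[j] += request[j]
        need[j] -= request[j]
    return "granted (pending safety check)"

print(try_request(request, need, available, allocation))
```

Here the request passes both checks, so the state is tentatively updated and handed to the safety algorithm for the final decision.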
3. Deadlock Detection and Recovery:
If deadlocks do occur, the system must detect them (for example, by searching for a cycle in the resource-allocation graph) and then recover using one of the following strategies.
Recovery Strategies:
1. Process Termination:
• One way to recover from a deadlock is to terminate one or more of the
processes involved in the deadlock.
• By releasing the resources held by these terminated processes, the remaining
processes may be able to continue executing.
• However, this approach should be used cautiously, as terminating processes
could lead to loss of data or incomplete transactions.
2. Resource Preemption:
• Resources can be forcibly taken away from one or more processes and
allocated to the waiting processes.
• This approach can break the circular wait condition and allow the system to
proceed.
• However, resource preemption can be complex and needs careful
consideration to avoid disrupting the execution of processes.
3. Process Rollback:
• In situations where processes have checkpoints or states saved at various
intervals, a process can be rolled back to a previously saved state.
• This means that the process will release all the resources acquired after the
saved state, which can then be allocated to other waiting processes.
• Rollback, though, can be resource-intensive and may not be feasible for all
types of applications.