Module 3-2

Module III discusses process synchronization, focusing on race conditions and the critical section problem, which arises when multiple processes access shared resources. It outlines solutions such as Peterson's algorithm and synchronization hardware, emphasizing the importance of mutual exclusion, progress, and bounded waiting. Additionally, it covers semaphores and classical synchronization problems like the producer-consumer problem and the readers-writers problem.


Module-III, Process Synchronization

Module III
Process Synchronization
Race Condition:

“A race condition occurs when multiple processes access and manipulate the
same data concurrently, so that the final result depends on the particular order
in which the accesses take place.”

Example:

Consider two processes P1 and P2 that share the global variables b and c, with
initial values b = 1 and c = 2. At some point of execution P1 executes the statement
b = b + c, and at some other point P2 executes the statement c = b + c.

• Note that the processes update different variables.


• However, the final values of the two variables depend on the order in
which the two processes execute their statements.
• If P1 executes first then final values are b=3 and c=5.
• If P2 executes first then final values are b=4 and c=3.
• To protect against the race condition, we require that the processes be
synchronized.
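The dependence on execution order can be checked by running the two statements in both orders (a minimal sketch; the helper run() and the order lists are illustrative, not part of the original example):

```python
def run(order):
    """Execute P1's and P2's statements in the given order."""
    b, c = 1, 2                      # shared variables, initial values
    for proc in order:
        if proc == "P1":
            b = b + c                # P1's statement
        else:
            c = b + c                # P2's statement
    return b, c

print(run(["P1", "P2"]))             # (3, 5): P1 executes first
print(run(["P2", "P1"]))             # (4, 3): P2 executes first
```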

Critical Section Problem

“When more than one process tries to access the same code segment, that
segment is known as the critical section.”

The critical section contains shared variables or resources that need to be
synchronized to maintain the consistency of data variables.

The important feature of the system is that when one process is executing in
its critical section, no other process is allowed to execute in its critical section:
no two processes are ever in their critical sections at the same time.

The general structure of a typical process Pi consists of an entry section, the
critical section, an exit section, and a remainder section.

By: Prof. Gourambika D. Page 1


Module-III, Process Synchronization

• The entry section handles the entry into the critical section.
• It acquires the resources needed for execution by the process.
• The exit section handles the exit from the critical section.
• It releases the resources and also informs the other processes that the critical
section is free.
• When a process enters a critical section, it must first request access to the
semaphore or mutex associated with the critical section.
• If the resource is available, the process can proceed to execute the critical
section.
• If the resource is not available, the process must wait until it is released by the
process currently executing the critical section.
• Remainder section is optional.

Example:

In the push and pop operations on a stack, the code that accesses the shared
variable top is a critical section.

• It is called a critical section because while push is accessing the top variable,
the pop operation must not access it.
• Accessing the top variable is therefore the critical section.
• Only one stack operation at a time may access the variable top.

• This means access to the critical section must be mutually exclusive, i.e. the
critical section should contain only one process at a time.
We have two solutions for the critical section problem:

1. Software solution (Peterson’s Solution)


2. Hardware solution (Synchronization Hardware)

Any solution to the critical section needs to satisfy the following 3 requirements.

1. Mutual Exclusion: If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress : If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will
enter its critical section next, and this selection cannot be postponed
indefinitely.
3. Bounded Waiting: There exists a bound, or limit, on the number of times that
other processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is granted.

1. Peterson’s Solution:

The algorithm utilizes two main variables, which are a turn and a flag. turn is
an integer variable that indicates whose turn it is to enter the critical section. flag is
an array of Boolean variables. Each element in the array represents the intention of
a process to enter the critical section.

Structure of Process Pi :

do
{
    flag[i] = TRUE;
    turn = j;
    while (flag[j] == TRUE && turn == j);
        critical section
    flag[i] = FALSE;
        remainder section
} while (TRUE);

Let us look at the explanation of how Peterson's algorithm in OS works:

1. Firstly, both processes set their flag variables to indicate that they don't
currently want to enter the critical section. By default, the turn variable is set
to the ID of one of the processes, it can be 0 or 1. This will indicate that it is
initially the turn of that process to enter the critical section.
2. Both processes set their flag variable to indicate their intention to enter the
critical section.
3. When a process wants to enter, it sets the turn variable to the other
process's ID (turn = j), politely offering the other process the first chance
to enter the critical section. For example, Process 0 sets turn = 1.
4. Both processes enter a loop where they will check the flag of the other process
and wait if necessary. The loop continues as long as the following two
conditions are met:
a) The other process has expressed its intention to enter the critical
section (flag[1 - processID] == true).
b) It is currently the turn of the other process (turn == 1 - processID).
If both conditions are satisfied, the process waits and yields the CPU
to the other process. This ensures that the other process has an
opportunity to enter the critical section.
5. Once a process successfully exits the waiting loop, then it can enter the critical
section. It can also access the shared resource without interference from the
other process. It can perform any necessary operations or modifications within
this section.
6. After completing its work in the critical section, the process resets its flag
variable. Resetting is required to indicate that this process no longer wants to
enter the critical section (flag[processID] = false). This step ensures that the
process can enter the waiting loop again correctly if needed.
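The six steps above can be sketched with two Python threads (an illustrative simulation, not the original pseudocode; it relies on CPython executing these statements in a sequentially consistent order, which real hardware with a weaker memory model does not guarantee):

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is
counter = 0             # shared data protected by the algorithm
ITER = 200

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(ITER):
        flag[i] = True               # step 2: declare intent
        turn = j                     # step 3: yield priority to the other
        while flag[j] and turn == j:
            pass                     # step 4: busy-wait
        counter += 1                 # step 5: critical section
        flag[i] = False              # step 6: exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400: no increment was lost
```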

2. Synchronization Hardware:
• The solution to the critical section problem requires a simple tool called a
lock.
• Race conditions are prevented by locking the critical section.
• A process must acquire the lock before entering a critical section, and it
releases the lock when it exits the critical section.
The solution to the critical-section problem using locks is shown below.


do
{
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);

• Many modern computers have special instructions, such as TestAndSet() and
Swap(), to solve the critical-section problem.
• These are atomic instructions, also called machine instructions.
• Each executes in a single machine cycle, i.e. the whole instruction completes
without interruption.
• The TestAndSet () instruction can be defined as shown in below.

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
• This instruction is executed atomically.
• Thus, if two TestAndSet() instructions are executed simultaneously, they will
be executed sequentially in some arbitrary order.
• Mutual exclusion can be implemented with the help of a Boolean variable
lock, which is initialized to false.
The structure of process Pi is shown below.
do
{
    while (TestAndSet(&lock));
        critical section
    lock = FALSE;
        remainder section
} while (TRUE);
• The above structure shows how mutual exclusion is implemented with
TestAndSet().
The Swap() instruction operates on the contents of two words; it is defined as
below.
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

• The Swap() instruction is executed atomically.
• How mutual exclusion is implemented with Swap() is shown below.
• A global Boolean variable lock is declared and initialized to false; in addition,
each process has a local Boolean variable key. The structure of process Pi is
shown below.
do
{
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
        critical section
    lock = FALSE;
        remainder section
} while (TRUE);
• These algorithms satisfy the mutual exclusion requirement, but fail to satisfy
the bounded waiting requirement.
• Another algorithm is designed using TestAndSet() instruction that satisfies all
requirements of critical section.
• The common data structures are as follows and are initialized to false.
Boolean waiting[n];
Boolean lock;

Bounded waiting mutual exclusion with TestAndSet () is defined below.


do
{
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
        critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
        remainder section
} while (TRUE);

To prove that the mutual exclusion requirement is met:

• Process Pi can enter its critical section only if either waiting[i] == false or
key == false.
• The value of key can become false only if TestAndSet() is executed.
• The first process to execute TestAndSet() will find key == false; all other
processes must wait.
• The variable waiting[i] can become false only if another process leaves its
critical section.
• Only one waiting[i] is set to false at a time, maintaining the mutual exclusion
requirement.
To prove that the progress requirement is met:
• All the arguments used for mutual exclusion also apply here.
• A process exiting the critical section either sets lock to false or sets
waiting[j] to false; both allow a waiting process to enter its critical section.
To prove that the bounded waiting requirement is met:
• When a process leaves the critical section, it scans the waiting array in cyclic
order (i+1, i+2, ..., n-1, 0, ..., i-1).
• It designates the first process in this order that is in its entry section
(waiting[j] == true) as the next process to enter the critical section.
• Any process waiting to enter its critical section will therefore do so within
(n-1) turns.

Semaphores
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for process
synchronization.
The definitions of wait and signal are as follows


Wait
The wait operation decrements the value of its argument S if it is positive.
If S is zero or negative, the process busy-waits until S becomes positive.
wait(S)
{
    while (S <= 0);
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −
1. Counting Semaphores
These are integer value semaphores and have an unrestricted value
domain. These semaphores are used to coordinate the resource access, where
the semaphore count is the number of available resources. If the resources
are added, semaphore count automatically incremented and if the resources
are removed, the count is decremented.
2. Binary Semaphores
The binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the semaphore
value is 1 (setting it to 0), and the signal operation sets the value back to 1.
It is sometimes easier to implement binary semaphores than counting
semaphores.
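Both flavors are available in Python's threading module, where acquire() plays the role of wait() and release() the role of signal(). A small deterministic sketch:

```python
import threading

# counting semaphore: a pool of 2 identical resources
pool = threading.Semaphore(2)
pool.acquire()                       # wait(): count 2 -> 1
pool.acquire()                       # wait(): count 1 -> 0
print(pool.acquire(blocking=False))  # False: pool exhausted, would block
pool.release()                       # signal(): count 0 -> 1
print(pool.acquire(blocking=False))  # True: a resource is free again

# binary semaphore: value restricted to 0 and 1
mutex = threading.BoundedSemaphore(1)
mutex.acquire()                      # wait()
mutex.release()                      # signal()
```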

Classical Problems of Synchronization:


Bounded Buffer Problem or Solution for producer consumer problem using
Semaphore:
Assume that there are N buffers where each buffer can hold one item.
• The Bounded Buffer problem is also called the producer-consumer problem,
of which it is the general form.
• The solution to this problem is to create two counting semaphores, “full” and
“empty”, to keep track of the current number of full and empty buffers
respectively (the code below also uses a binary semaphore “mutex” to protect
the buffer itself).
• Producers produce items and consumers consume them, but both use one of
the buffers at a time.

In computing, the producer–consumer problem (also known as the bounded-
buffer problem) is a classic example of a multi-process synchronization problem.
The problem describes two processes, the producer and the consumer, which share
a common, fixed-size buffer used as a queue.
• The producer’s job is to generate data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e. removing it from
the buffer), one piece at a time.
Problem : To make sure that the producer won’t try to add data into the buffer if it’s
full and that the consumer won’t try to remove data from an empty buffer.
Solution : The producer is to either go to sleep or discard data if the buffer is full.
The next time the consumer removes an item from the buffer, it notifies the producer,
who starts to fill the buffer again. In the same way, the consumer can go to sleep if
it finds the buffer to be empty. The next time the producer puts data into the buffer,
it wakes up the sleeping consumer.
An inadequate solution could result in a deadlock where both processes are
waiting to be awakened.

Structure of Producer
do
{
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add nextp to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);

Structure of Consumer
do
{
    wait(full);
    wait(mutex);
    // remove an item from the buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
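The two loops above map directly onto Python semaphores (a sketch with a single producer and consumer; the buffer size N = 5 and the item count 20 are illustrative):

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots, initially N
full = threading.Semaphore(0)    # counts full slots, initially 0
mutex = threading.Lock()         # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()           # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait(full)
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # signal(empty)

consumed = []
p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20, consumed))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: all items arrive, in order
```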

The Readers Writers Problem:

The Readers-Writers Problem is a classic synchronization issue in operating
systems that involves managing access to shared data by multiple threads or
processes. The problem addresses the scenario where:
• Readers: Multiple readers can access the shared data simultaneously without
causing any issues because they are only reading and not modifying the data.
• Writers: Only one writer can access the shared data at a time to ensure data
integrity, as writers modify the data, and concurrent modifications could lead
to data corruption or inconsistencies.
Challenges of the Reader-Writer Problem
The challenge now becomes how to create a synchronization scheme such that the
following is supported:
• Multiple Readers: A number of readers may access simultaneously if no
writer is presently writing.
• Exclusion for Writers: If one writer is writing, no other reader or writer may
access the common resource.
Solution of the Reader-Writer Problem
There are two fundamental solutions to the Readers-Writers problem:
• Readers Preference: Readers are given preference over writers: as long as
readers are reading, writers have to wait. A writer can access the resource
only when no reader is accessing it.
• Writers Preference: Preference is given to the writers: once a writer arrives,
it proceeds as soon as the current readers finish, even if other readers are
waiting to access the resource.
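The readers-preference solution is commonly implemented with a read counter and two locks (a sketch; the names read_count, resource, and shared are illustrative, not from the original notes):

```python
import threading

read_count = 0
read_count_lock = threading.Lock()   # protects read_count
resource = threading.Lock()          # held exclusively during writes
shared = {"value": 0}

def read():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()       # first reader locks writers out
    value = shared["value"]          # many readers may be here at once
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()       # last reader lets writers in
    return value

def write(v):
    with resource:                   # writers are exclusive
        shared["value"] = v

write(42)
print(read())  # 42
```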


Dining Philosopher Problem

• The Dining Philosopher Problem states that K philosophers are seated around
a circular table with one chopstick placed between each pair of adjacent
philosophers.
• A philosopher may eat only if he can pick up the two chopsticks adjacent to
him.
• A chopstick may be picked up by either of its adjacent philosophers, but not
by both at once.

The behavior of each philosopher is as follows:

• There are three states of the philosopher: THINKING, HUNGRY, and
EATING.


• Here there are two semaphores: a mutex and a semaphore array for the
philosophers.
• The mutex ensures that no two philosophers execute the pickup or putdown
operations at the same time.
• The semaphore array is used to control the behavior of each philosopher.
• However, semaphore-based solutions can result in deadlock due to
programming errors.
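A deadlock-free sketch in Python: instead of the monitor-style semaphore array described above, it uses one lock per chopstick and acquires the two chopsticks in a fixed global order, which breaks the circular wait (an alternative, commonly used fix, labeled as such, not the exact solution above):

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i, rounds):
    left, right = i, (i + 1) % K
    # always pick up the lower-numbered chopstick first: no circular wait
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1        # EATING
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i, 100))
           for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()
print(all(m == 100 for m in meals))  # True: everyone ate, no deadlock
```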

Deadlocks

Introduction of Deadlock in Operating System


A deadlock is a situation where a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired by some
other process. This section discusses deadlock, its necessary conditions, and the
methods for handling it.
What is Deadlock?
Deadlock is a situation in computing where two or more processes are unable
to proceed because each is waiting for the other to release resources. Key concepts
include mutual exclusion, resource holding, circular wait, and no pre-emption.
Consider an example where two trains are coming toward each other on the
same track, and there is only one track: neither train can move once they are in
front of each other. This is a practical example of deadlock.
How Does Deadlock occur in the Operating System?
Before going into detail about how deadlock occurs in the Operating System,
let’s first discuss how the Operating System uses the resources present. A process in
an operating system uses resources in the following way.
• Requests a resource
• Use the resource
• Releases the resource
A deadlock arises in an operating system when two or more processes each
hold some resources and wait for resources held by the other(s). For example,
suppose Process 1 is holding Resource 1 and waiting for Resource 2, which is
acquired by Process 2, while Process 2 is waiting for Resource 1.


Necessary Conditions for Deadlock in OS


Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
• Mutual Exclusion: Two or more resources are non-shareable (Only one
process can use at a time).
• Hold and Wait: A process is holding at least one resource and waiting for
resources.
• No Preemption: A resource cannot be taken from a process unless the
process releases the resource.
• Circular Wait: A set of processes waiting for each other in circular form.

Resource Allocation Graph (RAG)


• A resource allocation graph shows which resource is held by which process
and which process is waiting for a resource of a specific kind.
• It is a simple and straightforward tool to outline how interacting processes
can deadlock.
• A resource allocation graph therefore describes the condition of the system
as far as processes and resources are concerned: how many resources are
allocated and what each process has requested.
• Everything can be represented in terms of a graph.

Types of Vertices in RAG


A RAG contains vertices and edges. In a RAG, vertices are of two types:
1. Process Vertex: Every process is represented as a process vertex, generally
drawn as a circle.
2. Resource Vertex: Every resource is represented as a resource vertex. It is of
two types:
• Single-instance resource type: represented as a box containing one dot;
the number of dots indicates how many instances of that resource type
are present.


• Multi-instance resource type: also represented as a box, containing
several dots.

How many Types of Edges are there in RAG?


There are two types of edges in a RAG:
• Assign Edge: an edge indicating that a resource is already assigned to a
process.
• Request Edge: an edge indicating that a process has requested, and is
waiting for, a resource it needs to complete its execution.

So, if a process is using a resource, an arrow is drawn from the resource node
to the process node. If a process is requesting a resource, an arrow is drawn from the
process node to the resource node.


Example 1 (Single instances RAG)

If there is a cycle in the Resource Allocation Graph and each resource in the
cycle provides only one instance, then the processes will be in deadlock. For
example, if process P1 holds resource R1, process P2 holds resource R2 and process
P1 is waiting for R2 and process P2 is waiting for R1, then process P1 and process
P2 will be in deadlock.

Here is another example: processes P1 and P2 acquire resources R1 and R2
while process P3 waits to acquire both resources. In this example, there is no
deadlock because there is no circular dependency. So, for single-instance resource
types, a cycle in the graph is a sufficient condition for deadlock.

Example 2 (Multi-instance RAG)

With multiple instances per resource type, a cycle in the graph is a necessary
but not a sufficient condition for deadlock.

Methods For Handling Deadlock

There are four ways to handle deadlock:
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Deadlock Recovery

1. Deadlock Prevention
In deadlock prevention the aim is to ensure that at least one of the four
necessary conditions for deadlock can never hold. This can be done as follows:
(i) Mutual Exclusion
We use a lock only for non-shareable resources; if a resource is shareable
(like a read-only file), no lock is used, so multiple processes can access it at the
same time. Problem: this works only for shareable resources; for non-shareable
resources such as a printer, mutual exclusion cannot be avoided.
(ii) Hold and Wait
To ensure that hold-and-wait never occurs in the system, we must guarantee
that whenever a process requests a resource, it does not hold any other resources.
• We can provide all the resources a process requires before it starts executing.
Problem: if a process requires three resources and all are allocated up front,
it may initially use only two and need the third only an hour later; in the
meantime that resource sits idle, starving other processes that could have
used it and completed their execution.
• Alternatively, we can ensure that when a process requests any resource, it
holds no other resources. Example: let there be three resources, DVD, File
and Printer. The process first requests the DVD and File to copy data into
the file (suppose this takes one hour), then releases both resources, and only
afterwards requests the File and Printer to print that file.
(iii) No Preemption
If a process holding some resources requests other resources that are already
acquired and not immediately available, then the resources the process is currently
holding are preempted. After some time the process requests the old resources
again, together with the other required resources, and restarts.
For example: process P1 holds resource R1 and requests R2, which is held by
process P2. P1 is preempted from R1 and, after some time, tries to restart by
requesting both R1 and R2.
Problem: this can cause the livelock problem.
Livelock: a livelock is a situation where two or more processes continuously
change their state in response to each other without making any real progress.
Example:
• Suppose there are two processes P1 and P2 and two resources R1 and R2.
• P1 acquires R1 and needs R2, while P2 acquires R2 and needs R1.
• Following the method above, both P1 and P2 detect that they cannot acquire
the second resource, so each releases the resource it holds and tries again.
• The cycle repeats: P1 again acquires R1 and requests R2, and P2 again
acquires R2 and requests R1. There is no overall progress, yet the processes
keep changing state as they release resources and acquire them again. This
is a livelock.
(iv) Circular Wait:
To remove circular wait from the system, we can impose an ordering on the
resources in which processes must acquire them.
Example: if there are processes P1 and P2 and resources R1 and R2, we can fix the
acquisition order so that a process must acquire R1 before R2. The process that has
acquired R1 is then allowed to acquire R2; the other process must wait until R1 is
free.
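This resource-ordering rule can be expressed directly in code (a sketch; the ORDER list and the with_resources helper are illustrative names, not part of the notes):

```python
import threading

r1 = threading.Lock()
r2 = threading.Lock()
ORDER = [r1, r2]                     # fixed global acquisition order

def with_resources(needed, work):
    """Acquire the needed locks in the global order, run work, release."""
    ordered = sorted(needed, key=ORDER.index)
    for lk in ordered:
        lk.acquire()
    try:
        return work()
    finally:
        for lk in reversed(ordered):
            lk.release()

results = []
# both tasks need both resources, listed in opposite orders:
t1 = threading.Thread(target=with_resources,
                      args=([r1, r2], lambda: results.append("t1")))
t2 = threading.Thread(target=with_resources,
                      args=([r2, r1], lambda: results.append("t2")))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']: both complete, no deadlock
```

Because every task sorts its lock set into the same global order before acquiring, no circular wait can form regardless of how the tasks list their resources.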


These are the deadlock prevention methods, but in practice only the fourth is
used, as removing the other three conditions has significant disadvantages.

2. Deadlock Avoidance:
In deadlock avoidance, the OS takes the necessary precautions at resource
allocation time so that a deadlock can never occur in the system.

There are two approaches to avoid deadlock


1. Resource allocation graph
2. Banker’s algorithm

1. Resource allocation graph:

Refer to the explanation of the resource allocation graph above.
• Here we use three different edges: the request edge, the assignment edge,
and the claim edge.
Claim Edge: “Specifies that the process may request the resource in the
future. It is represented with a dashed arrow.”

2. Banker’s algorithm
• The Banker’s Algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating allocation of the predetermined
maximum possible amounts of all resources, then performs a “safe-state”
check before deciding whether the allocation should be allowed to continue.


• The Banker’s Algorithm is a smart way for computer systems to manage
how programs use resources, like memory or CPU time. It helps prevent
situations where programs get stuck and cannot finish their tasks; this
condition is known as deadlock.
• By keeping track of what resources each program needs and what is
available, the Banker’s Algorithm makes sure that programs only get what
they need in a safe order. This helps computers run smoothly and
efficiently, especially when many programs are running at the same time.
• We need the following data structures to implement it:
1. Available: a vector of length m indicating the number of available
resources of each type.
2. Max: an n×m matrix defining the maximum demand of each
process.
3. Allocation: an n×m matrix defining the number of resources of
each type currently allocated to each process.
4. Need: an n×m matrix indicating the remaining resource need of
each process (Need = Max − Allocation).
There are two categories of algorithms in banker’s algorithm.
1. Safety Algorithm
2. Resource request algorithm

1. Safety Algorithm
a) Step 1: Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
b) Step 2: Find an index i such that both
Finish[i] == false
Need_i ≤ Work
If no such i exists, go to Step 4.
c) Step 3: Work = Work + Allocation_i
Finish[i] = true
Go to Step 2.
d) Step 4: If Finish[i] == true for all i, then the system is in a safe state.
When a process Pi requests additional resources, the request is first checked
against Need_i and Available (see the resource request algorithm below), and the
allocation is then made tentatively:
Allocation_i = Allocation_i + Request_i
Available = Available − Request_i
Need_i = Need_i − Request_i


2. Resource Request Algorithm

Let Request_i be the request vector for process Pi: if Request_i[j] = k, then
process Pi wants k instances of resource type Rj.
When process Pi makes a request for resources, the following actions are taken:
• If Request_i ≤ Need_i, go to step 2; otherwise raise an error condition, since
the process has exceeded its maximum claim.
• If Request_i ≤ Available, go to step 3; otherwise Pi must wait, since the
resources are not available.
• The system pretends to allocate the requested resources to Pi by modifying
the state as follows:
Available = Available − Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i − Request_i
• If the resulting state is safe, the resources are allocated to Pi; if it is unsafe,
Pi must wait for Request_i and the old resource-allocation state is restored.
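The safety check can be sketched in Python (the matrices below are the standard five-process textbook example, used here only as illustrative test data, not taken from these notes):

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm: return (safe?, a safe completion order)."""
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)               # Step 1
    finish = [False] * n
    order = []
    progressed = True
    while progressed:                    # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order            # Step 4

available = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_demand, allocation)
print(safe, order)  # True [1, 3, 4, 0, 2]
```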

Refer the class notes for more details.

3. Deadlock Detection:

• After allocating resources to processes, the OS uses an algorithm to check
whether a deadlock has occurred.
• To detect deadlock we have two approaches:

1. Single instance of each resource type:

• If all resource types in the graph contain single instances, then we use
a variant of the resource allocation graph called the wait-for graph.
• A wait-for graph is a simplified version of the resource allocation
graph that shows only processes: an edge from Pi to Pj means that
Pi is waiting for a resource held by Pj. If there is a cycle in this
graph, a deadlock has occurred.
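Cycle detection in a wait-for graph is a simple depth-first search (a sketch; the process names in the examples are illustrative):

```python
def has_cycle(wait_for):
    """wait_for maps each process to the set of processes it waits on."""
    visited, stack = set(), set()

    def dfs(p):
        visited.add(p)
        stack.add(p)                 # processes on the current DFS path
        for q in wait_for.get(p, ()):
            if q in stack or (q not in visited and dfs(q)):
                return True          # back edge: a cycle, hence deadlock
        stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

print(has_cycle({"P1": {"P2"}, "P2": {"P1"}}))  # True: mutual wait
print(has_cycle({"P1": {"P2"}, "P2": set()}))   # False: no cycle
```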


(Figure: a resource allocation graph and its corresponding wait-for graph)

2. Several instances of a resource type:

• The wait-for graph scheme is not suitable for resource types with
multiple instances.
• Here we use a deadlock detection algorithm which is similar to the
Banker's algorithm, with a small variation.
• We need the following data structures to implement the deadlock
detection algorithm:
a. Available
b. Allocation
c. Request
Algorithm for detecting deadlock:
a) Step 1: Initialize Work = Available; for i = 0, 1, ..., n-1, if
Allocation_i ≠ 0 then Finish[i] = false, otherwise Finish[i] = true.
b) Step 2: Find an index i such that both
a. Finish[i] == false
b. Request_i ≤ Work
If no such i exists, go to Step 4.
c) Step 3: Work = Work + Allocation_i
Finish[i] = true
Go to Step 2.
d) Step 4: If Finish[i] == false for some i, 0 ≤ i < n, then the system is in
a deadlock state. Moreover, if Finish[i] == false then process Pi is
deadlocked.
Refer to the class notes for an example.
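The steps above can be sketched in Python (the test data is illustrative: two processes each hold one instance of a single resource type and request the other's):

```python
def detect_deadlock(available, allocation, request):
    """Return the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)                          # Step 1
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                               # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j]
                                     for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]   # Step 4

# P0 and P1 each hold one instance and request the other's: deadlock
print(detect_deadlock([0], [[1], [1]], [[1], [1]]))  # [0, 1]
# P1's request can be satisfied once P0 (which requests nothing) finishes
print(detect_deadlock([1], [[1], [0]], [[0], [1]]))  # []
```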


4. Recovery from Deadlock:


• Deadlock recovery is a process in computer science and operating
systems that aims to resolve or mitigate the effects of a deadlock after
it has been detected.
• Deadlocks are situations where multiple processes are stuck and unable
to proceed because each process is waiting for a resource held by
another process.
• Recovery strategies are designed to break this deadlock and allow the
system to continue functioning.

Recovery Strategies:
1. Process Termination:
• One way to recover from a deadlock is to terminate one or more of the
processes involved in the deadlock.
• By releasing the resources held by these terminated processes, the remaining
processes may be able to continue executing.
• However, this approach should be used cautiously, as terminating processes
could lead to loss of data or incomplete transactions.

2. Resource Preemption:
• Resources can be forcibly taken away from one or more processes and
allocated to the waiting processes.
• This approach can break the circular wait condition and allow the system to
proceed.
• However, resource preemption can be complex and needs careful
consideration to avoid disrupting the execution of processes.

3. Process Rollback:
• In situations where processes have checkpoints or states saved at various
intervals, a process can be rolled back to a previously saved state.
• This means that the process will release all the resources acquired after the
saved state, which can then be allocated to other waiting processes.
• Rollback, though, can be resource-intensive and may not be feasible for all
types of applications.

4. Wait die and Wound wait schemes:


• These timestamp-based schemes can also be used for recovery.


• Older processes can preempt resources from younger processes (Wound-


Wait), or younger processes can be terminated if they try to access resources
held by older processes (Wait-Die).

5. Kill the Deadlock:


• In some cases, it might be possible to identify a specific process that is causing
the deadlock and terminate it.
• This is typically a last resort option, as it directly terminates a process without
any complex recovery mechanisms.
