
OPERATING SYSTEMS

SYNCHRONIZATION AND DEADLOCK


4
1. INTER PROCESS COMMUNICATION

Processes can be divided into two types, which are as follows:

● Independent processes
● Cooperating processes
An independent process is one that is not affected by the execution of other processes, whereas a cooperating process is one that can be affected by other executing processes. One might expect that processes running independently execute most efficiently, but in practice there are many situations where the cooperative nature of processes can be exploited to increase computational speed, modularity, and convenience.
Inter-process communication (IPC) is a mechanism that provides communication and synchronization between the actions of different processes. The communication between these processes can be seen as cooperation between them.
There are various reasons for providing an environment that allows co-operation between processes:
● Information sharing: Concurrent access to a particular piece of information should be allowed, since several users may want to access the same piece of information at the same time (for example, a shared file).
● Computation speedup: If we want a particular task to complete faster, we can divide it into sub-tasks, each of which executes in parallel with the others. Note that such a speedup can only be achieved when the computer has multiple processing elements, such as CPUs or I/O channels.
● Modularity: Modularity means building a system by dividing its functions into separate processes and threads.
● Convenience: Even a single user may work on many tasks at a time on a single system. For example, a user may be editing, formatting, compiling, and printing in parallel.
There are primarily two IPC models:
1. shared memory and
2. message passing.
1.1. Shared Memory System
In the shared-memory model, a region of memory is established which is shared by the
cooperating processes. The processes can then exchange information by reading and
writing data in the shared region. The shared-memory region resides in the address
space of the process that creates the shared-memory segment. Other processes that
want to communicate using this segment must attach it to their own address space.
1.2. Message Passing System-
The message passing model allows different processes to read and write data to a
message queue without being directly connected to each other. In a message passing
system, messages are stored in the queue until the recipient retrieves them. Message
queues are very useful for interprocess communication (IPC) and are used by many
operating systems.
Advantages of the Message Passing Model:
● The message passing model is easier to implement than the shared memory model.
● It is easier to build parallel hardware around message passing, as the model can
tolerate higher communication latencies.
2. PROCESS SYNCHRONISATION

When two or more processes cooperate with each other, the order of execution of the
processes must be preserved; otherwise conflicts can arise during execution and
produce incorrect outputs.
The procedure involved in preserving the order of execution of cooperating processes
is called Process Synchronization. Various synchronization mechanisms are used for
process synchronization.
Need of Synchronization-
● When two or more processes execute concurrently while sharing some system resources,
we need proper synchronization between their actions so that all of the processes
execute properly.
● Process synchronization avoids inconsistent output.
Critical Section-
A critical section is a piece of code in a program where a process accesses shared
resources during its execution.
Race Condition-
A race condition is a situation in which two or more processes run concurrently using
some shared resource, such that-
● The final output depends entirely on the order in which the processes' instructions
execute.
● The processes effectively compete with each other for the resource.
Critical Section Problem-
● If two or more processes access the critical section concurrently, the resulting
output may be inconsistent. This inconsistency is referred to as the Critical Section
Problem.
2.1. Synchronization Mechanisms-
A synchronization mechanism allows various processes to access critical section code in a
synchronized way, to avoid inconsistent outputs.
For every critical section in the program,
● an entry section is added by the synchronization mechanism before the critical section
● an exit section is added by the synchronization mechanism after the critical section
Entry Section-

● It behaves as an entry gate for a process before it enters the critical section.

● It ensures that at any time only one process is present inside the critical section.

● It prevents any other process from entering the critical section if one process is
already inside it.

Exit Section-

● It acts as an exit gate for a process when it leaves the critical section.

● When a process exits the critical section, the necessary state changes are made so
that another process can enter the critical section.

Criteria For Synchronization Mechanisms-

The following criteria should be satisfied by any synchronization mechanism

proposed to manage the critical section problem-

1. Mutual Exclusion

2. Progress

3. Bounded Wait

1. Mutual Exclusion-

The mechanism must ensure that-

● All processes access the critical section in a mutually exclusive manner.

● At any time, only one process is present inside the critical section.

● No other process may enter the critical section while one process is already

inside it; other processes can access the critical section only after the first

process completes its task.


2. Progress-

The mechanism must ensure that-

● The decision of which process enters the critical section must not depend on a
process that is not trying to enter it.

● If no other process is inside the critical section, a process that wants to enter
can do so freely.

● A process enters the critical section only if it wants to.

● No process should be forced to enter the critical section if it does not want
to enter.

3. Bounded Wait-

The mechanism must ensure that-

● The waiting time of a process to enter the critical section is bounded.

● A process gets to enter the critical section before its bounded waiting time expires.

3. SEMAPHORES IN OS

● A semaphore is essentially a simple integer variable, accessed only through atomic
operations.

● Semaphores are used to provide synchronization between multiple processes executing

concurrently in a system.

Types of Semaphores-

Semaphores are mainly of two types-

1. Counting Semaphores

2. Binary Semaphores or Mutexes

3.1. Counting Semaphore-

● A counting semaphore has two elements-

⮚ An integer value

⮚ An associated waiting list (usually a queue)

● The value of a counting semaphore may be positive or negative.

⮚ A positive value indicates the number of processes that can still enter the
critical section (i.e., the number of available resource instances).

⮚ A negative value indicates the number of processes that are blocked in the
waiting list (queue).

● Operations on Semaphore

⮚ Wait (also referred as: Down operation, P operation).

⮚ Signal (also referred to as: Up operation, V operation, or Release operation).


Example- A counting semaphore is initialized to 14, and then 8 P (wait) and 3 V (signal)

operations are performed on it. What is the resulting value of the semaphore after the

operations?

Solution-

S = 14 (initial)

After 8 P (wait) operations:

S = S - 8 = 14 - 8 = 6

After 3 V (signal) operations:

S = S + 3 = 6 + 3 = 9

Hence, the final value of the counting semaphore is 9.
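The arithmetic above can be sketched as a small helper function (a hypothetical illustration, not part of the original text):

```python
def semaphore_value(initial, p_ops, v_ops):
    """Final value of a counting semaphore after p_ops P (wait)
    operations and v_ops V (signal) operations."""
    # each P decrements the value, each V increments it
    return initial - p_ops + v_ops

# the worked example: S = 14, then 8 P and 3 V operations
print(semaphore_value(14, 8, 3))  # 9
```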

3.2. Binary Semaphores or Mutexes-

A binary semaphore has two elements-

⮚ An integer value that can be either 0 or 1

⮚ An associated waiting list (usually a queue)

● The waiting list (queue) of a binary semaphore holds the processes that were

blocked while trying to enter the critical section.

● When a process tries to enter the critical section, the wait operation is
executed. Two cases are possible-

Case-01: Binary Semaphore Value = 1

If the binary semaphore value is 1,

● The process is allowed to enter the critical section, and the semaphore value is

set to 0.

Case-02: Binary Semaphore Value = 0

If the binary semaphore value is 0,

● The process is not allowed to enter the critical section; it is blocked and placed
in the waiting queue.

When a process exits the critical section, the signal operation is

executed. Two cases are possible-

Case-01: Waiting List is Empty-

● If the waiting (blocked) queue is empty, then the value of the binary
semaphore is set to 1.
Case-02: Waiting List is Not Empty-

● If the waiting list/queue is not empty, then a process is selected from the waiting

list and allowed to proceed.

● Main purposes behind using binary semaphores-

⮚ To ensure mutual exclusion.

⮚ To enforce an order of execution among processes.

Example- Let S be a binary semaphore variable, initialized to S = 1. Assume there are

no blocked processes in the system. If the following sequence of operations is

performed, how many processes are blocked in the system at the end of the

operations?

7V, 5P, 14V, 10P, 21V, 15P

A. 0

B. 14

C. 15

D. 12

Answer- B

Solution-

Initially, S = 1

Operation- 7V

After 7 up operations, S remains the same, i.e., S = 1 (a V on a binary semaphore

whose value is already 1 has no effect).

Operation- 5P

The 1st down operation is successful (S = 0); no further successful down operations

are possible. For the remaining 4 P operations, four processes, say P1, P2, P3 and P4,

are inserted into the blocked queue.

Operation- 14V

The first 4 up operations take P1, P2, P3 and P4 out of the blocked queue. The 5th V

operation sets S = 1, and the remaining V operations leave S unchanged at 1.

Operation- 10P

The 1st down operation is successful (S = 0); no further successful down operations

are possible. For the remaining 9 P operations, nine processes, say Q1, Q2, ..., Q9,

are inserted into the blocked queue.

Operation- 21V

The first 9 up operations take Q1, Q2, ..., Q9 out of the blocked queue. The 10th up

operation sets S = 1, and the remaining V operations leave S unchanged at 1.

At this point, no process is in the blocked state.

Operation- 15P

The 1st down operation is successful (S = 0); no further successful down operations

are possible. For the remaining 14 P operations, 14 processes are inserted into the

blocked queue.

Hence, after executing all the operations, a total of 14 processes are in the blocked queue.
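The trace above can be checked with a short simulation; the modelling choices (a V on an already-1 binary semaphore is a no-op, and a V with blocked processes wakes one process without raising the value) follow the solution text:

```python
def run_ops(ops, s=1):
    """Simulate P/V operations on a binary semaphore.
    Returns (final value, number of blocked processes)."""
    blocked = 0
    for op, count in ops:
        for _ in range(count):
            if op == 'P':
                if s == 1:
                    s = 0            # successful down operation
                else:
                    blocked += 1     # process joins the blocked queue
            else:                    # 'V'
                if blocked > 0:
                    blocked -= 1     # wake one blocked process (s stays 0)
                else:
                    s = 1            # no effect if s is already 1
    return s, blocked

value, blocked = run_ops([('V', 7), ('P', 5), ('V', 14),
                          ('P', 10), ('V', 21), ('P', 15)])
print(blocked)  # 14
```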

4. CLASSICAL SYNCHRONIZATION PROBLEMS

4.1. Bounded Buffer Problem


The bounded buffer problem, also known as the producer-consumer problem, is a classic
problem in synchronization.
First, we need to understand the problem itself; then we will proceed to the program
and its solution.
We have a buffer with n slots, and each slot can store one unit of data. Two
processes, namely the producer and the consumer, are running and operating on the buffer.
The producer tries to fill data into an empty slot of the buffer, and the consumer
tries to remove data from a filled slot. The expected output may not be produced
because of concurrent access to the shared resource, i.e., the buffer.
We need a method that lets the producer and the consumer work independently while
accessing the same resource.
Solution for Bounded Buffer Problem
This problem can be solved using semaphores. The semaphores used in the solution of
this problem are:
● A binary semaphore (mutex) - required to acquire and release the lock on the buffer.
● A counting semaphore (empty) - its initial value is the number of slots in the
buffer, since initially all the slots are empty.
● A counting semaphore (full) - with 0 as its initial value.
At any instant of time, the current value of empty indicates the number of empty
slots in the buffer, and full indicates the number of occupied slots.
The Producer Operation
Pseudocode of the producer:
do
{
    // wait until empty > 0 and then decrement 'empty'
    wait(empty);
    // acquire lock
    wait(mutex);

    /* perform the insert operation in a slot */

    // release lock
    signal(mutex);
    // increment 'full'
    signal(full);
}
while (TRUE);
From the above code, we can observe the following:
● The producer has to wait until there is at least one empty slot in the buffer.
● It then decrements the counting semaphore empty, because there will be one less
empty slot once the producer inserts data into one of them.
● Next, it holds the lock on the buffer so that no consumer can access the buffer until
the producer completes its task.
● After the insert operation is performed, the lock is released and the value
of the counting semaphore full is incremented, because the producer has filled an
empty slot in the buffer.
The Consumer Operation
Pseudocode for the consumer:
do
{
    // wait until full > 0 and then decrement 'full'
    wait(full);
    // acquire the lock
    wait(mutex);
    /* perform the remove operation in a slot */
    // release the lock
    signal(mutex);
    // increment 'empty'
    signal(empty);
}
while (TRUE);
● The consumer has to wait until there is at least one full slot in the buffer.
● It then decrements the full counting semaphore, because the number of occupied
slots will be reduced by one once the consumer finishes its operation.
● After this, the consumer holds the lock on the buffer.
● The consumer then performs the remove operation, so the data is removed from one
of the full slots.
● Then, the consumer releases the lock.
● Finally, the empty counting semaphore is incremented by 1, since the consumer has
removed data from an occupied slot, thus making it empty.
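The two pseudocode loops above map almost directly onto Python's threading primitives. The following is a minimal runnable sketch; the buffer size and item count are arbitrary illustrative choices, not from the text:

```python
import threading
from collections import deque

N = 5                                  # number of buffer slots
buffer = deque()
mutex = threading.Lock()               # the binary semaphore (lock)
empty = threading.Semaphore(N)         # counting semaphore 'empty'
full = threading.Semaphore(0)          # counting semaphore 'full'
ITEMS = 100
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()                # wait(empty)
        with mutex:                    # wait(mutex) ... signal(mutex)
            buffer.append(i)           # insert into a slot
        full.release()                 # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()                 # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # remove from a slot
        empty.release()                # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: nothing lost or duplicated
```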

4.2. The Sleeping-Barber Problem

Problem Description: A barbershop has a waiting room with n chairs, and the barber

room contains one barber chair.

● If there are no customers in the shop, the barber goes to sleep.

● If a customer arrives at the barbershop and none of the chairs is empty, the

customer leaves the shop.

● If the barber is busy with a customer but chairs are empty, the arriving customer

sits in one of the empty chairs.

● If the barber is sleeping when a customer arrives, the customer wakes up the barber.

We will use three semaphores.

1. The semaphore customers counts waiting customers.

2. The semaphore barbers counts the number of idle barbers (0 or 1).

3. The mutex is used for mutual exclusion.

A shared integer variable customers1 counts the waiting customers; it is a copy of

the semaphore customers.

We also need a semaphore cutting. It ensures that the barber will not start cutting

another customer's hair before the previous customer has left the shop.

// shared data

semaphore customers = 0;
semaphore barbers = 0;
semaphore cutting = 0;
semaphore mutex = 1;
int customers1 = 0;

void barber() {
    while (true) {
        wait(customers);   // sleep when there are no waiting customers
        wait(mutex);       // mutex for accessing customers1
        customers1 = customers1 - 1;
        signal(barbers);
        signal(mutex);
        cut_hair();
    }
}

void customer() {
    wait(mutex);           // mutex for accessing customers1
    if (customers1 < n) {
        customers1 = customers1 + 1;
        signal(customers);
        signal(mutex);
        wait(barbers);     // wait for an available barber
        get_haircut();
    } else {
        // do nothing (leave) when all chairs are used
        signal(mutex);
    }
}

cut_hair() {
    wait(cutting);         // wait until the current customer's haircut is finished
}

get_haircut() {
    // get hair cut for some time
    signal(cutting);
}

The solution to this problem uses three semaphores.

● Customers (counting semaphore) - counts the total number of customers

waiting in the shop.
● Barbers (binary semaphore) - its value is either 0 or 1; it indicates whether the

barber is idle or working.

● Mutex - used to ensure mutual exclusion.

Solution:

A record of the number of waiting customers has to be kept:

● If the number of waiting customers equals the number of chairs in the waiting room,

the next arriving customer has to leave the barbershop.

When the barber arrives in the morning, he executes the procedure barber, which

causes him to block on the semaphore customers because it is initially 0. The barber

then falls asleep until the first customer arrives.

When a customer arrives, he executes the customer code and acquires the mutex to

enter the critical section; if another customer enters just after the first, the

second one can do nothing until the first has released the mutex. The customer then

checks the number of available chairs in the waiting room: if the number of waiting

customers is less than the number of chairs, the customer sits in a waiting chair;

otherwise, he releases the mutex and leaves the barbershop.

If a waiting chair is available, the customer sits in it, increments the count of

waiting customers, and increments the customers semaphore. This wakes up the barber

if he is asleep.

Now both the customer and the barber are awake, so the barber gives the customer a

haircut. When the haircut is done, the customer exits the procedure; if there are no

customers in the waiting room, the barber sleeps again, otherwise he gives a haircut

to the next customer.
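The pseudocode above can be exercised with Python semaphores. To keep the run deterministic, this sketch lets customers arrive one at a time; the chair and customer counts are illustrative assumptions:

```python
import threading

N_CHAIRS = 3
customers = threading.Semaphore(0)
barbers = threading.Semaphore(0)
cutting = threading.Semaphore(0)
mutex = threading.Lock()
waiting = 0                        # the shared customers1 counter
served = 0
TOTAL = 5

def barber():
    global waiting, served
    for _ in range(TOTAL):
        customers.acquire()        # sleep while no customers are waiting
        with mutex:
            waiting -= 1
        barbers.release()          # barber becomes available
        cutting.acquire()          # wait until this haircut is finished
        served += 1

def customer():
    global waiting
    with mutex:
        if waiting >= N_CHAIRS:
            return False           # all chairs taken: leave the shop
        waiting += 1
        customers.release()        # wake the barber if he is asleep
    barbers.acquire()              # wait for the barber
    cutting.release()              # haircut done; let the barber continue
    return True

b = threading.Thread(target=barber)
b.start()
results = [customer() for _ in range(TOTAL)]  # customers arrive one by one
b.join()
print(served)  # 5
```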


4.3. Dining Philosophers Problem (DPP)

The dining philosophers problem is a classical synchronization problem. It states that

five philosophers share a circular dining table and perform only two tasks: either

they eat or they think. A bowl of rice is kept for each philosopher, and there are

five chopsticks on the table. To eat, a philosopher requires both the left and the

right chopstick, and will eat only if both are available; otherwise, he puts down any

chopstick he holds and goes back to thinking.

Fig. 5 philosophers and 5 chopsticks

The dining philosophers problem is a classic synchronization problem because it

illustrates a large class of concurrency-control problems.

Solution of Dining Philosophers Problem-

The Dining Philosophers Problem can be solved using semaphores: each chopstick is

represented by a semaphore. A chopstick is picked up by executing a wait (P) operation

on its semaphore, and it is released by executing a signal (V) operation.

The chopsticks can be represented as −

semaphore chopstick[5];

Each chopstick semaphore is initialized to 1, since initially all the chopsticks are on

the table and not picked up by any philosopher.

The structure of an arbitrary philosopher i is given as follows −

do {

    wait( chopstick[i] );

    wait( chopstick[ (i+1) % 5] );

    /* EATING THE RICE */

    signal( chopstick[i] );

    signal( chopstick[ (i+1) % 5] );

    /* THINKING */

} while (1);

In the above solution, wait (P) operations are first performed on chopstick[i] and

chopstick[(i+1) % 5], which means philosopher i has picked up both chopsticks.

The eating function is then performed.

After this, signal (V) operations are performed on chopstick[i] and chopstick[(i+1) % 5],

which means philosopher i has eaten the rice and put the chopsticks down on either

side. The philosopher then goes back to thinking.

Difficulty with the solution

The above solution ensures that no two neighbouring philosophers can eat at the same

time. But in some cases this solution can cause a deadlock: if all the philosophers

get hungry and pick up their left chopsticks simultaneously, none of them can eat,

and a deadlock occurs.

Here are some ways to avoid the deadlock −

● Allow at most four philosophers at the table at a time.

● Even-numbered philosophers should pick up the right chopstick first and then the

left, whereas odd-numbered philosophers should pick up the left chopstick first and

then the right.

● A philosopher should pick up his chopsticks only if both are available at the
same time.
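The second avoidance strategy (asymmetric chopstick ordering) can be sketched with one semaphore per chopstick; the round count is an arbitrary choice for the demo:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N
ROUNDS = 10

def philosopher(i):
    left, right = i, (i + 1) % N
    # even philosophers pick up the right chopstick first,
    # odd philosophers the left: this breaks the circular wait
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(ROUNDS):
        chopstick[first].acquire()   # wait(chopstick[first])
        chopstick[second].acquire()  # wait(chopstick[second])
        meals[i] += 1                # eating the rice
        chopstick[second].release()  # signal(chopstick[second])
        chopstick[first].release()   # signal(chopstick[first])
        # thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher ate ROUNDS times: no deadlock occurred
```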

4.4. Reader Writers Problem-

Some processes (readers) only read the shared data and never alter it, while some

other processes (writers) may change/write the data as well as read it.

Here are the rules of the readers-writers problem:

● Any number of readers can read from the shared resource at the same time, but only
one writer is allowed to write into the shared resource at a time.

● While a writer is writing/changing the data in the resource, no other process is
allowed to access the resource.
● If one or more readers are reading the resource at the same time, a writer cannot
perform a write operation.
Solution of Reader Writers Problem-
If a writer wishes to write to the resource, it must wait until all readers have left
the system.
Here,
● we use one mutex m and one semaphore w.
● read_count is an integer variable used to maintain the number of readers accessing
the resource at a time. It is initialized to 0.
● Initially, m and w are initialized to the value 1.

Functions for semaphores:

– wait() : decrements the semaphore value.
– signal() : increments the semaphore value.
The code for the writer process looks like this:
while(TRUE)
{
wait(w);
/* perform the write operation */
signal(w);
}
And, the code for the reader process looks like this:
while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);
//release lock
signal(m);
/* perform the reading operation */
// acquire lock
wait(m);
read_count--;
if(read_count == 0)
signal(w);
// release lock
signal(m);
}
Descriptive Explanation of Code-

● The writer simply waits on the semaphore w until all the readers have left the
resource, at which point the writer gets a chance to write data to it.

● Once the write operation is performed, the semaphore w is incremented (signalled) so
that the next writer can access the resource.

● In the reader's code, the lock m is acquired whenever read_count is about to be

updated.

● Whenever a reader wants to read/access the resource, it first increments

read_count, then reads the data from the resource, and once its operation is

complete it decrements read_count.

● The semaphore w is used only by the first reader that enters the critical section

and the last reader that exits it. When the first reader enters the critical

section, the writer is blocked from the resource, and from then on only readers can

access it.

● Similarly, when the last reader leaves the critical section, it signals (V) the

writer via the semaphore w, because there are now no readers in the system and a

writer can access the resource.
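The reader and writer loops above translate almost line for line into Python. This sketch runs one writer alongside a few readers; the value counts are illustrative assumptions:

```python
import threading

w = threading.Semaphore(1)     # the writers' semaphore
m = threading.Lock()           # mutex protecting read_count
read_count = 0
data = 0                       # the shared resource

def read_once():
    global read_count
    with m:                    # wait(m) ... signal(m)
        read_count += 1
        if read_count == 1:
            w.acquire()        # first reader locks out writers
    value = data               # perform the reading operation
    with m:
        read_count -= 1
        if read_count == 0:
            w.release()        # last reader lets writers back in
    return value

def writer():
    global data
    for v in range(1, 11):
        w.acquire()            # wait(w)
        data = v               # perform the write operation
        w.release()            # signal(w)

readers = [threading.Thread(target=read_once) for _ in range(4)]
wt = threading.Thread(target=writer)
for t in readers: t.start()
wt.start()
for t in readers: t.join()
wt.join()
print(data)  # 10: the writer's last value survives
```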

5. SYNCHRONIZATION MECHANISMS
5.1. Lock Variable
This is one of the simplest synchronization mechanisms. The lock variable is a software
mechanism implemented in user mode. It is a busy-waiting solution that can be used
for more than two processes.
In this mechanism, lock can be either 0 or 1. A lock value of 0 means that the C.S. is
empty, and a lock value of 1 means the C.S. is occupied.

A process that wants to access the critical section first checks the value of the lock
variable: if the value is 0, it changes it to 1 and enters the critical section; if
the value is 1, it waits.
Pseudocode of the lock mechanism -
Entry Section →
while (lock != 0);
lock = 1;
// Critical Section
Exit Section →
lock = 0;
Initially, the lock variable's value is 0.
● Lock value = 0 means the CS is currently empty and no process is inside it.
● Lock value = 1 means the CS is currently occupied by a process.
Characteristics of the lock variable synchronization mechanism-
● Can be used with any number of processes.
● It is a busy-waiting solution: while a process is waiting, repeatedly checking the
lock variable keeps the CPU busy.
Failure of the Lock Variable Mechanism-
● The lock variable mechanism fails to provide synchronization among the different
processes.
● It cannot guarantee mutual exclusion, because the check of the lock and the
subsequent set to 1 are not performed atomically: two processes can both read
lock = 0 and then both enter the critical section.
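The failure can be shown with a hand-stepped interleaving in which both processes test the lock before either sets it. This is a simulation of one bad schedule, not threaded code:

```python
lock = 0
in_cs = []

# P0 executes 'while (lock != 0);' and falls through, since lock == 0
p0_saw = lock                 # 0: P0 believes the CS is free
# context switch: before P0 runs 'lock = 1', P1 performs the same test
p1_saw = lock                 # 0: P1 also believes the CS is free
# both now execute 'lock = 1' and enter the critical section
lock = 1
in_cs.append("P0")
lock = 1
in_cs.append("P1")
print(in_cs)  # ['P0', 'P1'] -> mutual exclusion is violated
```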
5.2. Test and Set Lock –
● Test and Set Lock (TSL) is another synchronization mechanism.
● It uses a test-and-set instruction to maintain synchronization among the
concurrently executing processes.
The hardware provides a special instruction, the Test and Set Lock (TSL) instruction,

which loads the value of the lock variable into the local register R0 and sets the

lock to 1, as a single atomic operation.

The process that executes TSL first enters the CS, and after that no other process can

enter the critical section until the first process comes out. No other process can

access the critical section even if the first process is preempted.

The assembly code of the solution is as follows.


1. TSL Lock, R0

2. CMP R0, #0

3. JNZ step 1
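The three assembly lines can be mimicked in Python. A small lock guard stands in for the hardware's atomicity, and the `sleep(0)` in the spin loop is only a yield hint so the CPython demo runs quickly; both are assumptions of this sketch:

```python
import threading
import time

lock_word = [0]
_atomic = threading.Lock()     # stands in for the hardware's atomic step

def tsl(word):
    """TSL Lock, R0: atomically read the old value and set the word to 1."""
    with _atomic:
        old = word[0]
        word[0] = 1
        return old

counter = 0
ROUNDS = 500

def worker():
    global counter
    for _ in range(ROUNDS):
        while tsl(lock_word) != 0:   # CMP R0, #0 / JNZ step 1
            time.sleep(0)            # spin (yield so CPython makes progress)
        counter += 1                 # critical section
        lock_word[0] = 0             # release the lock

a = threading.Thread(target=worker)
b = threading.Thread(target=worker)
a.start(); b.start(); a.join(); b.join()
print(counter)  # 1000: no increment was lost
```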

Let's examine TSL against the three synchronisation conditions.

Mutual Exclusion

● Mutual exclusion is guaranteed in the TSL mechanism, because the test and the set of

the lock variable happen as one atomic step: only one process can see the lock

variable as 0 at any particular instant. Hence, mutual exclusion is guaranteed in TSL.

Progress

In the TSL mechanism, a process executes the TSL instruction only if it itself wants

to enter the critical section. By definition, progress means that a process which does

not want to enter the critical section must not stop any other process from entering

it. The lock variable's value is always 0 when no process is inside the critical

section. Hence, progress is also guaranteed in TSL.

Bounded Waiting

This condition is not guaranteed in TSL. We cannot say that a process will definitely

get a chance to enter the critical section after a certain time; some processes may

not get a chance for a long time. So, bounded waiting is not guaranteed by the Test

and Set Lock synchronisation mechanism.
Characteristics of the TSL synchronization mechanism-

● Ensures mutual exclusion.

● Deadlock free.
● Does not guarantee bounded waiting.
● Some processes may suffer from starvation.

● Suffers from spin lock (busy waiting).

● It is not architecturally neutral, because it needs the hardware to support the
test-and-set instruction.

● It is a busy-waiting solution.

Priority Inversion

The TSL mechanism suffers from a priority inversion problem. Suppose there are two

cooperative processes, P1 and P2.

The priority of P1 is 2 and the priority of P2 is 1 (a smaller number means higher

priority). P1 arrives first and is scheduled by the CPU. Since it is a cooperative

process that executes in the critical section, it enters the CS and sets the lock

variable to 1.

Now P2 arrives. The priority of P1 is lower than that of P2, so according to priority

scheduling, P2 should be scheduled and P1 preempted. Note that P2 is also a

cooperative process, and it also wants to enter the critical section for execution.

P1 is preempted, but the value of the lock variable remains 1, because P1 has not

finished and still needs the critical section.

P1 needs to complete its execution in the critical section, but the scheduling

algorithm says the CPU belongs to P2. P2 wants to enter the critical section to

complete its execution, but the synchronization mechanism says the critical section

belongs to P1.

Such a situation is known as a spin lock, where neither process can execute to

completion.

This is not a deadlock, because neither process is in the blocked state: one process

is in the ready state and the other is in the running state, yet neither of the two

can make progress.


5.3. Turn Variable-
● The turn variable is another mechanism that provides synchronization among
processes.
● It uses a variable called turn to provide the synchronization.

This approach is used only for two processes. Let the two processes be P0 and P1;
they share a common variable called the turn variable.

The turn variable's value is initially set to 0:

● Turn value = 0 // turn of process P0 to execute in the critical section.

● Turn value = 1 // turn of process P1 to execute in the critical section.

The problem with the lock variable method is that a process can enter the critical
section only when the lock variable is 0, and more than one process may see the lock
variable as 0 simultaneously; the turn variable avoids this, because turn always
names exactly one process.
Characteristics of Turn Variable-
● Ensures mutual exclusion.
● Follows a strict alternation approach, so progress is not guaranteed.
● Ensures bounded waiting because processes execute one by one and there is a
guarantee that each process will get a chance to execute.
● Starvation doesn’t exist.
● It is architecturally neutral, because it doesn't need any support from the
operating system or hardware.
● Deadlock free.
● It is a busy-waiting solution.
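The turn-variable protocol for P0 and P1 can be sketched as follows; the strict alternation shows up directly in the recorded order. The round count is arbitrary, and the `sleep(0)` is only a yield hint for CPython:

```python
import threading
import time

turn = 0
order = []
ROUNDS = 10

def process(me, other):
    global turn
    for _ in range(ROUNDS):
        while turn != me:       # entry section: busy wait for our turn
            time.sleep(0)       # yield so the CPython demo makes progress
        order.append(me)        # critical section
        turn = other            # exit section: hand the turn over

p0 = threading.Thread(target=process, args=(0, 1))
p1 = threading.Thread(target=process, args=(1, 0))
p0.start(); p1.start(); p0.join(); p1.join()
print(order)  # [0, 1, 0, 1, ...]: strict alternation
```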
5.4. Strict Alternation Approach
In the strict alternation approach,
● Processes must enter the critical section in strictly alternating order, whether
they want to enter or not.
● This is necessary because if one process declines to enter the critical section,
the other process will never get a chance to execute in the critical section again.
Interest Variable-
● It is a synchronization mechanism that synchronize instruction between two processes.
● It uses a variable named interest to synchronise
processes. Its implementation is as follows-

Initially, interested[0] and interested[1] are set to False.

● interested[0] = False means that process P0 is not interested in entering the critical
section.
● interested[0] = True means that process P0 is interested in entering the critical
section.
● interested[1] = False means that process P1 is not interested in entering the critical
section.
● interested[1] = True means that process P1 is interested in entering the critical
section.
The Boolean array interested is used in this mechanism to store whether each process
wants to enter the critical section. A process that wants to execute in the critical section
first checks, in the entry section, whether the other process is interested in getting
inside; if so, it waits until the other process is no longer interested.
In the exit section, the process sets the value of its own interest variable to false so that
the other process can get the critical section.
Characteristics-
● Ensures mutual exclusion.
● Doesn't follow the strict alternation approach.
● Ensures progress.
● It is architecturally neutral.
● It is a busy-waiting solution.
● Suffers from deadlock (if both processes set their interested flags before either
checks, each waits for the other forever).
● No guarantee of bounded waiting.
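The deadlock just mentioned can be demonstrated with a small sketch of the interest-variable mechanism (single-threaded replay; the helper names are ours, not part of the original pseudocode):

```python
# Sketch of the interest-variable mechanism for two processes P0 and P1.
interested = [False, False]


def declare_interest(pid):
    """Entry section, first step: the process announces its interest."""
    interested[pid] = True


def may_enter(pid):
    """A process may enter only while the other process is not interested."""
    return not interested[1 - pid]


def exit_section(pid):
    interested[pid] = False


# Normal case: only P0 is interested, so P0 enters and P1 would wait.
declare_interest(0)
assert may_enter(0) and not may_enter(1)
exit_section(0)

# Deadlock case: both declare interest before either checks the other.
declare_interest(0)
declare_interest(1)
assert not may_enter(0) and not may_enter(1)  # each waits forever
```

The final two assertions show why this mechanism suffers from deadlock: once both flags are set, neither condition can ever become true without outside help.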

5.5. Peterson Solution


The Peterson solution is a software-based synchronization mechanism implemented at user
mode. It is a busy-waiting solution and can be implemented for two processes only.
It makes use of two variables: the turn variable and the interested array.
The code of the Peterson solution is as follows:
#define N 2
#define TRUE 1
#define FALSE 0
int interested[N] = {FALSE, FALSE};
int turn;
void Entry_Section (int process)
{
    int other;
    other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (interested[other] == TRUE && turn == process);
}
void Exit_Section (int process)
{
    interested[process] = FALSE;
}
The Peterson solution fulfills all the necessary requirements such as Progress, Bounded
Waiting, Mutual Exclusion, and Portability.
Analysis of Peterson Solution
void Entry_Section (int process)
{
1. int other;
2. other = 1 - process;
3. interested[process] = TRUE;
4. turn = process;
5. while (interested[other] == TRUE && turn == process);
}
Critical Section
void Exit_Section (int process)
{
6. interested[process] = FALSE;
}
Let us consider two cooperating processes, say P1 and P2. Initially, both interested
variables are set to 0 (false) and the turn variable is 0.
Process P1 arrives and wants to execute in the critical section. P1 sets its interested
variable to 1 (True) (line 3) and sets the turn variable to its own index (line 4). Since P2
is not interested, the while condition in line 5 fails immediately, so P1 enters the
critical section.
P1 → 1 2 3 4 5 CS
Meanwhile, process P2 gets scheduled and process P1 is pre-empted. P2 also wants to
enter the critical section, so it executes instructions 1, 2, 3 and 4 of the entry section.
At instruction 5, P2 gets stuck, because the condition holds: the other process's
interested variable is true and turn equals P2's index. Therefore, P2 busy waits.
P2 → 1 2 3 4 5
P1 is scheduled again and finishes its execution in the critical section by executing
instruction 6 (setting its interested variable back to 0 (false)). Now, when P2 re-checks
the condition, the other process's interested variable is false, so the while loop exits
and P2 also enters the critical section.
P1 → 6
P2 → 5 CS
Either process may enter the critical section any number of times, so this procedure
repeats in a cyclic manner.
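The interleaving traced above can be replayed deterministically with a small sketch (indices 0 and 1 stand for P1 and P2; may_enter evaluates the while-condition of line 5, and the helper names are ours):

```python
# Deterministic replay of Peterson's entry/exit sections.
interested = [False, False]
turn = 0


def run_entry(pid):
    """Lines 3-4 of the entry section."""
    global turn
    interested[pid] = True
    turn = pid


def may_enter(pid):
    """True when the while loop of line 5 would terminate for pid."""
    other = 1 - pid
    return not (interested[other] and turn == pid)


def run_exit(pid):
    """Line 6 of the exit section."""
    interested[pid] = False


run_entry(0)             # P1 executes lines 3-4
assert may_enter(0)      # P2 not interested, so P1 enters the CS
run_entry(1)             # P1 pre-empted; P2 executes lines 3-4
assert not may_enter(1)  # P2 busy waits: interested[0] and turn == 1
assert may_enter(0)      # P1 could still proceed: mutual exclusion holds
run_exit(0)              # P1 finishes and executes line 6
assert may_enter(1)      # now P2's loop exits and P2 enters the CS
```

Each assertion corresponds to one step of the trace above, showing that only one process can pass the line-5 test at any moment.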
Mutual Exclusion
This method guarantees mutual exclusion because, in the entry section, the while
condition tests both variables: a process cannot enter the CS while the other process is
interested and it itself was the last one to update the turn variable.
Progress
An uninterested process cannot stop the other, interested process from entering the
critical section. A process has to wait only when the other process is also interested in
entering the CS.
Bounded waiting
In the Peterson solution, bounded waiting is guaranteed: the process which sets the
turn variable first will enter the critical section for sure, so no process waits for more
than one turn of the other. If a process is pre-empted after executing line 4 of the entry
section, it will definitely get into the critical section in its next chance. For the same
reason, deadlock can never occur.

6. INTRODUCTION TO DEADLOCK


A deadlock is a situation where each of a set of processes waits for a resource which is
assigned to some other process in the set. In this situation, none of the processes gets
executed, since the resource each needs is held by another process which is itself waiting
for some other resource to be released.
Example-

Here
• Process P1 holds resource R1 and waits for resource R2 which is held by process P2.
• Process P2 holds resource R2 and waits for resource R1 which is held by process P1.
• None of the two processes can complete and release their resource.
• Thus, both the processes keep waiting infinitely.

7. NECESSARY CONDITIONS FOR DEADLOCKS


7.1. Mutual Exclusion
• A resource can only be used in a mutually exclusive manner: two processes cannot
use the same resource at the same time.
7.2. Hold and Wait
• A process waits for some resources while holding another resource at the same time.
7.3. No pre-emption
• A resource cannot be forcibly taken away from a process. It is released only
voluntarily by the process holding it, after that process has finished using it.
7.4. Circular Wait
• All the processes must be waiting for the resources in a cyclic manner so that the
last process is waiting for the resource which is being held by the first process.
NOTE-
• All these 4 conditions must hold simultaneously for the occurrence of deadlock.
• If any of these conditions fail, then the system can be ensured deadlock free.

8. STRATEGIES FOR HANDLING DEADLOCK

8.1. Deadlock Ignorance


• This strategy involves ignoring the concept of deadlock and assuming as if it does
not exist.
• This strategy helps to avoid the extra overhead of handling deadlock.
• Windows and Linux use this strategy and it is the most widely used method.
• It is also called as Ostrich approach.
8.2. Deadlock prevention
Deadlock happens only when Mutual Exclusion, hold and wait, No pre-emption and
circular wait holds simultaneously. If it is possible to violate one of the four conditions
at any time then the deadlock can never occur in the system.
8.2.1. Mutual Exclusion-
• To violate this condition, all the system resources must be such that they can be
used in a shareable mode.
• In a system, there are always some resources which are mutually exclusive by nature.
• So, this condition cannot be violated.
8.2.2. Hold and Wait-
This condition can be violated in the following ways-
Approach-01:
In this approach,
• A process has to first request for all the resources it requires for execution.
• Once it has acquired all the resources, only then it can start its execution.
• This approach ensures that the process does not hold some resources and wait for
other resources.
Drawbacks-
The drawbacks of this approach are-
• It is less efficient.
• It is not implementable since it is not possible to predict in advance which resources
will be required during execution.
Approach-02:
In this approach,
• A process is allowed to acquire the resources it desires at the current moment.
• After acquiring the resources, it starts its execution.
• Now before making any new request, it has to compulsorily release all the resources
that it holds currently.
• This approach is efficient and implementable.
8.2.3. No Pre-emption-
• This condition can be violated by forceful pre-emption.
• Consider a process is holding some resources and request other resources that
cannot be immediately allocated to it.
• Then, by forcefully pre-empting the currently held resources, the condition can be
violated.
A process is allowed to forcefully pre-empt the resources possessed by some other
process only if-
• It is a high priority process or a system process.
• The victim process is in the waiting state.
8.2.4. Circular Wait-
• This condition can be violated by not allowing the processes to wait for resources in a
cyclic manner.
• To violate this condition, the following approach is followed-
Approach-
• A natural number is assigned to every resource.
• Each process is allowed to request for the resources either in only increasing or only
decreasing order of the resource number.
• In case increasing order is followed, if a process requires a lower-numbered resource,
then it must first release all the resources it holds having larger numbers, and vice versa.
• However, this approach may cause starvation but will never lead to deadlock.
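The resource-ordering rule can be sketched as a simple request checker (the helper name is ours; this is an illustration of the increasing-order rule, not a full allocator):

```python
def request_allowed(held, requested):
    """Under the increasing-order rule, a process may request a resource
    only if its number is larger than every resource it currently holds;
    otherwise it must first release its larger-numbered resources."""
    return all(requested > r for r in held)


# A process holding R1 and R3 may request R5 ...
assert request_allowed({1, 3}, 5)
# ... but not R2: it would have to release R3 first.
assert not request_allowed({1, 3}, 2)
```

Because every process then acquires resources in strictly increasing order of number, a cycle of waits can never form, which is why this rule breaks the circular-wait condition.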
8.3. Deadlock avoidance
In deadlock avoidance, the operating system checks, at every step it performs, whether
the system will remain in a safe state or move to an unsafe state. Execution continues as
long as the system stays in a safe state. If a step would move the system to an unsafe
state, the OS backtracks that step.
• This strategy requires that every process declares its maximum requirement of each
resource type in the beginning.
• The main challenge with this approach is predicting the requirement of the processes
before execution.
Banker’s algorithm is an example of a deadlock avoidance strategy.
8.4. Deadlock detection and recovery
This approach lets processes fall into deadlock and then periodically checks whether a
deadlock has occurred in the system. If it has, some recovery method is applied to the
system to get rid of the deadlock.
Example on Deadlock-
A system has 3 processes each requiring 2 units of resource R. The minimum number of
units of R such that no deadlock will occur-
a) 3
b) 4
c) 5
d) 6
Answer- b
Solution-
In worst case,
The number of units that each process holds = One less than its maximum demand
• So, Process P1 holds 1 unit of resource R
• Process P2 holds 1 unit of resource R
• Process P3 holds 1 unit of resource R
Thus,
Maximum number of units of resource R with which deadlock is possible = 1 + 1 + 1 = 3
Minimum number of units of resource R that ensures no deadlock = 3 + 1 = 4

Alternate-
Using the formula: 𝑛(𝑥 − 1) + 1 ≤ 𝑟
where n = number of processes, x = maximum need of each process, r = total instances of
the resource. The formula states that no deadlock is possible when, even if every process
holds one less than its maximum need, at least one extra instance still remains.
Here, n= 3, number of processes given in problem
x=2, each process requires 2 unit of resource
r=? we need to find the minimum resources to ensure no deadlock occurs.
Putting the values in above formula,
3(2 − 1) + 1 ≤ 𝑟

3+1≤𝑟

4≤𝑟

This means that with r = 4 there will be no deadlock (the minimum resources required).
For r = 3, the system can deadlock.
Suppose n processes P1, …, Pn share m identical resource units, which can be reserved
and released one at a time. The maximum resource requirement of process Pi is si,
where si > 0. The following is a sufficient condition for ensuring that deadlock does not
occur:

∑(i = 1 to n) (si − 1) < m

which rearranges to

(∑ si) − n < m, i.e. ∑ si < m + n

9. BANKER’S ALGORITHM
• Banker’s Algorithm is a deadlock avoidance strategy.
• It is called so because it models a banker who decides whether a loan can safely be
granted or not.
Banker’s Algorithm requires-
• Whenever a new process is created, it must specify the maximum number of instances of
each resource type that it may need.
• Banker’s Algorithm is executed whenever any process puts forward the request for
allocating the resources.
Safe and Unsafe States
The resource allocation state of a system can be defined by the instances of available and
allocated resources, and the maximum instance of the resources demanded by the processes.
A state of the system is called safe if the system can allocate all the resources requested by
all the processes without entering into deadlock.
If the system cannot fulfil the request of all processes, then the state of the system is called
unsafe.

Fig. Safe and Unsafe States


Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Available:
• It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
• Available[j] = k means there are ‘k’ instances of resource type Rj available.
Max:
• It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
• Max[i,j] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation:
• It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
• Allocation [i, j] = k means process Pi is currently allocated ‘k’ instances of resource type Rj
Need:
• It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
• Need [i, j] = k means process Pi currently needs ‘k’ more instances of resource type Rj
for its execution.
• Need [i,j] = Max [i, j] – Allocation [i, j]
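Using these data structures, the safety check that Banker's Algorithm performs can be sketched as follows (a compact sketch, not a production implementation; it is exercised below with the matrices of the example that follows):

```python
def is_safe(available, max_demand, allocation):
    """Safety algorithm: return (safe?, safe sequence of process indices).

    Repeatedly pick any unfinished process whose Need can be met from
    Work, let it run to completion, and reclaim its allocation."""
    n = len(allocation)
    # Need = Max - Allocation
    need = [[m - a for m, a in zip(mr, ar)]
            for mr, ar in zip(max_demand, allocation)]
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w
                                       for nd, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence


MAX = [[8, 4, 3], [6, 2, 0], [3, 3, 3]]    # matrices from the example below
ALLOC = [[0, 0, 1], [3, 2, 0], [2, 1, 1]]
print(is_safe([3, 2, 2], MAX, ALLOC))      # safe, sequence P1, P2, P0
```

Running it on the current state of the example confirms the safe sequence P1, P2, P0; running it on the state that would result from granting REQ1 reports an unsafe state.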
Let’s Understand this with the help of following example
Example-
An operating system uses the banker’s algorithm for deadlock avoidance when managing the
allocation of three resource types X, Y and Z to three processes P0, P1 and P2. The table
given below presents the current system state. Here, the Allocation matrix shows the current
number of resources of each type allocated to each process and the Max matrix shows the
maximum number of resources of each type required by each process during its execution.
Allocation Max

X Y Z X Y Z

P0 0 0 1 8 4 3

P1 3 2 0 6 2 0

P2 2 1 1 3 3 3

There are 3 units of type X, 2 units of type Y and 2 units of type Z still available. The system
is currently in safe state. Consider the following independent requests for additional
resources in the current state-
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z
REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Which of the following is TRUE?
1. Only REQ1 can be permitted
2. Only REQ2 can be permitted
3. Both REQ1 and REQ2 can be permitted
4. Neither REQ1 nor REQ2 can be permitted
Solution-
According to question,
Available = [ X Y Z] = [3 2 2]
Now,
Need = Max – Allocation
So, we have-
Allocation Max Need

X Y Z X Y Z X Y Z

P0 0 0 1 8 4 3 8 4 2

P1 3 2 0 6 2 0 3 0 0

P2 2 1 1 3 3 3 1 2 2

Currently, the system is in safe state.


(It is given in question. If we want, we can check)
Checking Whether REQ1 Can Be Entertained-
• Need of P0 = [ 0 0 2]
• Available = [ 3 2 2]
Clearly,
• With the instances available currently, the requirement of REQ1 can be satisfied.
• So, banker’s algorithm assumes that the request REQ1 is entertained.
• It then modifies its data structures as-

Allocation Max Need

X Y Z X Y Z X Y Z

P0 0 0 3 8 4 3 8 4 0

P1 3 2 0 6 2 0 3 0 0

P2 2 1 1 3 3 3 1 2 2

Available
= [3 2 2] – [0 0 2]
= [3 2 0]
• Now, it follows the safety algorithm to check whether this resulting state is a safe state or
not.
• If it is a safe state, then REQ1 can be permitted otherwise not.
Step-01:
• With the instances available currently, only the requirement of the process P1 can be
satisfied.
• So, process P1 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by
it. Then-
Available
= [3 2 0] + [3 2 0]
= [6 4 0]
Now,
• With [6 4 0] available, neither of the remaining processes can be satisfied (P0 needs
[8 4 0] and P2 needs [1 2 2]).
• So no safe sequence exists, and the resulting state is an unsafe state that may lead to
deadlock.
• Thus, REQ1 will not be permitted.
Checking Whether REQ2 Can Be Entertained-
• Need of P1 = [2 0 0]
• Available = [3 2 2]
Clearly,
• With the instances available currently, the requirement of REQ2 can be satisfied.
• So, banker’s algorithm assumes the request REQ2 is entertained.
• It then modifies its data structures as-
Allocation Max Need

X Y Z X Y Z X Y Z

P0 0 0 1 8 4 3 8 4 2

P1 5 2 0 6 2 0 1 0 0

P2 2 1 1 3 3 3 1 2 2

Available
= [3 2 2] – [2 0 0]
= [1 2 2]
• Now, it follows the safety algorithm to check whether this resulting state is a safe state or
not.
• If it is a safe state, then REQ2 can be permitted otherwise not.
Step-01:
• With the instances available currently, only the requirement of the process P1 can be satisfied.
• So, process P1 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by
it. Then-
Available
= [1 2 2] + [5 2 0]
= [6 4 2]
Step-02:
• With the instances available currently, only the requirement of the process P2 can be satisfied.
• So, process P2 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by it.
Then-
Available
= [6 4 2] + [2 1 1]
= [8 5 3]
Step-03:
• With the instances available currently, the requirement of the process P0 can be satisfied.
• So, process P0 is allocated the requested resources.
• It completes its execution and then free up the instances of resources held by
it. Then-
Available
= [8 5 3] + [0 0 1]
= [8 5 4]
Thus,
• There exists a safe sequence P1, P2, P0 in which all the processes can be executed.
• So, the system is in a safe state.
• Thus, REQ2 can be permitted.
Thus, the correct option is (B).

10. RESOURCE ALLOCATION GRAPH


Resource Allocation Graph (RAG) is a graph that represents the state of a system pictorially.
Components Of RAG-
There are two major components of a Resource Allocation Graph-
1. Vertices
2. Edges
Vertices-
There are following types of vertices in a Resource Allocation Graph-
1. Process Vertices
2. Resource Vertices
Process Vertices-
• Process vertices represent the processes.
• They are drawn as a circle by mentioning the name of process inside the circle.
Resource Vertices-
• Resource vertices represent the resources.
• Depending on the number of instances that exists in the system, resource vertices may be
single instance or multiple instances.
• They are drawn as a rectangle by mentioning the dots inside the rectangle.
• The number of dots inside the rectangle indicates the number of instances of that resource
existing in the system.
Edges-
There are two types of edges in a Resource Allocation Graph-
1. Assign Edges
2. Request Edges
Assign Edges-
• Assign edges represent the assignment of resources to the processes.
• They are drawn as an arrow where the head of the arrow points to the process and the
tail points to the instance of the resource.
Request Edges-
• Request edges represent the waiting state of processes for the resources.
• They are drawn as an arrow where the head of the arrow points to the instance of the
resource and the tail points to the process.
• If a process requires ‘n’ instances of a resource type, then ‘n’ request edges will be drawn.
Example Of RAG-
The following diagram represents a Resource Allocation Graph-

It gives the following information-


• There exist three processes in the system namely P1, P2 and P3.
• There exist two resources in the system namely R1 and R2.
• There exists a single instance of resource R1 and two instances of resource R2.
• Process P1 holds one instance of resource R1 and is waiting for an instance of resource R2.
• Process P2 holds one instance of resource R2 and is waiting for an instance of resource R1.
• Process P3 holds one instance of resource R2 and is not waiting for anything.

11. DEADLOCK DETECTION USING RAG


Using Resource Allocation Graph, it can be easily detected whether system is in
a Deadlock state or not.
The rules are-
Rule-01:
In a Resource Allocation Graph where all the resources are single instance,
• If a cycle is being formed, then system is in a deadlock state.
• If no cycle is being formed, then system is not in a deadlock state.
Rule-02:
In a Resource Allocation Graph where the resources are NOT all single instance,
• If a cycle is being formed, then the system may be in a deadlock state.
• A detection algorithm (similar in spirit to the safety algorithm of Banker’s Algorithm)
is applied to confirm whether the system is actually in a deadlock state.
• If no cycle is being formed, then the system is not in a deadlock state.
• Presence of a cycle is a necessary but not a sufficient condition for the occurrence of deadlock.
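For the single-instance case, Rule-01 reduces to cycle detection on the directed graph. A depth-first-search sketch (the adjacency-dict representation of the RAG is our assumption):

```python
def has_cycle(graph):
    """DFS-based cycle detection on a directed graph given as an
    adjacency dict mapping each vertex to the vertices it points to."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on DFS stack / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            if color.get(w, WHITE) == GRAY:      # back edge -> cycle
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)


# The deadlock example from earlier: P1 -> R2 -> P2 -> R1 -> P1.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
assert has_cycle(rag)       # single-instance resources + cycle = deadlock
rag["R1"] = []              # suppose R1 is no longer assigned to P1
assert not has_cycle(rag)   # no cycle -> no deadlock
```

With single-instance resources, a True result means the system is deadlocked; with multi-instance resources it only means deadlock is possible and a further check is needed.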

12. DEADLOCK DETECTION AND RECOVERY

In this approach, the OS does not apply any mechanism to avoid or prevent deadlocks.
Instead, the system accepts that a deadlock will eventually occur. To get rid of deadlocks,
the OS periodically checks the system for any deadlock. If it finds one, the OS recovers
the system using some recovery technique.
The main task of the OS is detecting the deadlocks. The OS can detect the deadlocks with the
help of Resource allocation graph.
In order to recover the system from deadlocks, either OS considers resources or processes.
For Resource
• Pre-empt the resource
We can snatch one of the resources from the owner of the resource (process) and give it to
the other process with the expectation that it will complete the execution and will release this
resource sooner. Well, choosing a resource which will be snatched is going to be a bit
difficult.
• Rollback to a safe state
System passes through various states to get into the deadlock state. The operating system
can roll back the system to the previous safe state. For this purpose, OS needs to implement
check pointing at every state.
The moment, we get into deadlock, we will rollback all the allocations to get into the previous
safe state.
For Process
• Kill a process
Killing a process can solve our problem but the bigger concern is to decide which process to
kill. Generally, Operating system kills a process which has done least amount of work until
now.
• Kill all processes
This is not an advisable approach but can be applied if the problem becomes very serious.
Killing all the processes leads to inefficiency in the system because all of them will have to
execute again from the beginning.
Example- Consider the resource allocation graph in the figure-

Find if the system is in a deadlock state otherwise find a safe sequence.


Solution-
The given resource allocation graph contains only single-instance resources and has a
cycle in it.
Thus, the system is definitely in a deadlock state.

****
