UNIT 2
PROCESS MANAGEMENT
SYLLABUS
2.1 Process and Process management
i. Process model overview
ii. Programmer's view of process
iii. Process states
2.2 Process and Processor Scheduling
i Scheduling Criteria
ii First Come First Serve
iii Round Robin
iv SJF
v SRTN
CONT.
2.3 Schedulers
i Inter Process Communication & Synchronization
ii Race condition
iii Mutual Exclusion
iv Monitors
2.4 Dead lock
i Prevention
ii Avoidance
iii Detection and recovery
2.1 PROCESS AND PROCESS MANAGEMENT
PROCESS
In computing, a process is an instance of a
computer program that is being executed.
It contains the program code and its current
activity.
A program, in contrast, is simply a set of instructions.
PROCESS V/S PROGRAM
A process is the part of a program that is under
execution and resides in main memory, whereas a
program is the complete task or piece of work to be
done.
A process is a program in execution. It also
includes the current activity, as represented by
the value of the program counter and the contents
of the processor's registers.
A process includes a process stack, which contains
temporary data, and a data section, which contains
the global variables.
A program, on the other hand, is a sequence of
instructions written in a manner to obtain the
desired output.
PROCESS CONTROL BLOCK
Each process is represented by a process control
block (PCB) which contains information such as:
state of the process (ready, running, blocked)
register values
priorities and scheduling information
memory management information
accounting information
open files and allocated resources
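To make the PCB concrete, here is a minimal C sketch; every field name and size is an assumption chosen for illustration, not the layout of any real operating system.

/* Illustrative PCB layout (assumed fields, for teaching only). */
enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

struct pcb {
    int              pid;            /* process identifier             */
    enum proc_state  state;          /* current process state          */
    unsigned long    pc;             /* saved program counter          */
    unsigned long    regs[16];       /* saved register values          */
    int              priority;       /* priority / scheduling info     */
    void            *page_table;     /* memory-management information  */
    int              open_files[16]; /* accounting info: open files    */
};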
PROCESS LIFE CYCLE(MODEL)
A process moves through various states during
its life:
new,
ready,
running,
blocked (waiting), and
terminated,
with transitions between them.
PROCESS STATES
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to
occur
ready: The process is waiting to be assigned to a
Processor
terminated: The process has finished execution
2.2 PROCESS AND PROCESSOR SCHEDULING
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as
possible
Throughput – number of processes that
complete their execution per time unit.
Turnaround time – amount of time to
execute a particular process
Waiting time – amount of time a process has
been waiting in the ready queue
Response time – amount of time from when a
request was submitted until the first response is
produced, not the final output (relevant for
time-sharing environments).
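As an illustration (assumed numbers, not from the slides): suppose a process arrives at time 0, first gets the CPU at time 2, needs a total CPU burst of 5 ms, and completes at time 9. Then:
turnaround time = completion − arrival = 9 − 0 = 9 ms
waiting time = turnaround − CPU burst = 9 − 5 = 4 ms
response time = first run − arrival = 2 − 0 = 2 ms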
OPTIMIZATION CRITERIA
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
SCHEDULING ALGORITHMS
First Come First Serve (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
Priority Scheduling
Round Robin(RR) Scheduling
FIRST COME FIRST SERVE (FCFS)
Jobs are executed on a first come, first served basis.
Easy to understand and implement.
Poor in performance, as the average waiting time can be high.
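As an illustration (assumed values): processes P1, P2 and P3 arrive at time 0 in that order with CPU bursts of 24, 3 and 3.
Execution order: P1 (0–24), P2 (24–27), P3 (27–30).
Waiting times: P1 = 0, P2 = 24, P3 = 27; average = (0 + 24 + 27) / 3 = 17.
If the same processes arrived in the order P2, P3, P1, the waiting times would be 0, 3 and 6, an average of only 3, which shows how strongly FCFS performance depends on arrival order.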
SHORTEST JOB FIRST (SJF)
Best approach to minimize average waiting time.
Cannot be implemented exactly, because the
processor would need to know in advance how much
CPU time each process will take.
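As an illustration (assumed values): four processes arrive at time 0 with CPU bursts P1 = 6, P2 = 8, P3 = 7 and P4 = 3.
SJF order: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24).
Waiting times: P4 = 0, P1 = 3, P3 = 9, P2 = 16; average = (0 + 3 + 9 + 16) / 4 = 7, compared with 10.25 under FCFS for the same processes.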
SRTN (SHORTEST REMAINING TIME NEXT)
SRTN-Preemptive version of SJF
Shortest remaining time, also known
as shortest remaining time first (SRTF), is
a scheduling method that is a preemptive version
of shortest job next scheduling.
In this scheduling algorithm, the process with the
smallest amount of time remaining until
completion is selected to execute.
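As an illustration (assumed values): P1 arrives at time 0 with burst 8, P2 at time 1 with burst 4, P3 at time 2 with burst 9, and P4 at time 3 with burst 5.
Execution: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26); P1 is preempted at time 1 because P2's remaining time (4) is smaller than P1's (7).
Waiting times: P1 = 9, P2 = 0, P3 = 15, P4 = 2; average = 26 / 4 = 6.5.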
PRIORITY BASED SCHEDULING
Each process is assigned a priority. The process with
the highest priority is executed first, and so on.
Processes with the same priority are executed on a
first come, first served basis.
Priority can be decided based on memory
requirements, time requirements or any other
resource requirement.
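As an illustration (assumed values, smaller number = higher priority): P1 (burst 10, priority 3), P2 (1, 1), P3 (2, 4), P4 (1, 5), P5 (5, 2), all arriving at time 0.
Execution order: P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19).
Waiting times: P2 = 0, P5 = 1, P1 = 6, P3 = 16, P4 = 18; average = 41 / 5 = 8.2.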
ROUND ROBIN SCHEDULING
Each process is given a fixed time to execute,
called a quantum.
Once a process has executed for the given time
period, it is preempted and another process
executes for its time period.
Context switching is used to save the states of
preempted processes.
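As an illustration (assumed values): P1, P2 and P3 arrive at time 0 with bursts 24, 3 and 3, and the quantum is 4.
Execution: P1 (0–4), P2 (4–7), P3 (7–10), then P1 runs alone from 10 to 30.
Waiting times: P1 = 6, P2 = 4, P3 = 7; average = 17 / 3 ≈ 5.67.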
CONTINUE…
Average wait time for a four-process example: (9 + 2 + 12 + 11) / 4 = 8.5
2.3 IPC
(INTER PROCESS COMMUNICATION)
In a multiprogramming OS, more than one process
may be running at the same time. Such processes
can communicate with each other; this type of
communication is called inter-process
communication (IPC).
IPC is also useful in a distributed environment,
where the communicating processes may reside on
different machines connected by a network.
Examples of IPC:
A shell pipeline in UNIX: ls | wc -l
Printing a document over a network
A chat or mail server
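The shell pipeline above can be reproduced directly with the UNIX IPC primitives. A minimal sketch in C, assuming a POSIX system; error handling is kept to a minimum:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {               /* first child runs "ls"              */
        dup2(fd[1], STDOUT_FILENO);  /* its stdout goes into the pipe      */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(1);
    }
    if (fork() == 0) {               /* second child runs "wc -l"          */
        dup2(fd[0], STDIN_FILENO);   /* its stdin comes from the pipe      */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(1);
    }
    close(fd[0]); close(fd[1]);      /* parent closes both ends and waits  */
    wait(NULL); wait(NULL);
    return 0;
}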
ISSUES RELATED TO IPC
How one process can pass information to another
process:
to share information.
Two or more processes should not get into each
other's way:
proper sequencing should be maintained in the
execution of processes where dependencies are
present.
ADVANTAGES AND DISADVANTAGES
Advantages:
Information sharing
Computation speed-up
Modularity
Convenience
Many tasks can be worked on at the same time
Disadvantages:
Race condition problems
Process synchronization is required
PROCESS SYNCHRONIZATION
Process synchronization is a mechanism to
ensure systematic sharing of resources among
concurrent (parallel) processes.
Two issues are related to process synchronization:
Two or more processes should not come into each
other's way.
Proper sequencing should be maintained
during execution.
In IPC, one problem that arises is called a race
condition.
RACE CONDITION / RACING PROBLEM:
It is a situation where two or more processes are
reading/writing some shared data and the final
result depends on the relative order of their
execution; this is called a race condition.
CONTINUE…
A = 1000. At the end of the two processes, A should
be 1100, but if P0 and P1 are permitted to execute in
any arbitrary (interleaved) fashion, the output may
not be the same.
CONTINUE…
Here two processes are reading and writing the
common variable 'A'. The final value depends on the
relative execution order of P0 and P1; such a
situation is called a race condition.
This implies that concurrent processes are racing
with each other to access a shared resource in an
arbitrary order and may produce wrong final results,
so race conditions must be avoided.
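One way to reproduce this race on a real system is with two threads that each add 50 to the shared variable A (the value 50 per thread is an assumption chosen so that the expected final value is 1100); compile with -pthread. Because A = A + 1 is a read-modify-write sequence, interleaved execution can occasionally lose updates:

#include <pthread.h>
#include <stdio.h>

long A = 1000;                       /* shared variable                    */

void *add_fifty(void *arg)
{
    (void)arg;
    for (int i = 0; i < 50; i++)
        A = A + 1;                   /* read-modify-write: not atomic      */
    return NULL;
}

int main(void)
{
    pthread_t p0, p1;
    pthread_create(&p0, NULL, add_fifty, NULL);
    pthread_create(&p1, NULL, add_fifty, NULL);
    pthread_join(p0, NULL);
    pthread_join(p1, NULL);
    printf("A = %ld (expected 1100)\n", A);  /* may be less than 1100      */
    return 0;
}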
CRITICAL SECTION
A critical section is a code segment of a process where
the shared resources are accessed.
It is also known as critical region.
It is the part of a program or code segment of a
process where a process may change common variables,
update a table, write a file or access some physical
device.
Only one process should be allowed to execute in its
critical section at a time.
OR
Only one process should be allowed to access a shared
resource at a time.
It is the responsibility of the OS to ensure that no
more than one process is present in its critical
section simultaneously.
REQUIREMENT OF A CRITICAL SECTION SOLUTION:
An ideal critical section solution meets the following
three conditions:
Mutual exclusion:
At any time, at most one of the co-operating processes
should be executing in its critical section.
Progress:
When no process is running in its critical section and
another process wants to enter its critical section, it
should be allowed to do so immediately.
If more than one process is waiting to enter their
critical sections, one of them should be selected to
enter.
This is termed the requirement of progress, which must
be met under all possible conditions.
CONTINUE…
Bounded waiting:
When more than one process is waiting to enter their
critical sections, one of them is selected to enter its
critical section.
The other processes simply wait, but such waiting must
be limited; there should be no starvation.
It should not happen that one process waits forever
while other processes get entry into the critical
section more than once.
There should be some upper bound on the number of times
that other processes are allowed to enter their critical
sections before a requesting process is granted entry
into its own critical section.
CRITICAL SECTION SOLUTION
General structure
do
{
Entry section
Critical section
Exit section
Remainder section
} while (1);
Process execution is divided into these four sections
for co-operating processes.
CONTINUE…
Entry section:
This section is executed when a process wants
to enter its critical section.
The eligibility of the process to enter its critical
section is checked here. If all requirements are
satisfied, the process is allowed to enter; otherwise
it waits until the requirements are met.
It ensures that at most one process is in its critical
section, the condition of mutual exclusion.
CONTINUE…
Critical section:
In this code segment, the process accesses a
shared resource.
Only one co-operating process will be in its
critical section, enjoying exclusive access to
the shared resource.
Exit section:
This code segment is executed when a process
exits its critical section.
In it, the process performs certain operations
indicating its exit from the critical section.
This enables one of the waiting processes to
enter its critical section.
CONTINUE…
Remainder section:
This is the remaining part of the process's code.
In this section, the process performs tasks which
are independent of other processes, i.e. they do
not involve the use of any shared resources.
This section continues until the process's
execution completes.
Some tools are available to implement the entry
and exit sections, such as semaphores, mutexes
and monitors.
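A minimal sketch of the four sections using a POSIX mutex as the entry/exit tool; the shared counter is an assumed example, not something from the slides:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;              /* the shared resource (assumed)      */

void *worker(void *arg)
{
    (void)arg;
    while (1) {
        pthread_mutex_lock(&lock);   /* entry section                      */
        shared_counter++;            /* critical section                   */
        pthread_mutex_unlock(&lock); /* exit section                       */
        /* remainder section: work that uses no shared data                */
    }
    return NULL;
}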
SEMAPHORE
It is a tool used to provide process synchronization.
A semaphore is an integer variable that, apart from
initialization, is accessed only through two standard
atomic operations: Down (wait) and Up (signal).
Semaphores are used to indicate the status of shared
resources, whether they are free or being used.
They can be used to control the entry of a process
into its critical section.
SEMAPHORE OPERATIONS
Semaphores can be accessed only through the
two operations: Down and Up.
CONTINUE…
Down (or Wait) and Up (or Signal) operations:
Down(S)             // S is a semaphore variable
{
   while (S <= 0)   // check whether S <= 0
      sleep();      // if so, put the process to sleep
   S--;             // else decrement S and continue
}
This operation checks whether the semaphore value is
less than or equal to zero; if so, the calling process
is put to sleep.
Up(S)
{
   S++;             // increment S
   wakeup();        // wake up a sleeping process, if any
}
SEMAPHORE TYPES
Binary semaphore:
A binary semaphore is a semaphore with an
integer value ranging between 0 and 1.
Simpler to implement
It is used to provide mutual exclusion.
Counting semaphore:
It is a semaphore with an integer value ranging
between 0 and an arbitrarily large number.
Its initial value typically represents the number of
units of the critical resource that are available.
It is used to manage multiple instances of a
resource.
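A small sketch using POSIX semaphores (compile with -pthread); the resource count of 3 and the number of threads are assumptions. Initializing the semaphore to 1 instead of 3 would make it behave as a binary semaphore providing mutual exclusion:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t slots;                         /* counting semaphore: free units     */

void *user(void *arg)
{
    (void)arg;
    sem_wait(&slots);                /* Down: block until a unit is free   */
    printf("using one resource instance\n");
    sem_post(&slots);                /* Up: release the unit               */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&slots, 0, 3);          /* 3 identical resource instances     */
    for (int i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, NULL);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}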
MONITOR
A monitor is a synchronization construct that
allows threads to have both mutual exclusion and
the ability to wait (block) for a certain condition
to become true.
Monitors also have a mechanism for signalling
other threads that their condition has been met.
A monitor consists of a mutex (lock) object
and condition variables.
A condition variable is basically a container of
threads that are waiting for a certain condition.
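Monitors are normally a language-level construct (for example, Java synchronized methods). A rough C sketch of the same idea, using a mutex for mutual exclusion and a condition variable for waiting and signalling; the 'items' counter is an assumed example:

#include <pthread.h>

pthread_mutex_t m    = PTHREAD_MUTEX_INITIALIZER;  /* the monitor lock     */
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;   /* condition variable   */
int items = 0;                       /* state protected by the monitor     */

void put(void)                       /* "monitor procedure"                */
{
    pthread_mutex_lock(&m);          /* mutual exclusion                   */
    items++;
    pthread_cond_signal(&cond);      /* signal: the condition has been met */
    pthread_mutex_unlock(&m);
}

void get(void)
{
    pthread_mutex_lock(&m);
    while (items == 0)               /* wait (block) until items exist     */
        pthread_cond_wait(&cond, &m);
    items--;
    pthread_mutex_unlock(&m);
}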
2.4 DEADLOCK
Whenever a process needs a resource, it requests it.
The OS grants the resource if it is free; if the
resource is not free, the process waits for it to
become free.
Examples: a traffic jam
Device allocation:
process 1 requests tape drive 1 & gets it
process 2 requests tape drive 2 & gets it
process 1 requests tape drive 2 but is blocked
process 2 requests tape drive 1 but is blocked
CONTINUE…
Deadlock:
A set of processes is deadlocked if each process in
the set is waiting for an event that only another
process in the set can cause.
A process requests resources; if the resources are
not available at that time, the process enters the
waiting state.
If the requested resources are held by another
waiting process, so that both are in the waiting
state, this situation is said to be a "deadlock".
CONTINUE…
For example:
P1 and P2 are two processes, and R1 and R2 are two
resources. P1 requests resource R1, which is held by
process P2; P2 requests resource R2, which is held by
P1. Both then enter the waiting state.
There is no progress for processes P1 and P2; this is
a deadlock. A deadlock can be represented by a
resource allocation graph.
NECESSARY CONDITION FOR DEADLOCK
A deadlock situation can arise only if the following
four conditions hold simultaneously (all at one time)
in the system:
MUTUAL EXCLUSION
HOLD AND WAIT
NO PREEMPTION
CIRCULAR WAIT
CONTINUE…
MUTUAL EXCLUSION: Each resource can be assigned to
exactly one process at a time. If a process requests a
resource that is not free, it must wait until the
resource becomes free.
HOLD AND WAIT: A process must be holding at least one
resource while waiting to acquire additional resources
that are currently held by other processes.
NO PREEMPTION: Resources cannot be taken away from a
process; they are not released in the middle of the
work but only after the process has completed its
task.
CONTINUE…
CIRCULAR WAIT: There must be a circular chain of two
or more processes. A set {P0, P1, ..., Pn} of waiting
processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held
by P2, ..., Pn-1 is waiting for a resource held by Pn,
and Pn is waiting for a resource held by P0. This is
said to be a circular wait.
Example: processes {P0, P1, P2} and resources
{R1, R2, R3, R4}.
RESOURCE ALLOCATION GRAPH
Deadlocks can be described precisely using a
directed graph called a system resource allocation
graph (RAG).
This graph consists of a set of vertices V and a set
of edges E.
The set of vertices V is partitioned into two
different types of nodes:
P = {P1, P2, P3, ..., Pn}, the set of all active
processes in the system;
a process Pi is represented by a circle.
R = {R1, R2, ..., Rm}, the set of all resource types
in the system;
a resource type Rj is represented by a rectangle.
If resource type Rj has more than one instance, each
instance is represented as a dot within the rectangle.
CONTINUE…
The set of edges E contains two different types:
A directed edge from process Pi to resource type Rj,
denoted Pi → Rj, is called a request edge. It means
that process Pi has requested an instance of resource
type Rj and is waiting for that resource.
A directed edge from resource type Rj to process Pi,
denoted Rj → Pi, is called an assignment edge. It
means that an instance of resource type Rj has been
allocated to process Pi.
CONTINUE…
The sets of
processes P = {P1, P2, P3},
resources R = {R1, R2, R3, R4},
edges E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2,
R2 → P1, R3 → P3}.
Resource instances: one instance of resource type R1,
two instances of resource type R2, one instance of
resource type R3 and three instances of resource
type R4.
Process states:
Process P1 is holding an instance of resource type R2
and is waiting for an instance of resource type R1.
Process P2 is holding an instance of resource type R1
and an instance of R2, and is waiting for an instance
of resource type R3.
Process P3 is holding an instance of resource type R3.
CONTINUE…
If a RAG contains no cycles, then no process in the
system is deadlocked. If the graph contains a cycle,
then a deadlock may exist.
DEADLOCK DETECTION
Some systems grant the available resources to
requesting processes freely and then check for
deadlock. This strategy allows deadlock to occur; it
does not prevent it. The system periodically checks
for deadlock and, if one is present, takes some action
to recover from it.
To detect deadlock, a resource allocation graph is
constructed including all processes and resources. If
there exists a cycle, then a deadlock is present.
If all resources have a single instance, we can draw a
wait-for graph, which is used by the deadlock
detection algorithm (see the sketch below).
To detect deadlock, the system needs to maintain the
wait-for graph and periodically invoke an algorithm
that searches for a cycle in the graph.
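A minimal sketch of such a detection step in C, assuming the wait-for graph is kept as an adjacency matrix (the size N and the matrix name are assumptions); it runs a depth-first search looking for a back edge, i.e. a cycle:

#include <stdbool.h>

#define N 5                              /* number of processes (assumed)  */
bool wait_for[N][N];                     /* wait_for[i][j]: i waits for j  */

static bool dfs(int p, bool visited[], bool on_stack[])
{
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < N; q++)
        if (wait_for[p][q]) {
            if (on_stack[q])             /* back edge => cycle => deadlock */
                return true;
            if (!visited[q] && dfs(q, visited, on_stack))
                return true;
        }
    on_stack[p] = false;
    return false;
}

bool deadlock_detected(void)
{
    bool visited[N] = { false }, on_stack[N] = { false };
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p, visited, on_stack))
            return true;
    return false;
}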
RECOVERY FROM DEADLOCK
Once a deadlock has been detected, some strategy is
needed for recovery. The various approaches for
recovering from deadlock are:
Recovery through preemption:
Temporarily take a resource away from its current
owner and give it to another process.
Recovery through rollback:
Some processes are rolled back to an earlier moment
when there was no deadlock. This requires periodic
checkpointing of processes.
Checkpointing means that a process's state is written
to a file so that it can be restarted later.
CONTINUE…
Recovery through process termination:
The easiest way to break a deadlock is to kill one or
more processes. When we terminate a process, the
resources held by that process become free and can be
allocated to other processes that are waiting for
them. There are two different approaches for this:
Abort all deadlocked processes: release all the
processes in the deadlocked state and start the
allocation again from the starting point. This is a
very expensive method.
Abort processes one by one until the deadlock cycle
is eliminated.
DEADLOCK AVOIDANCE
It is one of the methods of dynamically escaping from
deadlocks; the word dynamically means 'online'.
In this scheme, when a process requests resources, the
avoidance algorithm checks the state of the system
before the resources are allocated.
If the state is safe, the system allocates the
resources to the requesting process; otherwise
(unsafe) it does not allocate the resources. Taking
this care before the allocation is called deadlock
avoidance.
A safe state means that no deadlock will happen even
if we allocate the resources to the requesting
processes.
Unsafe means that deadlocks may happen if we grant the
resources; see the safety-check sketch below.
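The slides do not name a particular avoidance algorithm; one common way to implement the safe-state check is the safety test of the banker's algorithm. A minimal C sketch, where the process count P, resource count R and the array names are assumptions:

#include <stdbool.h>

#define P 3                              /* number of processes (assumed)  */
#define R 2                              /* number of resource types       */

/* The state is safe if some order exists in which every process can get
   what it still needs and finish with the currently available resources. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int  work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++)
        work[j] = avail[j];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {            /* i can run to completion        */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* reclaim its resources      */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;                /* nobody can finish: unsafe      */
    }
    return true;                         /* everyone can finish: safe      */
}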
DEADLOCK PREVENTION
Deadlock prevention means taking preventive measures
before a deadlock can occur.
For a deadlock to occur, each of the four necessary
conditions must hold. By ensuring that at least one of
these conditions cannot hold, we can prevent the
occurrence of deadlock. So we can prevent deadlock by
attacking one of these conditions.
1. Mutual exclusion:
Mutual exclusion means only one process at a time can
use a resource.
2. Hold and wait:
Require that all processes request all of their
resources at the beginning of execution.
CONTINUE…
3. No preemption:
No preemption means resources are not released in the
middle of the work; they are released only after the
process has completed its task.
Normally, relaxing this is not possible.
4. Circular wait:
Resources are allocated in some fixed order, and it is
ensured that there is no cycle in allocating resources
(see the lock-ordering sketch below).
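A minimal sketch of attacking the circular-wait condition with a fixed lock ordering, using two POSIX mutexes R1 and R2 as stand-in resources (the names are assumptions): every thread acquires R1 before R2, so no cycle of waiting can form.

#include <pthread.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void task_a(void)
{
    pthread_mutex_lock(&R1);         /* always R1 first ...                */
    pthread_mutex_lock(&R2);         /* ... then R2                        */
    /* use both resources */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
}

void task_b(void)
{
    pthread_mutex_lock(&R1);         /* same fixed order in every task,    */
    pthread_mutex_lock(&R2);         /* even if R2 is the resource it      */
    /* use both resources */         /* mainly needs, so no circular wait  */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
}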