Processes
Dr. Neetu Narwal
Assoc. Prof.,
BCA[M]
Dr. Neetu Narwal, MSI
Summary of Unit I
Process
A process (or task) is an instance of a program in
execution.
It is the smallest unit of work individually
schedulable by an operating system.
Every multiprogramming operating system keeps
track of all active processes and allocates system
resources to them according to its policies.
The operating system is responsible for several
important aspects of process management:
The creation and deletion of both user and system
processes;
The scheduling of processes; and
The provision of mechanisms for synchronization,
communication, and deadlock handling for processes.
Process States
Dormant- a peripheral state; the process is not yet
known to the OS
Ready- the process possesses all resources except the
processor
Running- the process possesses all resources, including
the processor
Suspended- the process lacks some resource other than
the processor
Process Control Block
The OS groups all information needed by a process into
a data structure called the process descriptor or
process control block (PCB).
When a process is created, the OS creates a
corresponding process control block to serve as its
runtime description during the lifetime of the process.
When the process terminates, its PCB is released to the
pool of free cells from which PCBs are drawn.
• Process name (ID)
• Priority
• State (ready, running, suspended)
• Hardware state (processor registers and flags)
• Scheduling information and usage statistics
• Memory management information (registers, tables)
• I/O status
• File management information (open files, access
rights)
• Accounting information
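The fields above can be sketched as a single data structure; a minimal Python sketch (the class and field names are illustrative, not taken from any real OS):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block; field names are hypothetical."""
    pid: int                          # process name (ID)
    priority: int = 0                 # scheduling priority
    state: str = "ready"              # ready, running, or suspended
    registers: dict = field(default_factory=dict)   # hardware state
    open_files: list = field(default_factory=list)  # file management info
    cpu_time_used: float = 0.0        # accounting information

# The OS would create one PCB per process at creation time
pcb = PCB(pid=1, priority=5)
pcb.state = "running"
```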
The OS maintains process lists:
Ready list
Suspended list
Running list
Process switch
A transition between two memory-resident
processes in a multiprogramming system is
called a process switch or task switch.
OS services for process management
CREATE(processID, attributes);
DELETE(processID);
ABORT(processID);
FORK/JOIN;
SUSPEND(processID);
RESUME(processID);
DELAY(processID,time);
GET_ATTRIBUTES(processID,attribute_set);
CHANGE_PRIORITY(processID,new_priority);
Scheduling
Scheduling refers to a set of policies and
mechanisms built into the operating system that
governs the order in which the work to be done
by a computer system is completed.
A scheduler is an OS module that selects the
next job to be admitted into the system and the
next process to run.
Types of Scheduler
There are three different types of scheduler,
which may coexist in a complex operating system:
1. long-term scheduler
2. medium-term scheduler
3. short-term scheduler
Scheduling and Performance Criteria
Processor Utilization- the average fraction of time during
which the processor is busy
Throughput- the amount of work completed in a unit of time
Turnaround Time- T, the time that elapses from the moment a
process is submitted until it is completed by the system
Waiting Time- W, the time that a process spends waiting for
resources
Response Time- the time that elapses from the moment the
last character of a command is entered until the first result
appears on the terminal
W(x) = T(x) − x
where x is the service time, W(x) the waiting time,
and T(x) the turnaround time.
Scheduling Algorithm
A scheduling algorithm may be preemptive or
non-preemptive.
Non-preemptive means that the running process retains
ownership of its allocated resources, including the
processor, until it voluntarily surrenders control to the
operating system.
Preemptive means that a running process may be replaced
by a higher-priority process at any time.
First Come First Serve (FCFS) Scheduling
In this algorithm the workload is processed in the
order of arrival, with no preemption.
The implementation of the FCFS policy is easily
managed with a FIFO queue.
When a process enters the ready queue, its PCB is
linked onto the tail of the queue. When the CPU is
free, it is allocated to the process at the head of the
queue. The running process is then removed from
the queue.
Consider the following set of processes that
arrive at time 0, with the length of the CPU burst
given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3,
and are served in FCFS order, we get the result
shown in the following Gantt chart:
| P1 | P2 | P3 |
0    24   27   30
The waiting time is 0 milliseconds for process P1,
24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average
waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
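The arithmetic above can be checked with a short simulation; a minimal sketch, assuming all processes arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Per-process waiting times under FCFS: each process waits
    for the total burst time of every process ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
average = sum(waits) / len(waits)        # (0 + 24 + 27) / 3 = 17
```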
Problem: If the processes arrive in the order P2, P3, P1,
draw the Gantt chart and find the average waiting time.
Consider the following set of processes that arrive at time
0, with the length of the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
Shortest Job First (SJF) Algorithm
When the CPU is available, it is assigned to the
process that has the smallest next CPU burst. If
the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie.
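A minimal sketch of non-preemptive SJF for processes that all arrive at time 0; the `(name, burst)` pairs reuse the example values, and Python's `sorted()` is stable, so equal bursts keep their FCFS arrival order:

```python
def sjf_order(processes):
    """Non-preemptive SJF for jobs arriving at time 0: run the
    smallest next CPU burst first; ties are broken FCFS because
    sorted() preserves the original order of equal keys."""
    return sorted(processes, key=lambda p: p[1])

# P2 and P3 tie on burst time, so FCFS order breaks the tie
order = sjf_order([("P1", 24), ("P2", 3), ("P3", 3)])
```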
Preemptive SJF- Shortest Remaining
Time Next
A preemptive SJF algorithm preempts the
currently executing process when a newly
arriving process has a shorter remaining CPU burst.
Preemptive SJF scheduling is sometimes called
shortest-remaining-time-first scheduling.
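A sketch of shortest-remaining-time-first as a 1-ms tick simulation; the arrival times and bursts in the example are hypothetical, not from the slides:

```python
def srtf(procs):
    """procs: name -> (arrival, burst). Each tick, run the arrived,
    unfinished process with the smallest remaining time, preempting
    any other (ties broken by earlier arrival).
    Returns each process's completion time."""
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1          # no process has arrived yet; idle one tick
            continue
        n = min(ready, key=lambda x: (remaining[x], procs[x][0]))
        remaining[n] -= 1   # run the chosen process for one tick
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = t
    return done

# P2 arrives at t=1 with a shorter burst and preempts P1
done = srtf({"P1": (0, 8), "P2": (1, 4)})   # P2 finishes at 5, P1 at 12
```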
Priority Scheduling
The SJF algorithm is a special case of the general
priority-scheduling algorithm.
A priority is associated with each process, and the CPU is
allocated to the process with the highest priority.
Equal-priority processes are scheduled in FCFS order.
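A sketch of non-preemptive priority scheduling for processes arriving at time 0. The numeric convention (lower number = higher priority) and the burst/priority values are assumptions chosen for illustration:

```python
def priority_schedule(processes):
    """processes: (name, priority, burst) triples, all arriving at
    time 0. Lower number = higher priority (an assumed convention);
    equal priorities keep FCFS order because sorted() is stable.
    Returns (run order, per-process waiting times)."""
    order = sorted(processes, key=lambda p: p[1])
    waits, t = {}, 0
    for name, _, burst in order:
        waits[name] = t     # waits for everything scheduled before it
        t += burst
    return [p[0] for p in order], waits

# hypothetical (name, priority, burst) values
order, waits = priority_schedule(
    [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 5, 1), ("P5", 2, 5)])
average = sum(waits.values()) / len(waits)   # 8.2 ms
```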
A major problem with priority scheduling
algorithms is indefinite blocking, or starvation.
A process that is ready to run but waiting for the
CPU can be considered blocked. A priority
scheduling algorithm can leave some low-priority
processes waiting indefinitely.
Round Robin Scheduling algorithm
The round-robin (RR) scheduling algorithm is
designed especially for timesharing systems. It is
similar to FCFS scheduling, but preemption is
added to enable the system to switch between
processes. A small unit of time, called a time
quantum or time slice, is defined.
A time quantum is generally from 10 to 100
milliseconds in length. The ready queue is
treated as a circular queue.
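A sketch of round-robin with a circular ready queue, using the burst times from the FCFS example and a hypothetical 4-ms quantum (smaller than the typical 10-100 ms, to keep the numbers readable):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: name -> CPU burst, all arriving at time 0.
    A process runs for at most one quantum, then rejoins the
    tail of the circular queue if it still needs CPU time.
    Returns each process's completion time."""
    queue = deque(bursts.items())
    t, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run one quantum or less
        t += run
        remaining -= run
        if remaining:
            queue.append((name, remaining))   # back to the tail
        else:
            done[name] = t
    return done

done = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
```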
Multiple-Processor Scheduling
One approach to CPU scheduling in a multiprocessor
system has all scheduling decisions, I/O processing, and
other system activities handled by a single processor—
the master server.
The other processors execute only user code.
This asymmetric multiprocessing is simple because only
one processor accesses the system data structures,
reducing the need for data sharing.
A second approach uses symmetric multiprocessing
(SMP), where each processor is self-scheduling.
All processes may be in a common ready queue, or
each processor may have its own private queue of
ready processes.
Regardless, scheduling proceeds by having the
scheduler for each processor examine the ready queue
and select a process to execute.
If we have multiple processors trying to access and
update a common data structure, the scheduler must
be programmed carefully.
We must ensure that two separate processors do not
choose to schedule the same process and that
processes are not lost from the queue.
Processor Affinity
Suppose the data most recently accessed by a
process populate the cache of the processor it is
running on. As a result, successive memory accesses
by the process are often satisfied in cache memory.
Now consider what happens if the process migrates to
another processor. The contents of cache memory must
be invalidated for the first processor, and the cache for
the second processor must be repopulated.
Because of the high cost of invalidating and
repopulating caches, most SMP systems try to
avoid migration of processes from one processor
to another and instead attempt to keep a
process running on the same processor.
This is known as processor affinity—that is, a
process has an affinity for the processor on
which it is currently running.
When an operating system has a policy of attempting to
keep a process running on the same processor—but not
guaranteeing that it will do so—we have a situation
known as soft affinity.
In contrast, some systems provide system calls that
support hard affinity, thereby allowing a process to
specify a subset of processors on which it may run.
Load Balancing
On SMP systems, it is important to keep the workload
balanced among all processors to fully utilize the
benefits of having more than one processor.
Otherwise, one or more processors may sit idle while
other processors have high workloads, along with lists of
processes awaiting the CPU.
Load balancing attempts to keep the workload evenly
distributed across all processors in an SMP system.
It is important to note that load balancing is typically
necessary only on systems where each processor has its
own private queue of eligible processes to execute.
On systems with a common run queue, load balancing is
often unnecessary, because once a processor becomes
idle, it immediately extracts a runnable process from the
common run queue.
There are two general approaches to load balancing:
push migration and pull migration.
With push migration, a specific task periodically checks
the load on each processor and—if it finds an
imbalance—evenly distributes the load by moving (or
pushing) processes from overloaded to idle or less-busy
processors.
Pull migration occurs when an idle processor pulls a
waiting task from a busy processor.
Push and pull migration need not be mutually exclusive
and are in fact often implemented in parallel on load-
balancing systems.
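The two approaches can be sketched with per-processor ready queues. Everything here is illustrative: real kernels balance on load metrics, not raw queue length alone:

```python
from collections import deque

def push_migration(queues):
    """Push migration sketch: a periodic task moves processes from
    the most loaded queue to the least loaded one until their
    lengths differ by at most one."""
    while True:
        busiest = max(queues, key=len)
        idlest = min(queues, key=len)
        if len(busiest) - len(idlest) <= 1:
            return
        idlest.append(busiest.pop())   # push one process across

def pull_migration(my_queue, other_queues):
    """Pull migration sketch: an idle processor pulls a waiting
    task from the busiest other queue."""
    if not my_queue:
        busiest = max(other_queues, key=len)
        if busiest:
            my_queue.append(busiest.popleft())

# hypothetical queue contents: CPU 0 overloaded, CPU 1 idle
cpus = [deque([1, 2, 3, 4]), deque()]
push_migration(cpus)                    # now both queues hold 2 tasks

idle = deque()
pull_migration(idle, [deque([5, 6])])   # the idle CPU steals a task
```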
Multicore Processors
Traditionally, SMP systems have allowed several threads to run
concurrently by providing multiple physical processors.
A recent trend in computer hardware has been to place
multiple processor cores on the same physical chip,
resulting in a multicore processor.
Each core maintains its architectural state and thus appears
to the operating system to be a separate physical processor.
SMP systems that use multicore processors are faster and
consume less power than systems in which each processor
has its own physical chip.
When a processor accesses memory, it spends a significant amount
of time waiting for the data to become available. This situation,
known as a memory stall, may occur for various reasons, such as a
cache miss.
To remedy this situation, many recent hardware designs have
implemented multithreaded processor cores in which two or more
hardware threads are assigned to each core.
That way, if one thread stalls while waiting for memory, the core
can switch to another thread.
END