Chapter 5: Process Scheduling
Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 5: Process Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Operating Systems Example: Linux scheduling
Objectives
To introduce CPU scheduling, which is the basis for
multiprogrammed operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling
algorithm for a particular system
Basic Concepts
Maximum CPU utilization obtained
with multiprogramming
Multiple processes are kept in
memory.
When one process has to wait, the
OS takes the CPU away from that
process and gives it to another
process.
CPU–I/O Burst Cycle – Process
execution consists of a cycle of
CPU execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main
concern
Histogram of CPU-burst Times
• An I/O-bound program typically has many short CPU bursts. A
CPU-bound program might have a few long CPU bursts.
CPU Scheduler
The short-term scheduler selects from among the processes in
the ready queue and allocates the CPU to one of them
Queue may be ordered in various ways
• A ready queue can be implemented as a FIFO queue, a priority
queue, a tree, or simply an unordered linked list.
• All the processes in the ready queue are lined up waiting for a
chance to run on the CPU.
• The records in the queues are generally process control blocks
(PCBs) of the processes; a rough sketch of such a queue follows.
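As a rough illustration (my own sketch, not code from the text), a simple FIFO ready queue holding PCBs could look as follows in C; the field names and the enqueue/dequeue helpers are hypothetical, and a real PCB carries far more state.

#include <stddef.h>

/* Minimal, hypothetical PCB: a real PCB stores far more state
 * (registers, memory-management info, accounting, open files, ...). */
typedef struct pcb {
    int         pid;        /* process identifier           */
    int         priority;   /* scheduling priority          */
    struct pcb *next;       /* link used by the ready queue */
} pcb_t;

/* FIFO ready queue: enqueue at the tail, dequeue from the head. */
typedef struct {
    pcb_t *head;
    pcb_t *tail;
} ready_queue_t;

void enqueue(ready_queue_t *q, pcb_t *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

pcb_t *dequeue(ready_queue_t *q) {
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;               /* NULL when the queue is empty */
}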
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g., as the result of an I/O request)
2. Switches from running to ready state (e.g., when an interrupt occurs)
3. Switches from waiting to ready (completion of I/O)
4. Terminates
In preemptive scheduling, the CPU is allocated to a process only for a
limited time.
In non-preemptive scheduling, the CPU is allocated to a process until it
terminates or switches to the waiting state.
Scheduling under 1 and 4 is non-preemptive
All other scheduling is preemptive
Windows 95 introduced preemptive scheduling, and all subsequent
versions of Windows operating systems have used preemptive scheduling.
The Mac OS X operating system for the Macintosh also uses preemptive
scheduling.
Problems with preemptive scheduling:
Consider access to shared data (one process may be preempted while it
is updating the data)
Consider preemption while in kernel mode (while the kernel is busy with
an activity on behalf of a process)
Consider interrupts occurring during crucial OS activities
Dispatcher
Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to
restart that program
Dispatch latency – time it takes for the dispatcher to stop
one process and start another running
Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling
algorithms. The choice of a particular algorithm may favor one class of
processes over another.
CPU utilization – keep the CPU as busy as possible (0 to 100%)
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready
queue
Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-sharing
environment)
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
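A minimal C sketch (not from the text) that reproduces this calculation; it assumes all three processes arrive at time 0, as in the example, and takes the burst values from the table above.

#include <stdio.h>

/* FCFS with all arrivals at time 0: each process waits for the bursts
 * of everything queued ahead of it. */
int main(void) {
    int burst[] = {24, 3, 3};                    /* P1, P2, P3 */
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;                     /* waiting time of process i  */
        clock += burst[i];                       /* it then runs to completion */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 17.00 */
    return 0;
}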
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:
P2 P3 P1
0 3 6 30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
Question to Solve
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest
time
SJF is optimal – gives minimum average waiting time for a given
set of processes
The difficulty is knowing the length of the next CPU request
Example of SJF
Process  Arrival Time  Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
SJF scheduling chart
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
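A small C sketch (my own, not the book's) that reproduces the chart above under the simplifying assumption the chart itself makes: all four processes are treated as available when scheduling starts, so the jobs simply run in order of increasing burst length.

#include <stdio.h>
#include <stdlib.h>

/* Comparison function for qsort: ascending burst length. */
int cmp_burst(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};                  /* P1..P4 from the table above */
    int n = 4, clock = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp_burst); /* shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += clock;                     /* this job waited until now */
        clock += burst[i];                       /* then it runs to completion */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 7.00 */
    return 0;
}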
Determining Length of Next CPU Burst
Can only estimate the length – should be similar to the previous one
Then pick process with shortest predicted next CPU burst
Can be done by using the length of previous CPU bursts, using
exponential averaging
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α · t_n + (1 − α) · τ_n
Commonly, α set to ½
Preemptive version called shortest-remaining-time-first
Prediction of the Length of the Next CPU Burst
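The prediction figure is omitted here. As a stand-in, the following C sketch applies the exponential-averaging formula from the previous slide to an illustrative burst history; the initial guess τ_0 = 10 and the burst values are example inputs, not part of the formula itself.

#include <stdio.h>

/* tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
double predict_next_burst(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    double alpha = 0.5;                           /* the common choice, alpha = 1/2 */
    double tau = 10.0;                            /* illustrative initial guess tau_0 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* illustrative observed CPU bursts */
    int n = sizeof bursts / sizeof bursts[0];

    for (int i = 0; i < n; i++) {
        tau = predict_next_burst(alpha, bursts[i], tau);
        printf("after burst %d (length %.0f): next prediction = %.2f\n",
               i + 1, bursts[i], tau);
    }
    return 0;
}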
Examples of Exponential Averaging
α = 0
τ_{n+1} = τ_n
Recent history does not count
α = 1
τ_{n+1} = t_n
Only the actual last CPU burst counts
If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
Since both α and (1 − α) are less than or equal to 1, each successive
term has less weight than its predecessor
Example of Shortest-remaining-time-first
SRTF is also called preemptive SJF
Now we add the concepts of varying arrival times and preemption to
the analysis
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
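A per-time-unit C sketch (my own) of shortest-remaining-time-first for the four processes above: at every time unit the arrived process with the least remaining burst is chosen, and waiting time is recovered as turnaround minus burst.

#include <stdio.h>
#include <limits.h>

/* Preemptive SJF (SRTF) for the four processes above: at every time
 * unit, run the arrived process with the least remaining burst time. */
int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {8, 4, 9, 5};
    int remain[]  = {8, 4, 9, 5};
    int finish[4];
    int n = 4, done = 0;

    for (int t = 0; done < n; t++) {
        int pick = -1, best = INT_MAX;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && remain[i] > 0 && remain[i] < best) {
                best = remain[i];
                pick = i;
            }
        if (pick < 0) continue;                  /* CPU idle this time unit */
        if (--remain[pick] == 0) {
            finish[pick] = t + 1;
            done++;
        }
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++)                  /* waiting = turnaround - burst */
        total_wait += (finish[i] - arrival[i]) - burst[i];
    printf("average waiting time = %.2f\n", total_wait / n);           /* 6.50 */
    return 0;
}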
Process ID Arrival time Burst Time
P1 0 12
P2 2 4
P3 3 6
P4 8 5
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
Preemptive
Nonpreemptive
SJF is priority scheduling where the priority is the inverse of the predicted
next CPU burst time (the larger the burst, the lower the priority)
Problem: Starvation – low-priority processes may never execute
Solution: Aging – as time progresses, increase the priority of the waiting
process (e.g., every 15 minutes), as sketched below
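A tiny sketch of the aging idea (mine, not the book's): while a process waits, its effective priority number shrinks, so it eventually reaches the top. The 15-minute interval is taken from the bullet above; the function name is made up.

/* Aging sketch: every 15 minutes of waiting lowers the priority
 * number by one (smaller number = higher priority), so a starving
 * process eventually reaches the top. */
#define AGING_INTERVAL_MINUTES 15

int effective_priority(int base_priority, int minutes_waiting) {
    int boost = minutes_waiting / AGING_INTERVAL_MINUTES;
    int p = base_priority - boost;
    return p < 0 ? 0 : p;   /* clamp at the highest priority (0) */
}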
Example of Priority Scheduling
Process  Burst Time  Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
Turnaround time (TAT) = Completion time − Arrival time
Waiting time (WT) = TAT − Burst time
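The same two formulas as a small C helper (my own sketch), convenient for checking the exercises in this chapter; the demonstration values come from P2 in the first FCFS example.

#include <stdio.h>

/* Turnaround time and waiting time, exactly as defined above. */
int turnaround_time(int completion, int arrival) {
    return completion - arrival;
}

int waiting_time(int completion, int arrival, int burst) {
    return turnaround_time(completion, arrival) - burst;
}

int main(void) {
    /* P2 from the first FCFS example: arrives at 0, burst 3, completes at 27 */
    printf("TAT = %d, WT = %d\n",
           turnaround_time(27, 0),
           waiting_time(27, 0, 3));              /* TAT = 27, WT = 24 */
    return 0;
}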
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
Timer interrupts every quantum to schedule next process
Performance
q large ⇒ RR behaves the same as FCFS
q small ⇒ many context switches; q must be large with respect to the
context-switch time, otherwise the overhead is too high
Time Quantum and Context Switch Time
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Typically, higher average turnaround than SJF, but better
response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 usec
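A short C sketch (not from the text) that replays the example above: quantum q = 4, bursts 24, 3, 3, all arriving at time 0. It prints the dispatch order, which matches the Gantt chart, and the resulting average waiting time.

#include <stdio.h>

/* Round robin with quantum q = 4 for the example above: bursts 24, 3, 3,
 * all assumed to arrive at time 0.  Prints the dispatch order and the
 * average waiting time. */
int main(void) {
    int burst[]  = {24, 3, 3};
    int remain[] = {24, 3, 3};
    int finish[3];
    int n = 3, q = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {            /* cycle through the ready processes */
            if (remain[i] == 0) continue;
            int slice = remain[i] < q ? remain[i] : q;
            printf("t=%2d: run P%d for %d\n", clock, i + 1, slice);
            clock += slice;
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = clock; done++; }
        }
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];      /* all arrival times are 0 */
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}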
Turnaround Time Varies With The Time Quantum
80% of CPU bursts should
be shorter than q
For the processes listed below, draw a chart illustrating their execution
using preemptive and non-preemptive priority scheduling. A larger priority
number has higher priority. Calculate the average TAT and WT.
Process  Arrival Time (ms)  Burst Time (ms)  Priority
A        0                  4                3
B        1                  3                4
C        2                  3                6
D        3                  5                5
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.:
foreground (interactive)
background (batch)
Processes are permanently assigned to a given queue
Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
Scheduling must be done between the queues:
Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it
can schedule amongst its processes; e.g., 80% to foreground in RR
and 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
A process can move between the various queues; aging can be
implemented this way
Multilevel-feedback-queue scheduler defined by the following
parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter
when that process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8
milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is
served FCFS
When it gains CPU, job receives 8
milliseconds
If it does not finish in 8 milliseconds,
job is moved to queue Q1
At Q1 job is again served FCFS and
receives 16 additional milliseconds
If it still does not complete, it is
preempted and moved to queue Q2
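A compact C sketch (my own) of the demotion rule in this example: a job that does not finish within its quantum drops one level, and Q2 is modeled as having an unbounded quantum. The names and the helper function are made up.

#include <limits.h>

/* Three-level feedback queue from this example: Q0 (RR, q = 8 ms),
 * Q1 (RR, q = 16 ms), Q2 (FCFS, modeled here as an unbounded quantum). */
enum { Q0, Q1, Q2 };
static const int quantum[] = { 8, 16, INT_MAX };

/* Run a job for one quantum at its current level.  Returns the level
 * the job should occupy afterwards: the same level if it finished (or
 * is already at the bottom), one level lower otherwise. */
int run_one_quantum(int level, int *remaining_burst) {
    int slice = (*remaining_burst < quantum[level]) ? *remaining_burst
                                                    : quantum[level];
    *remaining_burst -= slice;
    if (*remaining_burst > 0 && level < Q2)
        return level + 1;   /* did not finish: demote to the next queue */
    return level;
}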
Thread Scheduling
The distinction between user-level and kernel-level threads lies in how
they are scheduled
When threads are supported, it is threads, not processes, that are scheduled
In the many-to-one and many-to-many models, the thread library schedules
user-level threads to run on an available LWP
Known as process-contention scope (PCS) since scheduling
competition is within the process
Typically done via a priority set by the programmer
Scheduling a kernel thread onto an available CPU uses system-contention
scope (SCS) – competition among all threads in the system
Pthread Scheduling
API allows specifying either PCS or SCS during thread creation
PTHREAD_SCOPE_PROCESS schedules threads using
PCS scheduling
PTHREAD_SCOPE_SYSTEM schedules threads using
SCS scheduling
Can be limited by OS – Linux and Mac OS X only allow
PTHREAD_SCOPE_SYSTEM
Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5
/* forward declaration: runner is defined after main */
void *runner(void *param);
int main(int argc, char *argv[]) {
int i, scope;
pthread_t tid[NUM_THREADS];
pthread_attr_t attr;
/* get the default attributes */
pthread_attr_init(&attr);
/* first inquire on the current scope */
if (pthread_attr_getscope(&attr, &scope) != 0)
fprintf(stderr, "Unable to get scheduling scope\n");
else {
if (scope == PTHREAD_SCOPE_PROCESS)
printf("PTHREAD_SCOPE_PROCESS");
else if (scope == PTHREAD_SCOPE_SYSTEM)
printf("PTHREAD_SCOPE_SYSTEM");
else
fprintf(stderr, "Illegal scope value.\n");
}
Pthread Scheduling API
/* set the scheduling algorithm to PCS or SCS */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i],&attr,runner,NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}
/* Each thread will begin control in this function */
void *runner(void *param)
{
/* do some work ... */
pthread_exit(0);
}
Operating System Examples
Linux scheduling
Windows scheduling
Solaris scheduling
Linux Scheduling Through Version 2.5
Prior to version 2.5, the Linux kernel used a traditional UNIX-style
scheduler that did not support SMP systems and did not scale well as the
number of tasks on the system grew.
The new scheduler supports SMP and performs well with interactive tasks.
The Linux scheduler is a preemptive, priority-based algorithm with two
separate priority ranges: 0 to 99 (real-time) and 100 to 140 (nice
values).
A lower value indicates a higher priority.
Linux assigns higher-priority tasks longer time quanta (up to 200 ms) and
lower-priority tasks shorter time quanta (as little as 10 ms).
A runnable task is eligible to run on the CPU as long as it has time
remaining in its time slice.
When a task has exhausted its time slice, it is considered expired.
The kernel maintains a list of all runnable tasks in a runqueue data
structure.
Because of SMP support, each processor maintains its own runqueue and
schedules itself independently.
Each runqueue contains two priority arrays – active and expired.
When the active array becomes empty, the two arrays are exchanged: the
expired array becomes the active array and vice versa. New priorities and
time slices are assigned at that point.
Real-time tasks are assigned static priorities; all other tasks have dynamic
priorities based on their nice values plus or minus 5.
Recalculation of a task's dynamic priority occurs when the task has
exhausted its time quantum and is to be moved to the expired array.
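A deliberately simplified C sketch (not actual kernel code) of the per-CPU runqueue with active and expired priority arrays described above; all structure and field names here are made up for illustration.

#define PRIO_LEVELS 141     /* priority values 0..140, per the ranges above */

struct task {
    int          prio;          /* static or dynamic priority   */
    int          time_slice;    /* remaining time quantum       */
    struct task *next;          /* next task at the same level  */
};

/* One list head per priority level; the real kernel also keeps a bitmap
 * so the highest non-empty level can be found in constant time. */
struct prio_array {
    int          nr_active;
    struct task *queue[PRIO_LEVELS];
};

/* Per-CPU runqueue: when the active array becomes empty, the two array
 * pointers are simply exchanged instead of moving tasks one by one. */
struct runqueue {
    struct prio_array *active;
    struct prio_array *expired;
    struct prio_array  arrays[2];
};

void swap_arrays(struct runqueue *rq) {
    struct prio_array *tmp = rq->active;
    rq->active  = rq->expired;
    rq->expired = tmp;
}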
End of Chapter 5