
CPU SCHEDULING
Dr. P. Suresh
BASIC CONCEPTS

Maximum CPU utilization is obtained with multiprogramming
CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
A CPU burst is followed by an I/O burst
CPU burst distribution is of main concern
HISTOGRAM OF CPU-BURST TIMES
Large number of short bursts
Small number of longer bursts
CPU SCHEDULER
The CPU scheduler selects from among the processes in the ready queue and allocates a CPU core to one of them
 Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
For situations 1 and 4, there is no choice in terms of scheduling.
A new process (if one exists in the ready queue) must be
selected for execution.
For situations 2 and 3, however, there is a choice.
PREEMPTIVE AND NONPREEMPTIVE SCHEDULING

When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive; otherwise, it is preemptive.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
Virtually all modern operating systems including Windows,
MacOS, Linux, and UNIX use preemptive scheduling
algorithms.
PREEMPTIVE SCHEDULING AND RACE CONDITIONS

Preemptive scheduling can result in race conditions when data are shared among several processes.
Consider the case of two processes that share data. While
one process is updating the data, it is preempted so that
the second process can run. The second process then tries to
read the data, which are in an inconsistent state.
DISPATCHER
Dispatcher module gives control of
the CPU to the process selected by
the CPU scheduler; this involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in
the user program to restart that
program
Dispatch latency – time it takes for the
dispatcher to stop one process and
start another running
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per
time unit
Turnaround time – amount of time to execute a particular
process
Waiting time – amount of time a process has been waiting in the
ready queue
Response time – amount of time it takes from when a request
was submitted until the first response is produced.
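To make the criteria concrete, here is a minimal Python sketch (not from the slides) that computes turnaround, waiting, and response time from assumed per-process timestamps:

```python
# Minimal sketch: compute the scheduling criteria for one process, assuming we
# already know its arrival, first-dispatch, and completion times (made-up data).

def metrics(arrival, first_run, completion, burst):
    turnaround = completion - arrival   # total time from submission to completion
    waiting = turnaround - burst        # time spent waiting in the ready queue
    response = first_run - arrival      # time until the first response is produced
    return turnaround, waiting, response

# Example: arrives at t=0, first dispatched at t=2, finishes at t=12 after a 7-unit burst.
print(metrics(arrival=0, first_run=2, completion=12, burst=7))  # (12, 5, 2)
```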
SCHEDULING ALGORITHM OPTIMIZATION CRITERIA

Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
FIRST-COME, FIRST-SERVED (FCFS) SCHEDULING
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:
P1: 0-24 | P2: 24-27 | P3: 27-30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
FCFS SCHEDULING (CONT.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:
P2: 0-3 | P3: 3-6 | P1: 6-30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
 Consider one CPU-bound and many I/O-bound processes
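A minimal FCFS sketch in Python (not part of the slides) that reproduces the two averages above, 17 for the order P1, P2, P3 and 3 for the order P2, P3, P1:

```python
# FCFS with all processes arriving at t = 0: each process waits for everything
# ahead of it in the queue to finish.

def fcfs_waiting_times(bursts):
    waits, clock = {}, 0
    for name, burst in bursts:      # served strictly in arrival order
        waits[name] = clock
        clock += burst
    return waits

for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
              [("P2", 3), ("P3", 3), ("P1", 24)]):
    w = fcfs_waiting_times(order)
    print(w, "average =", sum(w.values()) / len(w))   # averages 17.0 and 3.0
```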
SHORTEST-JOB-FIRST (SJF) SCHEDULING
Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time

SJF is optimal – gives minimum average waiting time for a given set of processes
Preemptive version is called shortest-remaining-time-first
How do we determine the length of the next CPU burst?
 Could ask the user
 Estimate it from the lengths of previous CPU bursts
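A common way to estimate the next burst is exponential averaging of the previous bursts; the sketch below uses assumed values alpha = 0.5 and an initial guess of 10:

```python
# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
# alpha and the initial guess tau0 are tunable; the values here are assumptions.

def predict_bursts(observed, alpha=0.5, tau0=10.0):
    tau, predictions = tau0, []
    for t in observed:
        predictions.append(tau)              # prediction made before the burst runs
        tau = alpha * t + (1 - alpha) * tau  # fold the measured burst into the estimate
    return predictions

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]
```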
EXAMPLE OF SJF
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
SJF scheduling chart:
P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24
Average waiting time = (3 + 16 + 9 + 0)/4 = 7
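A small Python sketch (not from the slides) of nonpreemptive SJF for this table, assuming all four processes arrive at t = 0:

```python
# Nonpreemptive SJF: run the jobs in order of increasing burst length.

def sjf(bursts):
    waits, clock = {}, 0
    for name, burst in sorted(bursts, key=lambda p: p[1]):  # shortest burst first
        waits[name] = clock
        clock += burst
    return waits

w = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(w)                                      # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print("average =", sum(w.values()) / len(w))  # 7.0
```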


SHORTEST REMAINING TIME FIRST SCHEDULING

Preemptive version of SJN (shortest-job-next, i.e., SJF)
Whenever a new process arrives in the ready queue, the decision
on which process to schedule next is redone using the SJN
algorithm.
Is SRT more “optimal” than SJN in terms of the minimum average
waiting time for a given set of processes?
EXAMPLE OF SHORTEST-REMAINING-TIME-FIRST

Now we add the concepts of varying arrival times and preemption to the
analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF Gantt chart:
P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26
Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5
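A unit-time simulation sketch of shortest-remaining-time-first (not from the slides) that reproduces this Gantt chart and the 6.5 average:

```python
# SRTF: at every time unit, run the ready process with the least remaining work.

def srtf(procs):
    """procs: list of (name, arrival, burst); returns per-process waiting time."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        run = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[run] -= 1
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    return {n: finish[n] - arrival[n] - burst for n, _, burst in procs}

w = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w, "average =", sum(w.values()) / len(w))        # average = 6.5
```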


ROUND ROBIN (RR)
Each process gets a small unit of CPU time (time quantum q), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q
time units at once. No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next process
Performance depends on the size of q:
 q large ⇒ behaves the same as FIFO (FCFS)
 q small ⇒ frequent context switches, so overhead increases
Note that q must be large with respect to the context-switch time, otherwise overhead is too high
EXAMPLE OF RR WITH TIME QUANTUM = 4
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart is:
P1: 0-4 | P2: 4-7 | P3: 7-10 | P1: 10-14 | P1: 14-18 | P1: 18-22 | P1: 22-26 | P1: 26-30

Typically, higher average turnaround than SJF, but better response


q should be large compared to context switch time
 q usually 10 milliseconds to 100 milliseconds,
 Context switch < 10 microseconds
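A round-robin sketch in Python (not from the slides) with q = 4 that reproduces the Gantt chart above:

```python
from collections import deque

# Round robin: each process runs for at most one quantum, then goes to the tail
# of the ready queue if it still has work left. All processes arrive at t = 0.

def round_robin(bursts, quantum):
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    clock, slices = 0, []
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        slices.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)          # preempted: back to the end of the queue
    return slices

for name, start, end in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4):
    print(f"{name}: {start}-{end}")     # P1: 0-4, P2: 4-7, P3: 7-10, then P1 until 30
```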
PRIORITY SCHEDULING
A priority number (integer) is associated with each process

The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
 Preemptive
 Nonpreemptive
SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time
Problem: Starvation – low-priority processes may never execute
Solution: Aging – as time progresses, increase the priority of the process


EXAMPLE OF PRIORITY SCHEDULING
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Priority scheduling Gantt chart:
P2: 0-1 | P5: 1-6 | P1: 6-16 | P3: 16-18 | P4: 18-19
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
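A sketch of nonpreemptive priority scheduling for this table (all processes assumed to arrive at t = 0), which reproduces the 8.2 average:

```python
# Nonpreemptive priority scheduling: smallest priority number runs first.

def priority_schedule(procs):
    """procs: list of (name, burst, priority)."""
    waits, clock = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

w = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                       ("P4", 1, 5), ("P5", 5, 2)])
print(w)                                      # P2=0, P5=1, P1=6, P3=16, P4=18
print("average =", sum(w.values()) / len(w))  # 8.2
```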


PRIORITY SCHEDULING W/ ROUND-ROBIN
Run the process with the highest priority. Processes with the same priority
run round-robin
Example:
Process   Burst Time   Priority
P1        4            3
P2        5            2
P3        8            2
P4        7            1
P5        3            3
Gantt chart with time quantum = 2 (assuming processes of equal priority are queued in the order listed):
P4: 0-7 | P2: 7-9 | P3: 9-11 | P2: 11-13 | P3: 13-15 | P2: 15-16 | P3: 16-20 | P1: 20-22 | P5: 22-24 | P1: 24-26 | P5: 26-27
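A sketch (not from the slides) that combines per-priority ready queues with round-robin inside each priority level; with q = 2 and the processes queued in the order listed, it reproduces the chart above:

```python
from collections import deque

# Priority scheduling with round-robin: always serve the highest-priority
# non-empty queue; processes sharing a priority level take turns (quantum q).

def priority_rr(procs, quantum):
    """procs: list of (name, burst, priority); smaller number = higher priority."""
    remaining = {name: burst for name, burst, _ in procs}
    queues = {}
    for name, _, prio in procs:
        queues.setdefault(prio, deque()).append(name)
    clock, slices = 0, []
    while any(queues.values()):
        prio = min(p for p, q in queues.items() if q)   # highest non-empty priority
        name = queues[prio].popleft()
        run = min(quantum, remaining[name])
        slices.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queues[prio].append(name)                   # stay at the same level
    # merge back-to-back slices of the same process for readability
    merged = []
    for name, start, end in slices:
        if merged and merged[-1][0] == name and merged[-1][2] == start:
            merged[-1] = (name, merged[-1][1], end)
        else:
            merged.append((name, start, end))
    return merged

for name, start, end in priority_rr([("P1", 4, 3), ("P2", 5, 2), ("P3", 8, 2),
                                     ("P4", 7, 1), ("P5", 3, 3)], quantum=2):
    print(f"{name}: {start}-{end}")
```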
MULTILEVEL QUEUE
The ready queue consists of multiple queues
Multilevel queue scheduler defined by the following parameters:
 Number of queues
 Scheduling algorithms for each queue
 Method used to determine which queue a process will enter when that process
needs service
 Scheduling among the queues
MULTILEVEL QUEUE
With priority scheduling, have separate queues for each priority.
Schedule the process in the highest-priority queue!
MULTILEVEL QUEUE

Prioritization based upon process type


MULTILEVEL FEEDBACK QUEUE
A process can move between the various queues.
Multilevel-feedback-queue scheduler defined by the following
parameters:
 Number of queues
 Scheduling algorithms for each queue
 Method used to determine when to upgrade a process
 Method used to determine when to demote a process
 Method used to determine which queue a process will enter when that process needs
service

Aging can be implemented using a multilevel feedback queue


EXAMPLE OF MULTILEVEL FEEDBACK QUEUE
Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

Scheduling
 A new process enters queue Q0, which is served in RR
 When it gains the CPU, the process receives 8 milliseconds
 If it does not finish in 8 milliseconds, the process is moved to queue Q1
 At Q1 the job is again served in RR and receives 16 additional milliseconds
 If it still does not complete, it is preempted and moved to queue Q2
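A simplified Python sketch of these three queues (the jobs A, B, C and their bursts are made up; it assumes all jobs arrive at t = 0 and ignores preemption of lower queues by new arrivals):

```python
from collections import deque

# Multilevel feedback queue: Q0 is RR with q=8, Q1 is RR with q=16, Q2 is FCFS.
# A job that exhausts its quantum is demoted to the next lower queue.

def mlfq(jobs, quanta=(8, 16, None)):          # None means FCFS (run to completion)
    remaining = dict(jobs)
    queues = [deque(name for name, _ in jobs), deque(), deque()]
    clock, log = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        q = quanta[level]
        name = queues[level].popleft()
        run = remaining[name] if q is None else min(q, remaining[name])
        log.append((name, level, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queues[min(level + 1, 2)].append(name)           # demote one level
    return log

for name, level, start, end in mlfq([("A", 30), ("B", 6), ("C", 20)]):
    print(f"{name} in Q{level}: {start}-{end}")
```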
MULTIPLE-PROCESSOR SCHEDULING
CPU scheduling more complex when multiple CPUs are available
A multiprocessor may be any one of the following architectures:
 Multicore CPUs
 Multithreaded cores
 NUMA systems (Non Uniform Memory Access)
 Heterogeneous multiprocessing
MULTIPLE-PROCESSOR SCHEDULING
Symmetric multiprocessing (SMP) is where each processor is self-scheduling.
All threads may be in a common ready queue (a)
Each processor may have its own private queue of threads (b)
MULTIPLE-PROCESSOR SCHEDULING – LOAD BALANCING

If SMP, need to keep all CPUs loaded for efficiency
Load balancing attempts to keep the workload evenly distributed
Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs
Pull migration – an idle processor pulls a waiting task from a busy processor
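A rough sketch of push migration (not from the slides); the run-queue contents and the imbalance threshold are assumptions for illustration:

```python
# Push migration: a periodic task finds the most and least loaded CPUs and, if
# the imbalance is large enough, moves one task to the lightly loaded CPU.

def push_migration(run_queues, threshold=2):
    """run_queues: dict mapping CPU id -> list of runnable task names."""
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    idlest = min(run_queues, key=lambda c: len(run_queues[c]))
    if len(run_queues[busiest]) - len(run_queues[idlest]) >= threshold:
        task = run_queues[busiest].pop()
        run_queues[idlest].append(task)        # pushed to the less loaded CPU
    return run_queues

queues = {0: ["T1", "T2", "T3", "T4"], 1: ["T5"], 2: []}
print(push_migration(queues))                  # one task migrates from CPU 0 to CPU 2
```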
