OS Ch#5 CPU Scheduling

CPU scheduling aims to maximize CPU utilization and minimize response time. There are several scheduling algorithms like FCFS, SJF, priority, and round robin. The short-term scheduler selects processes from the ready queue. Preemptive scheduling allows interrupting the current process, while non-preemptive runs processes to completion. Multilevel queue and feedback queue schedulers prioritize interactive processes over batch processes. Multiple CPU systems require load balancing to distribute processes evenly across CPUs. Real-time systems must meet process deadlines to be considered hard real-time.

CPU Scheduling

Basic Concepts
● Maximum CPU utilization is obtained with multiprogramming
● CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
● A CPU burst is followed by an I/O burst
● The CPU burst distribution is of main concern
Histogram of CPU-burst Times
CPU Scheduler
● Short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them
● Queue may be ordered in various ways
● CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
● Scheduling under 1 and 4 is non-preemptive
● All other scheduling is preemptive
● Consider access to shared data
● Consider preemption while in kernel mode
● Consider interrupts occurring during crucial OS activities
Preemptive vs. Non-Preemptive Scheduling
Dispatcher
● Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
◦ switching context
◦ switching to user mode
◦ jumping to the proper location in the user program to restart that program
● Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
● CPU utilization – keep the CPU as busy as possible
● Throughput – number of processes that complete their execution per time unit
● Turnaround time – amount of time to execute a particular process
◦ Turnaround time (TAT) = Completion time – Arrival time
● Waiting time – amount of time a process has been waiting in the ready queue
● Response time – amount of time from when a request was submitted until the first response is produced, not until the output is complete (for time-sharing environments)
Scheduling Algorithm Optimization Criteria
● Max CPU utilization
● Max throughput
● Min turnaround time
● Min waiting time
● Min response time
First-Come, First-Served (FCFS) Scheduling

  Process   Burst Time
  P1            24
  P2             3
  P3             3
● Suppose that the processes arrive at time 0, in the order: P1, P2, P3
● The Gantt chart for the schedule is:

  | P1 | P2 | P3 |
  0    24   27   30

● Waiting time for P1 = 0; P2 = 24; P3 = 27
● Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
● Suppose that the processes arrive at time 0, in the order: P2, P3, P1
● The Gantt chart for the schedule is:

  | P2 | P3 | P1 |
  0    3    6    30

● Waiting time for P1 = 6; P2 = 0; P3 = 3
● Average waiting time: (6 + 0 + 3)/3 = 3
● Much better than the previous case
● Convoy effect – short processes get stuck behind a long process
◦ Consider one CPU-bound and many I/O-bound processes
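
The waiting-time arithmetic above is easy to check mechanically. Below is a minimal Python sketch (not from the slides; function and variable names are illustrative) that reproduces both FCFS orderings, assuming every process arrives at time 0.

# FCFS waiting times, assuming all processes arrive at time 0.
def fcfs_waiting_times(bursts):
    """Return per-process waiting times for FCFS in the given arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)          # a process waits until all earlier ones finish
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3) -> waits [0, 24, 27], average 17
# Order P2, P3, P1 (bursts 3, 3, 24) -> waits [0, 3, 6],   average 3
for label, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(label, waits, sum(waits) / len(waits))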
Shortest-Job-First (SJF) Scheduling
● Associate with each process the length of its next CPU burst
◦ Use these lengths to schedule the process with the shortest time
● SJF is optimal – gives the minimum average waiting time for a given set of processes
◦ The difficulty is knowing the length of the next CPU request
◦ Could ask the user
Example of SJF
  Process   Arrival Time   Burst Time
  P1             0.0            6
  P2             2.0            8
  P3             4.0            7
  P4             5.0            3
● SJF scheduling chart (all processes treated as available at time 0):

  | P4 | P1 | P3 | P2 |
  0    3    9    16   24

● Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
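
As a rough illustration, the chart above can be reproduced with the short Python sketch below (illustrative code, not from the slides); like the chart, it treats all four processes as available at time 0 and simply picks the shortest burst next.

# Non-preemptive SJF sketch; all processes treated as available at time 0,
# which is how the Gantt chart above is drawn.
def sjf_schedule(processes):
    """processes: list of (name, burst). Returns (execution order, waiting times)."""
    order = sorted(processes, key=lambda p: p[1])       # shortest burst first
    waits, clock = {}, 0
    for name, burst in order:
        waits[name] = clock
        clock += burst
    return [name for name, _ in order], waits

order, waits = sjf_schedule([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(order)                             # ['P4', 'P1', 'P3', 'P2']
print(sum(waits.values()) / len(waits))  # 7.0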


Determining Length of Next CPU Burst
● Can only estimate the length – the next burst is commonly predicted as an exponential average of the measured lengths of previous CPU bursts:
◦ τ(n+1) = α · t(n) + (1 − α) · τ(n), with 0 ≤ α ≤ 1, where t(n) is the actual length of the nth CPU burst and τ(n+1) is the predicted value for the next burst
Prediction of the Length of the Next CPU Burst
● Figure 6.3 shows an exponential average with α = 1/2 and τ0 = 10
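
A hedged sketch of the exponential-average predictor is shown below (Python; names are illustrative, and the sample burst lengths follow the pattern commonly used with Figure 6.3 rather than measurements of a real workload).

# Exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
def predict_bursts(measured_bursts, alpha=0.5, tau0=10.0):
    """Yield the prediction made *before* each measured burst is observed."""
    tau = tau0
    for t in measured_bursts:
        yield tau
        tau = alpha * t + (1 - alpha) * tau

# Illustrative measured bursts; with alpha = 1/2 and tau0 = 10 the successive
# predictions are 10, 8, 6, 6, 5, 9, 11.
for guess in predict_bursts([6, 4, 6, 4, 13, 13, 13]):
    print(guess)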
Priority Scheduling
● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
◦ Preemptive
◦ Nonpreemptive
● SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
● Problem ≡ Starvation – low-priority processes may never execute
● Solution ≡ Aging – as time progresses, increase the priority of the process
Example of Priority Scheduling

  Process   Burst Time   Priority
  P1            10           3
  P2             1           1
  P3             2           4
  P4             1           5
  P5             5           2
● Priority scheduling Gantt chart (all processes arrive at time 0):

  | P2 | P5 | P1 | P3 | P4 |
  0    1    6    16   18   19

● Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
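
The 8.2 msec figure can be checked with a small non-preemptive priority sketch (illustrative Python, not from the slides); smaller numbers mean higher priority and all processes are assumed to arrive at time 0.

# Non-preemptive priority scheduling sketch (smallest number = highest priority);
# all processes assumed to arrive at time 0.
def priority_schedule(processes):
    """processes: list of (name, burst, priority). Returns (order, waiting times)."""
    order = sorted(processes, key=lambda p: p[2])       # highest priority first
    waits, clock = {}, 0
    for name, burst, _priority in order:
        waits[name] = clock
        clock += burst
    return [name for name, _, _ in order], waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
order, waits = priority_schedule(procs)
print(order)                             # ['P2', 'P5', 'P1', 'P3', 'P4']
print(sum(waits.values()) / len(waits))  # 8.2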


Round Robin (RR)
● Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
● If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
● A timer interrupts every quantum to schedule the next process
● Performance
◦ q large ⇒ behaves like FIFO (FCFS)
◦ q small ⇒ q must still be large with respect to the context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4

  Process   Burst Time
  P1            24
  P2             3
  P3             3
● The Gantt chart is (all processes arrive at time 0):

  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
  0    4    7    10   14   18   22   26   30

● Completion time: P1 = 30; P2 = 7; P3 = 10
● TAT = Completion time – Arrival time: P1 = 30 − 0 = 30; P2 = 7 − 0 = 7; P3 = 10 − 0 = 10
● Avg. TAT = (30 + 7 + 10)/3 ≈ 15.67
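
The chart and turnaround figures above follow from a straightforward ready-queue simulation; a minimal Python sketch (illustrative, all processes arriving at time 0) is shown below.

from collections import deque

# Round-robin sketch: quantum q = 4, all processes arrive at time 0.
def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns completion times by name."""
    ready = deque(processes)
    clock, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining == run:
            completion[name] = clock                 # finished
        else:
            ready.append((name, remaining - run))    # preempted, back of queue
    return completion

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(done)                            # {'P2': 7, 'P3': 10, 'P1': 30}
print(sum(done.values()) / len(done))  # average turnaround ~= 15.67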
Multilevel Queue
● A process is assigned to one queue based on memory size, priority, or process type
● The ready queue is partitioned into separate queues, e.g.:
◦ foreground (interactive)
◦ background (batch)
● Each queue has its own scheduling algorithm:
◦ foreground – RR
◦ background – FCFS
● Scheduling must also be done between the queues (see the sketch below):
◦ Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
◦ Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Queue Scheduling
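
As referenced above, here is a minimal sketch of fixed-priority scheduling between two queues (illustrative Python; the queue names and the pick_next helper are assumptions, not part of the slides). The background queue is served only when the foreground queue is empty, which is exactly how starvation can arise.

from collections import deque

# Fixed-priority scheduling between two queues: background (FCFS) work is
# served only when the foreground (RR) queue is empty.
foreground = deque()     # interactive processes, scheduled RR within the queue
background = deque()     # batch processes, scheduled FCFS within the queue

def pick_next():
    """Return (queue name, process) for the next process to dispatch, if any."""
    if foreground:
        return "foreground", foreground.popleft()
    if background:
        return "background", background.popleft()
    return None, None

foreground.extend(["editor", "shell"])
background.extend(["payroll_batch"])
print(pick_next())       # ('foreground', 'editor') -- batch work waits, possibly forever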
Multilevel Feedback Queue
● The idea is to separate processes according to their CPU burst time
● If a process uses too much CPU time, it is moved to a lower-priority queue
● If a process waits too long (aging) in a lower-priority queue, it is moved to a higher-priority queue
● A multilevel-feedback-queue scheduler is defined by the following parameters:
◦ number of queues
◦ scheduling algorithm for each queue
◦ method used to determine when to upgrade a process
◦ method used to determine when to demote a process
◦ method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
● Three queues:
◦ Q0 – RR with time quantum 8 milliseconds
◦ Q1 – RR with time quantum 16 milliseconds
◦ Q2 – FCFS
● Scheduling
◦ A new job enters queue Q0, which is served FCFS
  ● When it gains the CPU, the job receives 8 milliseconds
  ● If it does not finish in 8 milliseconds, the job is moved to queue Q1
◦ At Q1 the job is again served FCFS and receives 16 additional milliseconds
  ● If it still does not complete, it is preempted and moved to queue Q2
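
A rough simulation of these three queues is sketched below (illustrative Python; job names and burst lengths are made up, and new arrivals and aging are not modeled). A job that exhausts its quantum is demoted to the next queue, exactly as described above.

from collections import deque

# Three-queue MLFQ sketch: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
def mlfq(jobs):
    """jobs: list of (name, burst). Returns names in completion order."""
    queues = [deque(jobs), deque(), deque()]
    quanta = [8, 16, None]                    # None = run to completion (FCFS)
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        q = quanta[level]
        run = remaining if q is None else min(q, remaining)
        remaining -= run
        if remaining == 0:
            finished.append(name)
        else:
            queues[level + 1].append((name, remaining))       # demote one level
    return finished

print(mlfq([("short", 5), ("medium", 20), ("long", 40)]))     # ['short', 'medium', 'long']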
Multiple-Processor Scheduling
● CPU scheduling more complex when multiple CPUs are available
● Homogeneous processors within a multiprocessor
● Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
● Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes
Multiple-Processor Scheduling
● Processor affinity – a process has an affinity for the processor on which it is currently running
● Soft affinity: the operating system attempts to keep a process running on the same processor but does not guarantee that it will do so
● Hard affinity: a process can specify a subset of processors on which it may run
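
On Linux, hard affinity can be requested from user space; a tiny sketch using Python's os.sched_setaffinity wrapper (Linux-only; the CPU numbers below are illustrative) is shown here.

import os

# Hard affinity via Linux's sched_setaffinity (exposed as os.sched_setaffinity;
# available on Linux only). CPU numbers below are illustrative.
pid = 0                                   # 0 means "the calling process"
print(os.sched_getaffinity(pid))          # e.g. {0, 1, 2, 3}

os.sched_setaffinity(pid, {0, 1})         # restrict this process to CPUs 0 and 1
print(os.sched_getaffinity(pid))          # {0, 1}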
Multiple-Processor Scheduling – Load Balancing
● If SMP, need to keep all CPUs loaded for efficiency
● Load balancing attempts to keep the workload evenly distributed
● Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs
● Pull migration – an idle processor pulls a waiting task from a busy processor
Real-Time CPU Scheduling
● Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled
● Hard real-time systems – a task must be serviced by its deadline
● Two types of latencies affect performance:
1. Interrupt latency – time from the arrival of an interrupt to the start of the routine that services the interrupt
2. Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another
