
Lecture - 3

CPU Scheduling
Outline
CPU Scheduling introduction
✓ Scheduling Criteria
✓ Scheduling Algorithms
✓ Scheduling in Linux OS as an example
Process Synchronization
✓ The critical section problem
✓ Software and Hardware solutions for critical section problem
✓ Classic Problems of Synchronization
Deadlocks
✓ Definition and characteristics of deadlock
✓ Methods for handling deadlocks
Main objective of CPU scheduling
▪ Decide which process runs, when, and for how long.
▪ Be fair when allocating the CPU to processes.
▪ Enforce priorities.
▪ Balance resource use across the system.
▪ Keep CPU utilization high (high throughput).
▪ Provide good interactive performance (low latency).


CPU Scheduling introduction
✓ CPU scheduling is the task, performed by the operating system, of deciding the way and order in which processes are executed.
✓ There are two types of CPU scheduling: preemptive and non-preemptive.
✓ A non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks or voluntarily releases the CPU.
✓ A preemptive scheduling algorithm picks a process and lets it run for at most some fixed amount of time.
Comparison of Preemptive and Non-preemptive Scheduling
Parameter: Preemptive Scheduling / Non-preemptive Scheduling
Flexibility: flexible / rigid
Interrupt: a process can be interrupted in between / a process cannot be interrupted until it terminates or its time is up
Cost: has a context-switching cost / no such cost
CPU Utilization: high / low
Waiting Time: less / high
Response Time: less / high
Examples: Round Robin, Shortest Remaining Time First / First Come First Served, Shortest Job First
CPU Scheduling Criteria
CPU utilization: the percentage of time the CPU is executing user-level process code. Goal: maximize.
Throughput: the number of processes that complete their execution per time unit. Goal: maximize.
Turnaround time: the amount of time to execute a particular process. Goal: minimize.
Waiting time: the amount of time a process has been waiting in the ready queue. Goal: minimize.
Response time: the amount of time from when a request was submitted until the first response is produced. Goal: minimize.
CPU Scheduling
✓ When a computer is multi-programmed, it frequently has multiple
processes competing for the CPU at the same time.

✓ When more than one process is in the ready state and there is only one
CPU available, the operating system must decide which process to run first.

✓ The part of the operating system that makes the choice is called the
scheduler; the algorithm it uses is called the scheduling algorithm.
CPU Scheduling
Scheduling Policy
▪ Based on time-sharing
- time slice
▪ Based on priority ranking
- dynamic
▪ Based on Classification of processes
- interactive processes
- batch processes
- real-time processes
System calls related to CPU scheduling
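The slide above does not list any specific calls, so the following is a hedged sketch of scheduling-related system calls (nice, sched_yield, sched_setscheduler, etc.) as exposed through Python's os module on POSIX/Linux. Availability varies by platform, so the sketch reports which calls exist rather than invoking them blindly.

```python
import os

# Scheduling-related system calls exposed through Python's os module.
# Not every platform provides all of them, so we report availability
# instead of calling blindly. (Illustrative sketch only.)
SCHED_CALLS = [
    "nice",                # adjust the process's scheduling priority (niceness)
    "sched_yield",         # voluntarily relinquish the CPU
    "sched_getscheduler",  # query a process's scheduling policy
    "sched_setscheduler",  # set policy (e.g. SCHED_FIFO, SCHED_RR)
    "sched_getaffinity",   # query which CPUs a process may run on
]

def available_sched_calls():
    """Return the subset of scheduling-related calls this platform exposes."""
    return [name for name in SCHED_CALLS if hasattr(os, name)]

if __name__ == "__main__":
    print("available:", available_sched_calls())
```

On Linux all five calls are typically present; on other platforms the list shrinks accordingly.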
Scheduling
❑ The important thing to notice is that some processes (a) spend most of their time computing, while others (b) spend most of their time waiting for I/O. The former are called compute-bound; the latter are called I/O-bound.
Scheduling
❑ There are a variety of situations in which scheduling may occur.

1) When a process exits.

2) When a process blocks on I/O, or a semaphore.

3) When a new process is created.

4) When an I/O interrupt occurs.

5) When a clock interrupt occurs.


❑ Scheduling Algorithm Goals:
❑ All systems
1) Fairness: giving each process a fair share of the CPU
2) Policy enforcement: seeing that stated policy is carried out
3) Balance: keeping all parts of the system busy
❑ Batch systems
1) Throughput: maximize jobs per hour
2) Turnaround time: minimize time between submission and termination
3) CPU utilization: keep the CPU busy all the time
❑ Interactive systems
1) Response time: respond to requests quickly
2) Proportionality: meet users' expectations
❑ Real-time systems
1) Meeting deadlines: avoid losing data
2) Predictability: avoid quality degradation in multimedia systems
Scheduling
❑The managers of corporate computer centers that run many batch
jobs typically look at three metrics to see how well their systems are
performing: throughput, turnaround time, and CPU utilization.

❑ Throughput is the number of jobs per second that the system completes. All things considered, finishing 50 jobs per second is better than finishing 40 jobs per second.

❑Turnaround time is the average time from the moment that a batch
job is submitted until the moment it is completed. It measures how long
the average user has to wait for the output. Here the rule is: Small is
Beautiful.
1) Scheduling in Batch Systems
1.1) First-Come First-Served (FCFS)
1.2) Shortest Job First (SJF)
1.3) Shortest Remaining Time Next
1.4) Three-Level Scheduling
▪ The admission scheduler decides which jobs to admit to the system.
▪ Once a job has been admitted, a process can be created for it and it can contend for the CPU.
▪ We will call the second scheduler the memory scheduler, since it determines which processes are kept in memory and which on the disk.
▪ The third level, the CPU scheduler, picks one of the ready processes in main memory to run next.
First-Come First-Served (FCFS)
Shortest Job First (SJF)
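The SJF slide itself carries no worked example, so here is a minimal sketch of non-preemptive Shortest Job First: at each decision point the scheduler picks, among the processes that have already arrived, the one with the smallest burst time. The process data reuses the arrival/burst figures from the worked example later in the lecture, so the result can be compared against FCFS.

```python
# Minimal non-preemptive Shortest Job First (SJF) sketch.
def sjf(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (start, finish)} under non-preemptive SJF."""
    remaining = sorted(processes, key=lambda p: p[1])  # order by arrival
    time, schedule = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                            # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        schedule[name] = (time, time + burst)
        time += burst
        remaining.remove((name, arrival, burst))
    return schedule

jobs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
print(sjf(jobs))  # P1 runs first (only arrival at t=0), then P2, P4, P3
```

Note that P4 (burst 5) overtakes P3 (burst 9) even though P3 arrived earlier, which is exactly what distinguishes SJF from FCFS.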
1) Scheduling in Batch Systems..

To optimize system performance as a whole, the memory scheduler must decide how many processes it wants in memory; this is called the degree of multiprogramming.
2) Scheduling in Interactive Systems
2.1) Round-Robin Scheduling

✓ One of the oldest, simplest, fairest, and most widely used algorithms is round robin.
✓ Each process is assigned a time interval, called its quantum, during which it is allowed to run.
✓ If the process is still running at the end of the quantum, the CPU is preempted and given to another process.
✓ If the process has blocked or finished before the quantum has elapsed, the CPU switch is done when the process blocks.
✓ Round robin is easy to implement.
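The rules above can be sketched as a short simulation. The process names and burst times below are hypothetical, and for simplicity all processes are assumed ready at time 0.

```python
from collections import deque

# Round-robin sketch: each process runs for at most `quantum` time units,
# then is preempted and sent to the back of the ready queue.
def round_robin(bursts, quantum):
    """bursts: {name: burst_time}. Returns the execution timeline as
    (name, start, end) slices."""
    queue = deque(bursts.items())
    time, timeline = 0, []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)          # run a full quantum, or finish early
        timeline.append((name, time, time + run))
        time += run
        if left > run:                    # quantum expired: preempt, requeue
            queue.append((name, left - run))
    return timeline

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
```

Process C finishes within its first quantum, so it never returns to the queue; A, with the longest burst, is preempted twice before completing.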


2) Scheduling in Interactive Systems
2.2) Priority Scheduling: each runnable process is assigned a priority, and the highest-priority process runs first.

2.3) Multiple Queues: set up priority classes of processes, each with its own queue.

2.4) Guaranteed Scheduling: the system must keep track of how much CPU time each process has had since its creation.

2.5) Lottery Scheduling: the basic idea is to give processes lottery tickets for various system resources, such as CPU time.

✓ Whenever a scheduling decision has to be made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource.
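The lottery idea fits in a few lines. The ticket counts below are illustrative; over many drawings, each process's share of the CPU converges to its share of the tickets.

```python
import random

# Lottery scheduling sketch: draw one ticket uniformly at random and run
# the process that holds it. Ticket counts here are made up for the demo.
def lottery_pick(tickets, rng):
    """tickets: {process: ticket_count}. Return the process holding the
    randomly drawn ticket."""
    draw = rng.randrange(sum(tickets.values()))
    for proc, count in tickets.items():
        if draw < count:
            return proc
        draw -= count

rng = random.Random(42)                 # seeded so the demo is reproducible
tickets = {"A": 75, "B": 25}            # A holds 75% of the tickets
wins = {"A": 0, "B": 0}
for _ in range(10_000):                 # 10,000 scheduling decisions
    wins[lottery_pick(tickets, rng)] += 1
print(wins)                             # A wins roughly 7,500 of them
```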

2.6) Fair-Share Scheduling: so far we have assumed that each process is scheduled on its own, without regard to who its owner is; fair-share scheduling instead allocates CPU time per user or group.
3) Scheduling in Real-Time Systems
✓ Events may be periodic (occurring at regular intervals) or aperiodic (occurring unpredictably).
✓ A system may have to respond to multiple periodic event streams.
✓ Depending on how much time each event requires for processing, it may not even be possible to handle them all.
✓ For example, if there are m periodic events and event i occurs with period Pi and requires Ci seconds of CPU time per event, then the load can only be handled if

Σ (i = 1 to m) Ci / Pi ≤ 1

✓ A real-time system that meets this criterion is said to be schedulable.


3) Scheduling in Real-Time Systems
✓ As an example, consider a soft real-time system with three periodic events, with periods of 100, 200, and 500 msec, respectively.
✓ If these events require 50, 30, and 100 msec of CPU time per event, respectively, the system is schedulable because 0.5 + 0.15 + 0.2 < 1.
✓ If a fourth event with a period of 1 sec is added, the system will remain schedulable as long as this event does not need more than 150 msec of CPU time per event.
✓ Implicit in this calculation is the assumption that the context-switching overhead is so small that it can be ignored.
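The example can be checked directly against the Σ Ci/Pi ≤ 1 criterion:

```python
# Schedulability check for the periodic-event example above: periods of
# 100, 200, 500 (and optionally 1000) msec with CPU demands of 50, 30,
# 100 (and 150) msec. The system is schedulable when sum(Ci/Pi) <= 1.
def utilization(events):
    """events: list of (Ci, Pi) pairs in the same time unit."""
    return sum(c / p for c, p in events)

base = [(50, 100), (30, 200), (100, 500)]
print(round(utilization(base), 4))                 # 0.85: schedulable
print(round(utilization(base + [(150, 1000)]), 4)) # 1.0: just barely schedulable
```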
CPU scheduling algorithms
Key terminology:
✓ Arrival Time: the time at which a process enters the ready queue.
✓ Completion Time: the time at which a process finishes execution.
✓ Burst Time: the CPU time a process requires.
✓ Turn Around Time: completion time minus arrival time.
Example: Consider the processes P1, P2, P3, P4 given in the table below, which arrive for execution in that order, with the given Arrival Time and Burst Time.
PROCESS   ARRIVAL TIME   BURST TIME
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Execution order (FCFS): P1, P2, P3, P4

PROCESS   WAIT TIME      TURN AROUND TIME
P1        0              8 - 0 = 8
P2        8 - 1 = 7      12 - 1 = 11
P3        12 - 2 = 10    21 - 2 = 19
P4        21 - 3 = 18    26 - 3 = 23

Total Wait Time = 0 + 7 + 10 + 18 = 35 ms
Average Waiting Time = (Total Wait Time) / (Total number of processes) = 35/4 = 8.75 ms
Total Turn Around Time = 8 + 11 + 19 + 23 = 61 ms
Average Turn Around Time = (Total Turn Around Time) / (Total number of processes) = 61/4 = 15.25 ms
Throughput = 4 jobs / 26 sec ≈ 0.15385 jobs/sec
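The FCFS arithmetic above can be reproduced in a few lines, which is a handy way to double-check the tables:

```python
# FCFS computation for the worked example: reproduces average wait 8.75 ms,
# average turnaround 15.25 ms, and throughput 4/26 jobs per second.
def fcfs(processes):
    """processes: list of (name, arrival, burst) in arrival order.
    Returns (avg_wait, avg_turnaround, throughput)."""
    time, waits, tats = 0, [], []
    for _, arrival, burst in processes:
        time = max(time, arrival)      # CPU may sit idle until the job arrives
        waits.append(time - arrival)   # waiting time in the ready queue
        time += burst
        tats.append(time - arrival)    # turnaround = completion - arrival
    n = len(processes)
    return sum(waits) / n, sum(tats) / n, n / time

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
avg_wait, avg_tat, throughput = fcfs(procs)
print(avg_wait, avg_tat, round(throughput, 5))  # 8.75 15.25 0.15385
```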
Scheduling in Linux OS as an example
▪ Earlier versions of Linux: simple and straightforward
- at every process switch, scan the runnable processes, compute
the priorities, and select the best to run.
▪ Much more complex in Linux 2.6
- constant-time (O(1)) scheduling
▪ 3 scheduling classes
- SCHED_FIFO: First-In First-Out real-time
- SCHED_RR: Round-Robin real-time
- SCHED_NORMAL: conventional time-shared processes
Process Synchronization
✓ It ensures the orderly sharing of system resources.
✓ Concurrent access to shared data may result in data inconsistency.
✓ Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
✓ Example: consider a car process and a train process sharing a crossing; if only one process is active, no resource conflict exists.
The critical section problem
▪ Each process has a code segment, called its critical section, in which the shared data is accessed.
▪ N processes all compete to use some shared data.
▪ Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
▪ A solution to the critical section problem must satisfy: mutual exclusion, progress, and bounded waiting.
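A minimal sketch of mutual exclusion in practice: two threads increment a shared counter, and the read-modify-write sequence is the critical section. A lock guarantees only one thread is inside it at a time, so no updates are lost. (The counter and thread counts are illustrative.)

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, entering the critical
    section under the lock each time."""
    global counter
    for _ in range(n):
        with lock:            # enter critical section (mutual exclusion)
            counter += 1      # shared data is accessed only under the lock

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: with the lock, no increments are lost
```

Without the lock, the interleaved read-modify-write sequences could lose updates, which is exactly the data inconsistency the critical-section problem describes.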
Software and Hardware solutions for
critical section problem

Reading assignment
Deadlocks
A set of two or more processes is deadlocked if the processes are blocked (i.e., in the waiting state), each holding a resource and waiting to acquire a resource held by another process in the set.
Examples
Characteristics of deadlock
✓ Deadlock can arise if four conditions hold simultaneously:
✓ Mutual exclusion: only one process at a time can use a resource.
✓ Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
✓ No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
✓ Circular wait: there exists a set {P0, P1, P2, ..., Pn} of waiting processes such that P0 waits for a resource held by P1, P1 for one held by P2, ..., and Pn for one held by P0.
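The circular-wait condition can be checked mechanically on a wait-for graph, where an edge P → Q means process P is waiting for a resource held by Q; a cycle in this graph is the circular wait. The graphs below are illustrative, assuming each process waits on at most one other.

```python
# Circular-wait check on a wait-for graph. An edge P -> Q means P waits
# for a resource held by Q; a cycle is the circular-wait condition.
def has_cycle(waits_for):
    """waits_for: {process: process it waits on, or None}. Walk each
    chain with a visited set; revisiting a node means a cycle."""
    for start in waits_for:
        seen = set()
        node = start
        while node is not None:
            if node in seen:
                return True
            seen.add(node)
            node = waits_for.get(node)
    return False

deadlocked = {"P0": "P1", "P1": "P2", "P2": "P0"}  # P0 -> P1 -> P2 -> P0
healthy = {"P0": "P1", "P1": "P2", "P2": None}     # chain ends: no cycle
print(has_cycle(deadlocked), has_cycle(healthy))   # True False
```

This is the core idea behind deadlock detection, one of the handling strategies listed in the next section.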


Methods for handling deadlock
The various strategies for handling deadlock are-
1. Deadlock Prevention

2. Deadlock Avoidance

3. Deadlock Detection and Recovery

4. Deadlock Ignorance

▪ Deadlock ignorance means ignoring the problem and pretending that deadlocks never occur in the system; this approach is used by most operating systems, including Linux.
