
UNIT-III CPU Scheduling

Syllabus: Total Marks:16

3.1 Scheduling: basic concept, CPU and I/O burst cycle

3.2 Preemptive and Non-preemptive scheduling, scheduling criteria

3.3 Types of Scheduling algorithms:

First Come First Serve (FCFS)

Shortest Job First (SJF)

Shortest Remaining Time Next (SRTN)

Round Robin (RR)

Priority Scheduling

Multilevel Queue Scheduling

3.4 Deadlock:

System Models

Necessary conditions leading to Deadlock

Deadlock Handling:

Deadlock prevention

Deadlock avoidance – Banker’s Algorithm


Scheduling:
-In a computer system, the CPU (Central Processing Unit) is the brain that performs all
the calculations and operations.
-Many processes (programs) want to use the CPU at the same time.
-But the CPU can only work on one process at a time. So, the system needs a way to
decide which process gets the CPU next.
-This decision-making process is called CPU Scheduling.
-The main goal of CPU scheduling is to maximize CPU usage, reduce waiting time,
and improve system performance. It helps in handling multiple processes efficiently.
-There is a special part of the operating system called the CPU Scheduler. It selects one
process from the ready queue (list of waiting processes) and assigns the CPU to it.

➢ Why Scheduling is Needed

- A computer runs many processes at the same time (called multitasking).


- The CPU is a limited resource, and all processes cannot use it at once.
- Some processes take more time, while others are short.
- Without scheduling, some processes might wait too long or never get a chance.
- So, to maintain fairness and efficiency, scheduling is necessary.

➢ Types of Scheduling

There are different types of scheduling:

- Long-Term Scheduling – decides which processes are admitted into the system.
- Short-Term Scheduling (CPU scheduling) – decides which ready process runs next.
- Medium-Term Scheduling – temporarily removes processes to free up memory.

But when we say CPU Scheduling, we usually refer to short-term scheduling.

➢ Terminologies Used in CPU Scheduling

• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
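
The sketch below in Python (the process values are made-up, purely for illustration) shows how these two formulas are applied:

# Hypothetical values, only to illustrate the formulas above.
# Each tuple is (name, arrival_time, burst_time, completion_time).
processes = [
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # Turn Around Time = Completion Time - Arrival Time
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst Time
    print(name, "TAT =", turnaround, "WT =", waiting)

Here P1 has a turnaround time of 5 and waits 0 units, while P2 has a turnaround time of 7 and waits 4 units.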


CPU and I/O burst cycle:


Every process alternates between two types of activities:

1. CPU Burst
o Time when the process is using the CPU to execute instructions.
o Example: Calculating, processing logic, etc.
2. I/O Burst
o Time when the process is waiting for input or output (I/O) operations.
o Example: Reading from a file, writing to disk, waiting for user input.

A process does not keep using the CPU continuously. Instead, it uses the CPU for a while
(CPU burst) and then waits for I/O (I/O burst). This cycle repeats until the process
finishes.
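
One minimal way to picture this cycle in code is as an alternating list of bursts (the durations below are invented for illustration):

# A process modelled as an alternating sequence of CPU and I/O bursts.
# Burst lengths are illustrative only.
process_bursts = [
    ("CPU", 4),   # execute instructions
    ("I/O", 6),   # wait for a disk read
    ("CPU", 3),   # more computation
    ("I/O", 5),   # wait for user input
    ("CPU", 2),   # final CPU burst before termination
]

for kind, duration in process_bursts:
    print(kind, "burst of", duration, "time units")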

➢ Alternating Sequence of CPU And I/O Bursts

Fig: Alternating Sequence of CPU And I/O Bursts.


➢ Why Burst Cycle is Important in Scheduling

Understanding the CPU and I/O burst cycle helps in:

• Choosing the best scheduling algorithm.


• Balancing CPU and I/O usage.
• Ensuring smooth and efficient execution of all processes.

If the scheduler selects a process wisely (like choosing a short CPU burst process first), it can
reduce waiting time for others and improve system performance.

Preemptive and Non-Preemptive Scheduling


In operating systems, CPU scheduling is used to decide which process should run on the CPU
next. This can be done in two ways:

1. Preemptive Scheduling

• The CPU can be taken away from a running process if needed.


• This is useful when a higher priority process arrives.
• It is more complex but gives better responsiveness.
• Allows fair use of the CPU among all processes.
• Preemptive scheduling is used when a process switches from running state to ready state
or from the waiting state to the ready state.

Example:
Think of a traffic signal. If an ambulance comes, it is given priority even if other vehicles are
waiting.
Similarly, in preemptive scheduling, high-priority or short tasks can interrupt running processes.

Examples of Preemptive Algorithms:

• Round Robin (RR)


• Shortest Remaining Time First (SRTF)
• Priority Scheduling (Preemptive)

2. Non-Preemptive Scheduling

• Once a process gets the CPU, it keeps it until it finishes or goes to waiting (I/O).
• The CPU cannot be taken away from the process.
• It is simpler and easier to implement.
• But it can be unfair to short processes if a long process is already running.


• Non-preemptive scheduling is used when a process terminates, or when a process
switches from the running state to the waiting state.

Example:
Imagine a single-lane road. If a car enters, other cars must wait until it finishes its journey.
Similarly, in non-preemptive scheduling, once a process starts, others have to wait.

Examples of Non-Preemptive Algorithms:

• First-Come, First-Served (FCFS)


• Shortest Job First (SJF – Non-preemptive)

Scheduling Criteria
The scheduling criteria for CPU scheduling algorithms include the following key metrics:

⏳1. CPU Utilization

• Shows how busy the CPU is.


• Goal: Keep CPU as busy as possible (ideally 100%).

💨 2. Throughput

• Number of processes completed in a given time.


• Higher throughput = better performance.


⏱️ 3. Turnaround Time

• Time taken from process submission to completion.


• Formula:
Turnaround Time = Completion Time - Arrival Time

⌛ 4. Waiting Time

• Time a process spends in the ready queue, waiting for the CPU.
• Lower waiting time = better scheduling.

🕒 5. Response Time

• Time between submitting the process and the first response from CPU.
• Important in real-time systems where quick response is needed.

Types of Scheduling Algorithms:


The CPU scheduling algorithm is a method used by the operating system to choose which
process will run next on the CPU. Below are some commonly used scheduling algorithms:

1. First-Come First-Serve Scheduling, FCFS:

- It is the simplest scheduling algorithm.


-The process that comes first is served first.
- It is non-preemptive, meaning once a process starts, it runs until it finishes.
- Like people standing in a line at a ticket counter – the person who comes first gets the
ticket first.
-In this algorithm, the process that requests the CPU first is allocated the CPU first.
-FCFS scheduling is implemented with a FIFO queue.
-When the CPU is available, it is allocated to the process at the head of the queue.
-Once the CPU is allocated to a process, that process is removed from the queue.
-The process releases the CPU on its own.
-Easy to understand and implement.
-Its implementation is based on FIFO queue.
-Poor in performance as average wait time is high.


Example 01:

Process Burst Time


P1 24
P2 3
P3 3

Suppose that the processes arrive in the order: P1, P2, P3.

• Consider order P1 → P2 → P3:

• Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

• Average waiting time = (0 + 24 + 27) / 3 = 17 ms.

• Now consider order P2 → P3 → P1:

• Gantt chart:

| P2 | P3 | P1 |
0    3    6    30

• Average waiting time = (0 + 3 + 6) / 3 = 3 ms.
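
A minimal FCFS sketch in Python (assuming, as in this example, that all processes arrive at time 0) reproduces these waiting times:

def fcfs_waiting_times(burst_times):
    # Waiting time of each process when served strictly in the given order.
    # Assumes every process arrives at time 0.
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)   # a process waits until all earlier ones finish
        elapsed += burst
    return waiting

print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17 ms
print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6], average 3 ms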


Advantages:

• Easy to implement
• Fair for processes that come first

Disadvantages:

• Long waiting time for short processes if a long one comes first
• Convoy effect – long process delays all others

2. Shortest-Job-First Scheduling, SJF:

• SJF can be implemented as either a non-preemptive or a preemptive scheduling algorithm.
• Best approach to minimize waiting time.
• Technically this algorithm picks a process based on the next shortest CPU
burst, not the overall process time.
• Easy to implement in Batch systems where required CPU time is known in
advance.
• Impossible to implement in interactive systems where required CPU time is
not known.
• The processor should know in advance how much time a process will take.
• The process with the shortest CPU burst time is selected next.

Example 1:

Given processes:

Process Burst Time


P1 6
P2 8
P3 7
P4 3


• By SJF scheduling, the process with the shortest burst runs first: P4, P1, P3, P2.
• Gantt chart:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms.
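
A small non-preemptive SJF sketch (assuming all processes are available at time 0, as in Example 1) sorts by burst time and gives the same average waiting time:

def sjf_waiting_times(processes):
    # Non-preemptive SJF with all processes available at time 0.
    # `processes` maps a process name to its CPU burst time.
    waiting, elapsed = {}, 0
    for name, burst in sorted(processes.items(), key=lambda item: item[1]):
        waiting[name] = elapsed
        elapsed += burst
    return waiting

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))   # 7.0 ms average waiting time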

Example 2:

• Given processes:

Process Arrival Time Burst Time


P1 0 8
P2 1 4
P3 2 9
P4 3 5

• By preemptive SJF scheduling:

• Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

• Average waiting time = [(10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)] / 4 = 26 / 4 = 6.5 ms.

Advantages:

• Reduces average waiting time

Disadvantages:

• Hard to know the burst time in advance


• May lead to starvation for longer processes.


3. Shortest Remaining Time Next (SRTN):

- It is the preemptive version of Shortest Job First (SJF) scheduling, also called
Shortest Remaining Time First (SRTF).
- CPU is given to the process with the shortest remaining time.
- If a new process arrives with a shorter time, the current process is preempted.

Example 1:

Process   Arrival Time (T0)   CPU Burst Time (∆T)
P0        0                   10
P1        1                   6
P2        3                   2
P3        5                   4

Gantt Chart:

| P0 | P1 | P2 | P1 | P3 | P0 |
0    1    3    5    9    13   22

Initially only process P0 is present, so it is allowed to run. When P1 arrives at time 1, it has
the shortest remaining run time, so P0 is preempted and P1 runs. Such a decision is taken
whenever a new process arrives or the current process blocks. This procedure is repeated
until all processes complete their execution.

Statistics:

Process   Arrival     CPU Burst   Finish      Turnaround Time    Waiting Time
          Time (T0)   Time (∆T)   Time (T1)   (TAT = T1 - T0)    (WT = TAT - ∆T)
P0        0           10          22          22                 12
P1        1           6           9           8                  2
P2        3           2           5           2                  0
P3        5           4           13          8                  4

Average Turnaround Time: (22+8+2+8) / 4 = 40/4 = 10 ms.

Average Waiting Time: (12+2+0+4)/4 = 18 / 4 = 4.5 ms.
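
A compact SRTF simulation (using the same arrival and burst data; ties are broken in favour of the process listed first, i.e. the earlier arrival, which matches the schedule above) can be sketched as follows:

def srtf_finish_times(procs):
    # Preemptive SJF (SRTF). `procs` maps name -> (arrival_time, burst_time),
    # listed in arrival order. Returns the finish time of each process.
    remaining = {p: burst for p, (_, burst) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:                                   # CPU idle until next arrival
            time += 1
            continue
        best = min(ready, key=lambda p: remaining[p])   # shortest remaining time next
        remaining[best] -= 1
        time += 1
        if remaining[best] == 0:
            del remaining[best]
            finish[best] = time
    return finish

print(srtf_finish_times({"P0": (0, 10), "P1": (1, 6), "P2": (3, 2), "P3": (5, 4)}))
# -> {'P2': 5, 'P1': 9, 'P3': 13, 'P0': 22}, matching the statistics table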


Advantages:
• Better average turnaround and waiting time than non-preemptive SJF.

Disadvantages:
• Can lead to starvation of long processes
• Requires accurate prediction of remaining time

4. Round Robin (RR):

- RR is similar to FCFS scheduling, except that preemption is added to switch between
processes: each CPU burst is limited by a fixed time slice called the time quantum.
- Each process is given a fixed time slice or quantum (e.g., 2 ms).
- After the time ends, the process is moved to the back of the queue if not finished.
- It is preemptive.
- Like giving every student in a class 2 minutes to speak. After 2 minutes, the next
student gets the chance.
- Goal: fairness, time sharing.

Example 1:

Process Burst Time


P1 24
P2 3
P3 3

• Gantt chart (with a time quantum of 4 ms):

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

• Average waiting time = [(10 - 4) + 4 + 7] / 3 = 5.66 ms.
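
The following small simulation (quantum of 4 ms, all processes arriving at time 0, matching the example) reproduces these waiting times:

from collections import deque

def round_robin_waiting(bursts, quantum):
    # Round Robin with every process arriving at time 0.
    # `bursts` maps a process name to its CPU burst time.
    remaining = dict(bursts)
    queue = deque(bursts)                   # ready queue in arrival order
    time = 0
    last_ready = {p: 0 for p in bursts}     # moment the process last re-entered the queue
    waiting = {p: 0 for p in bursts}
    while queue:
        p = queue.popleft()
        waiting[p] += time - last_ready[p]  # time spent waiting in the ready queue
        run = min(quantum, remaining[p])
        time += run
        remaining[p] -= run
        last_ready[p] = time
        if remaining[p] > 0:
            queue.append(p)                 # unfinished: go to the back of the queue
    return waiting

w = round_robin_waiting({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(w)                          # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / len(w))   # about 5.66 ms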


Advantages:

• Good for time-sharing systems


• All processes get equal attention

Disadvantages:

• Too short time quantum = too many context switches


• Too long time quantum = becomes like FCFS

5. Priority Scheduling:
- Priority scheduling is a more general case of SJF, in which each job is assigned a priority
and the job with the highest priority gets scheduled first. ( SJF uses the inverse of the next
expected burst time as its priority - The smaller the expected burst, the higher the
priority.)
- Each process is assigned a priority number.
- Process with highest priority gets the CPU first.
- Can be preemptive or non-preemptive.
- Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
- Processes with same priority are executed on first come first served basis.
- Priority can be decided based on memory requirements, time requirements or any other
resource requirement.

Example 1 :
• Given processes:

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2


• By priority scheduling (here a smaller priority number means higher priority):

• Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

• Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms.
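
A non-preemptive priority scheduling sketch (all processes available at time 0 and a smaller number meaning higher priority, as in this example):

def priority_waiting(processes):
    # Non-preemptive priority scheduling, all processes available at time 0.
    # `processes` maps name -> (burst_time, priority); a lower priority value runs first.
    waiting, elapsed = {}, 0
    for name, (burst, _) in sorted(processes.items(), key=lambda item: item[1][1]):
        waiting[name] = elapsed
        elapsed += burst
    return waiting

w = priority_waiting({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)})
print(w)                          # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))   # 8.2 ms average waiting time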

Advantages:

• Important tasks are completed faster

Disadvantages:

• Low-priority processes may starve


• Starvation can be solved using aging (gradually increasing priority of waiting processes)

6. Multilevel Queue Scheduling:


- Multiple queues are maintained for processes with common characteristics.
- Each queue can have its own scheduling algorithms.
- Priorities are assigned to each queue.
- Processes are divided into groups (queues) based on their type (e.g., system,
interactive, batch).
- Each queue has its own scheduling algorithm.
- There is a rule to select which queue’s process gets CPU next.

Example Queues:
- System processes → Priority Scheduling
- User processes → Round Robin
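
As a rough sketch of this idea (the queue levels and process names below are invented for illustration), the dispatcher always serves the highest-priority non-empty queue, and each queue could internally apply its own algorithm:

# Illustrative multilevel queues: lower level number = higher priority.
queues = {
    1: ["sys_logger", "sys_monitor"],   # system processes (priority scheduling inside)
    2: ["editor", "browser"],           # interactive/user processes (round robin inside)
    3: ["batch_report"],                # batch jobs (FCFS inside)
}

def next_process(queues):
    for level in sorted(queues):        # fixed priority between the queues
        if queues[level]:
            return queues[level].pop(0) # each queue could use its own policy here
    return None

print(next_process(queues))             # 'sys_logger'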

Advantages:
• Organizes different types of processes well
• Flexible and can be tuned for better performance

Disadvantages:
• Hard to manage and balance between queues
• Risk of starvation for lower-priority queues


Deadlock:

- In an operating system, deadlock is a situation where a group of processes are waiting


for each other forever and none of them can continue. All the processes are blocked and
the system cannot proceed further.
- Imagine four cars stuck at a four-way signal, each waiting for the other to move first —
no one moves. That’s what happens in a deadlock.
- In the diagram below, for example, Process 1 is holding Resource 1 and waiting for
Resource 2, while Process 2 is holding Resource 2 and waiting for Resource 1.

Fig: Deadlock in Operating system


System Model:

- In an operating system, many processes run at the same time. These processes need
resources like CPU, memory, files, printers, etc., to complete their tasks.
- The System Model explains how processes and resources interact, and how
deadlock can happen when this interaction goes wrong.
➢ Types of Resources
There are two types of system resources:

1. Preemptable Resources

o These resources can be taken away from a process without causing problems.

o Example: CPU, memory.

2. Non-preemptable Resources

o These resources cannot be taken back once given to a process.

o The process must release them voluntarily.

o Example: Printers, files.

Deadlocks usually happen with non-preemptable resources, because they can’t be forcefully
taken back.

➢ Process and Resource Behavior


A process follows these steps to use a resource:

1. Request the resource

2. Use the resource

3. Release the resource

If a resource is not available, the process waits (gets blocked).

Deadlock occurs when processes hold some resources and keep waiting for others, which are
held by other blocked processes — forming a cycle of waiting.


Necessary Conditions for Deadlock:

There are four conditions that must happen together for a deadlock to occur. If even one of
these is missing, deadlock won’t happen.

1. Mutual Exclusion

• Only one process can use a resource at a time.


• If another process wants the same resource, it has to wait.

Example:
Only one process can use the printer at a time.

2. Hold and Wait

• A process is holding at least one resource and waiting for more.


• It does not release the resources it already has.

Example:
Process A is using the printer and waiting for a file.
Process B is using the file and waiting for the printer.
→ Both are holding and waiting.

3. No Preemption

• A resource cannot be taken forcibly from a process.


• It must be released voluntarily by the process after finishing.

Example:
If Process A has the printer, it will not give it up until it's done — even if Process B needs it
urgently.

4. Circular Wait

• There is a cycle of processes, each waiting for a resource held by the next.

Example:

• Process A waits for a file held by B


• B waits for a printer held by C
• C waits for a memory block held by A
→ They are stuck in a circular wait.


Deadlock Handling:
- Deadlock occurs when two or more processes are stuck, each waiting for a resource
that another process is holding. To solve this, operating systems use deadlock
handling methods.

- There are three main ways to handle deadlocks:


1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery

In this explanation, we’ll focus on the first two.

1. Deadlock Prevention:
- The concept of deadlock prevention follows the well-known phrase
“Prevention is better than cure”.
- Rather than searching for a solution after a deadlock has occurred, we try to make sure
that a deadlock can never occur in the first place.
- Deadlock prevention means we change the system rules so that at least one of the four
deadlock conditions never occurs.

Let’s recall the 4 conditions for deadlock:

1. Mutual Exclusion

2. Hold and Wait

3. No Preemption

4. Circular Wait

Now let’s see how we can prevent deadlock by breaking these:

A. Mutual Exclusion

• Not always preventable.

• Some resources must be used one at a time (like printers), so mutual exclusion is
necessary.


B. Hold and Wait

• We can stop hold and wait by forcing each process to request all required resources at
once, before starting.

• But this can cause low resource utilization and long waiting time.

C. No Preemption

• Allow preemption: If a process is waiting and cannot get a resource, then take back the
resources it already has and give them to others.

• This method works, but not all resources can be preempted.

D. Circular Wait

• Prevent circular wait by assigning a fixed order to all resource types (like R1 < R2 < R3).

• A process can only request resources in increasing order, not randomly.

• This prevents any cycle from forming.
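
A small sketch of this resource-ordering rule (the resource names and ranks here are hypothetical): each process must request resources in strictly increasing rank, so no waiting cycle can form.

# Hypothetical fixed ordering of resource types.
RESOURCE_ORDER = {"R1": 1, "R2": 2, "R3": 3}

def acquire_in_order(requested):
    # Grant resources only if they are requested in increasing rank order.
    last_rank = 0
    for res in requested:
        rank = RESOURCE_ORDER[res]
        if rank <= last_rank:
            raise ValueError(res + " requested out of order; circular wait possible")
        print("acquired", res)
        last_rank = rank

acquire_in_order(["R1", "R3"])       # allowed: ranks only increase
try:
    acquire_in_order(["R2", "R1"])   # rejected: R1 after R2 violates the ordering
except ValueError as error:
    print("rejected:", error)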


2. Deadlock Avoidance:
- Instead of strict prevention, deadlock avoidance means the system checks before
allocating resources whether the allocation is safe or not.
- That is, before granting resources to a particular process, we check whether doing so
could lead to a deadlock or whether deadlock can still be avoided.

This method needs the system to know in advance:

• How many resources each process might need

• Which resources are currently free

• What is already allocated

- For every resource allocation, we check whether the resulting state is safe or unsafe.
- This is done by the Banker’s Algorithm, which was developed by Dijkstra.

➢ Banker’s Algorithm (By Dijkstra):

- This is the most popular deadlock avoidance algorithm, and for this reason it is
also simply called the Deadlock Avoidance Algorithm.
- Just like a bank only gives loans if it knows it has enough funds to meet
future needs, the system only grants resources if it will still be in a safe state.
- It is named the Banker’s algorithm because its approach resembles the way a
banking system determines whether a loan can be granted or not.

Characteristics of Banker's Algorithm

The characteristics of Banker's algorithm are as follows:

• If any process requests a resource, it may have to wait.

• Each process must declare in advance the maximum number of resources it may need.

• The system has a limited number of instances of each resource.


• If a process obtains all the resources it needs, it must return them within a finite
period of time.

• The system always keeps enough resources available to satisfy the needs of at least
one process.

Important Terms in Banker’s Algorithm:

1. Available – Resources currently free.

2. Max – Maximum resources each process may need.

3. Allocation – Resources currently given to a process.

4. Need = Max – Allocation

Steps of Banker’s Algorithm:


1. Check if Need ≤ Available for any process.

2. If true, temporarily allocate resources to that process.

3. Update the Available list.

4. Repeat for all processes.

5. If all processes can finish in some order → safe state.

6. If not → don’t give the resource → unsafe state (avoid allocation).


Example of Banker’s Algorithm:
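
A minimal sketch of the safety check described in the steps above (the Available, Max and Allocation values below are illustrative, chosen only to demonstrate the procedure):

def is_safe(available, max_need, allocation):
    # Banker's safety check: True if some order lets every process finish.
    # Rows are processes, columns are resource types.
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion and then release what it holds
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe only if every process could finish

# Illustrative data: 3 processes, 2 resource types.
available  = [3, 3]
max_need   = [[4, 2], [3, 2], [5, 3]]
allocation = [[1, 0], [2, 1], [2, 1]]
print(is_safe(available, max_need, allocation))   # True: safe sequence P0 -> P1 -> P2

If granting a request would make this check return False, the request is postponed (step 6 above) so that the system always stays in a safe state.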
