BHARATI VIDYAPEETH’S
JAWAHARLAL NEHRU INSTITUTE OF
TECHNOLOGY, PUNE
Academic Year: 2024-2025
Course Name:
Operating System
Course Code:
22516
Microproject Topic:
Report on CPU Scheduling
Project Guided By:
Miss R.S.Anami
Submitted By:
Nilkant Gorle (TYCM-A)
Roll No:
04
pg. 1
Introduction
CPU scheduling is a key function within an operating system that determines
which process in the queue will get CPU time next. This is especially important
in multi-programming environments, where multiple processes need the CPU's
attention simultaneously. Efficient CPU scheduling helps to maximize CPU
utilization and improves the system's overall responsiveness.
CPU scheduling algorithms are typically divided into two categories: pre-emptive
and non-pre-emptive. Pre-emptive scheduling allows the operating system to
interrupt a process's CPU time if another process with a higher priority or shorter
time requirement arrives. This approach is suitable for real-time and interactive
systems because it ensures that urgent tasks get immediate attention, making the
system more responsive to user inputs. Common examples of pre-emptive
scheduling algorithms include Round Robin, which allocates a fixed time slice to
each process in a circular manner, and Shortest Remaining Time First, where the
process with the shortest remaining burst time gets the CPU next.
Non-pre-emptive scheduling, on the other hand, gives a process the CPU and
allows it to complete its burst without interruption. This reduces the need for
frequent context switching, which can be resource-intensive. However, it can
also lead to higher waiting times for other processes, especially if a long
job occupies the CPU while shorter tasks are queued. First-Come, First-Served is
a classic example of non-pre-emptive scheduling, where processes are handled in
the order they arrive. Another example is Shortest Job First, where processes are
selected based on the length of their CPU burst, favoring shorter tasks. In
non-pre-emptive Priority Scheduling, each process has a priority, and the CPU is
allocated to the process with the highest priority without interruption.
Both pre-emptive and non-pre-emptive scheduling approaches have their
advantages and limitations. Pre-emptive scheduling is more responsive and
suitable for real-time applications, while non-pre-emptive scheduling can be
simpler to implement and may lead to lower overhead due to fewer context
switches. The choice of scheduling algorithm often depends on the system
requirements, with factors like response time, turnaround time, and fairness
playing a role in determining the best approach for specific workloads.
Process and Scheduling
A process is an instance of a program in execution. It represents a single task within the
operating system and includes the program code, current activity (represented by the value of
the program counter and the contents of processor registers), and a set of resources like
memory, files, and I/O devices. When you open an application on your computer, it typically
runs as a process.
Process Control Block (PCB)
The Process Control Block (PCB) is a data structure used by the operating system to store all
the information about a process. This includes:
• Process ID (PID): A unique identifier assigned to each process.
• Process State: Indicates whether the process is new, ready, running, waiting, or
terminated.
• Program Counter (PC): Points to the next instruction the CPU will execute for this
process.
• CPU Registers: Store intermediate data during execution.
• Memory Management Information: Describes memory usage, including pointers to
the process's address space.
• I/O Status Information: Specifies devices and files open for this process.
• Accounting Information: May include details like process start time, CPU usage, and
limits.
The life cycle of a process typically includes these states:
1. New: The process is being created.
2. Ready: The process is ready to be assigned to a CPU.
3. Running: Instructions are being executed by the CPU.
4. Waiting: The process is waiting for some event (like I/O completion).
5. Terminated: The process has finished execution.
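The PCB fields and the life-cycle states above can be sketched in code. This is a minimal illustration, not from the report; the `PCB` class and its field names are simplified stand-ins for the real kernel structures:

```python
from dataclasses import dataclass, field

# The five life-cycle states listed above.
STATES = ("new", "ready", "running", "waiting", "terminated")

@dataclass
class PCB:
    """A simplified Process Control Block holding the fields described above."""
    pid: int                     # Process ID (PID)
    state: str = "new"           # Process State
    program_counter: int = 0     # next instruction for this process
    registers: dict = field(default_factory=dict)   # CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

    def transition(self, new_state):
        assert new_state in STATES, f"unknown state: {new_state}"
        self.state = new_state

# Walk one process through a typical life cycle.
p = PCB(pid=1)
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.transition(s)
print(p.state)  # terminated
```

A real operating system also enforces which transitions are legal (for example, a process can only enter "running" from "ready"); that check is omitted here for brevity.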
Threads
Threads are the smallest units of execution within a process. In modern systems, a process
may contain multiple threads, each running concurrently within the same process. Threads
within a process share resources such as memory and open files, making communication and
context switching faster compared to process-level multitasking.
Types of Threads
1. User-Level Threads: Managed in user space without kernel support. They are fast to
switch between, but if one thread blocks, the entire process blocks.
2. Kernel-Level Threads: Managed by the operating system kernel. They allow better
concurrency, as the OS can schedule multiple threads independently, but the overhead
of managing these threads is higher.
3. Hybrid Models: Combine user-level and kernel-level threading. For example, some
applications create user-level threads but map them onto kernel-level threads for better
performance.
Benefits of Multithreading
1. Responsiveness: Enables interactive applications to perform tasks in the background
without freezing the user interface.
2. Resource Sharing: Threads within the same process share memory and resources,
improving resource utilization.
3. Scalability: In multiprocessor systems, multiple threads can run in parallel, maximizing
CPU usage.
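The resource-sharing benefit above can be seen in a short sketch using Python's threading module (the counter and worker names are invented for this example): several threads in one process update the same variable directly, with a lock guarding the shared state.

```python
import threading

counter = 0                      # shared by every thread in this process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # shared memory must be synchronised
            counter += 1

# Four threads of the same process, all touching the same counter.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

Because the threads share the process's address space, no message passing is needed for them to communicate; separate processes would need pipes, sockets, or shared-memory segments to achieve the same thing.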
Scheduling Criteria
CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential
metrics used to evaluate the efficiency of scheduling algorithms. They make it possible to
compare different algorithms quantitatively on the same workload.
1. CPU Utilization: Measures the percentage of time the CPU is busy. Ideal utilization is
close to 100% to avoid idle CPU time.
2. Throughput: The number of processes completed per time unit. A higher throughput
indicates efficient scheduling.
3. Turnaround Time: The time taken from process submission to process completion. It
includes waiting, execution, and I/O time.
4. Waiting Time: Total time a process spends in the ready queue awaiting CPU time.
Reducing waiting time is key to a responsive system.
5. Response Time: The time taken from submission until the system responds to the
process (not necessarily completing it). This is important in interactive systems where
prompt feedback is essential.
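These criteria can be computed directly from a finished schedule. The sketch below is illustrative (the `metrics` helper and its sample timings are invented here) and assumes each process has a single CPU burst, so waiting time is simply start minus arrival:

```python
def metrics(schedule):
    """schedule: list of (arrival, start, finish) tuples, one per process."""
    n = len(schedule)
    turnaround = [finish - arrival for arrival, start, finish in schedule]
    waiting = [start - arrival for arrival, start, finish in schedule]
    throughput = n / max(finish for _, _, finish in schedule)  # jobs per time unit
    return sum(turnaround) / n, sum(waiting) / n, throughput

# Three processes with (arrival, start, finish) times.
avg_tat, avg_wait, tp = metrics([(0, 0, 5), (1, 5, 8), (2, 8, 16)])
print(avg_tat, avg_wait, tp)
```

For this sample the turnaround times are 5, 7, and 14 and the waiting times 0, 4, and 6, so a scheduler that reordered these jobs could be judged by whether it lowers those averages.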
Scheduling Algorithms
CPU scheduling algorithms determine the order in which processes are executed in a CPU, with
the aim to optimize various performance metrics, such as CPU utilization, throughput,
turnaround time, waiting time, and response time. Scheduling algorithms fall into two main
categories: pre-emptive and non-pre-emptive.
1. First-Come, First-Served (FCFS)
First-Come, First-Served (FCFS) is the simplest non-pre-emptive scheduling algorithm. In
FCFS, processes are allocated the CPU in the order of their arrival. A queue data structure is
used to implement the FCFS scheduling algorithm: the process at the head of the ready queue
is allocated the CPU when the CPU becomes free, and the running process is then removed
from the queue. When a new process enters the ready queue, it is placed at the tail of the
queue.
Example:-
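The behaviour described above can be sketched as a short simulation (the process data is made up for illustration; waiting and turnaround times follow the definitions from the Scheduling Criteria section):

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    time = 0
    results = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)        # CPU may sit idle until the job arrives
        finish = start + burst            # runs to completion, no pre-emption
        results[name] = {"waiting": start - arrival,
                         "turnaround": finish - arrival}
        time = finish
    return results

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
```

Note the convoy effect visible in the output: P2 needs only 3 units of CPU yet waits 4 units behind the earlier arrival P1.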
2. Shortest Job First (SJF)
Shortest Job First (SJF), also known as Shortest Job Next (SJN), is a scheduling policy that
selects the waiting process with the smallest execution time to execute next. SJF can be
pre-emptive or non-pre-emptive.
Characteristics of SJF Scheduling:
• Shortest Job First gives the minimum average waiting time among all scheduling
algorithms for a given set of processes available at the same time.
• It is a greedy algorithm.
• It may cause starvation of long processes if shorter processes keep arriving. This
problem can be solved using the concept of ageing.
• It is often impractical, as the operating system may not know burst times in advance
and therefore cannot sort processes by them. While execution time cannot be known
exactly beforehand, several methods can be used to estimate it, such as a weighted
average of previous CPU bursts.
• SJF can be used in specialized environments where accurate estimates of running time
are available.
Examples:-
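A sketch of the non-pre-emptive variant follows (the example data is invented): among the processes that have already arrived, the one with the shortest burst is always picked next.

```python
def sjf(processes):
    """Non-pre-emptive SJF. processes: list of (name, arrival, burst).
    Returns the order in which the processes run."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU idle: jump ahead to the next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        pending.remove(job)
        time += job[2]                 # job runs to completion
        order.append(job[0])
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Here P1 runs first because it is alone at time 0; once it finishes, the short job P3 jumps ahead of the earlier arrivals P2 and P4.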
3. Shortest Remaining Time First (SRTF)
Shortest Remaining Time First (SRTF) is the pre-emptive version of SJF.
In the Shortest Remaining Time First (SRTF) scheduling algorithm, the process with the
smallest amount of time remaining until completion is selected to execute. Since the currently
executing process is the one with the shortest amount of time remaining by definition, and
since that time should only reduce as execution progresses, processes will always run until
they complete or a new process is added that requires a smaller amount of time.
Examples:-
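The pre-emptive behaviour can be simulated one time unit at a time. This is an illustrative sketch with invented data, not a production scheduler:

```python
def srtf(processes):
    """SRTF simulation. processes: list of (name, arrival, burst).
    Returns each process's turnaround time."""
    arrival = {name: a for name, a, _ in processes}
    remaining = {name: b for name, _, b in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                  # nothing has arrived yet: idle tick
            continue
        n = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[n] -= 1              # run the chosen process for one unit
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = time
    return {n: finish[n] - arrival[n] for n in finish}

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```

In this run P2 pre-empts P1 at time 1 and P3 pre-empts P2 at time 2, exactly as the definition above prescribes; the long job P1 only resumes once both shorter jobs have completed.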
4. Round Robin (RR)
Round Robin CPU Scheduling is a widely used algorithm that helps manage the execution of
processes in a time-shared environment. With Round Robin, each process is cyclically assigned
a fixed time slot, or slice. This ensures fair distribution of the CPU's time among all the
processes, preventing any single process from monopolizing the system for an extended
period. Round Robin is the pre-emptive version of the First-Come, First-Served CPU
scheduling algorithm.
• The Round Robin algorithm is built around the time-sharing technique.
• The period of time for which a process or job is allowed to run in this pre-emptive
method is called the time quantum.
• Each process in the ready queue is assigned the CPU for one time quantum. If the
process completes within that quantum, it terminates; otherwise it is placed back at the
tail of the ready queue and waits for its next turn.
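The quantum cycle described above can be sketched with a queue (illustrative only; for simplicity all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns completion times."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back to the tail
        else:
            completion[name] = time        # finished within this quantum
    return completion

print(round_robin([("P1", 10), ("P2", 5), ("P3", 8)], quantum=3))
```

With a quantum of 3, the short job P2 finishes first at time 14 even though P1 arrived ahead of it, which is exactly the fairness Round Robin aims for.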
5. Priority Scheduling:-
Priority scheduling, considered here in its non-pre-emptive form, is one of the most common
scheduling algorithms in batch systems. Each process is assigned a priority, and processes are
first ordered by arrival time (earlier arrival first). If two processes have the same arrival time,
their priorities are compared (highest priority first); if two processes also have the same
priority, the one with the lower process number is scheduled first. This is repeated until all
processes have executed.
Implementation –
1. First, input the processes with their arrival time, burst time, and priority.
2. The process with the lowest arrival time is scheduled first; if two or more processes
share the lowest arrival time, the one with the higher priority is scheduled first.
3. Further processes are scheduled according to their arrival time and priority. (Here we
assume that a lower priority number means a higher priority.) If two processes have the
same priority, they are sorted by process number.
Note: a question will clearly state whether a lower or a higher number indicates higher
priority.
4. Once all the processes have arrived, they can be scheduled based purely on their
priority.
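The steps above can be sketched as follows (the process data is made up; as assumed in the text, a lower priority number means a higher priority, and ties fall back to the process number):

```python
def priority_schedule(processes):
    """Non-pre-emptive priority scheduling.
    processes: list of (pid, arrival, burst, priority), lower number = higher priority.
    Returns the order in which the processes run."""
    pending = list(processes)
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                    # idle until the next arrival
            time = min(p[1] for p in pending)
            ready = [p for p in pending if p[1] <= time]
        job = min(ready, key=lambda p: (p[3], p[0]))  # priority, then pid
        pending.remove(job)
        time += job[2]                   # runs to completion, no pre-emption
        order.append(job[0])
    return order

print(priority_schedule([(1, 0, 4, 2), (2, 0, 3, 1), (3, 2, 5, 1)]))
```

At time 0, processes 1 and 2 have arrived together, so priority decides and process 2 runs first; by the time it finishes, process 3 has arrived and its higher priority puts it ahead of process 1.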
6. Multilevel Queue Scheduling
Multilevel Queue (MLQ) CPU scheduling is a type of scheduling applied at the operating
system level with the aim of partitioning processes by type so that they can be managed
properly. In MLQ, processes are grouped into several queues based on known parameters
such as priority, memory requirements, or process type. Every queue has its own scheduling
policy, and processes in the same queue are similar in nature. Each queue possesses a
different priority, and processes are mapped in a permanent manner to a particular queue
following some criterion, for example in relation to priority or resources.
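A minimal sketch of the permanent queue mapping follows. The two queue names and the drain-highest-first policy are simplifying assumptions made here; a real MLQ scheduler may run a different algorithm (for example Round Robin) inside each queue.

```python
def multilevel_queue(processes):
    """processes: list of (name, queue) with queue 'system' or 'user'.
    The higher-priority 'system' queue is serviced completely before the
    'user' queue; within a queue this sketch simply uses FCFS order."""
    queues = {"system": [], "user": []}
    for name, qtype in processes:
        queues[qtype].append(name)       # permanent mapping to one queue
    return queues["system"] + queues["user"]

print(multilevel_queue([("P1", "user"), ("P2", "system"),
                        ("P3", "user"), ("P4", "system")]))
```

Note that the user processes run only after every system process has finished, which is why practical MLQ variants often add time-slicing between queues to avoid starving the lower-priority queue.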
Deadlock
Deadlock is a situation in computing where two or more
processes are unable to proceed because each is waiting for the
other to release resources. The four necessary conditions are
mutual exclusion, hold and wait, no pre-emption, and circular
wait.
Consider an example where two trains are coming toward each
other on the same track, and there is only one track: neither
train can move once they are in front of each other. This is a
practical example of deadlock.
Before going into detail about how deadlock occurs in the
Operating System, let’s first discuss how the Operating System
uses the resources present. A process in an operating system
uses resources in the following way.
• Requests the resource
• Uses the resource
• Releases the resource
A deadlock situation occurs in operating systems when two or
more processes hold some resources and wait for resources
held by others. For example, in the below diagram, Process 1
is holding Resource 1 and waiting for Resource 2, which is
acquired by Process 2, while Process 2 is waiting for
Resource 1.
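The circular-wait condition in that diagram can be checked mechanically with a wait-for graph. This is a small sketch; the has_deadlock helper is invented for illustration, and real systems build the graph from resource-allocation state:

```python
def has_deadlock(wait_for):
    """wait_for maps each process to the process it is waiting on (or None).
    A cycle in this wait-for graph means circular wait, i.e. deadlock."""
    for start in wait_for:
        seen, node = set(), start
        while node is not None and node not in seen:
            seen.add(node)
            node = wait_for.get(node)    # follow the waits-for edge
        if node is not None:             # walk revisited a node: cycle found
            return True
    return False

# Process 1 waits on Process 2 and vice versa, as in the diagram above.
print(has_deadlock({"P1": "P2", "P2": "P1"}))   # True
print(has_deadlock({"P1": "P2", "P2": None}))   # False
```

The first call reproduces the two-process situation from the text; the second shows that a simple wait with no cycle is not a deadlock.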