1. Introduction to Process Scheduling
Process Scheduling
Definition: Process scheduling is the mechanism by which the operating system
decides which processes in the ready state should be moved to the running state. This
is crucial for ensuring efficient CPU utilization.
Importance: Ensures that the CPU is used effectively and efficiently by managing the
execution of processes and maximizing various performance metrics like throughput
and response time.
Objectives of Process Scheduling
Maximize CPU Utilization: Ensure the CPU is busy as much as possible by reducing
idle time.
Maximize Throughput: Increase the number of processes that complete their
execution in a given time period.
Minimize Turnaround Time: Reduce the total time taken for a process to complete,
from submission to termination.
Minimize Waiting Time: Decrease the time a process spends waiting in the ready
queue.
Minimize Response Time: Reduce the time from when a request is submitted until
the first response is produced; this is especially important for interactive systems.
CPU-I/O Burst Cycle
CPU Burst: A period during which a process is using the CPU for computation.
I/O Burst: A period during which a process is waiting for I/O operations to complete.
Alternation: Processes typically alternate between CPU bursts and I/O bursts.
Effective scheduling aims to overlap CPU and I/O activities to keep all parts of the
system busy.
2. Types of Scheduling
Long-term Scheduling
Role: Controls the degree of multiprogramming by deciding which processes are
admitted to the system for execution.
Frequency: Less frequent compared to short-term scheduling.
Objective: Ensures a balanced load on the system by controlling the number of
processes in memory.
Medium-term Scheduling
Role: Part of the operating system’s memory management. It involves swapping
processes in and out of memory to optimize the use of available memory.
Swapping: Temporarily removes processes from memory and places them on disk (or
vice versa) to free up memory for other processes.
Short-term Scheduling (CPU Scheduling)
Role: Decides which process from the ready queue is to be executed next.
Frequency: Occurs frequently, typically every few milliseconds, to ensure efficient
CPU utilization and process responsiveness.
Objective: Provides optimal CPU allocation by selecting the most appropriate process
to run next.
3. Scheduling Criteria
CPU Utilization
Definition: The percentage of time the CPU is active versus idle.
Goal: Achieve high CPU utilization to ensure that the system’s resources are used
effectively.
Throughput
Definition: The number of processes that complete their execution per time unit.
Goal: Increase throughput to enhance system productivity.
Turnaround Time
Definition: The total time taken for a process to complete after it is submitted.
Goal: Minimize turnaround time to ensure that processes are completed promptly.
Waiting Time
Definition: The total time a process spends waiting in the ready queue before it gets
CPU time.
Goal: Reduce waiting time to improve overall system efficiency and responsiveness.
Response Time
Definition: The time from submission of a request until the first response is produced.
Goal: Minimize response time, especially important for interactive systems.
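The criteria above can be computed directly from a completed schedule. A minimal sketch (the process names and timings below are made up for illustration):

```python
def metrics(arrival, burst, start, completion):
    """Compute per-process scheduling criteria.
    All arguments are dicts keyed by process name."""
    turnaround = {p: completion[p] - arrival[p] for p in arrival}
    waiting = {p: turnaround[p] - burst[p] for p in arrival}   # time in ready queue
    response = {p: start[p] - arrival[p] for p in arrival}     # time to first run
    return turnaround, waiting, response

# Hypothetical FCFS schedule with no CPU idling.
arrival    = {"P1": 0, "P2": 1, "P3": 2}
burst      = {"P1": 5, "P2": 3, "P3": 1}
start      = {"P1": 0, "P2": 5, "P3": 8}
completion = {"P1": 5, "P2": 8, "P3": 9}

t, w, r = metrics(arrival, burst, start, completion)
print(t)  # {'P1': 5, 'P2': 7, 'P3': 7}
print(w)  # {'P1': 0, 'P2': 4, 'P3': 6}
```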
4. Scheduling Algorithms
Non-Preemptive Scheduling
First-Come, First-Served (FCFS)
o Operation: Processes are executed in the order they arrive.
o Advantages: Simple to implement.
o Disadvantages: Can cause long waiting times, especially if a long process
arrives before shorter ones (convoy effect).
Shortest Job Next (SJN) or Shortest Job First (SJF)
o Operation: Selects the process with the smallest execution time.
o Advantages: Optimal for minimizing waiting time.
o Disadvantages: Can cause starvation for longer processes if short processes
keep arriving.
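A small simulation makes the difference between the two non-preemptive policies concrete. The sketch below (hypothetical processes; non-preemptive execution) runs the same workload under FCFS and SJF and compares average waiting times:

```python
def schedule(procs, policy):
    """procs: list of (name, arrival, burst). Runs processes non-preemptively,
    letting `policy` pick among those already arrived; returns avg waiting time."""
    time, waiting = 0, {}
    pending = sorted(procs, key=lambda p: p[1])      # ordered by arrival
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                                # CPU idle until next arrival
            time = pending[0][1]
            continue
        name, arr, burst = policy(ready)
        pending.remove((name, arr, burst))
        waiting[name] = time - arr
        time += burst
    return sum(waiting.values()) / len(waiting)

procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]
fcfs = schedule(procs, lambda r: min(r, key=lambda p: p[1]))  # earliest arrival first
sjf  = schedule(procs, lambda r: min(r, key=lambda p: p[2]))  # shortest burst first
print(fcfs, sjf)  # 5.0 4.0
```

The long P1 arriving first delays everything under FCFS (the convoy effect); SJF reorders the later jobs and lowers the average wait.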
Preemptive Scheduling
Round Robin (RR)
o Operation: Each process gets a fixed time quantum of CPU time.
o Advantages: Provides good response time for interactive systems.
o Disadvantages: High overhead due to frequent context switching.
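A round-robin run can be sketched with a simple queue. This toy version assumes all processes arrive at time 0 and ignores context-switch overhead:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> total CPU need, all ready at time 0.
    Returns the completion time of each process."""
    queue = deque(bursts.items())
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one quantum
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # unfinished: back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```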
Shortest Remaining Time First (SRTF)
o Operation: Preemptive version of SJF; selects the process with the shortest
remaining time to completion.
o Advantages: Minimizes average waiting time.
o Disadvantages: Can cause starvation of longer processes.
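SRTF can be sketched as a unit-time simulation (hypothetical processes; ties broken arbitrarily): at every tick, the ready process with the least remaining time runs, so a newly arrived short job preempts a long one.

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Simulates one time unit at a
    time, always running the ready process with least remaining time.
    Returns completion times."""
    remaining = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                        # nothing has arrived yet
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = time
    return completion

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# → {'P3': 4, 'P2': 7, 'P1': 14}
```

P1 is preempted at time 1 and does not finish until every shorter job has drained, which is exactly the starvation risk noted above.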
Priority Scheduling
o Operation: Processes are assigned priorities, and the process with the highest
priority is executed first.
o Advantages: Allows critical processes to be executed first.
o Disadvantages: Can cause starvation for lower-priority processes.
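Priority selection maps naturally onto a min-heap. A minimal sketch, assuming lower numbers mean higher priority and all processes are ready at once (names and priorities made up):

```python
import heapq

def priority_schedule(procs):
    """procs: list of (priority, name, burst); lower number = higher priority.
    All assumed ready at time 0. Returns the execution order."""
    heap = list(procs)
    heapq.heapify(heap)                      # min-heap keyed on priority
    order = []
    while heap:
        prio, name, burst = heapq.heappop(heap)
        order.append(name)
    return order

print(priority_schedule([(3, "log", 4), (1, "audio", 2), (2, "ui", 3)]))
# → ['audio', 'ui', 'log']
```

In practice, aging (gradually raising the priority of long-waiting processes) is the usual remedy for the starvation noted above.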
Other Algorithms
Multilevel Queue Scheduling
o Operation: Processes are divided into multiple queues based on priority or
type. Each queue can have its own scheduling algorithm.
o Advantages: Provides flexibility in scheduling different types of processes.
o Disadvantages: Complex to implement and manage.
Multilevel Feedback Queue Scheduling
o Operation: Processes can move between queues based on their behavior and
age.
o Advantages: Dynamically adjusts priorities to improve overall system
performance.
o Disadvantages: Complex and requires careful tuning of parameters.
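The demotion rule at the heart of a feedback queue can be sketched in a few lines. This toy version (made-up quanta; all processes arriving at time 0) demotes any process that exhausts its quantum to the next, lower-priority queue:

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """Minimal multilevel feedback queue sketch. bursts: dict name -> total
    CPU need; quanta: time slice per queue level, top queue first."""
    queues = [deque() for _ in quanta]
    for item in bursts.items():
        queues[0].append(item)               # everyone starts in the top queue
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        time += run
        if remaining > run:                  # used its whole quantum: demote
            demoted = min(level + 1, len(quanta) - 1)
            queues[demoted].append((name, remaining - run))
        else:
            completion[name] = time
    return completion

print(mlfq({"short": 1, "long": 10}))  # → {'short': 1, 'long': 11}
```

The short, interactive-style job finishes in the top queue, while the CPU-bound job drifts to the bottom queue with its larger quantum.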
5. Algorithm Evaluation
Deterministic Modeling
Description: Uses predetermined workloads to evaluate the performance of
scheduling algorithms.
Advantages: Simple and easy to understand.
Disadvantages: May not accurately represent real-world scenarios.
Queueing Models
Description: Uses mathematical formulas to describe the behavior and performance
of scheduling algorithms.
Advantages: Provides a theoretical framework for understanding scheduling
behavior.
Disadvantages: Can be complex and may require simplifications that reduce
accuracy.
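As a concrete example of a queueing model, the classic M/M/1 formulas relate utilization ρ = λ/μ, mean number in system L = ρ/(1 − ρ), and Little's law L = λW. The rates below are made up for illustration:

```python
def mm1(arrival_rate, service_rate):
    """M/M/1 queue (Poisson arrivals, exponential service, one server).
    Requires arrival_rate < service_rate for a stable queue.
    Returns (utilization, mean number in system, mean time in system)."""
    rho = arrival_rate / service_rate        # server utilization, rho = lambda/mu
    L = rho / (1 - rho)                      # mean number of jobs in the system
    W = L / arrival_rate                     # Little's law: L = lambda * W
    return rho, L, W

rho, L, W = mm1(arrival_rate=4.0, service_rate=5.0)
# rho ≈ 0.8, L ≈ 4.0 jobs, W ≈ 1.0 time units
```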
Simulations
Description: Uses computer programs to simulate scheduling algorithms and
workloads.
Advantages: Can model complex and realistic scenarios.
Disadvantages: Requires significant computational resources and time.
Implementation and Measurement
Description: Implements scheduling algorithms in a real system and measures their
performance.
Advantages: Provides accurate and practical performance data.
Disadvantages: Time-consuming and resource-intensive.
6. Real-Time Scheduling
Hard Real-Time Systems
Definition: Systems that require tasks to be completed within a strict deadline.
Examples: Medical devices, automotive systems, industrial control systems.
Soft Real-Time Systems
Definition: Systems where deadlines are important but not absolutely critical.
Examples: Multimedia systems, online transaction processing.
Real-Time Scheduling Algorithms
Rate Monotonic Scheduling (RMS)
o Operation: Assigns static priorities based on the periodicity of tasks; shorter
periods get higher priorities.
o Advantages: Simple and easy to implement.
o Disadvantages: Not optimal for all task sets.
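The Liu and Layland utilization bound gives a quick sufficient test for RMS schedulability: a task set is schedulable if the total utilization satisfies Σ Cᵢ/Tᵢ ≤ n(2^(1/n) − 1). A sketch with a made-up task set:

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient test for Rate Monotonic Scheduling.
    tasks: list of (computation_time, period)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)           # ≈ 0.7798 for n = 3, → ln 2 as n grows
    return utilization <= bound

print(rms_schedulable([(1, 4), (1, 5), (2, 10)]))  # U = 0.65 → True
```

The test is sufficient but not necessary: a task set that exceeds the bound may still happen to be schedulable under RMS.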
Earliest Deadline First (EDF)
o Operation: Dynamic priority scheduling; the task with the earliest deadline is
given the highest priority.
o Advantages: Optimal for uniprocessor systems.
o Disadvantages: Requires careful handling of deadline misses.
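For independent periodic tasks on a uniprocessor, EDF is feasible exactly when total utilization is at most 1, so the check is one line (task set made up for illustration):

```python
def edf_schedulable(tasks):
    """EDF feasibility for independent periodic tasks on one processor:
    schedulable iff total utilization <= 1. tasks: list of (comp_time, period)."""
    return sum(c / t for c, t in tasks) <= 1.0

print(edf_schedulable([(2, 5), (2, 10), (3, 15)]))  # U = 0.8 → True
```

Note that this task set's utilization (0.8) exceeds the RMS bound for three tasks (≈ 0.78), illustrating why EDF is called optimal on a uniprocessor.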
Least Slack Time (LST)
o Operation: Prioritizes tasks with the least slack time (time until deadline
minus remaining execution time).
o Advantages: Can effectively manage varying task loads.
o Disadvantages: Complex to implement and manage.
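Slack-based selection is a one-line computation over the ready tasks. A sketch (task names and timings hypothetical):

```python
def pick_least_slack(tasks, now):
    """tasks: list of (name, deadline, remaining_exec_time).
    Slack = (deadline - now) - remaining; the task with least slack runs next."""
    return min(tasks, key=lambda t: (t[1] - now) - t[2])

tasks = [("A", 20, 6), ("B", 12, 3), ("C", 15, 9)]
print(pick_least_slack(tasks, now=5))  # slacks: A=9, B=4, C=1 → picks C
```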
7. Thread Scheduling
User-level Thread Scheduling
Management: Managed by the user-level library without kernel intervention.
Advantages: Faster context switching, more flexibility in scheduling.
Disadvantages: The kernel is unaware of user-level threads, so if one thread
blocks on a system call, the entire process can block.
Kernel-level Thread Scheduling
Management: Managed by the operating system kernel.
Advantages: Kernel is aware of all threads, allowing better resource management.
Disadvantages: Slower context switching compared to user-level threads.
8. Multiprocessor Scheduling
Load Balancing
Definition: Distributes workload evenly across multiple processors to prevent some
processors from being overloaded while others are underloaded.
Techniques: Static load balancing (predefined distribution), dynamic load balancing
(adjusts based on current load).
Symmetric Multiprocessing (SMP)
Definition: Each processor is self-scheduling; all processors share a common ready
queue.
Advantages: Simple and provides good load distribution.
Disadvantages: Can have contention for the shared ready queue.
Asymmetric Multiprocessing
Definition: One master processor controls the scheduling, while other processors
perform tasks.
Advantages: Reduces contention for the ready queue.
Disadvantages: Can create a bottleneck at the master processor.
9. Virtualization and Scheduling
Virtual Machine Scheduling
Definition: Scheduling algorithms used in hypervisors to manage the execution of
virtual machines.
Challenges: Ensuring fair resource distribution, minimizing overhead, and managing
the complexity of multiple layers of scheduling.
Challenges in Virtualization
Overhead: Virtualization introduces additional layers of scheduling, which can
reduce overall performance.
Resource Distribution: Ensuring that virtual machines receive a fair share of
resources without negatively impacting the performance of other VMs or the host
system.