OS Scheduling

Operating system (OS) scheduling techniques are fundamental mechanisms that

manage the execution of processes and threads by allocating CPU time effi-
ciently. Effective scheduling maximizes CPU utilization, ensures fairness, mini-
mizes waiting time, and meets performance criteria specific to different system
needs. Below is an in-depth explanation of various OS scheduling techniques,
along with their advantages, disadvantages, suitable use cases, and a glossary
of technical terms for clarity.

1. First-Come, First-Served (FCFS) Scheduling


Overview
First-Come, First-Served (FCFS) is the simplest CPU scheduling algorithm. It
schedules processes based on their arrival time in the ready queue.

How it Works
• Process Queueing: Processes enter the ready queue and are placed at
the end.
• Execution Order: The CPU is assigned to the process at the front of
the queue.
• Non-Preemptive: Once a process has the CPU, it runs until it completes
or blocks for I/O.
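As a concrete illustration, here is a minimal Python sketch of FCFS, assuming
burst times are known up front; the process names and numbers are invented
example data, not part of any particular OS:

    # Minimal FCFS simulation: processes run in arrival order, each to completion.
    # The (name, arrival, burst) tuples below are hypothetical example data.
    processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]

    def fcfs(procs):
        time = 0
        for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
            start = max(time, arrival)      # CPU may idle until the process arrives
            waiting = start - arrival       # time spent in the ready queue
            time = start + burst            # runs to completion (non-preemptive)
            print(f"{name}: start={start}, waiting={waiting}, finish={time}")

    fcfs(processes)

With this data the long job P1 runs first, so the short jobs P2 and P3 wait 23
and 25 time units respectively, which is the convoy effect described below.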

Pros
• Simplicity: Easy to understand and implement.
• Fairness by Arrival: Processes are treated equally based on arrival time.

Cons
• Convoy Effect: Short processes wait behind long processes, leading to
inefficient CPU use.
• Poor Average Waiting Time: Can result in high average waiting times
if long processes are ahead.
• Non-Preemptive: Not suitable for time-sharing systems where preemp-
tion is necessary.

Use Cases
• Batch Systems: Where processes are executed without user interaction,
and turnaround time is acceptable.
• Simple Environments: Systems prioritizing simplicity over perfor-
mance.

2. Shortest Job First (SJF) Scheduling
Overview
Shortest Job First (SJF) schedules processes with the shortest estimated CPU
burst time next.

How it Works
• Selection Criteria: Processes with the smallest CPU burst time are
selected.
• Variants:
– Non-Preemptive SJF: Once a process starts, it runs to completion.
– Preemptive SJF (Shortest Remaining Time First - SRTF): A run-
ning process may be preempted if a new process with a shorter burst
time arrives.
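The sketch below illustrates the non-preemptive variant in Python, assuming
burst times are known in advance (which, as noted under Cons, real systems
can only estimate); the process data is hypothetical:

    # Minimal non-preemptive SJF simulation.
    # (name, arrival, burst) tuples are hypothetical example data.
    processes = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]

    def sjf(procs):
        remaining = sorted(procs, key=lambda p: p[1])   # order by arrival time
        time = 0
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:                   # nothing has arrived yet: idle forward
                time = remaining[0][1]
                continue
            name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
            print(f"{name}: start={time}, waiting={time - arrival}")
            time += burst                   # runs to completion
            remaining.remove((name, arrival, burst))

    sjf(processes)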

Pros
• Optimal Average Waiting Time: Provides the minimum average wait-
ing time for a given set of processes.
• Efficiency: Reduces turnaround time for short processes.

Cons
• Estimation Difficulty: Requires accurate knowledge of CPU burst
times, which is often unavailable.
• Starvation: Long processes may suffer indefinite postponement.
• Not Suitable for Time-Sharing Systems: The non-preemptive form cannot
interrupt a running job, which interactive time-sharing requires.

Use Cases
• Batch Environments: Where job lengths are known, and shortest jobs
are prioritized.
• Systems Aiming to Minimize Waiting Time: Suitable when short
jobs are common and favored.

3. Priority Scheduling
Overview
Priority Scheduling assigns priority levels to processes and schedules them ac-
cordingly.

How it Works
• Priority Assignment: Each process is given a priority number.
• Scheduling Order: Processes with higher priorities are scheduled before
lower-priority ones.
• Preemption:
– Preemptive: A process can be preempted if a higher priority process
arrives.
– Non-Preemptive: Running processes continue until completion or
blocking.
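A minimal Python sketch of the non-preemptive variant, assuming a convention
where a lower number means a higher priority; the process tuples are invented
example data:

    # Minimal non-preemptive priority scheduler: always dispatch the ready
    # process with the highest priority (here, lower number = higher priority).
    processes = [("P1", 0, 10, 3), ("P2", 1, 1, 1), ("P3", 2, 2, 4)]  # (name, arrival, burst, priority)

    def priority_schedule(procs):
        remaining = list(procs)
        time = 0
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:
                time = min(p[1] for p in remaining)
                continue
            job = min(ready, key=lambda p: p[3])   # highest priority first
            name, arrival, burst, prio = job
            print(f"t={time}: running {name} (priority {prio})")
            time += burst
            remaining.remove(job)

    priority_schedule(processes)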

Pros
• Customizable Scheduling: Can tailor scheduling based on process im-
portance.
• Efficient for Critical Tasks: Ensures high-priority processes receive
prompt attention.

Cons
• Starvation: Low-priority processes may never execute.
• Complexity: Requires a mechanism to assign and adjust priorities.
• Fairness Issues: May favor certain processes over others unjustly.

Solution to Starvation
• Aging: Gradually increasing the priority of waiting processes to ensure
eventual execution.
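One possible way to apply aging, sketched in Python; the dictionary fields, the
boost interval, and the lower-number-is-higher-priority convention are all
illustrative assumptions rather than a standard mechanism:

    # Sketch of aging: on every scheduling pass, processes that keep waiting
    # are gradually boosted so low-priority work cannot starve indefinitely.
    def age_ready_queue(ready_queue, boost=1):
        for proc in ready_queue:
            proc["waited"] += 1
            if proc["waited"] % 5 == 0:                            # every 5 passes...
                proc["priority"] = max(0, proc["priority"] - boost)  # ...raise priority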

Use Cases
• Real-Time Systems: Where certain tasks have higher importance.
• Systems with Differentiated Services: Environments where process
importance varies.

4. Round Robin (RR) Scheduling


Overview
Round Robin (RR) scheduling is designed for time-sharing systems, allocating
CPU time in small increments.

How it Works
• Time Quantum: Each process is assigned a fixed time slice (e.g., 10ms).
• Circular Queue: Processes are placed in a FIFO circular queue.
• Execution Cycle:
– A process runs for a time quantum.
– If it doesn’t finish, it’s placed at the end of the queue.
– Scheduler moves to the next process.
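A minimal Round Robin sketch in Python, assuming all processes arrive at time
0 and using an arbitrary quantum of 4 time units; names and burst times are
example data:

    from collections import deque

    # Minimal Round Robin simulation with a fixed time quantum.
    QUANTUM = 4
    queue = deque([("P1", 10), ("P2", 5), ("P3", 8)])   # (name, remaining burst)

    time = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(QUANTUM, remaining)           # run for one quantum (or less)
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))   # not finished: back of the queue
        else:
            print(f"{name} finished at t={time}")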

Pros
• Fairness: Each process gets equal CPU time in turns.
• Responsiveness: Good for interactive systems needing prompt
responses.
• Simplicity: Easy to implement.

Cons
• Time Quantum Sensitivity:
– Too Small: High overhead due to frequent context switches.
– Too Large: Behaves like FCFS, reducing responsiveness.
• Average Waiting Time: Can be higher than SJF.

Use Cases
• Time-Sharing Systems: Where multiple users or processes require fre-
quent CPU access.
• Interactive Environments: Systems needing to balance throughput
and response time.

5. Multilevel Queue Scheduling


Overview
Multilevel Queue Scheduling divides the ready queue into multiple separate
queues, each with its own scheduling algorithm.

How it Works
• Queue Separation: Processes are grouped based on characteristics (e.g.,
foreground vs. background).
• Fixed Priority Scheduling: Between queues, fixed-priority scheduling
decides which queue gets CPU time.
• Individual Queue Scheduling: Each queue can employ different
scheduling algorithms.
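A minimal sketch of the queue-selection step in Python, assuming just two
hypothetical queues (foreground and background) with fixed priority between
them:

    from collections import deque

    # Sketch of multilevel queue scheduling with fixed priority between queues:
    # the foreground queue is always served before the background queue.
    queues = {
        "foreground": deque(["editor", "shell"]),     # e.g. interactive processes
        "background": deque(["backup", "indexer"]),   # e.g. batch processes
    }

    def pick_next():
        for level in ("foreground", "background"):    # fixed priority order
            if queues[level]:
                return level, queues[level].popleft()
        return None, None

    print(pick_next())   # -> ('foreground', 'editor')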

Pros
• Specialization: Tailors scheduling to process types.
• Priority Enforcement: Essential processes can be prioritized.

Cons
• Starvation Risk: Lower-priority queues may receive little CPU time.
• Inflexibility: Processes are confined to their assigned queue.

Use Cases
• Mixed Environments: Systems running a mix of interactive and batch
processes.
• Prioritized Workloads: Where different classes of processes need dis-
tinct handling.

6. Multilevel Feedback Queue Scheduling


Overview
An extension of Multilevel Queue Scheduling, Multilevel Feedback Queue
Scheduling allows processes to move between queues based on their execution
history and behavior.

How it Works
• Dynamic Priority Adjustment: Processes that use more CPU time
are moved to lower-priority queues.
• Feedback Mechanism: Encourages processes to be interactive by re-
warding shorter CPU bursts.
• Multiple Queues: Each queue has its own scheduling policy and time
quantum.
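A minimal Python sketch of the feedback rule, assuming three hypothetical
levels with growing time quanta; the policy shown (demote on using a full
quantum, keep the level when blocking early) is one common choice, not the
only one:

    from collections import deque

    # Sketch of a multilevel feedback queue: a process that uses its whole
    # quantum is demoted one level; one that blocks early keeps its level.
    QUANTA = [2, 4, 8]                          # quantum grows at lower priority
    levels = [deque() for _ in QUANTA]          # level 0 = highest priority

    def on_quantum_expired(proc, level):
        """Process used its full time slice: push it to a lower-priority level."""
        new_level = min(level + 1, len(levels) - 1)
        levels[new_level].append(proc)

    def on_blocked(proc, level):
        """Process gave up the CPU early (e.g. for I/O): keep its current level."""
        levels[level].append(proc)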

Pros
• Adaptability: Adjusts to the process’s behavior, promoting fairness.
• Prevention of Starvation: Processes eventually reach higher priority
as needed.

Cons
• Complexity: Difficult to configure and tune effectively.
• Overhead: Process movement between queues requires additional com-
putation.

Use Cases
• General-Purpose Operating Systems: Where processes have varying
CPU and I/O demands.
• Systems Needing Balance: Environments requiring both responsive-
ness and efficient CPU use.

7. Earliest Deadline First (EDF) Scheduling
Overview
Earliest Deadline First (EDF) is a dynamic scheduling algorithm used in real-
time operating systems to ensure that all critical deadlines are met.

How it Works
• Deadline Assignment: Each process has a specific deadline.
• Scheduling Decision: The process with the closest deadline is scheduled
next.
• Preemption: Processes can be preempted if another process with an
earlier deadline arrives.
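A minimal sketch of the EDF dispatch decision in Python; the job names and
deadlines are invented example values:

    # Sketch of Earliest Deadline First: among ready jobs, dispatch the one
    # whose absolute deadline is closest.
    jobs = [("sensor_read", 12), ("log_flush", 30), ("motor_ctrl", 8)]  # (name, deadline)

    def edf_pick(ready_jobs):
        return min(ready_jobs, key=lambda j: j[1])   # earliest deadline wins

    print(edf_pick(jobs))   # -> ('motor_ctrl', 8)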

Pros
• Optimal for Uniprocessor Systems: If any schedule can meet all deadlines,
EDF will, even at CPU utilization up to 100%.
• Dynamic Adaptation: Adjusts to process arrivals and changing dead-
lines.

Cons
• Complexity: Requires constant monitoring of deadlines and preemption.
• Overhead: Frequent context switches can degrade performance.

Use Cases
• Hard Real-Time Systems: Where missing a deadline could be catas-
trophic.
• Systems with Variable Workloads: Environments requiring dynamic
scheduling.

8. Rate Monotonic Scheduling (RMS)


Overview
Rate Monotonic Scheduling is a static priority algorithm used in real-time sys-
tems with periodic tasks.

How it Works
• Priority Assignment: Processes are assigned priorities based on the
frequency of their periodic requests: the shorter the period, the higher
the priority.
• Fixed Priorities: Priorities do not change over time.
• Scheduling Decision: Always selects the process with the highest prior-
ity that is ready to run.
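A minimal Python sketch of rate monotonic priority assignment together with
the Liu and Layland utilization bound n(2^(1/n) - 1); the task set is
hypothetical example data:

    # Sketch of Rate Monotonic priority assignment plus the classic
    # Liu & Layland test: n periodic tasks are guaranteed schedulable
    # under RMS if total utilization <= n * (2**(1/n) - 1).
    tasks = [("T1", 50, 12), ("T2", 100, 25), ("T3", 200, 40)]  # (name, period, execution time)

    ranked = sorted(tasks, key=lambda t: t[1])      # shorter period -> higher priority
    for prio, (name, period, _) in enumerate(ranked):
        print(f"priority {prio}: {name} (period {period})")

    n = len(tasks)
    utilization = sum(c / p for _, p, c in tasks)
    bound = n * (2 ** (1 / n) - 1)
    print(f"U = {utilization:.3f}, bound = {bound:.3f}, "
          f"{'guaranteed schedulable' if utilization <= bound else 'test inconclusive'}")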

Pros
• Predictability: Analysis can determine schedulability ahead of time.
• Simplicity: Straightforward implementation.

Cons
• Conservative Utilization: May underutilize the CPU; the classic schedulability
test guarantees deadlines only up to roughly 69% total utilization for large
task sets.
• Inflexible: Not suitable for tasks with varying periods.

Use Cases
• Embedded Systems: Devices with fixed, periodic tasks.
• Safety-Critical Applications: Systems requiring proven scheduling
guarantees.

9. Lottery Scheduling
Overview
Lottery Scheduling uses randomization to allocate CPU time, aiming for pro-
portional resource sharing.

How it Works
• Ticket Assignment: Each process receives a number of tickets repre-
senting its share of the CPU.
• Random Selection: A lottery draw decides which process runs next.
• Flexibility: Processes can trade tickets or adjust their allocations.
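A minimal sketch of a lottery draw in Python, assuming arbitrary example
ticket counts:

    import random

    # Sketch of lottery scheduling: each process holds tickets in proportion
    # to its intended CPU share; a random draw picks the next process to run.
    tickets = {"A": 75, "B": 20, "C": 5}

    def lottery_pick(ticket_table):
        winner = random.randint(1, sum(ticket_table.values()))
        for proc, count in ticket_table.items():
            winner -= count
            if winner <= 0:
                return proc

    # Over many draws, A should win roughly 75% of the time.
    print(lottery_pick(tickets))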

Pros
• Fairness: Resources are shared according to ticket allocation.
• Flexibility and Simplicity: Easy to adjust priorities by changing ticket
counts.

Cons
• Randomness: May not be suitable for processes needing guaranteed
CPU time.
• Unpredictability: Short-term execution order can vary widely.

Use Cases
• Experimental Systems: Environments exploring new scheduling
paradigms.
• Resource Allocation Testing: Systems experimenting with propor-
tional sharing.

10. Fair-Share Scheduling


Overview
Fair-Share Scheduling ensures that CPU time is equitably divided among users
or groups, not just processes.

How it Works
• User-Based Allocation: CPU time is allocated based on user or group
quotas.
• Process Weighting: Each user’s processes share their allotted CPU
time.
• Resource Limits: Prevents any single user from monopolizing CPU re-
sources.
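A minimal sketch of the accounting idea in Python, assuming two hypothetical
users with equal shares:

    # Sketch of fair-share accounting: CPU time is first divided among users,
    # then each user's slice is split across that user's own processes.
    users = {
        "alice": {"share": 0.5, "processes": ["a1", "a2"]},
        "bob":   {"share": 0.5, "processes": ["b1", "b2", "b3", "b4"]},
    }

    for user, info in users.items():
        per_process = info["share"] / len(info["processes"])
        for proc in info["processes"]:
            # Each of alice's 2 processes gets 25% of the CPU; each of bob's 4 gets 12.5%.
            print(f"{proc} ({user}): {per_process:.1%} of CPU")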

Pros
• User Fairness: Equitable distribution of resources among users.
• Control: Administrators can enforce usage policies.

Cons
• Complexity: More intricate than per-process scheduling.
• Potential Inefficiency: CPU may be underutilized if users don’t fully
use their share.

Use Cases
• Multi-User Systems: Servers with multiple users running processes con-
currently.
• Resource-Constrained Environments: Systems where resource con-
tention is high.

Glossary of Technical Terms


• Aging: A technique to increase the priority of a process the longer it
waits, preventing starvation.
• Batch Systems: Systems that execute jobs in bulk without user interac-
tion.
• Context Switch: Saving the state of a process so another process can
use the CPU.
• Convoy Effect: When short processes wait for a long process to release
the CPU.
• CPU Burst: A period when a process runs on the CPU without inter-
ruption.
• Deadline: The latest time by which a process must complete in real-time
systems.
• Fairness: Equal opportunity for processes or users to access CPU re-
sources.
• FIFO (First-In, First-Out): A queue ordering where the first element
added is the first served.
• Hard Real-Time System: Systems where missing a task deadline could
lead to catastrophic outcomes.
• I/O: Input/Output operations involving communication with external de-
vices.
• Non-Preemptive Scheduling: Once a process starts, it runs to comple-
tion without interruption.
• Overhead: Additional system resources consumed to manage processes
(e.g., context switching).
• Preemptive Scheduling: The scheduler can interrupt a running process
to start another.
• Real-Time System: Systems that require tasks to be completed within
strict timing constraints.
• Schedulability: The ability of the scheduler to meet all process deadlines.
• Starvation: When a process never gets CPU time due to scheduling
policies.
• Throughput: The number of processes completed in a given time frame.
• Time Quantum: The fixed time slice allocated to a process in Round
Robin scheduling.
• Turnaround Time: The total time taken from submission to completion
of a process.
• Waiting Time: Total time a process spends in the ready queue.

Conclusion
Selecting the appropriate CPU scheduling algorithm depends on the specific
requirements of the system, such as the need for quick response times, through-
put optimization, fairness among processes or users, and adherence to dead-
lines in real-time systems. Understanding the strengths and weaknesses of each
scheduling technique enables system designers and administrators to tailor the
operating system’s behavior to meet application demands effectively.

This comprehensive overview provides detailed insights into various OS schedul-
ing techniques, aiding in the selection and implementation of the most suitable
algorithm for different computing environments.
