
Unit 2

Q1 Explain preemptive & non-preemptive scheduling.

⚖️ Preemptive vs Non-Preemptive Scheduling in Operating Systems


CPU scheduling is a key part of process management in operating systems. It determines which process gets to use the CPU next.

🟦 1) Preemptive Scheduling

🔹 Definition:

In preemptive scheduling, the operating system can forcibly take the CPU away from a
currently running process and assign it to another process (usually one with higher priority or
shorter expected run time).

🔹 Characteristics:
●​ Process can be interrupted.
●​ Better for systems requiring real-time responsiveness.
●​ Needs context switching (saving and restoring process state).

🔹 Examples of Preemptive Algorithms:


●​ Round Robin (RR)
●​ Shortest Remaining Time First (SRTF)
●​ Priority Scheduling (Preemptive)
●​ Multilevel Queue with Preemption

✅ Advantages:
●​ Better response time in interactive systems.
●​ Higher-priority tasks are serviced faster.
●​ Ensures fairness among processes.

❌ Disadvantages:
●​ More overhead due to frequent context switching.
●​ Complex to implement and manage priorities.

🟨 2) Non-Preemptive Scheduling

🔹 Definition:

In non-preemptive scheduling, once a process starts executing, it runs to completion or until it voluntarily yields the CPU (e.g., for I/O operations).

🔹 Characteristics:
●​ No interruption once the process starts.
●​ Simple and easy to implement.
●​ Used in batch systems or real-time tasks with fixed deadlines.

🔹 Examples of Non-Preemptive Algorithms:


●​ First-Come, First-Served (FCFS)
●​ Shortest Job First (SJF)
●​ Priority Scheduling (Non-Preemptive)

✅ Advantages:
●​ No context switching overhead.
●​ Easier to design and implement.
●​ Predictable execution order.

❌ Disadvantages:
●​ Poor responsiveness to high-priority processes.
●​ One long process can block others (convoy effect).

📊 Comparison Table:

Aspect            | Preemptive Scheduling                    | Non-Preemptive Scheduling
Interruptibility  | Process can be interrupted mid-execution | Process runs until completion or I/O request
Responsiveness    | More responsive to user needs            | Less responsive
Complexity        | More complex to implement                | Simpler to design
Context Switching | Frequent, leads to overhead              | Minimal or none
Use Case          | Real-time, interactive systems           | Batch or simple systems



✅ Conclusion:
● Preemptive scheduling improves responsiveness and fairness but adds complexity.
● Non-preemptive scheduling is simpler and more predictable but may delay servicing of higher-priority tasks.

Q2 Describe FCFS and SJF CPU scheduling algorithms with the help of suitable examples.

Here's a detailed explanation of FCFS (First-Come, First-Served) and SJF (Shortest Job
First) CPU scheduling algorithms with examples:

🟦 1) FCFS – First-Come, First-Served Scheduling

🔹 Definition:
FCFS is the simplest CPU scheduling algorithm. The process that arrives first gets executed first, just like a queue at a service counter.

🔹 Characteristics:
●​ Non-preemptive.
●​ Based on arrival time.
●​ Simple to implement using a FIFO queue.

✅ Example:
Process | Arrival Time | Burst Time
P1      | 0            | 5
P2      | 1            | 3
P3      | 2            | 8
P4      | 3            | 6

🔹 Gantt Chart:
| P1 | P2 | P3 | P4 |
0 5 8 16 22

🔹 Calculation:

Process | Waiting Time | Turnaround Time
P1      | 0            | 5
P2      | 4            | 7
P3      | 6            | 14
P4      | 13           | 19

(P3 arrives at 2 and starts at 8, so it waits 6 units; its turnaround is 16 − 2 = 14.)

● Average Waiting Time = (0 + 4 + 6 + 13) / 4 = 5.75
● Average Turnaround Time = (5 + 7 + 14 + 19) / 4 = 11.25

✅ Pros of FCFS:
●​ Simple and easy to implement.
●​ Fair in terms of order of arrival.

❌ Cons:
●​ Convoy effect: One long process can delay others.
●​ Not suitable for interactive systems.
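The arithmetic above can be checked with a small C program (a sketch; it assumes the processes are already sorted by arrival time):

    #include <stdio.h>

    /* FCFS metrics: processes assumed sorted by arrival time. */
    int main(void) {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {5, 3, 8, 6};
        int n = 4, time = 0;
        double total_wt = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            if (time < arrival[i]) time = arrival[i]; /* CPU idle until arrival */
            int wait = time - arrival[i];             /* waiting = start - arrival */
            time += burst[i];                         /* run to completion (non-preemptive) */
            int tat = time - arrival[i];              /* turnaround = completion - arrival */
            printf("P%d: WT=%d TAT=%d\n", i + 1, wait, tat);
            total_wt += wait; total_tat += tat;
        }
        printf("Avg WT=%.2f Avg TAT=%.2f\n", total_wt / n, total_tat / n);
        return 0;
    }

Running it prints the per-process values and the averages 5.75 and 11.25 computed above.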

🟨 2) SJF – Shortest Job First Scheduling

🔹 Definition:

SJF selects the process with the shortest burst time for execution next.

●​ Non-preemptive by default (but preemptive version is called SRTF).


●​ Optimized for minimal average waiting time.

✅ Example (same processes as above, for comparison):

Process | Arrival Time | Burst Time
P1      | 0            | 5
P2      | 1            | 3
P3      | 2            | 8
P4      | 3            | 6

🔹 Execution Order:
●​ At time 0: P1 starts (only available process)
●​ After P1 (5 units), ready queue: P2, P3, P4
●​ Choose shortest burst: P2 → P4 → P3

🔹 Gantt Chart:
| P1 | P2 | P4 | P3 |
0    5    8    14   22

🔹 Calculation:

Process | Waiting Time | Turnaround Time
P1      | 0            | 5
P2      | 4            | 7
P3      | 12           | 20
P4      | 5            | 11

(P3 has the longest burst, so under SJF it runs last: it starts at 14, having waited 12 units, and completes at 22.)

Average Waiting Time = (0 + 4 + 12 + 5) / 4 = 5.25

Average Turnaround Time = (5 + 7 + 20 + 11) / 4 = 10.75

✅ Pros of SJF:
●​ Produces the lowest average waiting time.
●​ Efficient for batch jobs with known burst times.

❌ Cons:
●​ Starvation of long processes.
●​ Not practical without accurate prediction of burst time.
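A minimal C sketch of non-preemptive SJF on the same data (assuming, as noted, that burst times are known in advance):

    #include <stdio.h>

    /* Non-preemptive SJF: at each completion point, run the
       shortest job that has already arrived. */
    int main(void) {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {5, 3, 8, 6};
        int done[4] = {0}, n = 4, time = 0, completed = 0;

        while (completed < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)           /* shortest available job */
                if (!done[i] && arrival[i] <= time &&
                    (pick < 0 || burst[i] < burst[pick]))
                    pick = i;
            if (pick < 0) { time++; continue; }   /* CPU idle: nothing has arrived */
            int wait = time - arrival[pick];      /* waiting = start - arrival */
            time += burst[pick];                  /* run to completion */
            done[pick] = 1; completed++;
            printf("P%d: WT=%d TAT=%d\n", pick + 1, wait, time - arrival[pick]);
        }
        return 0;
    }

The selection loop is where SJF differs from FCFS: it scans the arrived, unfinished jobs for the smallest burst instead of taking them in arrival order.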

📌 Conclusion:

Aspect      | FCFS                          | SJF
Type        | Non-preemptive                | Non-preemptive
Based On    | Arrival Time                  | Burst Time
Performance | Simple, fair, but inefficient | Optimized waiting time
Starvation  | No                            | Yes (long jobs may wait forever)
Use Case    | Simple/batch systems          | Systems with predictable tasks



Q3 (a) Discuss process states. (b) Explain the critical section problem.

Here's a detailed explanation of both parts of your question:

🅰️ (a) Process States

🔹 What is a Process?

A process is a program in execution. It undergoes various states during its lifetime, depending
on events like I/O operations, scheduling, or completion.

🔹 Five Main Process States:


1.​ New
○​ The process is being created.
2.​ Ready
○​ The process is waiting to be assigned to the CPU.
○​ It is ready to run but not currently executing.
3.​ Running
○​ The process is currently being executed by the CPU.
4.​ Waiting (Blocked)
○​ The process is waiting for some event to occur (e.g., I/O completion).
5.​ Terminated
○​ The process has finished execution and is removed from the system.

🔹 Process State Transition Diagram:

New → Ready → Running → Terminated
        ↑         │
        │         ↓
        └───── Waiting

Transitions: Ready → Running (dispatched), Running → Ready (preempted), Running → Waiting (I/O or event wait), Waiting → Ready (event completes).

🔹 Additional States (in some OS):


●​ Suspended Ready / Suspended Waiting: Process is swapped out of memory but retains
its status.
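As a small illustration (a sketch only, not any particular kernel's definition), these states map naturally onto a C enum:

    /* Illustrative process-state type; real kernels use richer structures. */
    typedef enum {
        STATE_NEW,         /* being created */
        STATE_READY,       /* waiting for the CPU */
        STATE_RUNNING,     /* currently executing */
        STATE_WAITING,     /* blocked on I/O or an event */
        STATE_TERMINATED   /* finished, being removed */
    } proc_state_t;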

🅱️ (b) Critical Section Problem

🔹 What is a Critical Section?

A critical section is a part of a program where a process accesses shared resources (like variables, files, or memory) that must not be accessed by more than one process at the same time.

🔹 Problem Definition:

If multiple processes access and modify shared data simultaneously, it may lead to inconsistent
or incorrect results. This is called the Critical Section Problem.
🔹 Requirements for Solution (3 Conditions):
1.​ Mutual Exclusion
○​ Only one process can enter the critical section at a time.
2.​ Progress
○​ If no process is in the critical section, and some want to enter, one must be
allowed in without indefinite delay.
3.​ Bounded Waiting
○​ A process waiting to enter the critical section will eventually get a chance, i.e., no
starvation.

🔹 Structure of a Process for Critical Section Handling:

do {
    // Entry Section (request access)
    // Critical Section (access shared data)
    // Exit Section (release access)
    // Remainder Section (other work)
} while (true);

🔹 Solutions to the Critical Section Problem:

Type             | Examples
Software-Based   | Peterson's Algorithm, Dekker's Algorithm
Hardware-Based   | Disabling interrupts, Test-and-Set, Compare-and-Swap
OS-Level Support | Semaphores, Mutexes, Monitors
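For illustration, a minimal OS-level solution using a POSIX mutex; the shared counter stands in for any shared data (a sketch; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long shared_counter = 0;             /* the shared resource */

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* entry section */
            shared_counter++;            /* critical section */
            pthread_mutex_unlock(&lock); /* exit section */
        }                                /* remainder section */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", shared_counter); /* always 200000 with the lock */
        return 0;
    }

Without the lock/unlock pair, the two threads race on shared_counter and the final value becomes unpredictable, which is exactly the problem described above.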

✅ Conclusion:
●​ Process States describe the lifecycle of a process from creation to termination.​

●​ The Critical Section Problem addresses the challenge of safe access to shared
resources and is solved using synchronization techniques.​

Q4 1. What are the different principles that must be considered while selecting a scheduling algorithm? 2. Define scheduling. What criteria affect scheduler performance?

🔹 1. Principles for Selection of a Scheduling Algorithm


When choosing a scheduling algorithm, several key principles must be considered to ensure
the system meets its goals in terms of performance, responsiveness, and fairness.

✅ Principles to Consider:

Principle       | Description
CPU Utilization | Keep the CPU as busy as possible. Aim for high utilization (ideally 100%).
Throughput      | Maximize the number of processes completed per unit of time.
Turnaround Time | Minimize the time taken to execute a process from submission to completion.
Waiting Time    | Reduce the amount of time a process spends in the ready queue.
Response Time   | Minimize the time from request submission to first response (important in interactive systems).
Fairness        | Every process should get a fair share of CPU time; no process should be starved.
Predictability  | Performance should be consistent and predictable.
Prioritization  | High-priority tasks should be handled first (especially in real-time systems).
Scalability     | The algorithm should perform well as the number of processes increases.
Overhead        | The algorithm should incur minimal CPU overhead in managing scheduling decisions.

🔹 2. Definition of Scheduling & Scheduler Performance Criteria

✅ Definition of Scheduling:
Scheduling is the process by which the operating system selects the next process to run on the CPU from the ready queue. The goal is to improve CPU efficiency, responsiveness, and system throughput.

There are different types of schedulers:

● Long-term scheduler (job scheduler) – selects jobs from the job pool.
● Short-term scheduler (CPU scheduler) – selects which ready process will execute next.
● Medium-term scheduler – handles swapping processes in and out of memory (in systems with memory management).

✅ Criteria Affecting Scheduler Performance:

Criterion           | Impact on Performance
CPU Utilization     | Higher utilization = better performance.
Throughput          | More completed processes = a more productive system.
Waiting Time        | Less waiting = quicker access to the CPU.
Turnaround Time     | Shorter turnaround means faster job completion.
Response Time       | Important for interactive users (e.g., GUIs, terminals).
Fairness            | Ensures all processes get CPU time; avoids starvation.
Context Switch Time | More frequent switches increase overhead.
Priority Handling   | Should allow urgent tasks to preempt others if needed.

✅ Conclusion:
●​ Choosing the right scheduling algorithm involves balancing multiple goals: speed,
efficiency, fairness, and resource usage.
●​ Good scheduling enhances system responsiveness and stability, especially in
multi-user or real-time environments.​

Q5 Describe the following CPU scheduling algorithms with suitable examples: 1. SJF 2. RR with time quantum = 1 unit 3. Multilevel feedback queue scheduling

✅ 1. SJF – Shortest Job First Scheduling

🔹 Definition:

SJF selects the process with the shortest CPU burst time. It can be non-preemptive or
preemptive (SRTF – Shortest Remaining Time First).

📌 Example:

Process | Arrival Time | Burst Time
P1      | 0            | 7
P2      | 2            | 4
P3      | 4            | 1
P4      | 5            | 4
🔸 Assuming Non-Preemptive SJF
●​ At time 0 → P1 runs (only available process)
●​ At time 7 → P2, P3, P4 are in queue → P3 has the shortest burst
●​ After P3, pick the next shortest (P2), then P4

🕒 Gantt Chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16

📈 Waiting Time:

Process | Turnaround Time | Waiting Time
P1      | 7               | 0
P2      | 10              | 6
P3      | 4               | 3
P4      | 11              | 7

Avg Waiting Time = (0 + 6 + 3 + 7) / 4 = 4

✅ 2. RR – Round Robin Scheduling (Time Quantum = 1 unit)

🔹 Definition:
Each process gets a fixed time slice (quantum) in circular order. If it doesn't finish, it is placed at the end of the ready queue.

📌 Example:

Process | Arrival Time | Burst Time
P1      | 0            | 4
P2      | 1            | 3
P3      | 2            | 1

⏱️ Time Quantum = 1 unit

🕒 Gantt Chart (convention: a newly arriving process joins the ready queue before the preempted one):
| P1 | P2 | P1 | P3 | P2 | P1 | P2 | P1 |
0    1    2    3    4    5    6    7    8

⏱️ Completion Times:
● P3 finishes at 4
● P2 finishes at 7
● P1 finishes at 8

📈 Waiting Time (turnaround − burst):

Process | Turnaround Time | Waiting Time
P1      | 8               | 4
P2      | 6               | 3
P3      | 2               | 1

Avg Waiting Time = (4 + 3 + 1) / 3 ≈ 2.67
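The trace above can be reproduced with a short C simulation (a sketch; it assumes no idle gaps and the arrivals-before-requeue convention used in the Gantt chart):

    #include <stdio.h>

    /* Round Robin, quantum = 1; arrivals enter the queue before the preempted job. */
    int main(void) {
        int arrival[] = {0, 1, 2}, remaining[] = {4, 3, 1}, n = 3;
        int queue[64], head = 0, tail = 0, time = 0, next = 0;

        queue[tail++] = 0;                       /* P1 arrives at t = 0 */
        next = 1;
        while (head < tail) {
            int p = queue[head++];
            remaining[p]--; time++;              /* run one quantum */
            while (next < n && arrival[next] <= time)
                queue[tail++] = next++;          /* new arrivals enqueue first */
            if (remaining[p] > 0) queue[tail++] = p;  /* re-queue if unfinished */
            else printf("P%d finishes at %d\n", p + 1, time);
        }
        return 0;
    }

It prints the completion times 4, 7, and 8 used in the table above.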

✅ 3. Multilevel Feedback Queue (MLFQ) Scheduling

🔹 Definition:
A more advanced scheduling method with multiple queues (e.g., high, medium, low priority), each with its own scheduling algorithm and time quantum.

Processes can move between queues based on:

●​ Execution history
●​ CPU/I/O behavior

🧠 Features:
● Aging or priority changes
● Allows a mix of interactive and batch jobs
● Prevents starvation using aging
● Commonly used in modern OSes like Windows & Linux

📌 Example Setup:
● Queue 1 (Highest): RR with quantum = 2
● Queue 2: RR with quantum = 4
● Queue 3: FCFS

Process | Arrival Time | Burst Time
P1      | 0            | 10
P2      | 0            | 4
P3      | 1            | 2

🔁 Execution Flow (a process that uses up its full quantum is demoted one level):
● P1 and P2 enter Queue 1; P3 joins Queue 1 at t = 1
● P1 runs 0–2 (quantum used, 8 units left) → moves to Queue 2
● P2 runs 2–4 (quantum used, 2 units left) → moves to Queue 2
● P3 runs 4–6 → finishes within its quantum
● Queue 1 empty → Queue 2: P1 runs 6–10 (quantum of 4 used, 4 units left) → moves to Queue 3
● P2 runs 10–12 → finishes
● Queue 3 (FCFS): P1 runs 12–16 → finishes

🕒 Gantt Chart (simplified):

| P1 | P2 | P3 | P1 | P2 | P1 |
0    2    4    6    10   12   16

📈 Waiting Time (turnaround − burst):
● P1: 16 − 10 = 6
● P2: 12 − 4 = 8
● P3: (6 − 1) − 2 = 3

MLFQ provides balance between responsiveness and fairness.
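A compact C sketch of this three-queue setup (illustrative only: it demotes a job when the job uses its full quantum, and it does not preempt a running job when a new one arrives, which is enough for this workload):

    #include <stdio.h>

    #define NJOBS 3
    int burst[NJOBS]   = {10, 4, 2};
    int arrival[NJOBS] = {0, 0, 1};
    int quantum[3]     = {2, 4, 1 << 30};   /* level 2 is effectively FCFS */
    int q[3][16], head[3], tail[3];         /* one FIFO per level */

    int main(void) {
        int remaining[NJOBS], time = 0, next = 0, done = 0;
        for (int i = 0; i < NJOBS; i++) remaining[i] = burst[i];

        while (done < NJOBS) {
            while (next < NJOBS && arrival[next] <= time)  /* new jobs enter level 0 */
                q[0][tail[0]++] = next++;
            int lvl = -1;
            for (int l = 0; l < 3; l++)                    /* highest non-empty level */
                if (head[l] < tail[l]) { lvl = l; break; }
            if (lvl < 0) { time++; continue; }             /* CPU idle */
            int p = q[lvl][head[lvl]++];
            int slice = remaining[p] < quantum[lvl] ? remaining[p] : quantum[lvl];
            time += slice; remaining[p] -= slice;
            while (next < NJOBS && arrival[next] <= time)  /* arrivals during the slice */
                q[0][tail[0]++] = next++;
            if (remaining[p] == 0) { printf("P%d done at %d\n", p + 1, time); done++; }
            else {
                int nl = lvl < 2 ? lvl + 1 : lvl;          /* demote after a full quantum */
                q[nl][tail[nl]++] = p;
            }
        }
        return 0;
    }

Running it prints P3 done at 6, P2 done at 12, and P1 done at 16, matching the trace above.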

✅ Conclusion:

Algorithm        | Type           | Best For                          | Weakness
SJF              | Non-preemptive | Short jobs, minimal waiting time  | Starvation of long processes
Round Robin (RR) | Preemptive     | Fair sharing, interactive systems | Higher overhead (frequent switching)
MLFQ             | Hybrid         | Mixed workloads, adaptable        | Complex to implement

Q6 1) What are the benefits of threads? Explain user and kernel threads in detail. 2) What is process synchronization hardware? What are the classical problems of synchronization?

✅ 1. Benefits of Threads & Types of Threads

🔷 Benefits of Threads:
Threads are lightweight units of a process. Multiple threads within a process share code, data, and system resources.

🔹 Advantages:

Benefit                | Explanation
Responsiveness         | Threads can handle tasks (e.g., UI, I/O) in the background, improving interactivity.
Resource Sharing       | Threads share memory and files of the same process, which reduces overhead.
Economy (Low Overhead) | Creating threads is cheaper than creating processes.
Scalability            | Multi-threaded applications can run on multi-core CPUs for improved performance.
Better Utilization     | Enables efficient use of CPU cycles while one thread waits (e.g., for I/O).

🔷 Types of Threads:

🔹 1. User-Level Threads (ULT)

Aspect            | Description
Managed by        | User-level thread library (e.g., POSIX, Java threads)
Visibility        | OS doesn't recognize them directly.
Context Switching | Fast (no kernel involvement).
Disadvantage      | If one thread blocks (e.g., on I/O), the entire process blocks.

🔹 2. Kernel-Level Threads (KLT)

Aspect            | Description
Managed by        | Operating System
Visibility        | Kernel sees and manages each thread individually.
Context Switching | Slower than ULT (involves the kernel).
Advantage         | One thread can block without affecting others.

🆚 Comparison Table:

Feature                 | User-Level Threads (ULT)    | Kernel-Level Threads (KLT)
OS Knowledge            | No                          | Yes
Creation/Context Switch | Fast                        | Slower
Blocking                | Entire process blocks       | Only the blocked thread waits
Portability             | More portable               | Platform-dependent
Example                 | POSIX threads (in user lib) | Windows threads, Linux KLT
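A minimal POSIX threads sketch (on Linux these are kernel-supported threads); the shared variable illustrates the shared address space (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                      /* all threads see the same variable */

    void *hello(void *arg) {
        int id = *(int *)arg;
        shared++;                        /* shared address space: no copying or IPC
                                            (unsynchronized here for brevity; real
                                            code would lock, as in Q3's example) */
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[3]; int ids[3];
        for (int i = 0; i < 3; i++) {
            ids[i] = i;
            pthread_create(&t[i], NULL, hello, &ids[i]);
        }
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        printf("shared = %d\n", shared);
        return 0;
    }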

✅ 2. Process Synchronization Hardware & Classical Problems

🔷 Process Synchronization Hardware
To prevent race conditions and ensure mutual exclusion, hardware provides atomic
operations that support synchronization:

🔹 Hardware Instructions for Synchronization:

Instruction      | Description
Test-and-Set     | Atomically tests and sets a lock variable. If the lock is free (0), sets it to 1 and enters; otherwise waits.
Compare-and-Swap | Compares the contents of a memory location with a given value; if equal, swaps in a new value.
Exchange         | Atomically swaps the contents of two memory locations.

🔐 These are low-level primitives often used to implement mutexes or locks.
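For instance, C11 exposes test-and-set via atomic_flag; a minimal spinlock sketch built on it (note the busy waiting, discussed later):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        /* atomically set the flag; loop while it was already set (held) */
        while (atomic_flag_test_and_set(&lock))
            ;                         /* spin until the holder releases */
    }

    void release(void) {
        atomic_flag_clear(&lock);     /* set the flag back to 0 (unlocked) */
    }

Code between acquire() and release() is then a critical section: the atomic test-and-set guarantees only one thread can see the flag as free and set it.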
🔷 Classical Synchronization Problems
These are well-known problems used to test synchronization solutions:

🔹 1. Producer-Consumer Problem
●​ One or more producers generate data and place it in a buffer.
●​ One or more consumers take data from the buffer.
●​ Need to ensure:
○​ Buffer doesn't overflow (producer waits if full).
○​ Buffer doesn't underflow (consumer waits if empty).
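A minimal sketch of the classic semaphore solution using POSIX semaphores (buffer size and item counts here are illustrative; compile with -pthread):

    #include <semaphore.h>
    #include <pthread.h>

    #define N 10                 /* buffer capacity */
    int buffer[N], in = 0, out = 0;
    sem_t empty, full, mutex;    /* empty = free slots, full = items, mutex = 1 */

    void *producer(void *arg) {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty);            /* wait for a free slot (no overflow) */
            sem_wait(&mutex);            /* lock the buffer */
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);             /* signal: one more item available */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full);             /* wait for an item (no underflow) */
            sem_wait(&mutex);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);            /* signal: one more free slot */
            (void)item;                  /* consume the item here */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL); pthread_join(c, NULL);
        return 0;
    }

The two counting semaphores enforce the overflow/underflow conditions above, while the binary semaphore protects the buffer indices.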

🔹 2. Reader-Writer Problem
●​ Multiple readers can read simultaneously.
●​ Only one writer can write at a time.
●​ Must ensure no reader reads while writing is in progress and vice versa.

🔹 3. Dining Philosophers Problem


●​ Five philosophers sit at a table with forks between them.
●​ Each needs two forks to eat.
●​ Must avoid deadlock and starvation while picking up forks (shared resources).

🔷 Common Issues in Synchronization:

Issue          | Description
Race Condition | Two or more processes access shared data at the same time, leading to unpredictable results.
Deadlock       | Processes wait indefinitely for resources held by each other.
Starvation     | A process waits indefinitely because other higher-priority processes keep executing.
Busy Waiting   | A process continuously checks for a condition, wasting CPU cycles.

✅ Conclusion:
●​ Threads improve responsiveness and resource usage; user threads are lightweight but
limited, while kernel threads are powerful but slower.
●​ Hardware instructions like Test-and-Set are essential for implementing
synchronization.
●​ Classical problems help understand the challenges and design of concurrent systems.
