Operating Systems – Exam Notes (Clean & Concise)
Covers: Definitions, Diagrams, Comparisons, and Solved CPU Scheduling Numericals
1) Define Operating System. Explain Dual-Mode OS.
Definition:
An Operating System (OS) is system software that acts as an intermediary between users/applications and the
hardware. It manages CPU, memory, and I/O devices, and provides an execution environment ensuring efficiency,
security, and convenience.
Dual-Mode Operation:
User Mode: Runs user applications with restricted privileges; cannot execute privileged instructions. Services are
requested via system calls. Kernel (Supervisor) Mode: Runs OS code with full access to hardware and privileged
instructions (e.g., I/O, memory management, interrupts).
Mode Bit: A hardware flag distinguishes modes (e.g., 0 = Kernel, 1 = User). On a system call/interrupt (trap), CPU
switches to kernel mode to service the request and returns to user mode.
Diagram:
+--------------------------------------------------+
|                User Applications                 |   <-- User Mode
+--------------------------------------------------+
                  | System Call / Trap
                  v
+--------------------------------------------------+
|             Operating System Kernel              |   <-- Kernel Mode
|  (CPU scheduling, Memory mgmt, File system, I/O) |
+--------------------------------------------------+
|                     Hardware                     |
+--------------------------------------------------+
Why Dual-Mode? Security, stability, and controlled access to resources.
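Illustration (a minimal POSIX C sketch, Unix-like system assumed): even a simple output request crosses the user/kernel boundary through a system call.

#include <unistd.h>

int main(void) {
    /* write() is a system call: the C library issues a trap, the CPU switches
       to kernel mode, the kernel performs the I/O, and control returns here
       in user mode with the byte count as the result. */
    const char msg[] = "hello from user mode\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}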
2) Multiprogramming vs Time-Sharing Systems (with examples)
Feature | Multiprogramming | Time-Sharing
Primary Goal | Maximize CPU utilization | Provide fast response to users
User Interaction | None (batch-oriented) | Yes (interactive multi-user)
Scheduling | Typically non-preemptive; switch on I/O wait | Preemptive (time slice / Round Robin)
CPU Allocation Trigger | On I/O wait/block | On timer interrupt (quantum expiry)
Response Time | Not critical | Critical (low latency)
Examples | Early batch systems (IBM OS/360) | UNIX, Linux, Windows multi-user
Conceptual View:
Multiprogramming → CPU switches when the running job blocks for I/O. Time-Sharing → CPU
switches rapidly among users after a small time slice.
3) What are System Calls? Types with examples.
A system call is the interface between a user program and the OS kernel. The program invokes a system call (e.g.,
read()), which triggers a trap into kernel mode; the OS performs the requested service and control then returns to
user mode.
Type | Example Calls / Purpose
Process Control | fork(), exit(), wait()
File Management | open(), read(), write(), close()
Device Management | ioctl() (device control), read()/write() on device files
Information Maintenance | getpid(), gettimeofday(), alarm()
Communication (IPC) | pipe(), send()/recv(), socket()
Flow (conceptual):
User Program → System Call → Trap to Kernel → Service by OS → Return to User Program
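Illustration (a minimal POSIX C sketch, Unix-like system assumed; the file name demo.txt is arbitrary): several call types from the table used together.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t child = fork();                      /* process control: create a child */
    if (child == 0) {
        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);  /* file management */
        if (fd >= 0) {
            const char *msg = "written by the child\n";
            write(fd, msg, strlen(msg));       /* file management */
            close(fd);
        }
        printf("child pid = %d\n", getpid());  /* information maintenance */
        _exit(0);                              /* process control: terminate */
    }
    wait(NULL);                                /* process control: wait for the child */
    return 0;
}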
4) OS Services – User-Level and System-Level
User-Level Services | System-Level Services
Program execution | Resource allocation
I/O operations | Accounting / usage statistics
File system manipulation | Protection & security
Communication | System monitoring / performance tuning
Error detection | Overall efficiency / policy enforcement
Layered View:
+----------------------------- User-Level Services -----------------------------+
|  Program Exec  |  Files  |  I/O  |  Communication  |  Error Detection          |
+---------------------------- System-Level Services ----------------------------+
|  Resource Allocation  |  Accounting  |  Protection & Security  |  Monitoring   |
+----------------------------------- Hardware -----------------------------------+
5) Define Process, Process States (diagram), and PCB
Process: A program in execution (with PC, registers, stack, data, heap).
Basic States:
New → Ready → Running → Waiting (Blocked) → Terminated
State Transition Diagram:
          +-------+
          |  New  |
          +-------+
              |
              v
          +-------+
   +----->| Ready |<------------------+
   |      +-------+                   |
   |          | CPU Scheduling        | I/O / event completion
   | preempt  v                       |
   |      +---------+   I/O wait   +---------+
   +------| Running |-------------->| Waiting |
          +---------+               +---------+
               |
               v
         +------------+
         | Terminated |
         +------------+
Process Control Block (PCB) contains:
Category | Fields
Identification | PID, Parent PID, User ID
State & CPU Context | Process state, Program counter, CPU registers
CPU Scheduling Info | Priority, pointers to scheduling queues
Memory Management | Base/limit registers, Page/segment tables
Accounting | CPU time used, job/accounting info
I/O Status | Open files, I/O devices allocated
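Illustration (a conceptual C sketch; the field names are illustrative and not taken from any real kernel, where the PCB is far larger, e.g., Linux's task_struct):

/* Illustrative PCB layout grouped by the categories above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    /* Identification */
    int pid, ppid, uid;
    /* State & CPU context */
    enum proc_state state;
    unsigned long program_counter;
    unsigned long registers[16];
    /* CPU scheduling info */
    int priority;
    struct pcb *next;                 /* link into a scheduling queue */
    /* Memory management */
    unsigned long base, limit;        /* or pointers to page/segment tables */
    /* Accounting */
    unsigned long cpu_time_used;
    /* I/O status */
    int open_fds[16];                 /* descriptors of open files */
};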
6) Queuing Diagram of Process Scheduling & Schedulers
Queues:
• Job Queue – all processes in the system
• Ready Queue – processes in memory waiting for the CPU
• Device (I/O) Queues – one per device (disk, printer, network)
Schedulers:
• Long-Term (Job) Scheduler – controls admission to memory (degree of multiprogramming)
• Medium-Term Scheduler – suspends/resumes processes (swapping) to balance load
• Short-Term (CPU) Scheduler – picks the next ready process (FCFS, SJF, RR, Priority)
Queuing Diagram:
+------------------+
|    Job Queue     |
+------------------+
          | Long-Term Scheduler
          v
+------------------+
|   Ready Queue    |<------------------------------+
+------------------+                               |
          | Short-Term Scheduler                   | I/O complete
          v                                        |
+------------------+    I/O request    +-------------------+
|    CPU (Run)     |------------------>|   Device Queues   |
+------------------+                   +-------------------+
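Illustration (a minimal C sketch; a fixed-size circular buffer of pointers stands in for the kernel's ready queue of PCBs, and struct proc is a simplified stand-in): the short-term scheduler dequeues the next ready process and dispatches it.

#include <stdio.h>

#define QSIZE 8

struct proc { int pid; int burst; };              /* stand-in for a PCB */

struct ready_queue { struct proc *items[QSIZE]; int head, tail, count; };

static void enqueue(struct ready_queue *q, struct proc *p) {   /* process becomes Ready */
    q->items[q->tail] = p;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
}

static struct proc *dequeue(struct ready_queue *q) {           /* short-term scheduler picks next */
    if (q->count == 0) return NULL;
    struct proc *p = q->items[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return p;
}

int main(void) {
    struct ready_queue rq = {0};
    struct proc p1 = {1, 5}, p2 = {2, 3};
    enqueue(&rq, &p1);                            /* admitted by the long-term scheduler */
    enqueue(&rq, &p2);
    struct proc *next = dequeue(&rq);             /* dispatched in FCFS order */
    printf("dispatch PID %d (burst %d)\n", next->pid, next->burst);
    return 0;
}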
7) Inter-Process Communication (IPC): Message Passing & Shared Memory
IPC allows processes to exchange data and coordinate.
A) Message Passing:
Processes communicate via OS-mediated messages using send() and receive(). Communication can be direct (processes
name each other) or indirect (through mailboxes/ports). Message passing is simpler for distributed systems, and
synchronization can be achieved through blocking send/receive calls.
Diagram (Message Passing):
Process A -- send(msg) --> [ Kernel / OS ] -- receive(msg) --> Process B
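Illustration (a minimal sketch using a POSIX pipe and fork(), Unix-like system assumed; a pipe is one kind of kernel-mediated message channel):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                                 /* kernel creates the channel */
    if (fork() == 0) {                        /* child plays Process B (receiver) */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive(): blocks until a message arrives */
        buf[n > 0 ? n : 0] = '\0';
        printf("B received: %s\n", buf);
        _exit(0);
    }
    close(fd[0]);                             /* parent plays Process A (sender) */
    write(fd[1], "hello via the kernel", 20); /* send() */
    close(fd[1]);
    wait(NULL);
    return 0;
}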
B) Shared Memory:
The OS establishes a shared memory region that the processes then read and write directly. This is very fast, but it
requires explicit synchronization (semaphores/monitors) to avoid race conditions, and it only works between processes
on the same machine.
Diagram (Shared Memory):
+-----------------------------+
|        Shared Memory        |
+-----------------------------+
        ^               ^
        |               |
   Process A        Process B
  (read/write)     (read/write)
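Illustration (a minimal sketch using mmap() with MAP_SHARED | MAP_ANONYMOUS, which is Linux/BSD-specific; a real program would use a semaphore rather than wait() for synchronization):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    /* One region of memory mapped into both processes after fork() */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (fork() == 0) {
        strcpy(shared, "data placed in shared memory");   /* Process B writes directly */
        _exit(0);
    }
    wait(NULL);                                /* crude synchronization: wait for the writer */
    printf("Process A read: %s\n", shared);    /* Process A reads directly, no kernel copy */
    munmap(shared, 4096);
    return 0;
}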
Feature | Message Passing | Shared Memory
Speed | Slower (OS involvement) | Faster (direct access)
Synchronization | Implicit via blocking | Explicit (locks/semaphores)
Complexity | Easier to design | Harder (concurrency control)
Usage | Distributed systems | Single system (common memory)
8) CPU Scheduling Problems – Solved (Gantt + CT/TAT/WT)
(AT = Arrival Time, BT = Burst Time, CT = Completion Time, TAT = Turnaround Time = CT - AT, WT = Waiting Time = TAT - BT)
a) First-Come-First-Serve (FCFS)
Processes: P1(AT=0,BT=5), P2(AT=1,BT=6), P3(AT=2,BT=3), P4(AT=4,BT=8)
Gantt: 0–5 P1 | 5–11 P2 | 11–14 P3 | 14–22 P4
Process AT BT CT TAT WT
P1 0 5 5 5 0
P2 1 6 11 10 4
P3 2 3 14 12 9
P4 4 8 22 18 10
Avg TAT = 11.25, Avg WT = 5.75
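Check (a short C sketch of the same arithmetic; it assumes the processes are indexed in arrival order, as they are here):

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 4}, bt[] = {5, 6, 3, 8};
    int n = 4, time = 0;
    double sum_tat = 0, sum_wt = 0;
    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];        /* CPU idles until the process arrives */
        time += bt[i];                         /* FCFS: run to completion */
        int ct = time, tat = ct - at[i], wt = tat - bt[i];
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct, tat, wt);
        sum_tat += tat;
        sum_wt += wt;
    }
    printf("Avg TAT=%.2f Avg WT=%.2f\n", sum_tat / n, sum_wt / n);   /* 11.25 and 5.75 */
    return 0;
}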
b) Preemptive Shortest Job First (SJF)
Processes: P1(0,6), P2(1,10), P3(2,4), P4(3,6)
Gantt: 0–2 P1 | 2–6 P3 | 6–10 P1 | 10–16 P4 | 16–26 P2
(At t = 2, P3's burst of 4 ties with P1's remaining 4; the tie is broken in favor of the newly arrived P3. At t = 6, P1 has the shortest remaining time and resumes.)
Process AT BT CT TAT WT
P1 0 6 10 10 4
P2 1 10 26 25 15
P3 2 4 6 4 0
P4 3 6 16 13 7
Avg TAT = 13, Avg WT = 6.5
c) Shortest Remaining Time First (SRTF)
Processes: P1(0,8), P2(1,6), P3(3,3), P4(5,2), P5(6,4)
Gantt: 0–1 P1 | 1–3 P2 | 3–6 P3 | 6–8 P4 | 8–12 P5 | 12–16 P2 | 16–23 P1
Process AT BT CT TAT WT
P1 0 8 23 23 15
P2 1 6 16 15 9
P3 3 3 6 3 0
P4 5 2 8 3 1
P5 6 4 12 6 2
Avg TAT = 10, Avg WT = 5.4
d) Round Robin (Quantum = 4)
Processes: P1(0,8), P2(1,6), P3(3,3), P4(5,2), P5(6,4)
Gantt: 0–4 P1 | 4–8 P2 | 8–11 P3 | 11–15 P1 | 15–17 P4 | 17–21 P5 | 21–23 P2
(The ready queue is FIFO: when P1 is preempted at t = 4 it re-enters the queue behind P2 and P3, but ahead of P4 and P5, which arrive later at t = 5 and t = 6.)
Process AT BT CT TAT WT
P1 0 8 15 15 7
P2 1 6 23 22 16
P3 3 3 11 8 5
P4 5 2 17 12 10
P5 6 4 21 15 11
Avg TAT = 14.4, Avg WT = 9.8
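Check (a short C sketch of Round Robin with quantum 4; it assumes a FIFO ready queue in which a preempted process is re-enqueued behind any processes that have already arrived):

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 3, 5, 6}, bt[] = {8, 6, 3, 2, 4};
    int n = 5, q = 4;
    int rem[5], ct[5], added[5] = {0};
    int queue[64], head = 0, tail = 0;
    int time = 0, done = 0;
    double sum_tat = 0, sum_wt = 0;

    for (int i = 0; i < n; i++) rem[i] = bt[i];
    queue[tail++] = 0;                         /* P1 is in the ready queue at t = 0 */
    added[0] = 1;

    while (done < n) {
        if (head == tail) {                    /* ready queue empty: jump to the next arrival */
            for (int i = 0; i < n; i++) {
                if (!added[i]) {
                    if (time < at[i]) time = at[i];
                    queue[tail++] = i;
                    added[i] = 1;
                    break;
                }
            }
            continue;
        }
        int p = queue[head++];
        int run = rem[p] < q ? rem[p] : q;     /* run one quantum or until completion */
        time += run;
        rem[p] -= run;
        for (int i = 0; i < n; i++)            /* enqueue processes that arrived meanwhile */
            if (!added[i] && at[i] <= time) { queue[tail++] = i; added[i] = 1; }
        if (rem[p] > 0) queue[tail++] = p;     /* preempted: back to the tail of the queue */
        else { ct[p] = time; done++; }
    }

    for (int i = 0; i < n; i++) {
        int tat = ct[i] - at[i], wt = tat - bt[i];
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat, wt);
        sum_tat += tat;
        sum_wt += wt;
    }
    printf("Avg TAT=%.2f Avg WT=%.2f\n", sum_tat / n, sum_wt / n);   /* 14.40 and 9.80 */
    return 0;
}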