
PCS24102 - ADVANCED OPERATING SYSTEMS

UNIT I & UNIT II


Prepared for: M.E. / M.Tech. – Computer Science (Postgraduate)
Author: ChatGPT (generated content)
Date: 22 October 2025

-----------------------------------------------------------------------

UNIT–I: FUNDAMENTALS OF OPERATING SYSTEMS


-----------------------------------------------------------------------

1. INTRODUCTION TO OPERATING SYSTEMS


An Operating System (OS) is the system software that manages computer hardware,
provides a user interface, and offers services for application software. It
acts as an intermediary between users/applications and the hardware. Objectives
of an OS include resource allocation, process management, memory management,
and security.

Key responsibilities:

- Process creation, scheduling, and termination.
- Memory allocation and virtual memory management.
- Device management and I/O handling.
- File system management.
- Protection, security, and user interface services.

2. OPERATING SYSTEM STRUCTURE


OS structure affects modularity, maintainability, and performance.

2.1 Monolithic Kernel


- Single large kernel running in privileged mode.
- All OS services (file system, device drivers, network stack) run in kernel space.
- Pros: high performance, simple interfaces.
- Cons: poor modularity, harder to maintain and debug.

2.2 Layered Approach


- OS divided into layers. Each layer provides services to the layer above.
- Example: Hardware -> Kernel -> File System -> User Interface.
- Easier verification and modularity. Performance overhead possible.

2.3 Microkernel Architecture


- Minimal kernel providing essential services (IPC, basic scheduling).
- Other services run in user space (file servers, device drivers).
- Pros: better reliability and portability.
- Cons: IPC overhead can reduce performance.

2.4 Modular and Hybrid Kernels


- Modules (loadable kernel modules) allow dynamic extension.
- Hybrid kernels (e.g., Windows NT) combine microkernel ideas with performance
optimizations.

2.5 Virtual Machine Monitors / Hypervisors


- Type 1 (bare-metal) and Type 2 (hosted) hypervisors.
- Provide isolation and virtualization of hardware.

3. TYPES OF ADVANCED OPERATING SYSTEMS


- Real-time operating systems (RTOS): strict timing constraints, deterministic scheduling.
- Distributed operating systems: coordinate resources over multiple machines.
- Multiprocessor and multicore OS: symmetric multiprocessing (SMP), load balancing.
- Embedded OS: resource-constrained devices.
- Mobile OS: power management, app sandboxing.
- Cloud OS and orchestration systems (Kubernetes-like): resource virtualization,
multi-tenancy.
4. SYNCHRONIZATION MECHANISMS

Synchronization ensures a correct ordering of operations among concurrent
processes/threads.

4.1 Locks and Mutexes


- Mutual exclusion using binary locks (mutex).
- Usage patterns: acquire(), critical section, release().
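A minimal POSIX threads sketch of the acquire/critical-section/release pattern
(illustrative only; the thread and iteration counts are arbitrary):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;             /* shared data protected by lock */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* acquire() */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* release() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}

Without the mutex, the two threads race on counter and the final value is
usually below 200000.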

4.2 Semaphores

- Counting semaphores (integer value) with P (wait) and V (signal) operations.
- A binary semaphore resembles a mutex but can also be used for signaling
between threads.
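A sketch using POSIX semaphores, where sem_wait() plays the role of P and
sem_post() the role of V (the slot count and thread count are illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define SLOTS 3
static sem_t slots;                    /* counts free resource instances */

static void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                  /* P: block until a slot is free */
    printf("thread %ld holds a slot\n", id);
    /* ... use the resource ... */
    sem_post(&slots);                  /* V: release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, SLOTS);        /* initial count = 3 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}

At most three of the five threads can hold a slot at any instant.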

4.3 Monitors

- High-level constructs combining a mutex with condition variables.
- Language-level support (e.g., Java synchronized methods/blocks).

4.4 Condition Variables


- Used with mutexes to wait for certain conditions and signal other threads.
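A pthreads sketch of the wait/signal pattern (illustrative; "ready" stands for
an arbitrary shared predicate). The while loop guards against spurious wakeups:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int ready = 0;                  /* the condition being waited on */

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    while (!ready)                     /* re-check after every wakeup */
        pthread_cond_wait(&cv, &m);    /* atomically releases m and sleeps */
    printf("condition satisfied\n");
    pthread_mutex_unlock(&m);
    return NULL;
}

static void *signaler(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    ready = 1;                         /* change the predicate ... */
    pthread_cond_signal(&cv);          /* ... then wake one waiter */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t w, s;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&s, NULL, signaler, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    return 0;
}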

4.5 Read-Write Locks


- Allow multiple concurrent readers but exclusive writer.
- Useful for data structures with frequent reads and infrequent writes.
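A short pthreads read-write lock sketch (the accessor functions are
illustrative):

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;

int read_value(void) {
    pthread_rwlock_rdlock(&rw);   /* shared: other readers may also enter */
    int v = shared_value;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_value(int v) {
    pthread_rwlock_wrlock(&rw);   /* exclusive: blocks readers and writers */
    shared_value = v;
    pthread_rwlock_unlock(&rw);
}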

4.6 Lock-Free and Wait-Free Algorithms


- Use atomic operations (compare-and-swap) to avoid locks; important for
high-scalability systems.
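A lock-free increment sketch using C11 atomics; the retry loop is the classic
compare-and-swap pattern (the counter variable is illustrative):

#include <stdatomic.h>

static _Atomic long counter = 0;

void lockfree_increment(void) {
    long old = atomic_load(&counter);
    /* CAS: if counter still equals old, store old+1 and stop; on failure,
       old is refreshed to the current value and the loop retries. */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;                              /* another thread won the race */
}

No thread ever blocks: a failed CAS means some other thread made progress,
which is the essence of lock-freedom.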

5. CRITICAL SECTION PROBLEM


Definition: ensure that when one process is executing in its critical section
(accessing shared data), no other process can execute in its critical section.

Desired properties:
- Mutual exclusion: at most one process in the critical section at a time.
- Progress: if no process is in its critical section, the selection of the next
process to enter cannot be postponed indefinitely.
- Bounded waiting (fairness): there is a bound on how many times other
processes can enter before a waiting process is admitted.

5.1 Classic Solutions (for two processes)


- Peterson's algorithm: a software solution using a turn variable and a flag
array. Correct under sequential consistency (a runnable C version follows this
list).
Pseudocode (for process i, where j is the other process):

flag[i] = true;                      // announce intent to enter
turn = j;                            // give priority to the other process
while (flag[j] && turn == j) { /* busy wait until it is our turn */ }
// critical section
flag[i] = false;                     // exit protocol: withdraw intent

- Dekker's algorithm: more complex, with busy waiting and a favored process variable.
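Because Peterson's algorithm is only correct under sequential consistency, a
plain-variable version can fail on modern relaxed-memory hardware. A minimal
runnable sketch using C11 sequentially consistent atomics (thread and loop
counts are illustrative):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static _Atomic int flag[2];
static _Atomic int turn;
static long counter = 0;               /* shared data */

static void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], 1);         /* I want to enter */
    atomic_store(&turn, j);            /* but you go first if you also want to */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

static void unlock(int i) { atomic_store(&flag[i], 0); }

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                     /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter);   /* 200000 */
    return 0;
}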

5.2 Hardware Solutions


- Test-and-Set instruction, Swap, Compare-and-Swap (CAS), Fetch-and-Increment.
- These atomic instructions help implement mutexes and semaphores safely.
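A spinlock built on test-and-set, sketched with the portable C11 atomic_flag
type (the function names are illustrative):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Atomically set the flag and obtain its previous value; keep spinning
       while the previous value was already set (lock held elsewhere). */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                              /* spin */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}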

6. LANGUAGE MECHANISMS FOR SYNCHRONIZATION


Programming languages and runtime libraries provide abstractions:

- Java: synchronized blocks, the volatile keyword, and the
java.util.concurrent library (Locks, Atomic classes).
- POSIX Threads (pthreads): pthread_mutex, pthread_cond, semaphores.
- .NET: lock statement, Monitor, SemaphoreSlim.
- Ada: protected objects and rendezvous.
- Go: channels and goroutines for message-passing style concurrency.

7. PROCESS DEADLOCKS

Deadlock: a set of processes, each waiting indefinitely for resources held by
others in the set.

7.1 Necessary Conditions (Coffman conditions)

- Mutual exclusion
- Hold and wait
- No preemption
- Circular wait

7.2 Resource Allocation Graph (RAG)


- Vertices: processes and resource types.
- Directed edges show allocation and request; cycles indicate possible deadlock.

7.3 Deadlock Handling Strategies


- Prevention: ensure at least one Coffman condition cannot hold.
* Eliminate mutual exclusion (not practical for many resources).
* Eliminate hold-and-wait: require processes to request all resources at once.
* Preemption: forcibly take resources from processes.
* Break circular wait: impose ordering of resource types.

- Avoidance: dynamically examine the resource-allocation state so the system
never enters an unsafe state (Banker's algorithm; see the sketch after this
list).
* Banker's algorithm: originally stated for single-instance resources,
generalized to multiple instances per resource type.
* Safe state definition and safe sequence.

- Detection and Recovery:
* Use detection algorithms periodically (e.g., wait-for graph).
* Recovery by process termination (abort) or resource preemption and rollback.
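A minimal sketch of the Banker's safety check for multiple resource instances.
The 3-process, 3-resource matrices are illustrative values, not taken from the
text:

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes */
#define R 3   /* resource types */

bool is_safe(int avail[R], int max[P][R], int alloc[P][R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;       /* Need[p] = Max[p] - Alloc[p] <= Work? */
            for (int r = 0; r < R; r++)
                if (max[p][r] - alloc[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {             /* p can finish: it releases resources */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                printf("P%d can finish\n", p);
                progress = true;
                done++;
            }
        }
        if (!progress) return false;   /* no process can proceed: unsafe */
    }
    return true;                       /* a safe sequence exists */
}

int main(void) {
    int avail[R]    = {3, 3, 2};
    int max[P][R]   = {{7, 3, 3}, {3, 2, 2}, {5, 0, 2}};
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    printf("state is %s\n", is_safe(avail, max, alloc) ? "safe" : "unsafe");
    return 0;
}

With these values the check finds the safe sequence P1, P2, P0.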

7.4 Deadlock Detection Algorithms


- Single instance per resource type: cycle detection in the RAG.
- Multiple instances: use Allocation, Request, and Available matrices with an
algorithm similar to the Banker's safety check.

7.5 Recovery Techniques


- Process termination (terminate all or some processes in cycle).
- Resource preemption and rollback (save process states, restart later).
- Killing processes with minimal cost heuristic.

8. VIRTUALIZATION

Virtualization abstracts hardware so that multiple operating systems or
isolated environments can run on one machine.

8.1 Types of Virtualization


- Full virtualization: hypervisor fully emulates hardware (VMs run unmodified guest OS).
- Para-virtualization: guest OS modified to cooperate with hypervisor (e.g., Xen PV).
- OS-level virtualization (containers): lightweight isolation using namespaces and
cgroups (Linux containers, Docker).

8.2 Virtual Machine Monitor (VMM)


- Responsibilities: CPU virtualization, memory virtualization, I/O virtualization.
- Techniques: binary translation, hardware-assisted virtualization (Intel VT-x, AMD-V).

8.3 Containerization

- Namespaces provide isolation (pid, net, mnt, ipc, uts).
- cgroups provide resource limits (CPU shares, memory limits).
- Benefits: lightweight, faster startup, smaller overhead.
- Limitations: less isolation than hypervisor-based VMs.

8.4 Virtualization Performance Considerations


- I/O and device passthrough (SR-IOV, virtio).
- Memory ballooning, overcommit, page sharing.
- Live migration: moving VMs between hosts with minimal downtime.

-----------------------------------------------------------------------

UNIT–II: DISTRIBUTED OPERATING SYSTEMS


-----------------------------------------------------------------------
1. INTRODUCTION TO DISTRIBUTED OPERATING SYSTEMS

A Distributed OS manages a collection of independent computers and makes them
appear to users as a single coherent system. Objectives: resource sharing,
scalability, transparency, fault tolerance, and concurrency.

2. ISSUES IN DISTRIBUTED OPERATING SYSTEMS


- Transparency: access, location, migration, replication transparency.
- Scalability: performance with growing nodes/users.
- Reliability and fault-tolerance: masking partial failures.
- Heterogeneity and interoperability.
- Concurrency control and synchronization.
- Resource management and load balancing.
- Security across distributed components.
- Naming and binding services.

3. ARCHITECTURE

3.1 Client-Server Model

- Clients request services; servers provide them. Stateless vs. stateful
servers.

3.2 Peer-to-Peer Model


- Nodes (peers) act as both clients and servers; decentralized.

3.3 Layered and Modular Architectures


- RPC layer, transport, naming, directory, resource managers.

3.4 Middleware

- Distributed objects, message queues, RPC frameworks (gRPC, Thrift), and
distributed file systems (NFS, AFS).

4. COMMUNICATION PRIMITIVES

4.1 Message Passing

- Synchronous vs asynchronous messaging.
- Blocking and non-blocking send/receive.
- Buffering: zero-capacity (rendezvous), bounded, unbounded.

4.2 Remote Procedure Call (RPC)


- Abstract remote calls as local function calls. Client sends request; server executes and
returns reply.
- Marshalling/unmarshalling, stubs, binding, handling failures and timeouts.

4.3 Remote Method Invocation (RMI)


- Higher-level abstraction for object-oriented distributed systems.

4.4 Group Communication


- Multicast and broadcast primitives, reliable multicast protocols.

5. LAMPORT'S LOGICAL CLOCKS


- Logical clocks provide a way to order events in distributed systems where no
global clock exists.
- Lamport's happens-before relation (→): a → b if a and b occur in the same
process with a before b, if a is the sending of a message and b its receipt,
or by transitivity through a chain of such pairs.
Rules to maintain clocks:

* Each process increments its counter before each event.
* When sending a message, include the counter value as a timestamp.
* On receiving, set local clock = max(local clock, timestamp) + 1.

Properties:

- If a → b then C(a) < C(b), but C(a) < C(b) does not imply a → b: Lamport
clocks are consistent with causality but cannot distinguish concurrent events.
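A sketch of the three clock rules in C (process_t and the helper names are
illustrative, not from the text):

typedef struct { int clock; } process_t;

/* Rule 1: tick before every local event. */
int local_event(process_t *p) { return ++p->clock; }

/* Rule 2: a send is an event; the message carries the new clock value. */
int send_event(process_t *p) { return ++p->clock; }

/* Rule 3: on receive, jump past the sender's timestamp. */
int receive_event(process_t *p, int msg_timestamp) {
    p->clock = (p->clock > msg_timestamp ? p->clock : msg_timestamp) + 1;
    return p->clock;
}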

6. CAUSAL ORDERING OF MESSAGES


- Causal ordering preserves the happens-before relation for message delivery.
- Vector clocks provide stronger ordering: each process i maintains a vector V
of n entries, one per process.
* On a local event: V[i] += 1.
* On send: attach a copy of V to the message.
* On receive: for each k, V[k] = max(V[k], V_received[k]); then V[i] += 1.
- Vector clocks capture causality precisely: a → b iff V(a) < V(b)
component-wise, and events are concurrent iff neither vector dominates.
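A C sketch of vector clock maintenance and the concurrency test (N and the
type names are illustrative):

#define N 3                            /* number of processes */

typedef struct { int v[N]; int id; } vclock_t;

void vc_local_event(vclock_t *c) { c->v[c->id]++; }

void vc_on_send(const vclock_t *c, int out[N]) {
    /* If the send counts as a local event, call vc_local_event first. */
    for (int k = 0; k < N; k++) out[k] = c->v[k];   /* attach a copy of V */
}

void vc_on_receive(vclock_t *c, const int in[N]) {
    for (int k = 0; k < N; k++)        /* component-wise maximum */
        if (in[k] > c->v[k]) c->v[k] = in[k];
    c->v[c->id]++;                     /* then tick locally */
}

/* Events are concurrent iff neither vector dominates the other. */
int vc_concurrent(const int a[N], const int b[N]) {
    int a_le_b = 1, b_le_a = 1;
    for (int k = 0; k < N; k++) {
        if (a[k] > b[k]) a_le_b = 0;
        if (b[k] > a[k]) b_le_a = 0;
    }
    return !a_le_b && !b_le_a;
}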
7. LAMPORT'S DISTRIBUTED MUTUAL EXCLUSION

Lamport's algorithm uses logical clocks and message passing to ensure mutual
exclusion among N processes.

Outline:

- A requesting process increments its logical clock, places a timestamped
REQUEST in its own queue, and sends it to all other processes.
- Each recipient enqueues the request and sends back a timestamped REPLY.
- A process enters the critical section when its request is at the head of its
local queue and it has received a message with a later timestamp from every
other process.
- On exit, it removes its request and broadcasts a RELEASE; recipients dequeue
the corresponding request.

Complexity:

- Message complexity: (N-1) REQUESTs + (N-1) REPLYs + (N-1) RELEASEs =
3(N-1) messages per critical-section entry.
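Every site must order queued requests identically. A sketch of the total order
used for this: compare timestamps, breaking ties by process id (the names are
illustrative):

typedef struct { int timestamp; int pid; } request_t;

/* Returns nonzero if request a precedes request b in the queue. */
int request_precedes(request_t a, request_t b) {
    if (a.timestamp != b.timestamp)
        return a.timestamp < b.timestamp;
    return a.pid < b.pid;              /* tie-break on process id */
}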

8. SUZUKI-KASAMI BROADCAST ALGORITHM


- Token-based algorithm for distributed mutual exclusion.
- There is a unique token; a process can enter critical section only if it has the token.
- If a process needs the token and does not have it, it broadcasts a REQUEST to everyone
with a sequence number.
- Processes maintain an array of highest seen requests and a queue for token passing.
- When the token holder finishes, it passes the token to the next requester in
the token's queue.

Advantages:

- At most N messages per critical-section entry ((N-1) REQUEST broadcasts plus
one token transfer), and zero messages when the requester already holds the
token; cheap under low contention.
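A sketch of the core data structures and decisions (N, the helper names, and
all message transport are illustrative; sends are left as comments):

#define N 4

typedef struct {
    int LN[N];                 /* LN[k]: seq number of k's last served request */
    int queue[N];              /* FIFO of waiting process ids */
    int head, tail, size;
} token_t;

int RN[N];                     /* highest request number seen from each process */

static int in_queue(const token_t *t, int k) {
    for (int i = 0, idx = t->head; i < t->size; i++, idx = (idx + 1) % N)
        if (t->queue[idx] == k) return 1;
    return 0;
}

/* On receiving REQUEST(k, seq): process k's request is outstanding exactly
   when RN[k] == LN[k] + 1. */
void on_request(token_t *t, int k, int seq, int have_idle_token) {
    if (seq > RN[k]) RN[k] = seq;
    if (have_idle_token && RN[k] == t->LN[k] + 1) {
        /* send the token to k */
    }
}

/* On leaving the critical section: record completion, enqueue every process
   with an outstanding request, then hand the token to the queue head. */
void on_release(token_t *t, int self) {
    t->LN[self] = RN[self];
    for (int k = 0; k < N; k++)
        if (RN[k] == t->LN[k] + 1 && !in_queue(t, k)) {
            t->queue[t->tail] = k;
            t->tail = (t->tail + 1) % N;
            t->size++;
        }
    if (t->size > 0) {
        int next = t->queue[t->head];
        t->head = (t->head + 1) % N;
        t->size--;
        (void)next;            /* send the token to next */
    }
}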

9. AGREEMENT PROTOCOLS

Consensus and agreement are fundamental to coordinating distributed processes.

9.1 Definitions

- Consensus problem: processes propose values and must agree on a single value
satisfying:
* Termination: every correct process eventually decides.
* Agreement: no two correct processes decide differently.
* Validity: the decided value was proposed by some process.

9.2 Synchronous vs Asynchronous Models


- Synchronous: bounds on message delivery and processing times.
- Asynchronous: no timing assumptions; consensus is harder or impossible in certain
failure models.

9.3 FLP Impossibility


- In purely asynchronous systems with even one crash failure, deterministic consensus is
impossible (FLP result).
- Practical systems use randomized algorithms or make timing assumptions (partial
synchrony).

9.4 Paxos (brief)


- Paxos is a family of protocols for reaching consensus in a network of unreliable
processors.
- Roles: proposers, acceptors, learners.
- Two phases: prepare and accept.
- Paxos ensures safety under arbitrary message delays; liveness additionally
requires some synchrony and a stable, distinguished leader.

9.5 Raft (brief)


- Easier-to-understand consensus algorithm with leader election, log replication and
safety properties.
- Used commonly in modern distributed systems.

10. BYZANTINE AGREEMENT PROBLEM


Byzantine failures: arbitrary/malicious failures where nodes can send conflicting info.

10.1 Byzantine Generals Problem


- Agreement in the presence of traitors requires that more than two-thirds of
the generals be loyal; classic result: with oral (unauthenticated) messages,
n >= 3f + 1 processes are needed to tolerate f Byzantine faults.

10.2 Solutions and Protocols

- Practical Byzantine Fault Tolerance (PBFT): primary/replica state-machine
replication; remains safe in asynchronous networks and tolerates f faulty
replicas out of 3f + 1 total.
- Byzantine consensus protocols are also used in blockchain contexts (e.g.,
Tendermint, HotStuff).

10.3 Classification of Agreement Protocols


- Deterministic synchronous, randomized asynchronous, cryptographic (using signatures)
approaches.

10.4 Practical Considerations


- Performance (message complexity), cryptographic assumptions, authentication, view
changes, state machine replication.

11. ADDITIONAL TOPICS (BRIDGING THEORY TO PRACTICE)


11.1 Distributed File Systems

- NFS, AFS, Google File System (GFS), HDFS: design goals, consistency models,
replication.

11.2 Clock Synchronization


- Cristian's algorithm, Berkeley algorithm, Network Time Protocol (NTP),
precision/accuracy trade-offs.

11.3 Distributed Transactions and Two-Phase Commit (2PC)


- Atomic commit across distributed nodes, coordinator-based commit protocol, blocking
behavior on failures.
- Three-Phase Commit (3PC) to reduce blocking under certain synchrony assumptions.

11.4 Fault Tolerance and Replication


- Active vs passive replication, primary-backup, quorum-based replication.

11.5 Distributed Scheduling and Load Balancing


- Task assignment strategies, distributed queues, work stealing, fairness.

-----------------------------------------------------------------------

EXERCISES, ILLUSTRATIONS, AND EXAM PREPARATION


-----------------------------------------------------------------------

- Multiple worked examples: a Banker's algorithm run, a Peterson's algorithm
proof sketch, Lamport logical-clock event ordering, and a Suzuki-Kasami
step-by-step trace.
- Practice questions for university exams (short answer, long answer, algorithm tracing,
proof-based).
- Suggested further reading and references: Tanenbaum & Van Steen, Distributed
Systems; Silberschatz, Galvin & Gagne; Coulouris et al.

-----------------------------------------------------------------------

REFERENCES

-----------------------------------------------------------------------

1. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne - Operating System Concepts.
2. Andrew S. Tanenbaum, Herbert Bos - Modern Operating Systems.
3. Andrew S. Tanenbaum, Maarten van Steen - Distributed Systems: Principles and
Paradigms.
4. Leslie Lamport - Time, Clocks, and the Ordering of Events in a Distributed System.
5. Maurice Herlihy & Nir Shavit - The Art of Multiprocessor Programming.