Operating Systems Theory (4th Sem)
COLLEGE ROLL NO: 226617
Program: BCA
Semester: 4th
Course Name: Operating Systems (Theory)
UNIT ➖01
Fundamentals of Operating System:
● Introduction to Operating System ➖
An operating system (OS) is a fundamental software component that manages computer hardware and
provides services for computer programs. It serves as an intermediary between the hardware and the user
applications, ensuring efficient and organised use of the computer system resources. The primary functions
of an operating system include:
1. File management
2. Memory management
3. Process management
4. Handling input and output
5. Controlling peripheral devices like disk drives and printers
6. Security ➖
An OS provides inbuilt security functions that help users browse and work more securely.
7. Special control programs ➖
An OS makes automatic changes to tasks through specific control programs.
8. Networking ➖
An OS manages the internet and network connections inside the computer system.
The OS also manages data processing, running applications, and handling memory.
Examples of operating systems include Microsoft Windows, macOS, Linux, Android, and iOS.
● Operating System as a Resource Manager ➖
The operating system manages the following system resources:
1. Processor (CPU): ➖
The operating system allocates CPU time among competing processes through scheduling,
deciding which process runs at any given moment.
2. Memory: ➖
Operating systems manage the computer's memory, ensuring that each process gets the
necessary space to store its code and data. This involves memory allocation, deallocation, and
protection mechanisms to prevent processes from interfering with each other.
3. Input/Output Devices: ➖
The operating system coordinates access to I/O devices, scheduling requests from different
processes and preventing conflicts.
4. Storage: ➖
Operating systems manage storage resources, including hard drives, solid-state drives, and
other storage media. This involves organizing data into files, managing file systems, and handling
read and write operations.
5. Network Resources:
In a networked environment, the operating system manages network resources, including
network interfaces and communication protocols. It facilitates data transfer between devices and
ensures proper network configuration.
6. Time: ➖
The OS maintains system time and provides a clock to synchronize processes. It is crucial for
various tasks such as scheduling, timestamping files, and coordinating events within the system.
● Structure of Operating System: Kernel and Shell ➖
1. Kernel: ➖
The kernel is the core component of the operating system. It is responsible for managing the
hardware resources and providing essential services to other parts of the system, including user
applications. The kernel operates in privileged mode, allowing it direct access to the hardware.
❖ Key functions of the kernel include:
A. Process Management: Creating and terminating processes, scheduling them on the CPU,
and performing context switches.
B. Memory Management: Allocating and deallocating memory, enforcing memory protection,
and handling virtual memory.
C. File System Management: Managing file operations, organising data on storage devices,
and providing a file system interface to user programs.
D. Device Drivers: Interfacing with hardware devices through device drivers, which are
modules that enable communication between the kernel and specific hardware
components.
E. Interrupt Handling: Managing hardware interrupts and exceptions to ensure proper
system operation.
F. Security and Access Control: Enforcing security policies, user authentication, and
access control to protect system resources.
2. Shell:
The shell is the user interface to the operating system. It is a command-line interpreter or
graphical user interface (GUI) that allows users to interact with the system by entering
commands. The shell interprets these commands and executes them by interacting with the
kernel and other system components.
❖ Key functions of the shell include:
A. Command Interpretation: Parsing and interpreting user commands entered through the
command line or GUI.
B. File Management: Facilitating operations such as copying, moving, deleting, and
renaming files and directories.
C. User Interface: Providing a means for users to interact with the system, whether through a
text-based command line or a graphical interface.
In summary, the kernel is the core of the operating system, managing hardware resources and
providing essential services, while the shell is the user interface that allows users to interact with
the system through commands and scripts. Together, they form the foundation of the operating
system's structure and functionality.
● Views of Operating System ➖
An operating system can be viewed from two perspectives: the user view, which emphasises
convenience and ease of use, and the system view, which treats the OS as a resource allocator
and control program.
The fundamentals of an operating system also encompass several key concepts and
functionalities that are essential for understanding its role and operation. Here are some
fundamental aspects of operating systems:
1. Virtual Memory: ➖
Definition: Virtual memory gives processes the illusion of more memory than is physically
available, using secondary storage as an extension of RAM.
2. Networking: ➖
Definition: Networking features enable communication between computers. The OS supports
network protocols and provides tools for network configuration and communication.
Here are some types of operating systems:
1. Batch operating systems
These systems were popular in the 1950s and 1960s and allowed many users to share a single
computer. They were designed to run a series of programs in order and did not allow user
interaction.
2. Multitasking operating systems
These systems allow the operating system to run multiple processes or applications
simultaneously, and switch between them rapidly.
● Program vs. Process ➖
1. Program: ➖
❖ Programs are passive and don't have an active presence in the system. They become
active when a user or the operating system initiates their execution. A program is
essentially a sequence of instructions that define how a task should be performed.
2. Process: ➖
❖ A process, on the other hand, is the execution of a program. It represents the active state
of a program in memory, along with the resources (CPU, memory, I/O) allocated to it during
its execution.
❖ When a program is loaded into memory and is actively being executed, it becomes a
process. Multiple processes can run concurrently in a multitasking environment.
❖ A process has its own memory space, program counter, registers, and other attributes that
define its current state. Processes can communicate with each other and share data, or
they may operate independently.
❖ Example: If you open a word processing application, a process is created to execute the
corresponding program. If you open multiple instances of the same application, each
instance is a separate process.
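To make the distinction concrete, here is a minimal C sketch (assuming a POSIX system such as
Linux) in which one program creates a second process with fork(); both processes run the same
program image but are separate processes with their own PIDs:
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        /* fork() creates a new process running the same program image. */
        pid_t pid = fork();

        if (pid == 0) {
            /* Child process: has its own copy of memory and its own PID. */
            printf("child  PID = %d\n", getpid());
        } else if (pid > 0) {
            /* Parent process: waits for the child to terminate. */
            wait(NULL);
            printf("parent PID = %d\n", getpid());
        }
        return 0;
    }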
● PCB ➖
Process Control Block (PCB) is a data structure that contains information about a specific
process in the system. The PCB is a fundamental concept for process management, and it
serves as a repository of key information needed by the operating system to manage and control
processes effectively. The information stored in a PCB includes fields such as:
1. Accounting Information:
Various statistics related to the process, such as the amount of CPU time used, elapsed time,
and other accounting details. This information is useful for performance monitoring and resource
allocation.
2. Process Control Information:
Flags and information that control the behaviour of the process, such as whether it can be
preempted, whether it's in the foreground or background, etc.
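As an illustration only, a PCB can be pictured as a C structure; the field names below are
hypothetical, and real kernels (for example, Linux's task_struct) hold many more fields:
    /* A simplified, hypothetical PCB layout. */
    struct pcb {
        int   pid;                   /* unique process identifier          */
        int   state;                 /* NEW, READY, RUNNING, WAITING, ...  */
        void *program_counter;       /* next instruction to execute        */
        unsigned long registers[16]; /* saved CPU registers                */
        int   priority;              /* scheduling priority                */
        void *page_table;            /* memory-management information      */
        long  cpu_time_used;         /* accounting information             */
    };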
The PCB is typically stored in the kernel space of the operating system and is associated with
each process in the system. When a context switch occurs, which happens when the operating
system switches the CPU from one process to another, the information in the PCB of the
currently running process is saved, and the PCB of the next process to be executed is loaded.
This ensures that the system can efficiently manage and switch between different processes
while preserving their states.
● Process States and State Transition Diagram ➖
In a state transition diagram, circles represent states and arcs represent transitions between
states.
In an operating system, a state transition diagram describes all of the states that an object can
have, the events under which an object changes state, the conditions that must be fulfilled before
the transition will occur, and the activities undertaken during the life of an object.
A process must go through a minimum of four states to be considered complete: New, Ready,
Running, and Terminated.
However, in case a process also requires I/O, the minimum number of states required is 5.
A process can transition from ready to running when the scheduler selects it for execution, or
from running to waiting when it requests an input/output operation.
● Scheduling Queues ➖
Scheduling queues in an operating system play a crucial role in managing the execution of
processes and determining which process gets access to the CPU at any given time. Depending
on the scheduling algorithm employed, processes are placed in different queues to represent
their current state and priority. Here are some common scheduling queues:
1. Job Queue: ➖ Contains all processes entering the system, waiting to be admitted into main
memory.
2. Ready Queue: ➖ Holds processes that are in main memory, ready to execute, and waiting for
the CPU.
3. Device (Waiting) Queue: ➖ Holds processes that are blocked, waiting for a particular I/O
device to become available.
● Types of schedulers ➖
Schedulers in operating systems are responsible for determining the order in which processes
are executed and managing the allocation of system resources. There are typically three types of
schedulers, each serving a specific purpose:
1. Long-Term Scheduler (Job Scheduler): ➖
The long-term scheduler is responsible for selecting processes from the job queue and admitting
them to the ready queue. Its primary goal is to control the degree of multiprogramming, deciding
how many processes should be in the main memory at any given time.
Characteristics:
❖ Invoked less frequently, usually when a process terminates or a new process arrives.
❖ Determines which processes are brought into the ready queue from the job pool.
❖ Focuses on optimising overall system performance.
2. Short-Term Scheduler (CPU Scheduler): ➖
The short-term scheduler selects one of the processes in the ready queue and allocates the
CPU to it.
Characteristics:
❖ Invoked frequently, potentially on every clock tick or when a process transitions to a
blocked state.
❖ Decides which process in the ready queue gets access to the CPU.
❖ Aims to provide fair and efficient CPU utilisation.
3. Medium-Term Scheduler (Swapper): ➖
The medium-term scheduler temporarily removes (swaps out) processes from main memory and
later swaps them back in, controlling the degree of multiprogramming.
Characteristics:
❖ Invoked less frequently than the short-term scheduler but more frequently than the
long-term scheduler.
❖ Decides which processes are moved to the backing store to free up main memory.
❖ Helps manage the system's degree of multiprogramming and prevents excessive demand
for main memory.
These schedulers work together to ensure effective process management, resource allocation,
and system performance. The long-term scheduler determines when new processes are brought
into the system, the medium-term scheduler handles processes in different states, and the
short-term scheduler focuses on the immediate allocation of the CPU.
The scheduling algorithms used by these schedulers can vary, and different operating systems
may employ different strategies based on the system's goals and requirements. Common
scheduling algorithms include First-Come-First-Serve (FCFS), Shortest Job Next (SJN), Round
Robin, Priority Scheduling, and Multilevel Queue Scheduling.
● Concept of Thread
A thread is a single sequence of activities that are executed within a process. It is also known as
the thread of execution or the thread of control.
Threads are also called lightweight processes because they have some of the properties of
processes. Each thread belongs to only one process. In an operating system that supports
multithreading, a process can have many threads.
Threads are used to improve the performance of applications. Each thread has its own program
counter, stack, and set of registers.
Threads can be of the same or different types. Multiple threads can run simultaneously and share
resources with each other within a process.
Threads can be managed either by the operating system kernel (kernel-level threads) or by a
user-level thread library. They require far fewer resources to run and much less time to switch
contexts than full processes.
Kernel-level threads can be scheduled more efficiently, resulting in better resource utilisation and
reduced overhead. If a kernel-level thread is blocked, the kernel can still schedule another thread
for execution.
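The following minimal C sketch (assuming POSIX threads) creates two threads inside one
process; each thread gets its own stack and registers while sharing the process's global memory:
    #include <stdio.h>
    #include <pthread.h>

    /* Each thread runs this function concurrently within the same process. */
    static void *worker(void *arg) {
        printf("thread %d running\n", *(int *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;

        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);

        /* Wait for both threads to finish before the process exits. */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }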
● Benefits, Types of threads in operating system
1. Concurrency:
Threads allow multiple tasks to execute concurrently within a single process. This enables
efficient utilisation of CPU resources and enhances system responsiveness.
2. Responsiveness: ➖
By using threads, an application can remain responsive to user input even while performing
time-consuming tasks. User interface operations can be executed in one thread while
background tasks run in another.
3. Resource Sharing:
Threads within the same process share the same address space and resources, such as file
handles and open sockets. This facilitates easy communication and data sharing between
threads.
Types of Threads: ➖
1. User-Level Threads: ➖
❖ Implemented and managed by a thread library in user space, without kernel involvement.
❖ Fast to create and switch between, but if one thread blocks on a system call, the whole
process may block.
2. Kernel-Level Threads: ➖
❖ Supported and managed by the operating system kernel.
❖ Kernel schedules and switches between threads.
❖ Provides better concurrency as the kernel can schedule threads independently.
❖ More overhead due to kernel involvement.
The choice of thread type and model depends on the specific requirements of the application, the
desired level of concurrency, and the characteristics of the underlying hardware and operating
system.
● Process synchronisation
Process synchronisation is a crucial concept in operating systems, especially in
multi-programming and multi-processing environments, where multiple processes may run
concurrently. It involves coordinating the execution of processes to ensure proper order of
execution, prevent data inconsistencies, and avoid conflicts for shared resources. Here are
some key mechanisms for process synchronisation:
1. Mutex (Lock): ➖
❖ Purpose: Ensures mutual exclusion, so that only one process at a time executes its critical
section.
❖ Mechanism: A process acquires the lock before entering its critical section and releases it
on exit; any other process attempting to acquire the lock must wait.
2. Semaphore: ➖
❖ Purpose: A more general synchronisation primitive that can be used for signalling and
mutual exclusion.
❖ Mechanism: A semaphore is an integer variable that can be incremented or decremented.
Processes can wait for a semaphore to become positive or signal (release) it to increment
its value.
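As a sketch, assuming POSIX semaphores and threads on Linux, a semaphore initialised to 1
can enforce mutual exclusion on a shared counter:
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;            /* binary semaphore guarding the shared counter */
    int shared_counter = 0;

    static void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* decrement; blocks while the value is 0 */
            shared_counter++;   /* critical section                       */
            sem_post(&mutex);   /* increment; wakes one waiting thread    */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);     /* initial value 1 => mutual exclusion */
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter); /* always 200000 */
        sem_destroy(&mutex);
        return 0;
    }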
● CPU Scheduling:
● Need of CPU scheduling
CPU scheduling is a critical component of operating systems, and its primary purpose is to
manage and allocate the CPU (Central Processing Unit) resources efficiently among multiple
processes. Several factors necessitate the need for CPU scheduling in an operating system:
1. Multiprogramming: ➖
In a multiprogramming environment, multiple processes are loaded into the main memory
simultaneously. CPU scheduling allows the operating system to switch between these processes,
ensuring that each process gets a fair share of CPU time.
2. Concurrency:
Modern computer systems are designed to support concurrent execution of multiple processes.
CPU scheduling is required to allow processes to run concurrently and make progress without
waiting for one process to complete before starting another.
5. Response Time: ➖
Users expect prompt responses from interactive applications. CPU scheduling helps in
minimising response time by quickly switching between processes, allowing for smooth and
responsive interactions with the system.
6. Throughput: ➖
CPU scheduling influences the system's throughput, which is the number of processes
completed in a given time period. A well-designed scheduling algorithm can enhance system
throughput by optimising the order in which processes are executed.
In summary, CPU scheduling is crucial for efficient and effective utilisation of CPU resources in
modern operating systems. It plays a vital role in managing processes, optimising system
performance, and providing a responsive and equitable computing environment for users.
● CPU and I/O Burst Cycle ➖
Process execution consists of a cycle of CPU execution (a CPU burst) followed by I/O waiting
(an I/O burst).
1. CPU Burst: ➖
Characteristics:
❖ During the CPU burst, the process executes instructions and performs computations.
❖ The length of the CPU burst varies from process to process and depends on the nature of
the computation being performed.
❖ After completing the CPU burst, the process typically transitions to an I/O-bound state or
waits for an external event, such as user input or data from an I/O device.
2. I/O Burst: ➖
Characteristics:
❖ During the I/O burst, the process is blocked, and the CPU is idle as the process awaits the
completion of an I/O operation (e.g., reading from disk, receiving data from a network).
❖ The duration of the I/O burst is determined by the speed of the I/O device and the specific
I/O operation being performed.
❖ Once the I/O operation is complete, the process transitions back to the CPU burst phase
and resumes execution.
This CPU I/O burst cycle repeats throughout the lifetime of a process. The process continues to
alternate between CPU bursts and I/O bursts until it completes its execution. The behaviour of
processes and their mix of CPU and I/O operations have implications for system performance
and the effectiveness of CPU scheduling algorithms.
Understanding the CPU I/O burst cycle is crucial for designing efficient scheduling strategies.
Processes that alternate between CPU and I/O operations are classified as I/O-bound, and those
with more extended CPU bursts are classified as CPU-bound. Different scheduling algorithms
and strategies may be employed based on the characteristics of the processes in the system to
optimise system performance and responsiveness.
● Pre-emptive vs. Non-pre-emptive scheduling
Pre-emptive and non-pre-emptive (or preemptive and cooperative) scheduling are two different
approaches to managing the execution of processes in an operating system. These scheduling
strategies determine how the operating system decides when to switch between different tasks or
processes.
Comparison:
1. Responsiveness:
Pre-emptive scheduling provides better responsiveness since the operating system can interrupt
a process and switch to another one, ensuring that no process monopolizes the CPU for too
long.
Non-pre-emptive scheduling relies on processes voluntarily giving up the CPU, which may lead
to slower response times, especially if a process does not yield the CPU.
2. Complexity:
Non-pre-emptive scheduling is often simpler to implement since it doesn't require forcibly
interrupting running processes.
Pre-emptive scheduling introduces the complexity of managing context switches and ensuring
fairness among processes.
3. Fairness:
Pre-emptive scheduling tends to be fairer as it prevents a single long-running process from
hogging the CPU.
Non-pre-emptive scheduling might lead to unfairness if a process doesn't release the CPU in a
reasonable time.
● Different scheduling criteria, scheduling algorithms (FCFS, SJF, Round-Robin,
Multilevel Queue)
Scheduling criteria and algorithms play a crucial role in managing the execution of processes in
an operating system. Different scheduling algorithms are designed to achieve various goals
based on different criteria. Here are some common scheduling criteria and algorithms:
Scheduling Criteria: ➖
1. CPU Utilisation: ➖
➢ Goal: Keep the CPU as busy as possible.
➢ Criterion: Maximise CPU utilisation.
2. Throughput:
➢ Goal: Maximise the number of processes completed per unit of time.
➢ Criterion: Maximise the number of processes finished.
3. Turnaround Time: ➖
➢ Goal: Minimise the total time from submission of a process to its completion.
➢ Criterion: Minimise turnaround time.
4. Waiting Time: ➖
➢ Goal: Minimise the time processes spend waiting in the ready queue.
➢ Criterion: Minimise waiting time.
5. Response Time: ➖
➢ Goal: Provide quick feedback for interactive requests.
➢ Criterion: Minimise the time between submitting a request and getting the first response.
6. Fairness: ➖
➢ Goal: Ensure fair allocation of CPU time among competing processes.
➢ Criterion: Avoid situations where one process dominates the CPU.
Scheduling Algorithms:
1. First-Come-First-Serve (FCFS): ➖
➔ Principle: The first process that arrives is the first to be executed.
➔ Advantage: Simple to understand and implement.
➔ Disadvantage: Poor turnaround and waiting times when long processes arrive first, known
as the "convoy effect" (see the sketch after this list).
2. Shortest Job Next (SJN) or Shortest Job First (SJF):
➔ Principle: Execute the process with the shortest burst time first.
➔ Advantage: Minimises waiting time and turnaround time.
➔ Disadvantage: Difficult to predict the burst time accurately.
3. Round Robin (RR): ➖
➔ Principle: Each process receives a fixed time quantum; when the quantum expires, the
process is preempted and moved to the back of the ready queue.
➔ Advantage: Fair allocation of CPU time and good response time for interactive systems.
➔ Disadvantage: Performance depends heavily on the size of the time quantum.
4. Multilevel Queue Scheduling: ➖
➔ Principle: The ready queue is partitioned into several queues (e.g., foreground and
background), each with its own scheduling algorithm.
➔ Advantage: Different classes of processes can be scheduled with policies suited to them.
➔ Disadvantage: Processes are fixed to their queue, which can be inflexible and may starve
low-priority queues.
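To illustrate FCFS and the convoy effect mentioned above, here is a small C sketch that
computes average waiting and turnaround times for three illustrative burst times (all processes
assumed to arrive at time 0); with one long job first, the averages are poor:
    #include <stdio.h>

    /* FCFS: processes run in arrival order; each process waits for the
       sum of the burst times of all processes ahead of it. */
    int main(void) {
        int burst[] = {24, 3, 3};      /* illustrative burst times */
        int n = 3, waiting = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            total_wait += waiting;             /* waiting time of process i  */
            total_tat  += waiting + burst[i];  /* turnaround = wait + burst  */
            waiting    += burst[i];
        }
        printf("average waiting time    = %.2f\n", (double)total_wait / n);
        printf("average turnaround time = %.2f\n", (double)total_tat / n);
        return 0;   /* prints 17.00 and 27.00 for this convoy-effect example */
    }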
UNIT ➖02
Memory Management:
● Introduction ➖
Memory management in operating systems is a critical aspect that involves handling the
computer's primary memory, also known as RAM (Random Access Memory). It is responsible for
efficient allocation and deallocation of memory to processes, ensuring that the system operates
smoothly and optimally. Memory management plays a crucial role in the overall performance and
stability of an operating system.
Here's an introduction to the key aspects of memory management:
1. Relocation:
➔ Goal: Allow programs to be placed anywhere in the memory.
2. Protection:
➔ Goal: Ensure that one process cannot interfere with the execution of another process or
the operating system.
3. Sharing:
➔ Goal: Enable multiple processes to share a portion of the memory.
4. Swapping:
➔ Goal: Move a process from main memory to secondary storage (and vice versa) to allow
more processes to fit into memory.
➔ Swapping Algorithm: Determines which processes to swap in and out.
5. Partitioning:
➔ Fixed Partitioning: Memory is divided into fixed-sized partitions, and each partition can hold
one process.
➔ Dynamic Partitioning: Memory is divided into variable-sized partitions to accommodate
processes of different sizes.
6. Fragmentation:
➔ Internal Fragmentation: Wasted memory within a partition due to the allocation of more
space than needed.
➔ External Fragmentation: Free memory blocks scattered throughout the system that are too
small to be allocated.
Two basic techniques for mapping logical memory to physical memory are paging and
segmentation:
1. Paging:
➔ Pages: Fixed-size blocks in both physical and logical memory.
➔ Page Table: Maps logical to physical addresses.
2. Segmentation:
➔ Segments: Variable-sized blocks representing different parts of a program (code, data,
stack).
➔ Segment Table: Maps each segment to its physical location.
● Address Binding ➖
Address binding in memory management refers to the process of associating a logical address
(also known as a virtual address) with a physical address in the computer's memory. The primary
goal of address binding is to provide a means for processes to access memory in a controlled
and organised manner. There are different phases of address binding in the context of
program execution:
1. Compile-Time Address Binding:
➔ Static Addresses: Addresses are assigned to program variables and instructions during
the compilation phase.
➔ Advantage: Fast execution, as the addresses are known beforehand.
➔ Disadvantage: Lack of flexibility, as the program cannot adapt to changes in memory
availability.
2. Load-Time Address Binding:
➔ Addresses are assigned when the program is loaded into memory, relative to the starting
address of the program.
3. Execution-Time (Run-Time) Address Binding:
➔ Binding is delayed until run time, so a process can be moved in memory during execution;
this requires hardware support such as a memory management unit (MMU).
Example: ➖
Consider a simple program with a variable x: int x = 10; With compile-time binding, x is
assigned a fixed address when the program is compiled; with run-time binding, the address of x
is translated by the MMU each time it is accessed.
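A tiny C sketch makes the idea visible: the address printed for x is a logical (virtual)
address, which the MMU maps to a physical address at run time:
    #include <stdio.h>

    int x = 10;   /* with compile- or load-time binding, x gets a fixed address */

    int main(void) {
        /* The printed value is a logical (virtual) address; the MMU maps
           it to a physical address at run time. */
        printf("x lives at logical address %p\n", (void *)&x);
        return 0;
    }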
In modern operating systems, dynamic binding and virtual memory techniques are commonly
used to provide flexibility, efficient memory utilisation, and isolation between processes. These
methods involve the use of page tables, segmentation, and demand paging to manage the
mapping between logical and physical addresses dynamically.
Related memory management functions include: ➖
1. Loading ➖
The process of loading a program from secondary memory to the main memory for execution.
2. Linking ➖
The process of collecting and combining various pieces of code and data into a single file that
can be loaded into memory and executed.
3. Protection ➖
A mechanism that controls the access of programs, processes, or users to the resources defined
by a computer system. Protection enforces memory protection by preventing processes from
accessing or modifying memory regions that do not belong to them.
Memory management facilitates data sharing among processes, enabling more efficient resource
utilisation.
● Paging and Segmentation
Paging and segmentation are two fundamental memory management techniques used in
operating systems to manage the allocation of memory to processes efficiently. Each technique
offers unique advantages and is suited to different types of systems and applications.
Let's explore both paging and segmentation:
1. Paging: ➖
Definition: Paging is a memory management scheme that divides physical memory into fixed-size
blocks called "frames" and logical memory into fixed-size blocks called "pages." A process's
pages are mapped into whichever frames are free.
Key Features: ➖
➔ Fixed-size Blocks: Both physical memory and logical memory are divided into fixed-size
blocks.
➔ Address Translation: Logical addresses generated by the CPU are divided into page
numbers and page offsets. Page numbers are used to index into a page table, which
translates them into physical frame numbers (see the sketch after this list).
➔ Flexible Allocation: Allows processes to be allocated non-contiguous memory locations.
➔ Simplifies Memory Management: Eliminates external fragmentation by using fixed-size
pages.
➔ Page Faults: When a required page is not present in memory, a page fault occurs, and the
operating system brings the required page into memory from secondary storage (e.g.,
disk).
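As the address-translation point above notes, here is a minimal C sketch of the idea, using an
assumed 4 KiB page size and a hypothetical page table array:
    #include <stdio.h>

    #define PAGE_SIZE 4096   /* assumed 4 KiB pages */

    /* Hypothetical page table: page_table[p] holds the physical frame
       number for logical page p. */
    int page_table[] = {5, 2, 7, 0};

    unsigned long translate(unsigned long logical) {
        unsigned long page   = logical / PAGE_SIZE;  /* page number */
        unsigned long offset = logical % PAGE_SIZE;  /* page offset */
        return (unsigned long)page_table[page] * PAGE_SIZE + offset;
    }

    int main(void) {
        unsigned long la = 2 * PAGE_SIZE + 100;      /* page 2, offset 100 */
        printf("logical %lu -> physical %lu\n", la, translate(la));
        return 0;   /* page 2 maps to frame 7, so this prints 28772 */
    }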
Advantages of Paging: ➖
➔ Simplicity: Paging is simpler to implement compared to segmentation.
➔ Flexible Allocation: Allows processes to be allocated memory in non-contiguous chunks.
➔ Solves Fragmentation: Eliminates external fragmentation by using fixed-size pages.
Disadvantages of Paging: ➖
➔ Internal Fragmentation: Can suffer from internal fragmentation, where the last page of a
process may not be fully utilised.
➔ Complexity in Page Table Management: Large page tables can be complex to manage,
especially in systems with large address spaces.
2. Segmentation: ➖
Definition: Segmentation is a memory management technique that divides the logical address
space of a process into variable-sized segments, such as code, data, stack, etc. Each segment is
treated as a logical unit and is assigned a base address and length.
Key Features: ➖
➔ Variable-sized Segments: Allows for a more flexible allocation of memory compared to
paging.
➔ Logical Organisation: Corresponds to the logical structure of the program, such as code,
data, and stack segments.
➔ Address Translation: Each segment is assigned a base address and length. Translation of
logical addresses to physical addresses involves adding the base address of the segment
to the logical address.
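A short C sketch of segmentation-style translation, with a hypothetical segment table; note the
limit check that enforces protection:
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical segment table entry: base address and segment length. */
    struct segment { unsigned long base, limit; };

    struct segment seg_table[] = {
        {0x1000, 0x400},   /* segment 0: code */
        {0x4000, 0x800},   /* segment 1: data */
    };

    unsigned long translate(int seg, unsigned long offset) {
        if (offset >= seg_table[seg].limit) {   /* protection check */
            fprintf(stderr, "segmentation fault: offset out of range\n");
            exit(1);
        }
        return seg_table[seg].base + offset;    /* base + offset */
    }

    int main(void) {
        printf("seg 1, offset 0x10 -> physical %#lx\n", translate(1, 0x10));
        return 0;   /* prints 0x4010 */
    }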
Advantages of Segmentation: ➖
➔ Logical Organisation: Reflects the logical structure of programs, making it easier to
manage and understand memory usage.
➔ Supports Growing Data Structures: Allows data structures to grow dynamically without the
need for contiguous memory allocation.
Disadvantages of Segmentation: ➖
➔ Fragmentation: Can suffer from external fragmentation, where free memory is fragmented
into small blocks that are unusable.
➔ Complex Address Translation: Requires additional hardware support or software overhead
for address translation, especially in systems with large numbers of segments.
Comparison: ➖
1. Flexibility: Segmentation offers more flexibility in memory allocation due to variable-sized
segments, while paging uses fixed-size blocks.
2. Fragmentation: Paging eliminates external fragmentation but may suffer from internal
fragmentation. Segmentation can suffer from external fragmentation.
3. Address Translation: Paging involves translation of page numbers to frame numbers using
a page table. Segmentation involves translation of segment numbers to base addresses.
4. Implementation Complexity: Segmentation can be more complex to implement and
manage due to variable-sized segments and potential fragmentation issues.
● Virtual memory: basic concepts of demand paging, page replacement algorithms
Virtual memory is a crucial concept in modern operating systems, allowing processes to access
more memory than physically available and providing several benefits such as efficient memory
utilisation, protection, and simplifying programming. Two fundamental concepts in virtual memory
management are demand paging and page replacement algorithms.
Demand Paging: ➖
With demand paging, a page is loaded into main memory only when a process first references it,
rather than loading the entire process at once.
Key Features: ➖
A. Lazy Loading: Pages are loaded into memory only when they are accessed, reducing initial
memory overhead.
B. Efficient Use of Memory: Only the pages needed for execution are loaded, allowing for
efficient memory utilisation.
C. Reduced I/O Overhead: Pages are loaded into memory on-demand, reducing the initial I/O
overhead compared to loading the entire process into memory upfront.
Page Replacement Algorithms: ➖
When a page fault occurs and no free frame is available, a page replacement algorithm chooses
a page to evict. Common algorithms include:
1. Optimal (OPT): ➖
➔ Principle: Evicts the page that will not be used for the longest period of time in the future.
➔ Advantage: Gives the lowest possible page-fault rate.
➔ Disadvantage: Requires knowledge of future references, which is not feasible in practice.
2. FIFO (First-In-First-Out):
➔ Principle: Evicts the page that was brought into memory earliest.
➔ Advantage: Simple implementation.
➔ Disadvantage: Suffers from "Belady's Anomaly," where increasing the number of
frames can increase the page fault rate (see the sketch after this list).
3. LRU (Least Recently Used):
➔ Principle: Evicts the page that has not been accessed for the longest time.
➔ Advantage: Tends to perform well in practice.
➔ Disadvantage: Requires maintaining a record of the access history for each page, which
can be costly.
4. LFU (Least Frequently Used):
➔ Principle: Evicts the page that has been accessed the least frequently.
➔ Advantage: Suitable for scenarios where repeated accesses to the same pages occur.
➔ Disadvantage: May suffer from the "frequency anomaly," where a page that was heavily
used in the past but not recently is evicted.
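As referenced under FIFO above, here is a small C simulation that counts page faults for an
illustrative reference string with 3 frames; the same string with 4 frames produces 10 faults
instead of 9, demonstrating Belady's Anomaly:
    #include <stdio.h>

    /* Simulate FIFO page replacement and count page faults. */
    int main(void) {
        int refs[]  = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = 12, nframes = 3;
        int frames[3] = {-1, -1, -1};
        int next = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < nframes; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                frames[next] = refs[i];        /* evict the oldest page (FIFO) */
                next = (next + 1) % nframes;
                faults++;
            }
        }
        printf("page faults = %d\n", faults);  /* 9 faults with 3 frames */
        return 0;
    }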
UNIT ➖03
Input / Output Device Management:
● I/O Devices and Controllers ➖
I/O Controllers:
I/O controllers (also called I/O processors or I/O interfaces) act as intermediaries between I/O
devices and the CPU.
They manage the communication between the CPU and the I/O devices, handling data transfer,
status monitoring, and error handling.
Device Drivers: ➖
1. Definition: Device drivers are software components that facilitate communication between
the operating system and hardware devices.
2. Responsibilities:
➔ Initializing and configuring devices during system startup.
➔ Handling device-specific commands and requests.
➔ Managing data transfer between devices and memory.
➔ Handling interrupts and managing device status.
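Conceptually, a driver exposes a fixed set of operations that the OS can call; the C structure
below is a hypothetical sketch of such an interface, not any real kernel's API:
    /* A hypothetical device-driver interface: the OS calls these
       functions, and each driver supplies device-specific versions. */
    struct device_driver {
        int  (*init)(void);                      /* configure device at startup    */
        int  (*read)(char *buf, int len);        /* transfer data device -> memory */
        int  (*write)(const char *buf, int len); /* transfer data memory -> device */
        void (*handle_interrupt)(void);          /* service device interrupts      */
    };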
Disk Storage: ➖
1. Hard Disk Drives (HDDs) and Solid State Drives (SSDs): ➖
HDDs use rotating magnetic disks to store data, while SSDs use flash memory.
Both provide non-volatile storage for operating system files, user data, and applications.
2. Disk Management: ➖
Disk Formatting: Preparing a disk for use by initialising its file system structures.
Disk Maintenance: Tasks such as disk defragmentation, error checking, and bad sector detection.
RAID (Redundant Array of Independent Disks): Techniques for combining multiple disks into a
single logical unit to improve performance, reliability, or both.
Conclusion: ➖
I/O devices and controllers, device drivers, and disk storage are integral components of operating
systems, facilitating communication between the hardware and software layers and enabling
efficient input/output operations. Understanding these components is essential for designing and
managing robust and high-performance operating systems.
● File Management: Basic concepts, file operations, access methods, directory
SA
structures and management, remote file systems;
File management is a core aspect of operating systems, responsible for organising and
manipulating data stored on secondary storage devices such as hard drives and SSDs. Here are
the basic concepts, operations, access methods, directory structures, and management
techniques, as well as remote file systems commonly found in operating systems:
Basic Concepts: ➖
1. File:
A collection of related information stored on secondary storage.
Files can represent documents, programs, images, databases, etc.
2. File Operations: ➖
Typical operations include creating, writing, reading, repositioning within (seeking), deleting,
and truncating a file.
Access Methods: ➖
1. Sequential Access: ➖
A. Accessing data in a linear manner, from the beginning to the end of the file.
B. Suitable for tasks like reading logs or processing data sequentially.
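A minimal C sketch of sequential access, reading a hypothetical file log.txt line by line from
beginning to end:
    #include <stdio.h>

    int main(void) {
        /* Sequential access: read the file from beginning to end. */
        FILE *fp = fopen("log.txt", "r");   /* hypothetical file name */
        char line[256];
        if (!fp) return 1;
        while (fgets(line, sizeof line, fp) != NULL)
            fputs(line, stdout);            /* process each line in order */
        fclose(fp);
        return 0;
    }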
2. Direct (Random) Access: ➖
A. Accessing records at an arbitrary position in the file, identified by a block or record number.
B. Suitable for databases and applications that read and update records in place.
File Management Functions: ➖
The operating system keeps track of files and their locations, allocates and frees storage space,
and enforces access rights on behalf of users.
Conclusion: ➖
File management is a fundamental aspect of operating systems, responsible for organizing,
accessing, and manipulating data stored on secondary storage devices. Understanding basic file
concepts, operations, access methods, directory structures, remote file systems, and
management techniques is essential for efficient and secure data handling in operating systems.
● File Protection ➖
File protection is important for data security and ensures that sensitive information remains
confidential and secure. Operating systems provide various mechanisms and techniques to
protect files, such as:
1. File permissions
2. Encryption
3. Access control lists
4. Auditing
5. Physical file security
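As a sketch of Unix-style file permissions (assuming a POSIX system), the code below reads the
owner's permission bits of a hypothetical file report.txt:
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* Read the Unix permission bits of a (hypothetical) file. */
        struct stat st;
        if (stat("report.txt", &st) != 0) return 1;

        printf("owner: %c%c%c\n",
               (st.st_mode & S_IRUSR) ? 'r' : '-',
               (st.st_mode & S_IWUSR) ? 'w' : '-',
               (st.st_mode & S_IXUSR) ? 'x' : '-');
        /* chmod("report.txt", 0640) would grant rw- to the owner and
           r-- to the group, denying all access to others. */
        return 0;
    }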
Here are some other file protection features in operating systems:
1. Secure File System (SFS): Uses cryptographic techniques to provide file data security.
2. NTFS: The default file system for the Windows operating system family. It offers a flexible
security model that allows administrators to control how users and groups can interact with
folders and files.
3. Linux: Built on a Unix-like architecture, which is known for its security features and
robustness. It also has built-in security features like file permissions and user accounts that
help to prevent unauthorised access to system files and resources.
UNIT ➖04
Advanced Operating Systems:
● Introduction to Distributed Operating system, Characteristics, architecture, Issues,
Communication & Synchronisation;
A distributed operating system manages a collection of independent, networked computers and
presents them to users as a single coherent system. Key characteristics include:
1. Resource Sharing: ➖
Distributed operating systems enable sharing of hardware resources such as processors,
memory, and storage across multiple nodes in the network.
2. Transparency: ➖
Users perceive the distributed system as a single, cohesive entity, hiding the complexities of the
underlying network and hardware.
3. Concurrency: ➖
H
Multiple processes can execute concurrently on different nodes, increasing system throughput
and performance.
4. Scalability:➖
Distributed systems can easily scale by adding or removing nodes, allowing for increased
processing power and storage capacity.
5. Fault Tolerance: ➖
The system can continue to operate despite failures of individual nodes, improving overall
reliability.
6. Heterogeneity: ➖
Distributed systems can consist of diverse hardware and software platforms, allowing integration
of different technologies.
Issues in Distributed Operating Systems: ➖
1. Performance: ➖
Minimising communication latency and overhead across the network to maintain acceptable
performance.
2. Security: ➖
Protecting data and resources from unauthorised access, interception, and tampering.
3. Scalability: ➖
Managing system growth and ensuring that performance scales with the number of nodes.
● Introduction Multiprocessor Operating system, Architecture, Structure,
Synchronisation & Scheduling;
A Multiprocessor Operating System (MPOS) is an operating system designed to run on systems
with multiple processors (also known as multiprocessor systems or parallel systems). These
systems have more than one central processing unit (CPU) capable of executing multiple tasks
simultaneously. Here's an introduction to multiprocessor operating systems, covering
architecture, structure, synchronisation, and scheduling:
Synchronisation in Multiprocessor Operating Systems: ➖
Because several processors may access shared memory at the same time, multiprocessor
systems rely on hardware-supported primitives such as atomic test-and-set instructions and
spinlocks to protect shared kernel data structures.
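A minimal sketch of a spinlock built from C11 atomics; acquire() spins on an atomic
test-and-set until the lock is free:
    #include <stdatomic.h>

    /* A minimal spinlock built on an atomic test-and-set flag. */
    atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        /* Spin until the flag was previously clear (lock was free). */
        while (atomic_flag_test_and_set(&lock))
            ;  /* busy-wait: acceptable on multiprocessors for short waits */
    }

    void release(void) {
        atomic_flag_clear(&lock);
    }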
Conclusion:
Multiprocessor operating systems provide a scalable and efficient platform for parallel computing
by harnessing the power of multiple processors. They manage system resources, coordinate task
execution, and ensure synchronisation and scheduling across multiple processors. With their
sophisticated architecture, structure, synchronisation mechanisms, and scheduling algorithms,
multiprocessor operating systems enable high-performance computing and support a wide range
of parallel applications and workloads.
● Introduction to Real-Time Operating System, Characteristics, Structure &
Scheduling
A Real-Time Operating System (RTOS) is an operating system designed to meet the stringent
timing requirements of real-time systems, where tasks must be completed within specific
deadlines. RTOSes are commonly used in embedded systems, industrial automation, automotive
systems, aerospace applications, and other environments where timing predictability is critical.
Characteristics of Real-Time Operating Systems: ➖
1. Determinism: ➖
I. RTOSes provide deterministic behaviour, where the timing of task execution and system
response is predictable and consistent.
2. Minimal Overhead: ➖ RTOSes are designed to have low overhead, minimising
context-switching time and maximising system responsiveness.
Structure of Real-Time Operating Systems:
1. Kernel:➖
I. The kernel of an RTOS provides core services such as task scheduling, interrupt handling,
memory management, and interprocess communication.
II. Kernels in RTOSes are typically small and optimised for fast response times.
2. Timers and Clocks: ➖
I. RTOSes include timers and clock management facilities for scheduling periodic tasks and
tracking time-sensitive operations.
Scheduling in Real-Time Operating Systems: ➖
Real-time schedulers are typically priority-based and preemptive. Common algorithms include
Rate-Monotonic Scheduling, which assigns static priorities based on task periods, and
Earliest-Deadline-First, which assigns dynamic priorities based on task deadlines.
Conclusion: ➖
Real-Time Operating Systems play a crucial role in systems where timing predictability and
responsiveness are critical requirements. They provide deterministic behaviour, task
prioritisation, efficient interrupt handling, and low overhead to meet the timing constraints of
real-time applications. With their specialised structure, scheduling algorithms, and support for
task management, RTOSes enable the development of reliable, high-performance systems
across a wide range of domains.
● Case study of Linux operating system
Linux is one of the most widely used operating systems in the world, powering everything from
personal computers and servers to embedded systems and supercomputers. It has a rich history
and a vibrant open-source community driving its development.
Let's delve into a case study of the Linux operating system:
Background: ➖
Linux was created by Linus Torvalds in 1991 as a Unix-like operating system kernel. Initially
developed as a hobby project, it quickly gained popularity due to its open-source nature and the
collaborative efforts of developers worldwide. Today, Linux distributions (distros) like Ubuntu,
Fedora, Debian, and CentOS are used across various industries and platforms.
Key Aspects of the Linux Development Model: ➖
1. Community-Driven Development:
I. Linux development follows a collaborative model where thousands of developers
contribute to the kernel's development.
II. The Linux community actively reviews, tests, and submits patches to improve the kernel's
performance, security, and features.
III. Torvalds oversees the development process, maintaining the mainline kernel tree and
releasing stable versions.
2. Modular Architecture: ➖
I. The Linux kernel follows a modular design, consisting of various subsystems like process
management, memory management, filesystems, networking, and device drivers.
II. Each subsystem is responsible for a specific functionality, allowing for easier maintenance,
debugging, and scalability.
3. Continuous Improvement: ➖
I. The Linux kernel is constantly evolving, with new features and improvements being added
regularly.
II. The development process includes rigorous testing, code review, and performance
optimization to ensure stability and reliability.
4. Wide Hardware Support: ➖
I. Linux runs on an enormous range of hardware, from embedded devices and smartphones to
desktops, servers, and supercomputers.
5. Security Focus: ➖
I. Linux prioritises security, with features like access control, memory protection, and secure
boot mechanisms.
II. The kernel development process includes regular security audits and vulnerability patches
to address emerging threats.
Conclusion: ➖
The Linux operating system stands as a testament to the power of open-source collaboration and
community-driven development. Its modular architecture, continuous improvement, wide
hardware support, security focus, and adaptability make it a preferred choice for a diverse range
of applications and industries. As Linux continues to evolve, it remains a cornerstone of modern
computing, driving innovation and empowering users worldwide.