E-Content OS Full Notes Unit - 1 To 4
UNIT – I: INTRODUCTION
Definition of OS – Mainframe Systems – Desktop Systems – Multiprocessor Systems –
Distributed Systems – Real Time Systems – Handheld Systems – Operating System
Structure – System Components – Operating System Services – System Calls – System
Programs.
UNIT – II: PROCESS MANAGEMENT
Process Concept – Process Scheduling – Operations on Processes – Cooperating
Processes – Inter-Process Communication. CPU Scheduling: Scheduling Concepts –
Scheduling Criteria – Scheduling Algorithms – Multiprocessor Scheduling – Real Time
Scheduling.
UNIT – III: PROCESS SYNCHRONIZATION
The Critical Section Problem – Semaphores – Critical Regions – Monitors – Deadlock
Characterization – Handling Deadlocks – Deadlock Prevention – Deadlock Avoidance –
Deadlock Detection – Deadlock Recovery.
UNIT – IV: MEMORY MANAGEMENT
Swapping – Contiguous Memory Allocation – Paging – Segmentation. Virtual Memory
– Demand Paging – Page Replacement. File System Interface: File Concept – Access
Methods – Directory Structure. File System Implementation: File System Structure
– Allocation Methods – Free Space Management.
UNIT – V: PROTECTION AND SECURITY
Protection: Goals of Protection – Access Matrix – Implementation of Access Matrix.
Security: The Security Problem – User Authentication – System Threats. Case Study:
Linux System.
TEXT BOOK:
1. Silberschatz, Galvin and Gagne, Operating System Concepts, 6th Edition, John Wiley &
Sons, Inc., 2004.
REFERENCE BOOKS:
1. Milenkovic M., Operating System Concepts and Design, 2nd Edition, McGraw-Hill, 1992.
OPERATING SYSTEMS
Unit – I
Introduction:
An operating system (OS) is an intermediary between users and computer hardware. It
provides an environment in which a user can execute programs conveniently and
efficiently.
Definition of OS
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
Evolution of OS:
1. Mainframe Systems
Early mainframe systems reduced setup time by batching similar jobs. Automatic job
sequencing – which automatically transfers control from one job to another – was the
first rudimentary operating system. It was implemented by a resident monitor:
• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor
4. Distributed Systems
➢ In a distributed system, the different machines are connected in a network and each
machine has its own processor and its own local memory.
➢ In this system, the operating systems on all the machines work together to manage
the collective network resource.
➢ It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
• With the resource sharing facility, a user at one site may be able to use the resources
available at another.
• Speed up the exchange of data with one another via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially continue
operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
5. Real Time Operating Systems
A real-time system is one in which the response is guaranteed within a specified timing
constraint, or in which the system must meet a specified deadline. Examples: flight
control systems, real-time monitors, etc.
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time
systems, secondary storage is limited or missing, with data stored in ROM; in these
systems virtual memory is almost never found.
Soft real-time systems are less restrictive. A critical real-time task gets priority over
other tasks and retains that priority until it completes. Soft real-time systems have more
limited utility than hard real-time systems. Examples: multimedia, virtual reality, and
advanced scientific projects such as undersea exploration and planetary rovers.
6. Handheld Systems
Handheld systems include personal digital assistants (PDAs) and cellular telephones. They
have limited memory, slow processors, and small display screens, so the operating system
and applications must manage memory and processor resources efficiently.
Operating-System Structure
Simple Structure
Such operating systems do not have a well-defined structure and are small, simple, and
limited systems. The interfaces and levels of functionality are not well separated. MS-DOS
is an example of such an operating system: in MS-DOS, application programs are able to
access the basic I/O routines. These types of operating systems cause the entire system to
crash if one of the user programs fails.
Layered structure:
In this structure the OS is broken into a number of layers (levels), giving much more
control over the system. The bottom layer (layer 0) is the hardware and the topmost layer
(layer N) is the user interface. The layers are designed so that each layer uses the functions
of lower-level layers only. This simplifies debugging: if the lower-level layers have already
been debugged and an error occurs, the error must be on the current layer, since the layers
below it are known to be correct.
The main disadvantage of this structure is that at each layer the data needs to be modified
and passed on, which adds overhead to the system. Moreover, careful planning of the layers
is necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.
Micro-kernel:
This structure designs the operating system by removing all non-essential components from
the kernel and implementing them as system and user programs. This results in a smaller
kernel called the micro-kernel.
An advantage of this structure is that new services are added to user space and do not
require the kernel to be modified. It is thus more secure and reliable: if a service fails, the
rest of the operating system remains untouched. Mac OS is an example of this type of OS.
Operating-System Components
Although Mac, Unix, Linux, Windows, and other operating systems do not have the same
structure, most operating systems share similar components, such as file, process,
memory, and I/O device management:
➢ Memory Management
➢ Processor Management.
➢ Device Management
➢ File Management
➢ Security Management
➢ I/O Device Management
➢ Secondary-Storage Management
➢ Network Management
Memory Management
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in main memory. The operating system does the
following activities for memory management.
• Keeps track of primary memory, i.e. which parts of it are in use and by whom, and
which parts are not in use.
• In multiprogramming, OS decides which process will get memory when and how
much.
• Allocates the memory when the process requests it to do so.
• De-allocates the memory when the process no longer needs it or has been
terminated.
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor,
when, and for how much time. This function is called process scheduling. The operating
system does the following activities for processor management.
• Keeps track of the processor and the status of processes. The program responsible
for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when processor is no longer required.
Device Management
OS manages device communication via their respective drivers. Operating System does the
following activities for device management.
• Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
• Decides which process gets the device when and for how much time.
• Allocates the device in the efficient way.
• De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. The operating system does the following
activities for file management.
• Keeps track of information, location, uses, status etc. The collective facilities are
often known as file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
Security Management
The various processes in an operating system need to be secured from one another's
activities. For that purpose, various mechanisms can be used to ensure that processes
wishing to operate on files, memory, the CPU, and other hardware resources have proper
authorization from the operating system.
For example, memory-addressing hardware helps to confirm that a process executes
within its own address space. The timer ensures that no process can retain control of the
CPU without relinquishing it.
Finally, no process is allowed to do its own I/O directly, which protects the integrity of
the various peripheral devices.
I/O Device Management
An important function of the operating system is to hide the variations of specific
hardware devices from the user.
Secondary-Storage Management
The most important task of a computer system is to execute programs. These programs,
along with the data they access, must be in main memory during execution. Main memory
is too small to store all data and programs permanently.
The computer system offers secondary storage to back up the main Memory. Today
modern computers use hard drives/SSD as the primary storage of both programs and data.
However, the secondary storage management also works with storage devices, like a USB
flash drive, and CD/DVD drives.
Programs such as assemblers and compilers are stored on the disk until loaded into memory,
and they then use the disk as both the source and destination of their processing.
Secondary-storage management involves the following activities:
• Storage allocation
• Free space management
• Disk scheduling
Network Management
Network management is the process of administering and managing computer
networks. It includes performance management, fault analysis, provisioning of
networks, and maintaining the quality of service.
A distributed system is a collection of computers/processors that do not share memory
or a clock. In this type of system, all the processors have their own local memory, and
the processors communicate with each other using different communication lines, such
as fiber optics or telephone lines.
Operating System Services
An operating system provides services both to users and to programs.
Program Execution
The operating system handles many kinds of activities, from user programs to system
programs such as printer spoolers, name servers, and file servers. Each of these activities
is encapsulated as a process. A process includes the complete execution context (code to
execute, data to manipulate, registers, OS resources in use). The following are the major
activities of an operating system with respect to program execution.
I/O Operation
The I/O subsystem is comprised of I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the user, as the device
driver knows the peculiarities of the specific device. The operating system manages the
communication between the user and the device drivers. The following are the major
activities of an operating system with respect to I/O operation.
• I/O operation means read or write operation with any file or any specific I/O device.
• Program may require any I/O device while running.
• Operating system provides the access to the required I/O device when required.
File System Manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage) for long-term storage. A few examples of storage media are magnetic
tape, magnetic disk, and optical disk drives such as CD and DVD. Each of these media
has its own properties, such as speed, capacity, data transfer rate, and data access method.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. The following are the major activities
of an operating system with respect to file management.
Communication
In the case of distributed systems, which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication
between processes. Multiple processes communicate with one another through
communication lines in the network. The OS handles routing and connection strategies,
and the problems of contention and security. The following are the major activities of an
operating system with respect to communication.
Error handling
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory
hardware. The following are the major activities of an operating system with respect to
error handling.
Resource Management
In computer systems with multiple users and the concurrent execution of multiple
processes, the various resources must be managed and the processes protected from one
another's activities.
System Calls
A system call is the programmatic way in which a program requests a service from the
kernel of the operating system. System calls provide the interface between a process and
the operating system.
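As a minimal illustration (assuming a POSIX system; the message text is arbitrary), the
following C sketch invokes the write system call through its C library wrapper to print
a message on standard output:

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "Hello from a system call\n";
        /* write() is the C library wrapper around the write system call;
           STDOUT_FILENO (file descriptor 1) refers to standard output. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }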
System Programs
At the lowest level is hardware. Next is the operating system, then the system programs,
and finally the application programs. System programs provide a convenient environment
for program development and execution. Some of them are simply user interfaces to system
calls; others are considerably more complex.
• File management. These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.
• Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are
more complex, providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal or other output
devices or files or display it in a window of the GUI. Some systems also support a registry,
which is used to store and retrieve configuration information.
• File modification. Several text editors may be available to create and modify the content
of files stored on disk or other storage devices. There may also be special commands to
search contents of files or perform transformations of the text.
A process includes:
• program counter
• stack
• data section
Process in Memory
Process State
As a process executes, it changes state. A process may be in one of the following states:
new (being created), ready (waiting to be assigned to a processor), running (instructions
are being executed), waiting (waiting for some event to occur), and terminated (finished
execution).
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready queue and
the run queue, which can have only one entry per processor core on the system.
Two-state process model refers to running and non-running states which are described
below –
1. Running
When a new process is created, it enters the system in the running state.
2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the
queue is implemented using a linked list. The dispatcher works as follows:
when a process is interrupted, it is transferred to the waiting queue; if the
process has completed or aborted, it is discarded. In either case, the dispatcher
then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types –
➢ Long-Term Scheduler
➢ Short-Term Scheduler
➢ Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads
them into memory, where they become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such
as I/O bound and processor bound. It also controls the degree of multiprogramming. If
the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler comes into
use when a process changes state from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria. It carries out the change of a process from
the ready state to the running state: the CPU scheduler selects one process from among
those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and
thereby reduces the degree of multiprogramming; a swapped-out process can later be
brought back into memory and its execution continued where it left off.
Operations on Processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows –
Process Creation
Processes need to be created in the system for different operations. This can be done
by the following events –
▪ System initialization
▪ Execution of a process-creation system call by a running process
▪ A user request to create a new process
▪ Initiation of a batch job
A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,
they have distinct address spaces. A minimal code sketch that demonstrates process
creation using fork() (assuming a POSIX system) is given below –
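    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a child process */
        if (pid < 0) {
            perror("fork failed");
            return 1;
        } else if (pid == 0) {
            /* fork() returns 0 in the child */
            printf("Child:  pid = %d, parent = %d\n", getpid(), getppid());
        } else {
            /* fork() returns the child's pid in the parent */
            wait(NULL);                  /* wait for the child to terminate */
            printf("Parent: pid = %d, child = %d\n", getpid(), pid);
        }
        return 0;
    }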
Process Preemption
An interrupt mechanism is used in preemption: the process currently using the CPU is
suspended so that a higher-priority process can run, and the preempted process is put
back into the ready state.
Process Blocking
The process is blocked if it is waiting for some event to occur, such as the completion
of an I/O operation, which does not require the processor. After the event is complete,
the process goes back to the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant.
The child process sends its status information to the parent process before it terminates.
Also, when a parent process is terminated, its child processes are terminated as well,
since a child process cannot run if its parent is terminated.
Cooperating Processes
Processes executing concurrently may be independent or cooperating; a cooperating
process is one that can affect or be affected by other processes executing in the system.
Reasons for providing an environment that allows process cooperation include:
➢ Information sharing
➢ Computation speed-up
➢ Modularity
➢ Convenience
Interprocess communication
Interprocess communication (IPC) is the mechanism by which cooperating processes
exchange data and synchronize their actions. Common IPC mechanisms and
synchronization primitives include the following.
• Semaphore
A semaphore is an integer variable used to control access to a common resource by
multiple processes (discussed in detail under Process Synchronization).
• Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at
a time. This is useful for synchronization and also prevents race conditions.
• Barrier
A barrier does not allow individual processes to proceed until all the processes reach
it. Many parallel languages and collective routines impose barriers.
• Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.
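A minimal sketch of this busy-waiting idea, using C11 atomics (the function and
variable names are illustrative):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void)
    {
        /* Busy-wait: repeatedly test-and-set until the flag was clear. */
        while (atomic_flag_test_and_set(&lock))
            ;   /* spin: the process stays active but does no useful work */
    }

    void spin_unlock(void)
    {
        atomic_flag_clear(&lock);
    }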
Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-
way data channel between two processes. This uses standard input and output methods.
Pipes are used in all POSIX systems as well as Windows operating systems.
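The following C sketch (POSIX; the message text is arbitrary) shows a one-way pipe
between a parent and its child:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[32];

        pipe(fd);                       /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {
            close(fd[0]);               /* child only writes */
            write(fd[1], "hello", 6);   /* 6 bytes includes the terminating \0 */
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                   /* parent only reads */
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }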
Socket
The socket is the endpoint for sending or receiving data in a network. This is true for
data sent between processes on the same computer or data sent between different
computers on the same network. Most of the operating systems use sockets for
interprocess communication.
File
A file is a data record that may be stored on a disk or acquired on demand by a file
server. Multiple processes can access a file as required. All operating systems use files
for data storage.
Signal - Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals are not
used to transfer data but are used for remote commands between processes.
Shared Memory
Shared memory is a region of memory that is simultaneously accessible by two or more
processes. One process creates the region, other processes attach to it, and the processes
then exchange data by reading and writing the region directly, making this one of the
fastest IPC mechanisms.
Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message queues are quite useful for interprocess communication and are used by
most operating systems.
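A minimal sketch using the POSIX message queue API (on Linux, compile with -lrt;
the queue name and sizes are arbitrary choices for this example):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        char buf[64];                       /* must be >= mq_msgsize */

        /* "/demo_queue" is an arbitrary name chosen for this sketch. */
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0644, &attr);

        mq_send(q, "hello", 6, 0);          /* enqueue a message, priority 0 */
        mq_receive(q, buf, sizeof buf, NULL);
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }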
CPU Scheduling
CPU scheduling determines which of the processes in the ready queue is allocated the
CPU. The following times are used when comparing scheduling algorithms:
Arrival Time: the time at which the process arrives in the ready queue.
Burst Time: the time the process requires for its execution on the CPU.
Completion Time: the time at which the process completes its execution.
Turnaround Time (T.A.T): the time difference between completion time and arrival time.
Waiting Time (W.T): the time difference between turnaround time and burst time.
In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then
most of the time the CPU remains idle. The task of the operating system is to optimize
the utilization of resources.
If most of the running processes change their state from running to waiting, there may
always be a possibility of deadlock in the system. Hence, to reduce this overhead, the
OS needs to schedule the jobs to obtain optimal utilization of the CPU and to avoid the
possibility of deadlock.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – the number of processes that complete their execution per time unit
• Turnaround time – the total time taken between the submission of a process and its
completion
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
There are various algorithms which are used by the operating system to schedule the
processes on the processor in an efficient way; the following algorithms can be used to
schedule the jobs.
1. First Come First Serve (FCFS)
It is the simplest algorithm to implement. The process with the minimal arrival time
gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU.
It is a non-preemptive type of scheduling.
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All
the processes are executed in a cyclic way: each process gets the CPU for a small amount
of time (the time quantum) and then goes back to the ready queue to wait for its next
turn. It is a preemptive type of scheduling.
3. Shortest Job First (SJF)
The job with the shortest burst time gets the CPU first: the smaller the burst time, the
sooner the process gets the CPU. It is a non-preemptive type of scheduling.
4. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS schedules the Job according
to the remaining time of the execution.
5. Priority Scheduling
In this algorithm, a priority is assigned to each of the processes. The higher the priority,
the sooner the process gets the CPU. If two processes have the same priority, they are
scheduled according to their arrival time.
6. Highest Response Ratio Next (HRRN)
In this scheduling algorithm, the process with the highest response ratio is scheduled
next, where Response Ratio = (W + S) / S, W being the time the process has waited and
S its service (burst) time. This reduces starvation in the system.
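As a small worked example of the definitions above (the burst times are hypothetical
and all processes are assumed to arrive at time 0), the following C sketch computes
waiting and turnaround times under FCFS:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};                  /* hypothetical burst times */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int tat = wait + burst[i];             /* turnaround = waiting + burst */
            printf("P%d: waiting = %2d, turnaround = %2d\n", i + 1, wait, tat);
            total_wait += wait;
            total_tat  += tat;
            wait += burst[i];                      /* next process starts here */
        }
        printf("Average waiting time    = %.2f\n", (double)total_wait / n);
        printf("Average turnaround time = %.2f\n", (double)total_tat / n);
        return 0;
    }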
Multiple-Processor Scheduling
When multiple CPUs are available, load sharing becomes possible. Two common
approaches are asymmetric multiprocessing, in which a single processor makes all
scheduling decisions, and symmetric multiprocessing (SMP), in which each processor
is self-scheduling.
The Critical Section Problem
A critical section is a code segment in which a process accesses shared resources. The
critical section cannot be executed by more than one process at the same time, so the
operating system faces difficulty in allowing and disallowing processes from entering
the critical section. The critical section problem is to design a set of protocols which
ensure that a race condition among the processes never arises.
In order to synchronize the cooperative processes, our main task is to solve the critical
section problem. We need to provide a solution in such a way that the following
conditions can be satisfied.
Primary
1. Mutual Exclusion
Mutual exclusion means that only one process can be present inside the critical
section at any point in time.
2. Progress
Progress means that if one process does not need to execute the critical section,
it should not stop other processes from getting into the critical section.
Secondary
1. Bounded Waiting
We should be able to predict the waiting time for every process to get into the
critical section; a process must not wait endlessly to enter the critical section.
2. Architectural Neutrality
The solution should not depend on the architecture of the machine; it should be
able to run on any platform.
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by
means of two atomic operations, wait and signal, which are used for process
synchronization.
• Wait
The wait operation decrements the value of its argument S if it is positive; while S is
zero or negative, the caller busy-waits.
wait(S)
{
    while (S <= 0);
    S--;
}
• Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
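A minimal sketch using POSIX semaphores on Linux (compile with -pthread; the
counter and thread names are illustrative). The semaphore is initialized to 1, so it
behaves as a binary semaphore guarding the critical section:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;        /* binary semaphore guarding the shared counter */
    long counter = 0;

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    /* wait(): decrement, block if already 0 */
            counter++;           /* critical section */
            sem_post(&mutex);    /* signal(): increment, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);          /* initial value 1 => binary semaphore */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        sem_destroy(&mutex);
        return 0;
    }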
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −
• Counting Semaphores
These are integer value semaphores and have an unrestricted value domain.
These semaphores are used to coordinate the resource access, where the
semaphore count is the number of available resources. If resources are added,
the semaphore count is automatically incremented; if resources are removed,
the count is decremented.
• Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted
to 0 and 1. The wait operation only works when the semaphore is 1 and the signal
operation succeeds when semaphore is 0. It is sometimes easier to implement
binary semaphores than counting semaphores.
Advantages of Semaphores
• Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
• There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
Disadvantages of Semaphores
• Semaphores are complicated, and the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
• Semaphore programming is error prone: misuse can cause deadlock or a violation
of mutual exclusion, and such errors are difficult to detect.
• Priority inversion may occur, in which a low-priority process holding a semaphore
delays a higher-priority process.
Critical Regions
A critical region is a higher-level synchronization construct of the form
region v when B do S, where v is a shared variable, B is a Boolean expression, and
S is a statement.
• Regions referring to the same shared variable exclude each other in time.
• When a process tries to execute the region statement, the Boolean expression B
is evaluated. If B is true, statement S is executed. If it is false, the process is
delayed until B becomes true and no other process is in the region associated
with v.
Monitors vs Semaphores
Monitors and semaphores are used for process synchronization and allow processes to
access the shared resources using mutual exclusion. However, monitors and
semaphores contain many differences. Details about both of these are given as follows−
Monitors
Monitors are a synchronization construct that were created to overcome the problems
caused by semaphores such as timing errors.
Monitors are abstract data types and contain shared data variables and procedures. The
shared data variables cannot be directly accessed by a process and procedures are
required to allow a single process to access the shared data variables at a time.
monitor monitorName
{
    data variables;

    procedure P1(....)
    {
    }
    procedure P2(....)
    {
    }
    ...
    procedure Pn(....)
    {
    }
    initialization code(....)
    {
    }
}
Only one process can be active in a monitor at a time. Other processes that need to
access the shared variables in a monitor have to line up in a queue and are only provided
access when the previous process releases the shared variables.
Deadlock Characterization
A deadlock happens in an operating system when two or more processes each need, in
order to complete their execution, some resource that is held by another process.
A deadlock can occur if the following four Coffman conditions hold simultaneously;
these conditions are not mutually exclusive. They are given as follows −
Mutual Exclusion
There should be a resource that can only be held by one process at a time. For example,
a single instance of Resource 1 may be held by Process 1 only.
Hold and Wait
A process can hold multiple resources and still request more resources that are held by
other processes.
No Preemption
A resource cannot be preempted from a process by force; a process can only release a
resource voluntarily. For example, Process 2 cannot preempt Resource 1 from
Process 1: it will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.
Circular Wait
A process is waiting for the resource held by the second process, which is waiting for
the resource held by the third process and so on, till the last process is waiting for a
resource held by the first process. This forms a circular chain. For example: Process 1
is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated
Resource 1 and is requesting Resource 2. This forms a circular wait loop. One common
way to break such a cycle is sketched below.
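Circular wait can be prevented by imposing a total ordering on resources and requiring
every process to acquire them in that order. A minimal C sketch with two
mutex-protected resources (the names and ranks are illustrative):

    #include <pthread.h>

    pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;  /* rank 1 */
    pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;  /* rank 2 */

    void use_both_resources(void)
    {
        /* Every thread locks resource1 before resource2, so no cycle of
           waiting processes can form and circular wait is impossible. */
        pthread_mutex_lock(&resource1);
        pthread_mutex_lock(&resource2);
        /* ... use both resources ... */
        pthread_mutex_unlock(&resource2);
        pthread_mutex_unlock(&resource1);
    }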
Handling Deadlocks
Deadlock prevention, deadlock avoidance, and deadlock detection are the main methods
for handling deadlocks. Details about these are given as follows −
Deadlock Prevention
It is important to prevent a deadlock before it can occur. So, the system checks each
transaction before it is executed to make sure it does not lead to deadlock. If there is
even a slight possibility that a transaction may lead to deadlock, it is never allowed to
execute.
Some deadlock prevention schemes that use timestamps in order to make sure that a
deadlock does not occur are given as follows –
• In the wait-die scheme, if a transaction T1 requests a resource that is held by
transaction T2, one of the following two scenarios may occur −
▪ If T1 is older than T2 (T1 has the smaller timestamp), T1 is allowed to
wait for the resource.
▪ If T1 is younger than T2, T1 dies: it is rolled back and restarted later with
the same timestamp.
Deadlock Avoidance
It is better to avoid a deadlock rather than take measures after the deadlock has occurred.
The wait-for graph can be used for deadlock avoidance. This is, however, only useful
for smaller databases, as it can get quite complex in larger databases.
The wait-for graph shows the relationship between the resources and transactions. If a
transaction requests a resource, or if it already holds a resource, this is shown as an edge
on the wait-for graph. If the wait-for graph contains a cycle, then there may be a
deadlock in the system; otherwise not.
Deadlock Detection
1. If resources have a single instance:
In this case, deadlock can be detected by running an algorithm that checks for a
cycle in the resource allocation graph; the presence of a cycle is a sufficient
condition for deadlock. For example, if Resource 1 and Resource 2 each have a
single instance and there is a cycle R1 → P1 → R2 → P2, deadlock is confirmed.
2. If resources have multiple instances:
In this case, a cycle in the resource allocation graph is a necessary but not a
sufficient condition for deadlock, and a detection algorithm similar to the
Banker's safety algorithm must be applied.
Deadlock Recovery
A traditional operating system such as Windows doesn't deal with deadlock recovery,
as it is a time- and space-consuming process. Real-time operating systems use deadlock
recovery.
Preempt the resource
We can preempt one of the resources from its owner process and give it to another
process, with the expectation that that process will complete its execution and release
the resource sooner. Choosing which resource to preempt, however, is going to be a
bit difficult.
Rollback to a safe state
The system passes through various states before getting into the deadlock state. The
operating system can roll back the system to a previous safe state; for this purpose, the
OS needs to implement checkpointing at every state.
The moment we get into deadlock, we roll back all the allocations to get back into the
previous safe state.
For Process
Kill a process
Killing a process can solve our problem, but the bigger concern is deciding which
process to kill. Generally, the operating system kills the process which has done the
least amount of work so far.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and later brought back into memory for continued
execution. The total time taken by the swapping process includes the time it takes to
move the entire process to the secondary disk and to copy the process back to memory,
as well as the time the process takes to regain main memory.
Let us assume that the user process is of size 2048 KB and that the standard hard disk
where swapping takes place has a data transfer rate of around 1 MB (1024 KB) per
second. The actual transfer of the 2048 KB process to or from memory will take
2048 KB / 1024 KB per second = 2 seconds = 2000 milliseconds
Counting the swap-out and the swap-in together, the swap takes about 4000 milliseconds
plus other overhead.
Memory Allocation
Main memory usually has two partitions: low memory, where the operating system
resides, and high memory, where user processes are held. The operating system allocates
memory to processes in one of the following ways.
1. Single-partition allocation: a relocation-register scheme is used to protect user
processes from each other and from changes to operating-system code and data. The
relocation register contains the value of the smallest physical address, and the limit
register contains the range of logical addresses.
2. Multiple-partition allocation: main memory is divided into a number of fixed-sized
partitions, where each partition contains exactly one process. When a partition is free,
a process is selected from the input queue and loaded into the free partition. When the
process terminates, the partition becomes available for another process.
Paging
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory, and it is a section of a
hard disk that is set up to emulate the computer's RAM. The paging technique plays an
important role in implementing virtual memory.
Paging divides the logical address space of a process into blocks of the same size, called
pages. Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames; the size of a frame is kept the same as that of a page to obtain
optimum utilization of main memory and to avoid external fragmentation.
• Due to the equal size of pages and frames, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be good for a
system with a small RAM.
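To make the translation concrete, the following C sketch splits a logical address into a
page number and an offset and forms the physical address (the page size, frame number,
and address are assumed values):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL   /* an assumed page size of 4 KB */

    int main(void)
    {
        unsigned long logical = 20500;                  /* hypothetical address */
        unsigned long page    = logical / PAGE_SIZE;    /* page number    = 5  */
        unsigned long offset  = logical % PAGE_SIZE;    /* offset in page = 20 */

        /* The page table would map `page` to a frame number; frame 8 is
           assumed here. Physical address = frame * PAGE_SIZE + offset. */
        unsigned long frame    = 8;
        unsigned long physical = frame * PAGE_SIZE + offset;

        printf("page = %lu, offset = %lu, physical = %lu\n", page, offset, physical);
        return 0;
    }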
Segmentation
Segmentation is a memory-management technique in which each job is divided into
several segments of different sizes, one for each module containing pieces that perform
related functions. When a process is to be executed, its corresponding segments are
loaded into non-contiguous memory, though every segment is loaded into a contiguous
block of available memory.
Segmentation memory management works very similar to paging but here segments
are of variable-length where as in paging pages are of fixed size.
A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory. For each segment, the table stores
the starting address of the segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and an offset.
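A small C sketch of this translation (the segment-table values follow a common textbook
example and are assumptions):

    #include <stdio.h>

    /* A segment-table entry: base address and limit (segment length). */
    struct segment { unsigned long base, limit; };

    int main(void)
    {
        struct segment table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };
        unsigned int  seg    = 2;    /* hypothetical reference: segment 2 */
        unsigned long offset = 53;   /* offset within that segment */

        if (offset < table[seg].limit)   /* protection check against the limit */
            printf("physical address = %lu\n", table[seg].base + offset);
        else
            printf("trap: offset beyond segment limit\n");
        return 0;
    }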
Virtual Memory
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection,
because each virtual address is translated to a physical address.
The following are situations in which the entire program is not required to be fully
loaded in main memory.
• User written error handling routines are used only when an error occurred
in the data or computation.
• Many tables are assigned a fixed amount of address space even though
only a small amount of the table is actually used.
• Each user program could take less physical memory, so more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.
Demand Paging
A demand paging system is quite similar to a paging system with swapping where
processes reside in secondary memory and pages are loaded only on demand, not in
advance. When a context switch occurs, the operating system does not copy any of the
old program's pages out to the disk or any of the new program's pages into main
memory. Instead, it just begins executing the new program after loading the first page
and fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in
main memory because it was swapped out a little while ago, the processor treats this
invalid memory reference as a page fault and transfers control from the program to the
operating system, which demands the page back into memory.
Advantages
• Large virtual memory.
• More efficient use of memory.
• There is no limit on the degree of multiprogramming.
Disadvantages
• Number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management
techniques.
Page Replacement Algorithms
The page replacement algorithm decides which memory page is to be replaced. The
process of replacement is sometimes called swap out or write to disk. Page replacement
is done when the requested page is not found in main memory (a page fault). Common
algorithms include First In First Out (FIFO), Optimal page replacement, and Least
Recently Used (LRU).
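A minimal FIFO simulation in C that counts page faults for a reference string (the
reference string and frame count are hypothetical):

    #include <stdio.h>

    int main(void)
    {
        int refs[]    = {7, 0, 1, 2, 0, 3, 0, 4};  /* hypothetical reference string */
        int nrefs     = sizeof refs / sizeof refs[0];
        int frames[3] = {-1, -1, -1};              /* three empty frames */
        int next = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < 3; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {                     /* page fault */
                frames[next] = refs[i];     /* evict the oldest page (FIFO) */
                next = (next + 1) % 3;
                faults++;
            }
        }
        printf("page faults = %d\n", faults);   /* prints 7 for this string */
        return 0;
    }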
File System Interface
A file system consists of two distinct parts:
• a collection of files, each storing related data, and
• a directory structure, which organizes and provides information about all the files
in the system.
This chapter covers the different file attributes, the concept of a file and its storage,
and the operations on files.
File Concept
Computers store information on storage media such as disks, tape drives, and optical
disks. The operating system provides a logical view of the information stored on the
disk; this logical storage unit is a file.
The information stored in files is non-volatile, meaning it is not lost during power
failures. A file is a named collection of related information that is stored on physical
storage.
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access
A sequential access is that in which the records are accessed in some sequence, i.e.,
the information in the file is processed in order, one record after the other. This access
method is the most primitive one. Example: Compilers usually access files in this
fashion.
Direct/Random access
• Each record has its own address on the file, with the help of which it can be
directly accessed for reading or writing.
• The records need not be in any sequence within the file, and they need not be in
adjacent locations on the storage medium.
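Direct access to a fixed-length record can be sketched in C as follows (the file name
and record size are hypothetical):

    #include <stdio.h>

    #define RECORD_SIZE 64   /* an assumed fixed record length in bytes */

    int main(void)
    {
        FILE *fp = fopen("data.rec", "rb");   /* hypothetical record file */
        char record[RECORD_SIZE];
        long n = 5;                           /* access the 6th record directly */

        if (fp == NULL)
            return 1;
        fseek(fp, n * RECORD_SIZE, SEEK_SET); /* jump straight to record n */
        fread(record, RECORD_SIZE, 1, fp);    /* read it without scanning */
        fclose(fp);
        return 0;
    }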
Directory Structure
A directory can be defined as the listing of the related files on the disk; it may store
some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk
can be divided into a number of partitions of different sizes. The partitions are also
called volumes or minidisks.
Each partition must have at least one directory in which, all the files of the partition
can be listed. A directory entry is maintained for each file in the directory which
stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of files.
The following operations can be performed on a directory:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
Allocation Methods
Space Allocation
Files are allocated disk space by the operating system. Operating systems deploy the
following three main ways to allocate disk space to files.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
Contiguous Allocation
Each file occupies a contiguous set of blocks on the disk. This method is simple and
supports fast sequential and direct access, but it suffers from external fragmentation
and makes it difficult for files to grow.
Linked Allocation
Each file is a linked list of disk blocks: each block contains a pointer to the next block,
and the blocks may be scattered anywhere on the disk. There is no external
fragmentation, but direct access is slow because the chain must be followed.
Indexed Allocation
All the pointers to a file's blocks are brought together into an index block, giving direct
access without external fragmentation at the cost of the extra index block.
Free Space Management
The system keeps track of the free disk blocks for allocating space to files when they
are created. Also, to reuse the space released by deleted files, free space management
becomes crucial. The system maintains a free space list which keeps track of the disk
blocks that are not allocated to any file or directory. The free space list can be
implemented mainly as:
• Bit vector (bitmap) – each block is represented by one bit, indicating whether it is
free or allocated
• Linked list – all free blocks are linked together, with a pointer kept to the first free
block
• Grouping – the first free block stores the addresses of n other free blocks
• Counting – the address of the first free block is stored together with the count of
contiguous free blocks that follow it
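A minimal sketch of the bit-vector approach in C (the number of blocks is an assumed
value):

    #include <stdio.h>

    #define NBLOCKS 64                     /* assumed number of disk blocks */
    unsigned char bitmap[NBLOCKS / 8];     /* one bit per block; 1 = used, 0 = free */

    void set_used(int b) { bitmap[b / 8] |=  (unsigned char)(1 << (b % 8)); }
    void set_free(int b) { bitmap[b / 8] &= (unsigned char)~(1 << (b % 8)); }

    int find_free(void)                    /* index of first free block, or -1 */
    {
        for (int b = 0; b < NBLOCKS; b++)
            if (!(bitmap[b / 8] & (1 << (b % 8))))
                return b;
        return -1;
    }

    int main(void)
    {
        set_used(0);                       /* mark blocks 0 and 1 as allocated */
        set_used(1);
        printf("first free block = %d\n", find_free());   /* prints 2 */
        return 0;
    }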