SEMESTER - III (CBCS)
OPERATING SYSTEM
SUBJECT CODE: USCS303
© UNIVERSITY OF MUMBAI
Programme Co-ordinator : Shri Mandar Bhanushe
Head, Faculty of Science and Technology IDOL,
University of Mumbai – 400098
Course Co-ordinator : Mr Sumedh Shejole
Assistant Professor,
IDOL, University of Mumbai- 400098
Editor : Ms. Priya Jadhav,
Assistant Professor,
N. G. Acharya & D. K. Marathe College,
Chembur, Mumbai, Maharashtra 400071
Course Writers : Dr. Ninad More
Chhatrapati Shivaji Maharaj University,
Panvel, Navi Mumbai
1. Operating System.....................................................................................................1
2. Process Synchronization........................................................................................55
3. Main Memory........................................................................................................64
4. Mass-Storage Structure.........................................................................................89
5. File System..........................................................................................................105
OPERATING SYSTEM
SYLLABUS
Additional Reference(s):
1. Achyut S. Godbole, Atul Kahate, Operating Systems, Tata McGraw Hill
2. Naresh Chauhan, Principles of Operating Systems, Oxford Press
3. Andrew S. Tanenbaum, Herbert Bos, Modern Operating Systems, Fourth Edition,
Pearson Education, 2016
1
OPERATING SYSTEM
Unit Structure
1.0 Objectives of Operating System
1.1 Introduction to Operating Systems
1.2 Introduction and Operating-Systems Structures
1.2.1 Definition of Operating System
1.2.2 Operating System Roles
1.2.3 Operating system operations
1.2.4 Function of Operating System
1.2.5 Computing Environments
1.2.6 Types of operating system
1.3 Operating-System Structures
1.3.1 Operating System Services
1.3.2 User and Operating system interface
1.3.3 System Calls
1.3.4 Types of system calls
1.3.5 Operating-System Structure
1.4 Processes
1.4.1 Open-Source Operating Systems
1.4.2 Mobile OS (Operating System)
1.4.3 Booting Process of Operating System
1.4.4 Components of Operating System
1.4.5 Processes
1.4.6 Process State
1.4.7 Process Control Block
1.4.8 Process Scheduling
1.4.9 Operations on processes
1.4.10 Inter-process communication
1.5 Threads
1.5.1 Types of Thread
1.5.2 Multithreading Models
1.5.3 Multicore Programming
1.6 Conclusion
1.7 Summary
1.8 Exercise
1.9 References:
1.0 Objectives of Operating System
Operating System
➢ It is a program that controls the execution of application programs.
➢ It acts as an interface between applications and hardware.
➢ It executes user programs and makes solving user problems easier.
➢ It makes the computer system convenient to use.
➢ It uses the computer hardware in an efficient manner.
Here, the user can easily manage the hardware by using the operating system.
[Figure: users 1 to n interacting with the hardware through the operating system]
1. View of operating system
• User view: The user's view of the computer varies with the interface being used. Examples are Windows XP, Vista, Windows 7, etc. Most computer users sit in front of a personal computer (PC); in this case the operating system is designed mostly for ease of use, with some attention paid to resource utilization. Some users sit at a terminal connected to a centralized server or minicomputer. In this situation, different users access the same computer through different terminals. These users share resources and may exchange data. The operating system in this case is designed to maximize resource utilization, ensuring that all available CPU time, memory and I/O are used efficiently and that no single user takes more than his or her fair share. Other users sit at workstations connected to a network of other workstations and servers. These users have dedicated resources, but they also share resources such as networking and servers (file, compute and print servers). Here the operating system is designed to compromise between individual usability and resource utilization.
• System view: From the computer's point of view, the operating system is the program most intimately involved with the hardware. An operating system manages resources, both hardware and software, that may be needed to solve a problem: CPU time, memory space, file storage space, I/O devices and so on. That is why the operating system acts as the manager of these resources. Another view is that the operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.
2. History of operating system:
The First Generation (1940s – early 1950s):
Different versions of Microsoft Windows:
• Windows ME (Millennium Edition) - September 2000
• Windows NT (versions 3.1 - 4.0) - 1993-1996
• Windows 7 - October 2009
• Windows 8 and 8.1 - October 2012
• Windows 10 - July 2015
• Windows 11 - 2021
1.2.2 Operating System Roles:
An operating system provides an environment in which application software such as word processors, database software, graphics software, spreadsheet software, presentation software, web browsers, Microsoft Access, Photoshop, etc. can be installed.
So an operating system (like Windows, Ubuntu, Linux, CentOS, Debian and so on) provides an environment in which a user can perform a task. For example, I can put my information into a suitable format, say a Word file, using an application such as Microsoft Word, and save it onto my hardware (hard disk) in a convenient and efficient manner.
Computer-System Operation
1) The I/O devices and the CPU can execute concurrently.
2) Each device controller is in charge of a particular device type.
3) Each device controller has a local buffer.
4) The CPU moves data from/to main memory to/from the local buffers.
5) I/O is from the device to the local buffer of the controller.
6) The device controller informs the CPU that it has completed its activity by causing an interrupt.
Common Functions of Interrupts
1) The interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
2) The interrupt architecture must save the address of the interrupted instruction.
3) Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt. A trap is a software-generated interrupt caused either by an error or by a user request. An OS is interrupt-driven.
This dual-mode enhancement is useful for many other aspects of system operation as well.
• At system boot time, the hardware starts in kernel mode.
• The operating system (OS) is then loaded and starts the user
applications in the user mode.
• Whenever a trap or interrupt occurs, the hardware switches from the
user mode to kernel mode (that is, changes the state of the mode bit
to 0).
• Thus, whenever the operating system gains control of the computer,
it is in kernel mode.
• The system always switches to user mode (by setting the mode bit
to 1) before passing control to a user program.
The dual mode of operation provides us with the means for protecting the operating system (OS) from errant users, and errant users from one another. We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and traps it to the operating system. The instruction to switch to user mode is an example of a privileged instruction. Some other examples include I/O (input/output) control, timer management and interrupt management. As we shall see throughout the text, there are many additional privileged instructions.
Types of Computing Environments:
1) Batch operating system:
Advantage:
It is useful when working with large jobs that take more time to execute.
Disadvantage:
Once a job is submitted, the user has no further interaction with it.
2) Time sharing operating system:
The time-sharing operating system works on the time-sharing concept. Here the CPU provides the same time period to each and every process to complete its task as soon as possible, whether it is a long process or a short process.
3) Distributed operating system:
Advantage:
If a node is overloaded, the distributed operating system shares that load with other nodes of the network.
Disadvantage:
If there is a problem in the communication network, then the whole communication will be broken.
4) Network operating system:
A network operating system has a server that connects many client computers. We can easily share files, resources and more from the server machine to all the machines that are connected through the server.
• All the jobs that enter the system are kept in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins executing them.
• During execution, a job may have to wait for some task, such as an I/O operation, to complete. In a multiprogramming system, the operating system simply switches to another job and executes it. When that job needs to wait, the CPU is switched to yet another job, and so on.
Operating-System Structures
Operating system services are:
1) Program execution
2) I/O Operation
3) Error Detection
4) Resource Allocation
5) File Management
6) Memory Management
7) Communication
1) Program execution:
The system must be able to load a program into memory and to run
that program.
2) I/O Operation:
A running program may require I/O. The operating system allocates and controls the various I/O devices while programs are being executed.
3) Error Detection:
The operating system needs to be aware of possible errors. Errors may occur in the CPU, memory hardware, I/O devices and in user programs. It is the duty of the operating system to report an appropriate error message whenever an error occurs and to take suitable action.
4) Resource Allocation:
When multiple users are logged on to the system or multiple jobs are running at the same time, resources must be allocated to each of them.
5) File management:
This is a service of the operating system that keeps records of files on various storage devices and moves these files from one device to another.
6) Memory Management:
It includes the assignment of main memory and other storage areas to system programs, user programs and data.
7) Communication:
The principal objective of the operating system is to provide communication between the user and the hardware. Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
Accounting - To keep track of which users use how much and what kinds of computer resources.
Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with one another.
Protection involves ensuring that all access to system resources is controlled.
Security of the system from outsiders requires user authentication, and extends to defending external I/O devices from invalid access attempts.
If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as strong as its weakest link.
1.3.2 User and Operating system interface
There are two fundamental approaches for users to interface with the operating system.
1) Provide a command-line interface (CLI), or command interpreter, that allows users to directly enter commands that are to be performed by the operating system.
2) Allow the user to interface with the operating system via a graphical user interface, or GUI.
3) Some operating systems include the command interpreter in the kernel.
4) Others, such as Windows and UNIX, treat the command interpreter as a special program (like CMD in Windows and the terminal in Linux).
5) On systems with multiple command interpreters to choose from, the interpreters are known as shells.
Example:
1) Bourne Shell
2) C Shell
3) Bourne Again shell (BASH)
4) Korn shell etc.
So there are two approaches in which the command interpreter actually executes a task:
1) Code to perform the certain task is included in the command interpreter itself.
2) The command interpreter does not contain the code itself; the code is written in separate programs.
The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface.
Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic.)
To perform any operation, a user program must request a service call, which is also known as a SYSTEM CALL. We can say that a system call is:
• A programming interface to the services provided by the operating system.
• They are typically written in a high level language (C or C++).
• There are two modes in the operation of a system: user mode and system mode.
1) In user mode: All user processes are executed.
2) In system mode: All privileged operations are executed.
The user program and kernel functions are being executed in their
respective space allotted in the main memory partitions.
System calls expose all kernel functionalities that user-mode programs require. Basically, a system call is an instruction that requests the operating system to perform a desired operation that needs hardware access or other privileged operations.
A system call generates an interrupt that causes the operating system to gain control of the CPU. The operating system then finds out the type of system call, and the corresponding interrupt handler routine is executed to perform the operation.
System calls are also used for security reasons. Because of system calls, a user is not able to enter the operating system's or any other user's memory regions; similarly, I/O devices are safe from any misuse by the user. Thus, through system calls, the kernel, other users' programs and I/O devices are kept safe and secure from malicious user programs.
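As an illustration only (a minimal sketch assuming a Linux system with glibc, not taken from the original text), the same kernel service can be requested through the portable library API or through the raw system-call interface:

#define _GNU_SOURCE
#include <unistd.h>      /* write() - the API wrapper        */
#include <sys/syscall.h> /* SYS_write - the system-call number */
#include <string.h>

int main(void)
{
    const char *msg = "hello via API\n";
    /* Portable way: the C library wrapper traps into the kernel for us. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* Direct way: ask the kernel for the same service by number. */
    const char *raw = "hello via raw system call\n";
    syscall(SYS_write, STDOUT_FILENO, raw, strlen(raw));
    return 0;
}

In practice programs use the API, which hides the system-call details and stays portable across operating-system versions.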
3) Device management system calls:
• Request of a device
• Release of a device
• Read and write operations, and so on.
4) Information maintenance system calls:
Many system calls exist simply for the purpose of transferring information between the user program and the operating system.
1.4 Processes:
1.3.5 Operating-System Structure
Simple structure:
Such operating systems do not have a well-defined structure and are small, simple and limited systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system. In MS-DOS, application programs are able to access the basic input/output (I/O) devices directly. These types of operating systems cause the entire system to crash if one of the user programs fails.
Advantages of Simple structure:
• Easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure:
It is considered the best approach for an OS. It involves designing a modular kernel. The kernel has only a set of core components, and other services are added as dynamically loadable modules to the kernel either during run time or boot time. It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than the layered structure, as a module can call any other module.
For example Solaris OS is organized as shown in the figure.
2) Complicated – It is not as user-friendly as the closed-source ones. You need to have minimum technical knowledge to use this software.
3) No support – If you run into a problem, there is no customer support to help you out.
1.4.2 Mobile OS (Operating System):
The design and capabilities of a mobile OS (operating system) are very different from those of a general-purpose OS running on desktop machines: mobile devices have constraints and restrictions on their physical characteristics such as screen size, memory and processing power, scarce availability of battery power, and a limited amount of computing and communication capability.
Thus, they need different types of operating systems depending on the capabilities they support; e.g. a PDA OS is different from a smartphone OS. An operating system is a piece of software responsible for the management of operations, control, and coordination of the use of the hardware among the various application programs, and for sharing the resources of a device.
A mobile OS is a software platform on top of which other programs, called application programs, can run on mobile devices such as PDAs, cellular phones, smartphones, etc.
There are many mobile operating systems. The following are the most important ones:
1) Java ME Platform –
2) Palm OS –
3) Symbian OS –
4) Linux OS –
5) Windows Mobile OS –
6) BlackBerry OS –
7) iPhone OS –
8) Google Android Platform
1.4.3 Booting Process of Operating System:
When you start the computer, it is observed that some initial text
information is displayed on the screen. This is displayed by the firmware.
The booting instructions are stored in ROM (read-only memory). Then the
booting process starts. After booting, an operating system gets loaded in
the main memory (RAM) of the computer.
Let us understand the complete booting process.
1) When you power on the computer, the CPU (central processing unit) activates the BIOS (basic input output system).
2) The first program activated is the POST (power-on self-test). Using the CMOS (complementary metal oxide semiconductor) memory, it checks all the hardware and confirms that it is functioning properly.
3) After that, it reads the MBR (master boot record) of the boot drive in accordance with the firmware 'bootstrap loader' provided by the computer manufacturer.
4) Then the computer loads the operating system from the boot drive into the RAM.
5) Once this is done, the operating system takes over control of the computer and displays a user interface to the user.
2) The Kernel:
It is the core of the operating system. It performs all the major
functions of the operating system. It manages resources, controls
program execution, and schedules program execution. It is the main
operating system. It detects the new hardware when attached and
installs the device driver for it to function properly.
3) The Shell:
We identify the operating system by how the shell looks. It provides
the user interface to interact with the kernel and hardware. There are
two types of user interface— command line interface (CLI) and
graphical user interface (GUI) as explained in the Chapter earlier.
1.4.5 Processes:
Process Concept:
A process includes:
• program counter
• stack
• data section
Process: A process is an instance of a program in execution.
Early batch operating systems worked in terms of "jobs". Many modern process concepts are still expressed in terms of jobs; an example is job scheduling.
The Process:
Process memory is mainly divided into four different categories as shown
in figure.
A process in memory
• The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
• The data section stores static and global variables.
• The heap is used for dynamic memory allocation and is managed via calls to new, delete, malloc, free, etc.
• The stack is mainly used for local variables. Space on the stack is reserved for local variables when they are declared.
• When processes are swapped out of memory and later restored,
additional information must also be stored and restored.
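A small illustrative C program (not from the original text) shows where the different kinds of data in a process typically live:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                      /* data section: global/static variables */

int main(void)
{
    int local = 42;                          /* stack: local variable                 */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: dynamic allocation              */

    if (dynamic == NULL)
        return 1;
    *dynamic = 7;

    /* The compiled code of main() itself is part of the text section. */
    printf("data: %d, stack: %d, heap: %d\n", global_counter, local, *dynamic);

    free(dynamic);                           /* release heap memory */
    return 0;
}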
1.4.6 Process State:
Processes may be in one of SIX states as shown in figure below:
1) New State
2) Ready State
3) Running State
4) Waiting or Block State
5) Suspended Ready State
6) Terminated
• New: The process is in the stage of being created.
• Ready State: In first stage, a process moves from new state to ready
state after it is loaded into the main memory and this process is
ready for execution.
• Running state: A process moves from the ready state to the running state after it is assigned the CPU for execution.
• Waiting or Block State: The process cannot run at that time because it is waiting for some I/O resource or for some event to occur.
• Suspended Ready State: A process moves from ready state to
suspend ready state if a process with higher priority has to be
executed but the main memory is full.
• Terminated: the process has completed.
1.4.7 Process Control Block
Process Control Block (PCB) is a data structure that stores all the information about a particular process. This information is required by the CPU while executing the process (a simplified structure sketch follows the field list below).
The Process Control Block of a process looks like this:
1. Process Id
• Process Id is a unique Id for each process that identifies
process of the system uniquely.
• A process Id is assigned to each process during its creation of
process.
2. Program Counter
• Before process execution, the program counter is initialized with the address of the first instruction of the program.
• After executing an instruction, the value of the program counter is automatically incremented to point to the next instruction of the process.
• This repeats till the end of the program.
3. Process State
• Each process goes through different states (ready state, running state, waiting state, etc.) during its lifetime.
• Process state specifies the current state of the process.
4. Priority
• The process with the highest priority is allocated the CPU first among all the processes.
5. General Purpose Registers
• General purpose registers are mainly used to hold the data of
process generated during its execution time.
• Each process has its own set of registers which are maintained
by its process control block (PCB).
6. List of Open Files
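As a rough sketch only (the exact layout differs in every operating system, and the field names below are assumptions), the PCB fields listed above might be declared in C as:

#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, SUSPENDED_READY, TERMINATED };

struct pcb {
    int              pid;                        /* 1. process id            */
    uint64_t         program_counter;            /* 2. next instruction      */
    enum proc_state  state;                      /* 3. current process state */
    int              priority;                   /* 4. scheduling priority   */
    uint64_t         registers[16];              /* 5. general-purpose regs  */
    int              open_files[MAX_OPEN_FILES]; /* 6. list of open files    */
};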
1.4.8 Process Scheduling:
Comparison of Schedulers
2) Process Pre-emption
3) Process Blocking
The process is blocked if it is waiting for some event to occur or if it is demanding some I/O resource. This typically happens for I/O, since I/O operations are carried out by the device controllers and do not require the processor. After the event or I/O operation is complete, the process again goes to the ready state.
A diagram that illustrates process blocking is as follows:
Process Blocking
4) Process Termination
After the process has successfully completed the execution of its last instruction, it is terminated. The resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer needed. The child process sends status information to the parent process before it terminates. Also, when a parent process is terminated, its child processes are terminated automatically as well, since child processes cannot run if their parent processes have already terminated.
Process Termination
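A hedged sketch of process creation and termination (POSIX assumed; this is illustrative, not part of the original text): a parent creates a child with fork(), the child terminates with exit(), and the parent collects its exit status with waitpid().

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* process creation */

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                  /* child process */
        printf("child %d running\n", getpid());
        exit(42);                    /* process termination */
    }

    int status;
    waitpid(pid, &status, 0);        /* parent waits for the child to terminate */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}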
1.4.10 Inter-process communication:
1.5 Threads
Thread:
Benefits of multithreading:
• Responsiveness
• Resource Sharing
• Economy
• Scalability
A thread is a basic unit of execution that mainly consists of its own program counter, a stack and a set of registers, where the program counter keeps track of which instruction to execute next, the stack contains the execution history, and the set of registers holds its current working variables. Threads are also called lightweight processes. Threads are a very popular way to improve the performance of an application through parallelism. Threads represent a software approach to improving the performance of an operating system by reducing the overhead; a thread is broadly equivalent to a classical process.
It is significant to note here that each thread belongs to exactly one process, and no thread exists outside a process. Each thread represents a separate flow of control. Threads have been successfully used in the implementation of network servers and web servers. They provide a suitable foundation for the parallel execution of applications on shared-memory multiprocessors.
The figure below shows the working of a single-threaded process and a multithreaded process:
[Figure: single process P with a single thread; single process P with three threads]
Difference between Process and Thread
Advantages of Thread:
• Threads minimize the context-switching time.
• Use of threads provides concurrency within a process.
• Threads allow efficient communication.
• It is more efficient to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency (a small example follows below).
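A minimal sketch of multithreading within one process (POSIX threads assumed, not from the original text; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running in the same address space\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);  /* create first thread  */
    pthread_create(&t2, NULL, worker, &id2);  /* create second thread */

    pthread_join(t1, NULL);                   /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}

Both threads share the process's text, data and heap, which is why communication between them is cheap compared to separate processes.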
1.5.1 Types of Thread
Threads are implemented in following two ways −
1) User Level Threads
2) Kernel Level Threads
1) User Level Threads
In user-level threads, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread.
Advantages
1) Thread switching does not need kernel-mode privileges.
2) User-level threads can run on any operating system.
3) Scheduling can be application-specific with user-level threads.
4) User-level threads are fast to create and easy to manage.
Disadvantages
Windows XP Threads:
Linux Threads
1.6 Conclusion:
1.7 Summary:
1.8 EXERCISE:
1.9 References:
2
PROCESS SYNCHRONIZATION
Unit Structure
2.1 General Structure of a Typical Process
2.2 Race Condition
2.3 The Critical-Section Problem
2.4 Peterson’s Solution
2.5 Synchronization Hardware
2.6 Mutex Locks
2.7 Semaphores
2.8 Classic Problems of Synchronization
2.9 Monitors
Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Data
This section contains the global and static variables.
In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three
requirements:
• Mutual Exclusion : If a process is executing in its critical section,
then no other process is allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and
other processes are waiting outside the critical section, then only
those processes that are not executing in their remainder section
can participate in deciding which will enter in the critical section
next, and the selection can not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times
that other processes are allowed to enter their critical sections after
a process has made a request to enter its critical section and before
that request is granted.
Peterson's Solution preserves all three conditions:
• Mutual Exclusion is assured as only one process can access the
critical section at any time.
• Progress is also assured, as a process outside the critical section
does not block other processes from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s Solution
• It involves Busy waiting
• It is limited to 2 processes.
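For reference, a sketch of the classic two-process Peterson's algorithm (illustrative only; on modern hardware it additionally needs memory barriers or atomic operations to be correct):

#include <stdbool.h>

bool flag[2] = {false, false};   /* flag[i]: process i wants to enter */
int  turn = 0;                   /* whose turn it is to enter         */

void enter_critical_section(int i)   /* i is 0 or 1 */
{
    int other = 1 - i;
    flag[i] = true;              /* announce interest                          */
    turn = other;                /* give priority to the other process         */
    while (flag[other] && turn == other)
        ;                        /* busy wait (this is the busy-waiting cost)  */
}

void exit_critical_section(int i)
{
    flag[i] = false;             /* no longer interested */
}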
The classical definitions of the semaphore operations wait() and signal() are:
wait(S)
{
    while (S <= 0)
        ;      /* busy wait until the semaphore value becomes positive */
    S--;       /* take one unit of the resource */
}
signal(S)
{
    S++;       /* release one unit of the resource */
}
There are mainly two types of semaphores:
1. counting semaphores
2. binary semaphores.
The Counting Semaphores are integer value semaphores and have an
unrestricted value domain. These semaphores are used to coordinate the
resource access, where the semaphore count is the number of available
resources.
The Binary Semaphores are like counting semaphores but their value is
restricted to 0 and 1. The wait operation only works when the semaphore
is 1 and the signal operation succeeds when semaphore is 0.
If the processes are run simultaneously, they will not yield the
expected output.
The solution to this problem is creating two semaphores, one full and the other empty, to keep track of the concurrent processes (a sketch follows below).
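A minimal producer/consumer sketch using POSIX semaphores and threads (illustrative only; the buffer size and names are assumptions, not from the text; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                               /* buffer capacity */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                        /* counts free slots, starts at N   */
sem_t full_slots;                         /* counts filled slots, starts at 0 */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty_slots);           /* wait for a free slot       */
        pthread_mutex_lock(&lock);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);            /* signal: one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);            /* wait for an item           */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);           /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);

    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return 0;
}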
Sleeping Barber Problem
This problem is based on a hypothetical barbershop with one barber. When there are no customers, the barber sleeps in his chair. If a customer enters, he wakes up the barber and sits in the customer chair. If no chairs are empty, customers wait in the waiting queue.
Readers and Writers Problem
This problem occurs when many threads of execution try to access
the same shared resources at a time. Some threads may read, and
some may write. In this scenario, we may get faulty outputs.
2.9 Monitors in Process Synchronization
The monitor is one of the ways to achieve Process synchronization. The
monitor is supported by programming languages to achieve mutual
exclusion between processes. For example Java Synchronized methods.
Java provides wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined
together in a special kind of module or a package.
2. The processes running outside the monitor can’t access the internal
variable of the monitor but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Monitor syntax:
Condition Variables:
Two different operations are performed on the condition variables of the
monitor.
1. Wait
2. Signal
Example for 2 condition variables x,y
condition x, y; // Declaring variable
Wait operation
x.wait(): A process performing a wait operation on a condition variable is suspended. The suspended processes are placed in the block queue of that condition variable.
Note: Each condition variable has its own block queue.
Signal operation
x.signal(): When a process performs a signal operation on a condition variable, one of the blocked processes is given a chance to run.
If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.
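Since C itself has no monitors, a rough equivalent sketch (POSIX threads assumed, not from the original text) builds the same behaviour from a mutex and a condition variable; only one thread is "inside" at a time, and the wait/signal pair mimics the monitor's x.wait()/x.signal():

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;
static bool resource_ready = false;

void monitor_consume(void)
{
    pthread_mutex_lock(&monitor_lock);        /* enter the monitor             */
    while (!resource_ready)
        pthread_cond_wait(&x, &monitor_lock); /* x.wait(): block, release lock */
    resource_ready = false;                   /* use the resource              */
    pthread_mutex_unlock(&monitor_lock);      /* leave the monitor             */
}

void monitor_produce(void)
{
    pthread_mutex_lock(&monitor_lock);
    resource_ready = true;
    pthread_cond_signal(&x);                  /* x.signal(): wake one waiter   */
    pthread_mutex_unlock(&monitor_lock);
}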
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error-prone than using techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The compiler must generate code for them. This gives the compiler the additional burden of having to know what operating system facilities are available to control access to critical sections in concurrent processes.
Some languages that do support monitors are Java, C#, Visual Basic, Ada and Concurrent Euclid.
3
MAIN MEMORY
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Background
3.2.1 Memory Hierarchy
3.2.2 Logical Versus Physical Address Space
3.3 Memory Management Unit
3.4 Swapping
3.5 Memory Allocation Techniques
3.6 Contiguous Storage Allocation
3.6.1 Storage Placement Policies
3.6.2 Fragmentation
3.6.3 Compaction
3.7 Paging
3.7.1 Address Translation
3.7.2 Page Table Implementation
3.7.3 Protection
3.7.4 Shared Pages
3.8 Structure of the Page Table
3.8.1 Hierarchical Paging
3.8.2 Hashed Page Tables
3.8.3 Inverted Page Tables
3.9 Segmentation
3.9.1 Address Translation
3.9.2 Segment Table Implementation
3.10 Virtual Memory
3.11 Demand Paging
3.11.1 Steps to handling a page fault
3.12 Copy-on-write
3.13 Page Replacement
3.13.1 FIFO Page Replacement Algorithm
3.13.2 Optimal Algorithm
3.13.3 LRU Page Replacement Algorithm
3.13.4 Second Chance Page Replacement
3.13.5 Clock Page Replacement Algorithm
3.0 OBJECTIVES
3.1 INTRODUCTION
3.2 BACKGROUND
FIGURE 3.1
According to the memory hierarchy, the following occur:
FIGURE 3.2
3.3 MEMORY MANAGEMENT UNIT
FIGURE 3.3
For example, suppose the value in the relocation register is 7000 and the logical address is 654. The memory management unit maps the logical address using the value in the relocation register and converts it into a physical address. After the mapping, it dynamically relocates the address to 7654. The user program only generates logical addresses, but they must be mapped to physical addresses before execution.
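A tiny sketch using the values from the example above (illustrative only, not from the original text) shows the mapping the MMU performs:

#include <stdio.h>

int main(void)
{
    int relocation_register = 7000;   /* base value loaded by the OS     */
    int logical_address     = 654;    /* generated by the user program   */

    /* physical address = relocation register + logical address */
    int physical_address = relocation_register + logical_address;

    printf("logical %d -> physical %d\n", logical_address, physical_address);
    return 0;
}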
3.4 SWAPPING
P1 in main memory; then process P2 is initiated and swapped in from secondary memory, and P1 is swapped out to secondary storage.
FIGURE 3.4
For example, in a multiprogramming environment, when the time quantum of a particular process expires, it is swapped out and another process is swapped in for execution. If a process is running and a higher-priority process arrives, then the memory manager swaps out the currently running process and swaps in the high-priority process. When the high-priority process completes its execution, the lower-priority process is swapped back in. This is also called roll out, roll in.
FIGURE 3.5 Summary of Memory Allocation Techniques
(2) Non-Contiguous Memory Allocation: In this allocation, the instructions and data may occupy non-contiguous storage areas. The schemes used in non-contiguous memory allocation are:
• Paging
• Segmentation
Advantages
Disadvantages
Every partition of the memory may contain some unused space. This wasted space is referred to as internal fragmentation. Early batch systems used this scheme, where the space requirement of a process was known beforehand.
Advantages
• This scheme is easy to implement.
FIGURE 3.7
When a process completes its execution and frees the occupied space, the free space appears as a series of 'holes' between the memory areas. The operating system then loads another process into that area, but sometimes there is not enough space for the next process, as the size of the process is more than the free space. This free memory space is called external fragmentation.
3.6.1 Storage Strategies or Placement Policies
The purpose of a placement policy algorithm is to select the free space that will give maximum throughput for the system. When a new process is loaded into memory using the variable partition scheme, the memory manager tries to select the best location for that process. The following are some placement policies.
FIGURE 3.8
• Worst Fit Policy: - The worst fit policy allocates largest hole to the
process from unallocated memory available. Using same example,
memory manager allocates 17KB block to 11KB process.
3.6.2 Fragmentation
Processes free their memory space after completing execution, and if that memory space is not big enough for other processes, the problem of fragmentation occurs. The cause of fragmentation is that, because of dynamic allocation of processes, the free space is sometimes insufficient for a particular process. There are two types of fragmentation: -
Internal Fragmentation
3.7 PAGING
Figure 3.10
The displacement or offset uses 11 bits of the memory address and the page number uses the first 5 bits. The value of the page number ranges from 0 to 31 (2^5 - 1), i.e. 32 pages, and the range of the displacement is from 0 to 2047 (2^11 - 1). According to this, there are 32 pages, each of 2048 locations. The relocation problem is simply solved by the page table, because it contains each process's page number and its corresponding frame number. Figure 3.11 shows the overall paging model of address translation.
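A small sketch matching the split above (illustrative only, not from the original text): a 16-bit logical address is divided into a 5-bit page number and an 11-bit offset with a shift and a mask.

#include <stdio.h>

#define OFFSET_BITS 11
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)        /* 0x7FF = 2047 */

int main(void)
{
    unsigned int logical = 0x1A2B & 0xFFFF;          /* any 16-bit address */

    unsigned int page_number = logical >> OFFSET_BITS;  /* top 5 bits      */
    unsigned int offset      = logical & OFFSET_MASK;   /* bottom 11 bits  */

    printf("address %u -> page %u, offset %u\n", logical, page_number, offset);
    return 0;
}

The page number indexes the page table to find the frame, and the offset is kept unchanged within the frame.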
• The next method uses a page-table base register (PTBR). The page table is stored in main memory and the PTBR points to it, so switching page tables requires changing only this one register. However, every memory reference now needs an additional access to the page table, which increases access time and can slow the system.
FIGURE 3.12
3.7.4 Shared Pages
The paging method supports the possibility of sharing common program code, which can be helpful in a time-sharing environment. Such code is called re-entrant code, which means it never changes during execution (no write operation modifies it). For example, if 15 users execute a text editor, only one copy of the text editor needs to be stored in main memory to support all 15 users.
The following are the techniques for structuring the page table.
3.8.1 Hierarchical Paging
3.8.2 Hashed Page Tables
• To handle address spaces larger than 32 bits, hashed page tables are used.
• The hash value is the virtual page number. The table contains a linked list of elements hashing to the same location.
• Every element contains three fields:
o The virtual page number.
o The value of the mapped page frame.
o The pointer to the next element.
• The virtual page number is compared with those in the linked list; if a match is found, the corresponding frame is extracted.
3.8.3 Inverted Page Tables
3.9 SEGMENTATION
FIGURE 3.16
Characteristics of segmentation
• From the logical address, take out the segment number and the displacement (offset).
• To get the segment base address, use the segment number to index the segment table.
• The offset should not be greater than the segment length. If the offset is greater, an invalid-address trap occurs.
• By adding the base address and the offset, the physical address is generated.
FIGURE 3.17
Advantages
3.10 VIRTUAL MEMORY
When a process larger than physical memory is executed by loading parts of it at a time, this refers to the concept of virtual memory. It is not a physically existing memory but only a scheme. Sometimes the size of the data, programs and stacks may be bigger than physical memory, so the concept of virtual memory comes into the picture. Only the part of the program required for execution is kept in main memory, and the rest of the program is on the disk. Virtual memory is the separation of the user's logical memory from physical memory. When only a small amount of main memory is available, this separation allows a large amount of virtual memory for users or programmers, as shown in figure 3.18.
FIGURE 3.18
Advantages
• The programmer does not need to care about the physical memory,
as virtual memory makes the task much easier.
• It increases the throughput and utilization of CPU.
• It reduces external fragmentation, as process can be loaded into
arbitrary size space.
• The processes with high priority can run faster.
Disadvantages
3.11 DEMAND PAGING
During the translation it is checked whether the page is in physical memory or not. Valid (IN) and invalid (OUT) bits are attached to the page table. If the bit is set to IN, then the page is in main memory; if the bit is OUT, the page is not in primary memory but in secondary memory. When a page marked OUT is referenced, an interrupt occurs, which is called a page fault.
Page Frame Map Table (PFMT) (an additional data structure in virtual
memory) is used to store secondary storage addresses of the pages. The
operating system consults PFMT, when a page fault occurs.
FIGURE 3.19
3.11.1 Steps to handling a page fault
When there is a page fault, the currently executing process must stop until the page is loaded into main memory. The page fault handler, which is a module of the operating system, gets the page number and the page-table address of the missing page in order to bring it into main memory from secondary storage. Here are the steps followed by the operating system to handle a page fault:
3.12 COPY-ON-WRITE
3.13 PAGE REPLACEMENT
If there are multiple processes executing in main memory and a page fault occurs, and no free page frame is available, then the operating system has to remove a currently loaded page so that the new page can be loaded. This technique is called page replacement. Which page should be replaced from memory? This is decided by one of the page replacement algorithms.
3.13.1 FIFO Page Replacement Algorithm
The first-in, first-out (FIFO) algorithm removes the page from memory which has been resident for the longest time. Its performance is often poor, because some pages are in constant use throughout the life of the process. A FIFO queue is used to hold the pages in memory. A page is loaded at the rear and replaced at the front of the queue. This algorithm sometimes exhibits Belady's anomaly, where the number of page faults increases even though the number of allocated frames increases.
Example 3.1
Consider three frames with the following reference string:
2 1 2 3 4 2 5 1 6 3
The following figure shows how the FIFO algorithm works:
FIGURE 3.20
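A small simulation sketch of the FIFO policy (illustrative only; it uses the reference string of Example 3.1 with 3 frames and simply counts the page faults):

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int refs[] = {2, 1, 2, 3, 4, 2, 5, 1, 6, 3};
    int n = sizeof refs / sizeof refs[0];

    int frame[FRAMES];
    int next = 0;                 /* index of the oldest frame (FIFO victim) */
    int loaded = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < loaded; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }

        if (!hit) {                               /* page fault */
            faults++;
            if (loaded < FRAMES) {
                frame[loaded++] = refs[i];        /* a free frame is available */
            } else {
                frame[next] = refs[i];            /* replace the oldest page   */
                next = (next + 1) % FRAMES;
            }
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}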
3.13.2 Optimal Algorithm
This algorithm replaces the page that will not be used for the longest period of time. The page fault rate is lowest with this algorithm. Belady's anomaly never affects it, but it needs to know the reference string in advance, so it is difficult to implement.
Example 3.2
Consider three frames with the following reference string:
7 1 4 3 4 2 5 1 3 6
The following figure shows how the Optimal algorithm works:
FIGURE 3.21
3.13.3 LRU Page Replacement Algorithm
The least recently used (LRU) algorithm replaces the page that has not been used for the longest period of time. It can be implemented using a stack: the most recently used page is always at the top of the stack and the least recently used page at the bottom. It is best implemented with a doubly linked list with a head and a tail.
Example 3.3
Consider three frames with the following reference string:
2 1 2 3 2 4 2 6
The following figure shows how the LRU algorithm works:
FIGURE 3.22
FIGURE 3.23
3.13.5 Clock Page Replacement Algorithm
In the second chance page replacement algorithm, pages keep moving around on the list. To avoid this, the clock page replacement algorithm maintains a circular list with a pointer to the current page.
Example 3.4
FIGURE 3.24
3.14 ALLOCATION OF FRAMES
3.15 THRASHING
Sometimes memory is full of pages and no free space is left for newly required pages, so page faults occur. Even after swapping pages, page faults can keep occurring because a swapped-out page is required for execution. This situation, where the system is continuously processing page faults instead of executing instructions, is called thrashing.
Thrashing occurs where many processes are running at the same time, as in multitasking and multiprogramming operating systems. It lowers the performance of the system.
3.15.1 Cause of Thrashing
In multiprogramming, many processes are introduced to the system so that CPU utilization can be increased. Now many processes need pages in memory, and sometimes there is no free space in memory, so the system continuously swaps pages in and out of memory. This leads to more page faults. When the CPU scheduler tries to increase the degree of multiprogramming further, the utilization of the CPU decreases, as shown in the figure. This situation leads to thrashing, where the system continuously works on page faults.
3.15.2 Methods to Handle Thrashing
The following are some methods to handle thrashing:
Local replacement algorithm: - With a local replacement algorithm, a process replaces only pages that belong to itself and does not take frames from another process, so other processes can execute properly.
FIGURE 3.25
Allocating frames to a process as required: - If a process has as many frames as it requires, then thrashing can be prevented. There are several techniques to do this; the working-set model is one of them. It is based on locality: the working set is the collection of pages referenced by a process in a recent interval of time.
3.16 SUMMARY
Cache: -The CPU can access the registers in one cycle of CPU clock, but
in case of main memory it takes many cycles of the CPU clock because
main memory can be accessed via memory bus. To overcome this problem
a fast memory called cache, is added between main memory and CPU.
Logical Address Space: - The set of all logical addresses generated by a program is called the logical address space.
Physical Address Space: - The set of physical addresses corresponding to the logical addresses of a program is called the physical address space of that program.
Page Fault: - If the bit is set to IN, then the page is in the main memory
and if bit is OUT the page is not in the primary memory rather it is in
secondary memory. Due to the OUT bit, an interrupt occurs which is
called page fault.
Thrashing: - Sometimes memory is full of pages and no free space is left for newly required pages, so page faults occur. Even after swapping pages, page faults can keep occurring because a swapped-out page is required for execution. This situation, where the system is continuously processing page faults instead of executing instructions, is called thrashing.
3.17 EXERCISES
4
MASS-STORAGE STRUCTURE
Unit Structure
4.0 Objectives
4.1 Overview of Mass Storage Structure
4.1.1 Magnetic Disks
4.1.2 Magnetic Tapes
4.2 Disk Structure
4.3 Disk Attachment
4.3.1 Host-Attached Storage
4.3.2 Network-Attached Storage
4.3.3 Storage-Area Network
4.4 Disk Scheduling
4.5 Disk Scheduling Algorithms
4.5.1 First-Come, First-Served Scheduling
4.5.2 Shortest-Seek Time First Scheduling
4.5.3 SCAN Scheduling
4.5.4 Circular SCAN Scheduling
4.5.5 LOOK Scheduling
4.5.6 Circular LOOK Scheduling
4.5.7 Selection of Disk Scheduling Algorithms
4.6 Disk Management
4.6.1 Disk Formatting
4.6.2 Boot Block
4.6.3 Bad Block
4.7 Summary
4.8 Exercises
4.0 OBJECTIVES
4.1 MASS-STORAGE STRUCTURE: OVERVIEW
Main memory is the temporary storage in the computer but mass storage
devices retain the data even when system is switched off. Mass storage
devices are used to store large amount of data permanently. These devices
are also called secondary storage or auxiliary storage. Let us discuss the
physical structure of basic secondary storage devices.
4.1.1 Magnetic Disks
Magnetic disk is a secondary storage device. The shape of magnetic disk
is like a CD. The flat area of disk is covered with magnetic material.
Magnetic disks can store large amount of data and its price is low as
compared to primary memory. The disk has many platters of circular
shape. Average range of platter in diameters is from 1.8 to 5.25 inches.
The information is stored magnetically on platters. Every platter has its
read-write head. Platters are logically divided into tracks and sectors. In
this type of storage data can be deleted or modified. Magnetic disk also
allows to access data randomly but its access rate is slower than main
memory.
4.1.2 Magnetic Tapes
Magnetic tape is also a secondary storage device. This memory is useful to
access sequential data. A ribbon that is coated with magnetic oxide is used
to store the data. The speed of magnetic tape memory is slow due to its
sequential access. Magnetic tapes can also store large amount of data. This
memory has the storage capacity from range 100 MB to 200 GB and the
size of ribbon varies from 4mm to 1 inch.
4.2 DISK STRUCTURE
The disk is made up of multiple platters, and platters are divided into multiple tracks, which are then divided into multiple sectors. The platters are accessed through head arms connected to the heads, and the arms can move only in two directions. All the heads move together over the platters, and each head is always located at the same track (cylinder) number. The set of all tracks at which the heads are currently located is referred to as a cylinder. The rotational speed of the disk drive is constant. A read/write operation is performed when the head is positioned at a particular location.
• Seek time: - The time needed to position the read/write head at a particular track is called the seek time. For example, if the read/write head is on track 3 and it has to be positioned at track 7, then the read/write head has to move from track 3 to track 7.
• Latency time: - The time needed to position the read/write head at the desired sector, once the head is already at the particular track, is called the latency time or rotational delay.
• Data Transfer Time: - The actual time required to transfer the data is called the data transfer time.
4.3 DISK ATTACHMENT
Disk storage can be accessed in two ways by the system. One is through
I/O port or host-attached storage and other is through remote host or
network attached storage.
4.3.1 Host-Attached Storage
This is a simple arrangement used in small systems. The storage is accessed via local I/O ports, which use several different technologies. Many systems use the I/O bus architecture called IDE; this architecture supports a maximum of two drives per bus. Some systems use more advanced architectures such as fibre channel. Devices like hard disk drives, CD, DVD and tape drives can be used as host-attached storage.
4.3.2 Network-Attached Storage (NAS)
This is a special-purpose storage device that can be accessed remotely over a network. Protocols like TCP or UDP are used for remote procedure calls, usually over the same LAN. For computers connected on a LAN as shown in figure 4.3, network-attached storage gives a convenient way to share storage space, accessed by name in the same way as host-attached storage. It is less efficient and its performance is lower compared to some directly attached storage options.
4.4 DISK SCHEDULING
To use the hardware efficiently, disk scheduling plays a major role. This scheduling is also called I/O scheduling. The purpose of disk scheduling is to reduce the access time. When there are many requests from processes for disk I/O, a process requests the operating system to access the disk. Sometimes it is difficult for the operating system to satisfy every request immediately, so the technique of disk scheduling is used. Several disk scheduling algorithms exist, so that a choice can be made among the pending requests.
FIGURE 4.6
4.5.2 SSTF (Shortest-Seek-Time-First) Scheduling
In SSTF scheduling, the request with the shortest seek distance from the current head position is served first, even if it arrived last in the queue. The disk controller selects the next request which is closest to the current location of the head. In this algorithm, the innermost and outermost tracks get poorer service than the mid-range tracks. The advantage of this scheduling is increased throughput and better mean response time. The disadvantage is starvation, as requests for the innermost and outermost tracks may have to wait a long time.
Example 4.2
FIGURE 4.7
The closest request to the head at cylinder 50 is at cylinder 40. From cylinder 40 the closest is 20, then the next closest is 70, then 118, 125 and finally 150, as shown in figure 4.7.
= 10 + 20 + 50 + 48 + 7 + 25
= 160
FIGURE 4.8
As in figure 4.8, first the head moves from 50 to 40, the closest one. Then the head moves from 40 to 20 and on to 0. From cylinder 0, the head arm reverses its direction and moves to tracks 70, 118, 125 and 150.
Let us calculate the head movements as:
| 50 - 40 | + | 40 – 20 | +|20 – 0| + | 0 – 70 | + | 70 – 118 | + | 118 – 125 | + |
125 – 150 |
= 10 + 20 + 20 + 70 + 48 + 7 + 25
= 200
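A small sketch (illustrative only, not from the original text) that computes total head movement for the request queue used in these examples, starting with the head at cylinder 50, under FCFS and SSTF ordering:

#include <stdio.h>
#include <stdlib.h>

#define N 6

int fcfs(int head, const int req[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);   /* serve requests in arrival order */
        head = req[i];
    }
    return total;
}

int sstf(int head, const int req[], int n)
{
    int pending[N], total = 0;
    for (int i = 0; i < n; i++) pending[i] = req[i];

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* pick the closest pending request */
            if (pending[i] >= 0 &&
                (best < 0 || abs(pending[i] - head) < abs(pending[best] - head)))
                best = i;
        total += abs(pending[best] - head);
        head = pending[best];
        pending[best] = -1;            /* mark this request as served */
    }
    return total;
}

int main(void)
{
    int queue[N] = {70, 125, 40, 150, 20, 118};
    printf("FCFS total head movement: %d\n", fcfs(50, queue, N));
    printf("SSTF total head movement: %d\n", sstf(50, queue, N));
    return 0;
}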
4.5.4 C-SCAN Scheduling
C-SCAN, or Circular SCAN, is an improvement on the SCAN scheduling algorithm. In C-SCAN, the head moves from its current position in one direction, servicing requests until it reaches the last track; then it immediately returns to the first track of the disk without servicing any requests on the return trip.
Example 4.4
Disk Queue with requests for I/O on cylinders is in following order:
70, 125, 40, 150, 20,118
FIGURE 4.9
In this algorithm, the head moves from track 50 to 70, then to 118, 125, 150 and the outermost track 199. After reaching the end of the disk it returns to the start of the disk at track 0, then moves from 0 to 20 and from 20 to 40, as shown in figure 4.9.
In LOOK scheduling, the head arm moves in one direction, servicing every request until no more requests are pending in that direction, and then it reverses direction. LOOK is a modified version of SCAN scheduling. The difference is that SCAN moves the head all the way to the innermost and outermost cylinders, whereas LOOK only goes as far as the last request in each direction.
Example 4.5
FIGURE 4.10
Let us calculate the head movements as:
| 50 - 40 | + | 40 - 20 | + | 20 - 70 | + | 70 - 118 | + | 118 - 125 | + | 125 - 150 |
= 10 + 20 + 50 + 48 + 7 + 25
= 160
4.5.6 C-LOOK Scheduling
Circular LOOK scheduling is a modified version of Circular SCAN scheduling. The difference between the two is that in C-SCAN scheduling the head arm moves across the full disk from the outermost to the innermost cylinders, but in C-LOOK scheduling the head only goes as far as the last request in each direction.
Example 4.6
Disk Queue with requests for I/O on cylinders is in following order:
70, 125, 40, 150, 20,118
• Disk head is at 50.
• Cylinders are 200 numbered from 0-199
FIGURE 4.10
Let us calculate the head movements as:
• In this step, the operating system stores its own data structures of the file system on the disk.
• The data structures of the file system include information about free space, allocated space and empty directories.
4.6.2 Boot Block
4.7 SUMMARY
• FCFS Scheduling
• SSTF Scheduling
• SCAN Scheduling
• C-SCAN Scheduling
• LOOK Scheduling
• C-LOOK Scheduling
Selection of disk scheduling algorithm: - There can be different
parameters for selection of a disk scheduling algorithm. It depends on the
system and number of requests arrived.
Disk Management: - It is the responsibility of the operating system to
manage the disk. These includes boot block, bad blocks, Disk formatting.
It manages the disk as how the data will store on disk.
Access Time: - The time needed to access a particular location and transfer the data.
Seek time: - The time needed to position the read/write head at a particular track is called the seek time. For example, if the read/write head is on track 3 and it has to be positioned at track 7, then the read/write head has to move from track 3 to track 7.
Latency time: - The time needed to position the read/write head at the desired sector, once the head is already at the particular track, is called the latency time or rotational delay.
Data Transfer Time: - The actual time required to transfer the data is called the data transfer time.
4.8 EXERCISES
• FCFS
• SSTF
• SCAN
• C-SCAN
• LOOK
• C-LOOK
• FCFS
• SSTF
• SCAN
• C-SCAN
• LOOK
• C-LOOK
13. What do you mean by disk formatting?
14. Explain the Disk Structure.
15. A Disk has 200 cylinders, from 0 to 199. The head is currently at
cylinder 55. The following queue is in FIFO order:
70, 20, 130,40, 155, 10, 185
What is the total distance that the disk arm moves for each of the following algorithms?
• FCFS
• SSTF
• SCAN
• LOOK
5
FILE SYSTEM
Unit Structure
5.0 Objectives
5.1 File Management- Introduction
5.2 File Concept
5.2.1 File Naming
5.2.2 File Attributes
5.2.3 File Operations
5.2.4 File Types
5.2.5 File Structure
5.3 Access Methods
5.3.1 Sequential Access
5.3.2 Random Access
5.3.3 Indexed Sequential Access
5.4 Directory and Disk Structure
5.4.1 Single-level Directory
5.4.2 Two-Level Directory
5.4.3 Hierarchical Level Directory
5.4.4 Acyclic Graph Directory Structure
5.4.5 General Graph Directory
5.5 File-System Mounting
5.6 File Sharing
5.6.1 I-Node Linking
5.6.2 Symbolic Linking
5.7 File-System Structure
5.8 File System Implementation
5.8.1 Overview
5.9 Directory Implementation
5.9.1 Linear List
5.9.2 Hash Table
5.10 Allocation Methods
5.10.1 Contiguous Allocation
5.10.2 Linked Allocation
5.10.3 Indexed or i-node Allocation
5.11 Free - Space Management
5.11.1 Bit Vector
5.11.2 Linked List
5.11.3 Grouping
5.11.4 Counting
5.12 Summary
5.13 Exercises
5.0 OBJECTIVES
In a computer system, a large amount of data is stored and retrieved. All applications need to store and fetch information in the computer. It is essential to store data for the long term and make it sharable so that various processes can access it at any time. The size of the data is large, so appropriate storage devices must be used.
To meet these requirements, information is stored on disk in units called files. The data stored in files must be persistent, so that it is not affected by the creation and termination of processes. Processes can read and write files according to their needs. Files are handled by the operating system. Operating system design deals with how files are structured, used, accessed, named and implemented. The part of the operating system that deals with files is known as the file system.
5.2.1 File Naming
The following are some rules for naming a file; these vary from system to system:
Table 5.1
5.2.2 File Attributes
When a file is created, a lot of information about the file is recorded, such as its name, date of creation, size and so on. These items are called the file's attributes. The following are some of the file attributes, which vary from system to system (a short sketch after Table 5.2 shows how a few of them can be read programmatically):
a) File name: - A file is referred to by its name, which is chosen for the convenience of its users. It is a string of characters like "name.c". The rules for the file name depend upon the operating system.
b) File Size: - The size of the file, which can be measured in bytes, words or blocks.
c) File Type: - This attribute is needed where the system supports different file types.
d) Location: - This attribute stores the location where the file is kept in secondary storage.
e) Access Rights: - This attribute stores information about how the file can be accessed and who can access it.
f) Time, Date and identification: - This attribute stores information about the creation of the file, its last modification and its last use. This can be useful for the protection and security of files.
g) Flags: - Flags are set by the operating system for some specific properties. The following are some flag attributes:
Table 5.2
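As a minimal sketch (not from the text), several of these attributes can be read on a real system with Python's standard library; the file name used here is hypothetical:

import os
import stat
import time

info = os.stat("name.c")                    # hypothetical file name
print(info.st_size)                         # file size in bytes
print(stat.filemode(info.st_mode))          # access rights, e.g. -rw-r--r--
print(time.ctime(info.st_mtime))            # time of last modification
print(time.ctime(info.st_atime))            # time of last access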
5.2.3 File Operations
There are several operations that can be performed on files. Different operating systems provide different operations to store and fetch data or information. The following are some common file operations (a small sketch after this list illustrates several of them):
• Creating a file: - First, space is found for the file, and then an entry for the new file is created in the directory.
• Writing a file: - A write pointer keeps track of the location in the file where the next write is to take place.
• Reading a file: - A system call is used to read a file; it specifies the name of the file and where in memory the next block of the file should be placed.
• Appending a file: - This operation adds data to the end of an existing file.
• Repositioning within a file: - This is also called the seek operation. The directory is searched for the appropriate entry, and the current-file-position pointer is set to a given value.
• Renaming: - This is used to rename the file.
• Deleting a file: - This system call is used to erase the file's directory entry and delete the file.
• Truncating a file: - This is used to remove the contents of a file without deleting its attributes.
• Copying the file: - This operation creates a copy of a file by reading from the old file and writing to the new one.
• Open: - The purpose of this operation is to fetch the file's attributes and its list of disk addresses into main memory. A process must open a file before using it.
• Close: - To free up internal table space, a file should be closed after the process finishes all its accesses.
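The following is a minimal sketch (not part of the text) of these operations using Python's standard library; the file names are hypothetical:

import os
import shutil

with open("notes.txt", "w") as f:            # create the file and write to it
    f.write("first line\n")

with open("notes.txt", "a") as f:            # append data to the end of the file
    f.write("appended line\n")

with open("notes.txt", "r") as f:            # read the file
    print(f.read())
    f.seek(0)                                # reposition (seek) within the file
    print(f.readline())

shutil.copy("notes.txt", "copy.txt")         # copy the file
os.rename("copy.txt", "renamed.txt")         # rename the file
os.truncate("renamed.txt", 0)                # truncate: remove contents, keep attributes
os.remove("renamed.txt")                     # delete the file
os.remove("notes.txt")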
5.2.4 File Types
When designing an operating system, it should be considered whether the operating system can recognize and support file types, so that it can operate on files in different ways. The file-name extension is used to indicate the file type and the operations that can be executed on that file. The following are some common file types:
FIGURE 5.1
b) Record Sequence: In this structure, a file is a sequence of fixed-length records, each with some internal structure, as shown in figure 5.2. The read operation returns one whole record and the write operation appends one whole record. The CP/M operating system supports this structure.
FIGURE 5.2
FIGURE 5.3
5.3 ACCESS METHODS
FIGURE 5.4
5.3.3 Indexed Sequential Access
This method combines the advantages of both the direct access and sequential access methods. In this method, records are stored sequentially, but they can also be accessed randomly through an index. The index contains pointers to the records. To search for a record, the index is searched first and then the pointer is used to access that particular record.
Secondary storage devices that support the indexed sequential access method are magnetic disks and magnetic drums.
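A minimal sketch (not from the text) of the idea, assuming fixed-length records and an in-memory index that maps a record key to its byte offset in the data file:

RECORD_SIZE = 32                             # hypothetical fixed record length in bytes

def read_record(data_file, index, key):
    # 1. search the index, 2. follow the pointer, 3. read exactly one record
    offset = index[key]
    data_file.seek(offset)
    return data_file.read(RECORD_SIZE)

# Example usage with a hypothetical data file and index:
# index = {"A001": 0, "A002": 32, "A003": 64}
# with open("records.dat", "rb") as f:
#     print(read_record(f, index, "A002"))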
5.4 DISK AND DIRECTORY STRUCTURE
FIGURE 5.5
b) A disk can have several partitions.
Sometimes there are several partitions within one disk. Each partition can be treated as an individual storage device, as shown in figure 5.6.
FIGURE 5.6
FIGURE 5.7
The second part of this section deals with directories. Directories contain information about all the files, such as name, size, location and type. A directory can be thought of as a symbol table. Let us discuss the methods for defining the logical structure of a directory.
The following are the levels of Directory Structure:
5.4.1 Single-Level Directory: It is the simplest directory structure. All the files are kept in the same directory, as shown in figure 5.8.
Advantages
• Easy to understand.
• Easy to support.
Disadvantages
FIGURE 5.8
5.4.2 Two-Level Directory: In this structure each user has a separate directory for his own files, as shown in figure 5.9.
FIGURE 5.9
Disadvantages
• When users want to access one another's files, they cannot, because the users' directories are separate from each other.
• There is no logical grouping other than by user.
5.4.3 Hierarchical structure: A hierarchical directory structure is also called a tree-structured directory. This structure allows users to create their own subdirectories and organize their files accordingly. It has a root directory, and every file has a unique path name. A directory contains files and subdirectories, as shown in figure 5.10. The internal format of all directories is the same; a bit in each entry defines the entry as a file (0) or as a subdirectory (1). Directories can be created and deleted through system calls. An example is the UNIX file system.
FIGURE 5.10
Advantages
Disadvantages
5.4.4 Acyclic Graph Directory Structure: An acyclic-graph directory allows directories to share subdirectories and files, as shown in figure 5.11.
Advantages
• It is more flexible.
• A common subdirectory can be shared.
Disadvantages
FIGURE 5.11
5.4.5 General Graph Directory: In a general graph directory structure, a graph can be formed by adding links to an existing tree-structured directory, as shown in figure 5.12.
FIGURE 5.12
Advantages
• It is more flexible.
Disadvantages
5.6 FILE SHARING
When users work together, they need to share files. It is convenient for a shared file to appear at the same time in different directories belonging to different users, as shown in figure 5.13.
FIGURE 5.13
Issues due to sharing files
A user can create a link between a directory and the shared file by copying the disk addresses of its records into that directory. But if either directory then adds new records to the file, the new records will be listed only in that particular directory; the change will not be visible to the other user, which defeats the purpose of sharing.
The following are two solutions to this problem.
5.6.1 i-node Linking
In index-node (i-node) linking, the directory entry lists the file's i-node instead of its disk blocks; the directories then point to the i-node itself. This technique is used in UNIX.
Drawbacks
• If another directory links to the shared file, the i-node still records the original owner. Ownership is not changed by creating the link; only the link count in the i-node is increased, as shown in figure 5.14.
FIGURE 5.14
• If one directory removes the file and clears the i-node, then the other directory will be left pointing to an invalid i-node.
5.6.2 Symbolic Linking
In this approach, when one directory links to a file in another directory, a new file of type LINK is created. The new file contains only the path name of the file to which it is linked. When any process reads the linked file, the OS sees that it is of type LINK, follows the stored path name, and reads the target file.
Advantages
• Only the true owner of the file keeps a pointer to its i-node, so removing the original file does not leave other directories pointing to an invalid i-node.
• A symbolic link can also refer to a file on another file system or even on another machine.
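On a UNIX-like system both kinds of link can be created with the standard library; the following is a minimal sketch (not from the text) with hypothetical file names:

import os

# i-node (hard) link: a second directory entry pointing to the same i-node;
# the i-node's link count increases.
os.link("shared.txt", "hardlink.txt")        # "shared.txt" must already exist
print(os.stat("hardlink.txt").st_nlink)      # e.g. 2

# Symbolic link: a new small file of type LINK that stores only the path name.
os.symlink("shared.txt", "symlink.txt")
print(os.readlink("symlink.txt"))            # prints "shared.txt"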
5.7 FILE-SYSTEM STRUCTURE
The operating system uses a layered file structure for the tasks in the file system. Every layer has its own responsibility for some particular task, for example storing the data, locating the data or retrieving the data. Figure 5.15 shows the basic layered file structure.
• The application program is the first layer. When this layer asks for a file, the request is first sent to the logical file system.
• The logical file system holds the metadata. If the application program does not have permission, this layer returns an error. This layer also verifies the path.
FIGURE 5.15
• In the file-organization module the mapping is done: logical blocks are mapped to physical blocks.
• The basic file system takes the information from the file-organization module and issues commands to the I/O control layer to fetch the blocks.
• The I/O control layer contains the code (the device drivers) used to access the hard disk. This layer also handles interrupts.
5.8 FILE SYSTEM IMPLEMENTATION
5.8.1 Overview
The data structures stored on disk by the file system (the on-disk structures) are shown in figure 5.16.
FIGURE 5.16
5.9 DIRECTORY IMPLEMENTATION
5.10 ALLOCATION METHODS
The allocation methods are used so that files can be accessed easily and disk space can be utilized effectively. The following are the three methods of allocating files on disk:
5.10.1 Contiguous Allocation
• In the contiguous allocation method, all the blocks of a file are kept contiguously on the disk.
• Access is fast because the disk head requires little or no movement.
• The file is defined by the disk address of its first block and its length in blocks (see the sketch after this list).
• A problem with contiguous allocation occurs when a file has to grow or the size of the file is not known beforehand.
• So, there can be a problem of external and internal fragmentation.
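A minimal sketch (not from the text) of the logical-to-physical mapping under contiguous allocation, where a file is described by the disk address of its first block and its length in blocks:

def physical_block(start, length, logical):
    # Map a logical block number of the file to a physical disk block.
    if logical >= length:
        raise IndexError("logical block number is outside the file")
    return start + logical

# A file starting at block 14 and 3 blocks long occupies blocks 14, 15 and 16.
print(physical_block(14, 3, 0))   # 14
print(physical_block(14, 3, 2))   # 16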
5.10.3 Indexed or i-node Allocation
• It brings all the pointers to a file's blocks together into one index block, through which the file is accessed.
• A whole index block is given to each file, so some space is wasted.
• Linked scheme: one disk block is used as the index block, and a single disk operation is enough to read or write it. The index block contains block addresses and header information, and may end with a pointer to a further index block.
5.11 FREE-SPACE MANAGEMENT
5.11.1 Bit Vector
• It is a simple approach: each bit represents a disk block, set to 0 if the block is allocated and 1 if it is free (see the sketch after this list).
• Fast algorithms are used for quickly finding runs of contiguous free blocks.
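A minimal sketch (not from the text) of a free-space bit vector, using the convention above (1 = free, 0 = allocated) and a hypothetical eight-block disk:

bitmap = [0, 0, 1, 1, 0, 1, 0, 1]             # one bit per disk block

def first_free_block(bitmap):
    # Return the number of the first free block, or None if the disk is full.
    for block, bit in enumerate(bitmap):
        if bit == 1:
            return block
    return None

print(first_free_block(bitmap))               # 2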
5.11.2 Linked List
• The free list stores the addresses of, say, k free blocks in the first free block.
• The first k-1 of these blocks are actually free, and the last one contains the addresses of another k free blocks.
• The addresses of many free blocks can thus be found easily.
5.11.4 Counting
In the counting approach, rather than keeping the address of every free block, the address of the first free block and the count of contiguous free blocks that follow it are kept. The free-space list then becomes a list of (address, count) pairs.
5.12 SUMMARY
Files: - All applications need to store and fetch information in the computer. It is essential to store data for the long term and to make it sharable, so that various processes can access it at any time. Since the amount of data is large, appropriate storage devices must be used. To meet these requirements, information is stored on disk in units called files.
File System: - Files are handled by the operating system. Operating-system design deals with how files are structured, used, accessed, named and implemented. The part of the operating system that deals with files is known as the file system.
Mounting: - The operating system makes the directories and files on a storage device available, so that users can access that data through the file system. This process is called file-system mounting.
Unmounting: - When the operating system prepares the storage device for safe removal and disconnects all user access to the directories and files on the mount point, the device is said to be unmounted.
File's Attributes: - When a file is created, a lot of information about the file is recorded, such as its name, date of creation, size and so on. These items are called the file's attributes.
5.13 EXERCISES