
2.2. Operating Systems
2.2.1. Introduction
An Operating System (OS) acts as an interface connecting a computer user with the hardware of
the computer. An operating system falls under the category of system software that performs all
the fundamental tasks like file management, memory handling, process management, handling
the input/output, and governing and managing the peripheral devices like disk drives, networking
hardware, printers, etc.

Some well-known operating systems are Linux, Windows, OS X, Solaris, and Chrome OS.

What is an operating system?

An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware: a set of programs that coordinates all activities among computer hardware resources, serves as an interface between the user and the hardware, and controls the execution of all kinds of programs.

In general, the operating system is the manager of the computer system.

An operating system provides two important things:

Abstraction

 Hides the details of different hardware configurations

 Applications do not need to be tailored for each possible device that might be present on a system

Arbitration

 Manages access to shared hardware resources

 Enables multiple applications to share the same hardware simultaneously

Operating System goals

 Execute user programs and make user problems easier

 Make the computer system convenient to use

Page 119 of 248


 Use the computer hardware in an efficient manner

The layers in systems

 Applications

 Operating system

 Organization – memory, decode unit

 VLSI- logical gates (OR, AND, NOT)

 Transistors – CMOS transistors

Figure: System layers

The Three Elements of an OS

 User Interface
o The part of the OS that you interface with.
 Kernel
o The core of the OS. Interacts with the BIOS (at one end), and the UI (at the other
end).
 File Management System
o Organizes and manages files.

2.2.2. Basic elements of computer system

At the top level of any computer architecture, a computer consists of a processor, memory, and some I/O components, with one or more of each type. These components are interconnected in a way that achieves the major function of the computer, which is to execute programs. So there are four key structural elements of any computer. These are:

 Processor: Controls the operation of the computer and carries out its data processing functions. When there is only one processor, it is commonly termed the central processing unit (CPU), which you must be familiar with.



 Main memory: Stores data and programs. This memory is typically volatile and is also called primary memory: when the computer is shut down, its contents are lost. In contrast, the contents of disk memory are retained even when the computer system is turned off. Main memory is also termed real memory.
 I/O modules: Move data between the computer and its external environment. The external environment comprises a variety of devices, including secondary memory devices (e.g., pen drives, CDs), communications equipment (such as LAN cables), terminals, etc.
 System bus: Provides communication among processors, main memory, and I/O modules.

2.2.3. Operating system Services

An operating system supplies different kinds of services to both users and programs. It provides application programs an environment in which to execute, and it provides users services to run various programs in a convenient manner.

Here is a list of common services offered by almost all operating systems:

 User Interface
 Program Execution
 File system manipulation
 Input / Output Operations
 Communication
 Resource Allocation
 Error Detection
 Accounting
 Security and protection

This chapter gives a brief description of the services an operating system usually provides to users and to the programs running within it.



 User Interface of the operating system

The user interface of an operating system usually comes in one of three forms:

 Command line interface
 Batch interface
 Graphical user interface

Let's look briefly at each of them.

The command line interface (CLI) deals with text commands and a technique for entering them. In the batch interface (BI), commands and directives that manage those commands are entered into files, and those files are executed. The graphical user interface (GUI) is a window system with a pointing device (like a mouse or trackball) to point at I/O, choose from menus, and make selections from lists, plus a keyboard to enter text.

 Program Execution in Operating System

The operating system must be able to load a program into memory and execute it. Furthermore, the program must be able to end its execution, either normally or abnormally (forcefully).

 File system manipulation

Programs need to read and write files and directories. The file handling portion of the operating system also allows users to create and delete files by name (along with extension), search for a given file, and list file information. Some systems include permissions management to allow or deny access to files or directories based on file ownership.

 Input / Output Operations

A program that is currently executing may require I/O involving a file or some other I/O device. For efficiency and protection, users cannot govern I/O devices directly. So the OS provides a means to perform I/O, that is, read or write operations on a file or device.



 Communication

Processes need to exchange information with other processes. Processes executing on the same computer system or on different computer systems can communicate using operating system support. Communication between two processes can be done using shared memory or via message passing.

 Resource Allocation

When multiple jobs run concurrently, resources must be allocated to each of them. Resources can be CPU cycles, main memory, file storage, and I/O devices. CPU scheduling routines are used here to determine how best the CPU can be used.

 Error Detection

Errors may occur in the CPU, in memory hardware, in I/O devices, and in user programs. For each type of error, the OS takes appropriate action to ensure correct and consistent computing.

 Accounting

This service of the operating system keeps track of which users use how much and what kinds of computer resources, for billing or simply to accumulate usage statistics.

 Security and protection

Protection involves ensuring that all access to system resources is controlled. To make a system secure, users must authenticate themselves to the system before use (usually via a login ID and password).



2.2.4. Operating system categories

Category: Desktop
 Windows
 OS X
 UNIX
 Linux
 Chrome OS

Category: Server
 Windows Server
 Mac OS X Server
 UNIX
 Linux

Category: Mobile
 Google Android
 Apple iOS
 Windows Phone



Desktop operating system

A desktop operating system is a complete operating system that works on desktops, laptops, and some tablets.

The Macintosh operating system has earned a reputation for its ease of use; its latest version is OS X. Chrome OS is a Linux-based operating system designed to work primarily with web apps.



Server Operating System

Mobile Operating Systems

The operating system on mobile devices and many consumer electronics is called a mobile
operating system and resides on firmware.

Android is an open source, Linux-based mobile operating system designed by Google for
smartphones and tablets.



iOS, developed by Apple, is a proprietary mobile operating system specifically made for Apple's mobile devices.

Windows Phone, developed by Microsoft, is a proprietary mobile operating system that runs on
some smartphones.



2.2.5. Operating System Types

1. Single-user, single-tasking

This type manages the computer so that one user can effectively do one thing at a time.

Example: DOS

2. Single-user, multitasking

 A single user can perform many tasks at a time

Example: Windows

3. Multi-user, multi-tasking

Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.

Example: Unix, Linux

4. Real-time operating system (RTOS)

 Often used as a control device in a dedicated application, such as:

 controlling scientific experiments
 medical imaging systems
 industrial control systems
 some display systems
 Well-defined fixed-time constraints.
 Real-time systems may have either hard or soft real-time constraints.

Example: QNX, VxWorks, RTLinux



5. Embedded system

Typical devices:

 Personal Digital Assistants (PDAs)
 Cellular telephones

Issues:

 Limited memory
 Slow processors
 Small display screens
 Most features of typical OSs are usually left out, shifting the burden to the developer.
 Emphasis is on I/O operations.
 Memory management and protection features are usually absent.

Example: Contiki OS

Distributed Operating Systems

 Network operating system
o Runs on one computer but allows its processes to access remote resources
 Distributed operating system
o A single OS manages resources on more than one computer

Characteristics of Modern Operating Systems

 microkernel architecture
 multithreading
 symmetric multiprocessing
 distributed operating systems
 object-oriented design

2.2.6. Process Management

In the early days, computer systems allowed only one program to be executed at a time. That program had complete control of the system and access to all or most of the system's resources. In contrast, current computer systems allow multiple programs to be loaded into memory and executed concurrently. This major change required firmer control and more compartmentalization among programs.

The more complex the operating system is, the more it is expected to do on behalf of its users. Even though its main concern is the execution of user programs, it also needs to take care of various system tasks that are better left outside the kernel itself. So a system consists of a set of processes: operating system processes executing system code, and user processes executing user code. In this chapter, you will learn about the processes used and managed by the operating system.

What is a Process?

A process is mainly a program in execution, where the execution must progress in sequential order or based on some priority or algorithm. In other words, it is an entity that represents the fundamental unit of work assigned to a system.

When a program is loaded into memory, it becomes a process. A process in memory can be divided into four sections:

 Heap
 Stack
 Data
 Text

Process Concept

A question that arises when discussing operating systems is what to call all the activities of the CPU. Even on a single-user operating system like Microsoft Windows, a user may be able to run several programs at one time: a word processor, web browsers, and an e-mail client. Even when the user can execute only one program at a time, the operating system may need to maintain its internal programmed activities, like memory management. In these respects, all such activities are similar, so we call all of them 'processes.'



The terms "job" and "process" are used almost interchangeably. Much of operating-system theory and terminology was developed during a time when the major activity of operating systems was job processing, so the term job became established. It would be confusing to avoid commonly accepted terms that include the word job, such as 'job scheduling.'

Process state of Operating System

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:

 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur (such as I/O completion or receipt of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
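The legal transitions among these five states can be sketched as a small table. This is an illustrative model only; the function and table names are hypothetical, and the transition set follows the textbook five-state model described above:

```python
# Hypothetical transition table for the five-state process model.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the scheduler
    "ready": {"running"},                           # dispatched to the CPU
    "running": {"ready", "waiting", "terminated"},  # preempted / blocks on I/O / exits
    "waiting": {"ready"},                           # the awaited event occurs
    "terminated": set(),                            # no further transitions
}

def can_move(src, dst):
    """Return True if a process may move directly from state src to dst."""
    return dst in TRANSITIONS[src]

print(can_move("running", "waiting"))  # True: process blocks on an I/O request
print(can_move("waiting", "running"))  # False: it must pass through "ready" first
```

Note that a waiting process never goes straight back to running; the scheduler always dispatches from the ready queue.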

Operating System Thread

The process model discussed so far assumed that a process is an executing program with a single thread of control. Most modern operating systems now provide features enabling a process to contain multiple threads of control. This section covers concepts associated with multithreaded computer systems, the issues related to multithreaded programming and their effect on the design of operating systems, and how operating systems such as Windows XP and Linux maintain threads at the kernel level.

A thread is a stream of execution within the process code, having its own program counter that keeps track of which instruction to execute next and its own registers that hold its current working variables. Threads are also termed lightweight processes. Threads provide parallelism, which is a way to improve application performance.
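As a minimal sketch of multiple threads of control sharing one address space, the following uses Python's threading module. The `download` and `render` functions are invented stand-ins for real work, not actual browser code:

```python
import threading

results = {}  # shared by all threads of this process

def download(name):
    # Stand-in for network I/O; a real browser thread would fetch data here.
    results[name] = f"data for {name}"

def render(name):
    # Stand-in for UI work running concurrently with the download.
    results[name] = f"rendered {name}"

# Two threads of control inside one process, sharing the `results` dict.
t1 = threading.Thread(target=download, args=("page",))
t2 = threading.Thread(target=render, args=("ui",))
t1.start()
t2.start()
t1.join()
t2.join()

print(sorted(results))  # ['page', 'ui']: both threads wrote into the shared space
```

Because both threads live in the same address space, no explicit IPC is needed to share `results`; that is exactly the resource-sharing advantage discussed below.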

Major Types of Threads



Consider a web browser that may have one thread to display images or text while another thread retrieves data from the network. Another example is a word processor that may have one thread displaying the UI or graphics, another thread responding to keystrokes received from the user, and a third performing spelling and grammar checking in the background. In some cases, a single application may be required to perform several similar tasks.

Advantages / Benefits of Threads in Operating System

The advantages of multithreaded programming fall under four major headings:

 Responsiveness: Multithreading may allow an interactive application to continue running even when part of it is blocked or performing a lengthy operation, which increases responsiveness to the user.
 Resource sharing: Threads share the memory and resources of the process to which they belong. The advantage of sharing code is that it allows an application to have several different threads of activity within the same address space.
 Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
 Utilization of multiprocessor architectures: The benefits of multithreading are greatly amplified in a multiprocessor architecture, where threads may run in parallel on different processors.
Multithreading Models

A relationship must exist between user threads and kernel threads. Here are the three common ways of establishing this relationship:

 Many-to-One Model: Maps many user-level threads to a single kernel thread.
 One-to-One Model: Maps each user thread to a kernel thread, providing more concurrency than the many-to-one model.



 Many-to-Many Model: Many user-level threads are mapped to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.

2.2.7. Operating System Scheduling Techniques


CPU scheduling is the foundation of multiprogrammed operating systems (OSs). By switching the CPU among processes, the operating system can make the computer and its processing power more productive. This section introduces the basic CPU-scheduling concepts.

What is CPU / Process Scheduling?

CPU scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process, based on particular strategies.

Reason Behind the Use of CPU Scheduling

In a single-processor system, only one job can be processed at a time; the rest must wait until the CPU is free and can be rescheduled. The aim of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is simple: a process executes until it must wait, typically for the completion of some I/O request.

In a simple operating system, the CPU then just sits idle. All this waiting time is wasted; no useful work is performed. With multiprogramming, you can use this time to process other jobs productively.

CPU-I/O Burst Cycle

The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait, and processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.



CPU Schedulers

Whenever the CPU becomes idle, the operating system (OS) must select one of the processes in the ready queue for execution. The selection is performed by the short-term scheduler (also known as the CPU scheduler). The scheduler picks a process from the processes in memory that are ready to execute and allocates the CPU to that process.

Preemptive Scheduling

CPU scheduling choices may take place under the following four conditions:

 When a process toggles from the running state to its waiting state

 When a process toggles from the running state to its ready state (an example can be when
an interrupt occurs)

 When a process toggles from the waiting state to its ready state (for example, at the
completion of Input / Output)

 When a process terminates (for example, when execution ends)

Scheduling Algorithms of Operating System

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Several different CPU scheduling algorithms are used within operating systems; this section introduces some of them.

The most common CPU scheduling algorithms:

1. First Come First Serve (FCFS) scheduling
2. Shortest Job First (SJF) scheduling
3. Priority scheduling
4. Round Robin (RR) scheduling
5. Multilevel Queue scheduling

Terminology

Arrival Time (AT)

 The time when the process enters the ready queue

Burst Time (BT)

 The CPU time the process actually requires to complete

Completion Time (CT)

 The time at which the process finishes execution

Turnaround Time (TAT)

 TAT = CT − AT

Waiting Time (WT)

 WT = TAT − BT

1. First- Come, First-Served (FCFS) Scheduling

The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling


algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When a process
enters the ready queue, its PCB is linked onto the tail of the queue. The code for FCFS
scheduling is simple to write and understand.

On the negative side, the average waiting time under the FCFS policy is often quite long.



Problems of FCFS
 It is a non-preemptive algorithm
 Improper process scheduling
 Parallel resource utilization is not possible, which leads to the convoy effect and hence poor resource utilization
Convoy effect
 A situation in which the whole OS slows down due to a few slow processes
FCFS Scheduling Exercise
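As an exercise aid, here is a small sketch that computes FCFS waiting times, assuming all processes arrive at time 0 in the order given; the burst times 24, 3, 3 are the classic illustration of the convoy effect:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process when served strictly in arrival order
    (all processes assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a process waits for everything ahead of it
        elapsed += burst
    return waits

# A long job arriving first makes every short job behind it wait.
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

Reordering the same jobs as [3, 3, 24] drops the average waiting time to 3.0, which is why FCFS alone gives poor results when bursts vary widely.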



2. Shortest Job First scheduling

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of the process‘s next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

Shortest Job First scheduling – non preemptive
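A minimal sketch of non-preemptive SJF, assuming all processes arrive at time 0 and burst times are known in advance (an idealization, since in practice burst times must be predicted):

```python
def sjf_schedule(bursts):
    """Non-preemptive SJF with all processes arriving at time 0.
    Returns (execution order, average waiting time).
    Ties are broken FCFS because Python's sort is stable."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    elapsed, waits = 0, {}
    for i in order:
        waits[i] = elapsed          # waiting time = start time here (arrival at 0)
        elapsed += bursts[i]
    return order, sum(waits.values()) / len(bursts)

order, avg = sjf_schedule([6, 8, 7, 3])   # bursts of P0..P3
print(order)  # [3, 0, 2, 1]: shortest burst first
print(avg)    # (0 + 3 + 9 + 16) / 4 = 7.0
```

FCFS on the same bursts gives an average wait of (0 + 6 + 14 + 21) / 4 = 10.25, so SJF is provably optimal for average waiting time when bursts are known.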



Advantages
 Maximum throughput
 Minimum average WT and TAT
Disadvantages
 Starvation of longer jobs
 It cannot be implemented exactly, because the burst times of processes cannot be known ahead of time
Solution: shortest job with predicted BT
Prediction techniques
Static
 Process size
 Process type
Dynamic
 Simple averaging
 Exponential averaging
3. Priority scheduling
 The SJF algorithm is a special case of the general priority-scheduling algorithm.
 A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
 Note that we discuss scheduling in terms of high priority and low priority.
There are two types of priority assignment:
1. Static
 Does not change throughout the execution of the process
2. Dynamic
 Changes at regular intervals of time
Priority scheduling can be either:
 Preemptive
 Non-preemptive
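A sketch of the non-preemptive variant follows. Here a lower number means higher priority, which is a common but not universal convention, and the process set is invented for illustration:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    `procs` maps name -> (priority, burst); lower number = higher priority.
    Ties are broken FCFS because Python's sort is stable."""
    order = sorted(procs, key=lambda p: procs[p][0])
    elapsed, waits = 0, {}
    for p in order:
        waits[p] = elapsed
        elapsed += procs[p][1]
    return order, waits

order, waits = priority_schedule({"P1": (3, 10), "P2": (1, 1), "P3": (2, 2)})
print(order)   # ['P2', 'P3', 'P1']
print(waits)   # {'P2': 0, 'P3': 1, 'P1': 3}
```

Note how P1, despite arriving with the others, runs last purely because of its low priority; with a steady stream of higher-priority arrivals it could starve, which is the classic drawback of this algorithm.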


4. Round Robin (RR) scheduling algorithm

The Round Robin algorithm is the most common of all scheduling algorithms. It uses a quantum time (time slice): the maximum CPU time a process may receive at one stretch before it is paused and the CPU moves to another process in the queue.

A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The Round Robin scheduling algorithm is preemptive. The average waiting time under the RR policy is often long.



Example 1
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum, and so on until P1 completes.

Let's calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 ≈ 5.66 milliseconds.
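The schedule above can be checked with a short simulation. Process indices 0, 1, 2 stand for P1, P2, P3, and waiting time is computed as completion time minus burst time, which is valid here because all processes arrive at time 0:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Simulate Round Robin; all processes arrive at time 0.
    Returns {process index: waiting time}."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)            # ready queue, FIFO
    clock, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])  # one quantum, or less if it finishes
        clock += run
        remaining[p] -= run
        if remaining[p]:
            queue.append(p)             # preempted: back to the tail
        else:
            finish[p] = clock
    # With arrival at time 0: waiting = completion - burst
    return {p: finish[p] - bursts[p] for p in finish}

waits = rr_waiting_times([24, 3, 3], quantum=4)   # the P1..P3 example above
print(waits)                       # {1: 4, 2: 7, 0: 6}
print(sum(waits.values()) / 3)     # 17/3 ≈ 5.67 ms, matching the hand calculation
```

Shrinking the quantum increases responsiveness but adds context switches; a very large quantum degenerates into FCFS.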

5. Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which processes are
easily classified into different groups. For example, a common division is made between
foreground (interactive) processes and background (batch) processes.

These two types of processes have different response-time requirements and so may have
different scheduling needs. In addition, foreground processes may have priority (externally
defined) over background processes.

Thread Scheduling

There is a distinction between user-level and kernel-level threads. On systems that support threads, it is threads, not processes, that are scheduled. In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP (lightweight process). This is known as process-contention scope (PCS), since the scheduling competition is within the process; it is typically done via a priority set by the programmer.

Scheduling a kernel thread onto an available CPU uses system-contention scope (SCS), where the competition is among all threads in the system.



2.2.8. Inter-process communication (IPC)
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other
processes executing in the system. Any process that does not share data with any other process is
independent. A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.

IPC methods

 File: a resource for storing information, accessible by multiple processes

 Signal: a notification mechanism used in Unix-like operating systems

 Socket: provides point-to-point, two-way communication between two processes

 Message queue: provides an asynchronous communication protocol

 Pipe: the output (stdout) of one process feeds directly into the input (stdin) of another
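As one concrete illustration of the pipe method, the following POSIX-only sketch (it relies on `os.fork`, so it will not run on Windows) sends a message from a parent process to its child through a pipe:

```python
import os

# Parent writes into a pipe; child reads from it.
read_fd, write_fd = os.pipe()
pid = os.fork()
if pid == 0:                        # child process
    os.close(write_fd)              # child only reads
    data = os.read(read_fd, 1024)   # blocks until the parent writes
    os.close(read_fd)
    os._exit(0 if data == b"hello" else 1)
else:                               # parent process
    os.close(read_fd)               # parent only writes
    os.write(write_fd, b"hello")
    os.close(write_fd)
    _, status = os.waitpid(pid, 0)
    print(os.WEXITSTATUS(status))   # 0: the child received the message intact
```

Closing the unused end in each process matters: a reader only sees end-of-file once every write descriptor for the pipe has been closed.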

Dead Locks

In a multiprogramming system, numerous processes compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state. Sometimes a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This condition is termed deadlock. In this chapter, you will learn about this issue briefly in connection with semaphores.

System Model

A system model consists of a finite number of resources to be distributed among competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Memory space, CPU cycles, directories and files, and I/O devices like keyboards, printers, and CD/DVD drives are prime examples of resource types. If a system has two CPUs, then the resource type CPU has two instances.



Under the normal mode of operation, a process may use a resource in only the following sequence:

1. Request: If the request cannot be granted immediately (for example, when another process is using the resource), the requesting process must wait until it can obtain the resource.

2. Use: The process operates on the resource (for example, if the resource is a printer, the process prints on the printer).

3. Release: The process releases the resource (for example, on terminating or exiting).

Necessary Conditions and Preventions for Deadlock

A deadlock state can occur when the following four conditions hold simultaneously within a system:

 Mutual exclusion: At least one resource must be held in a non-sharable mode; i.e., only a single process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource is released.

 Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.

 No preemption: Resources cannot be preempted; i.e., a resource can be released only voluntarily by the process holding it, after that process has completed its task.

 Circular wait: There must exist a circular chain of waiting processes, each holding a resource requested by the next. The circular-wait condition implies the hold-and-wait condition, so the four conditions are not completely independent; they are interconnected.
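One standard way to break the circular-wait condition is to impose a global ordering on locks and always acquire them in that order, so a cycle of waiting threads cannot form. A sketch using Python locks follows; ordering by `id` is an arbitrary illustrative choice, and the helper names are invented:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(first, second):
    """Acquire two locks in one global order (here, by object id).
    Every thread using the same rule cannot form a circular wait."""
    for lock in sorted((first, second), key=id):
        lock.acquire()

def release_both(first, second):
    for lock in (first, second):
        lock.release()

# Even if the caller names the locks in the "wrong" order, acquisition
# still happens in the single global order, so no cycle is possible.
acquire_in_order(lock_b, lock_a)
print(lock_a.locked(), lock_b.locked())   # True True
release_both(lock_a, lock_b)
```

Two threads that each called `acquire(lock_a)` then `acquire(lock_b)` in opposite orders could deadlock; routing every acquisition through one ordering function removes that possibility.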

Methods for Handling Deadlocks

Generally, you can deal with deadlock in one of the three ways mentioned below:



 You can employ a protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlock state.

 You can allow the system to enter a deadlock state, detect it, and then recover.

 You can ignore the issue altogether and assume that deadlocks never occur within the system.

The first option is generally recommended.

Interprocess Communication (IPC)

This section looks more closely at the working of IPC within an operating system, along with its usage.

Basics of Interprocess Communication

There are several reasons for providing an environment that allows process cooperation:

 Information sharing: Since several users may be interested in the same piece of information (for example, a shared file), you must provide an environment allowing concurrent access to that information.

 Computation speedup: If you want a particular task to run faster, you must break it into subtasks, each of which executes in parallel with the others. Note that such a speedup can be achieved only if the computer has multiple processing elements, such as CPUs or I/O channels.

 Modularity: You may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

 Convenience: Even a single user may work on many tasks at a time. For example, a user may be editing, formatting, printing, and compiling in parallel.



Cooperating processes require an interprocess communication (IPC) mechanism that allows them to exchange data and information. There are two primary models of interprocess communication:

1. shared memory, and

2. message passing.

In the shared-memory model, a region of memory shared by the cooperating processes is established. Processes can then exchange information by reading and writing data in the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

The two communications models are contrasted in the figure below:

Shared Memory

Interprocess communication (IPC) using shared memory requires the communicating processes to establish a region of shared memory. Typically, a shared-memory region resides within the address space of the process that creates the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space.



More on Inter Process Shared Memory

Note that, normally, the operating system tries to prevent one process from
accessing another process's memory. Shared memory requires that two or more processes agree to
remove this restriction. They can then exchange information by reading and writing data within
the shared areas.

The form of the data and the location are determined by these processes and are not under the
control of the operating system. The processes are also responsible for ensuring that they are not
writing to the same location simultaneously.
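As a sketch of this model, the following Python fragment uses the standard `multiprocessing.shared_memory` module (Python 3.8+). The creating process establishes the segment and a second process attaches to it by name; the segment size and the written bytes are assumptions chosen only for illustration.

```python
from multiprocessing import Process
from multiprocessing import shared_memory

def producer(name):
    # Attach to the existing shared-memory segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def demo():
    # The creating process establishes the shared region; others attach by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=producer, args=(shm.name,))
    p.start()
    p.join()
    data = bytes(shm.buf[:5])
    shm.close()
    shm.unlink()   # free the segment once all processes are done
    return data

if __name__ == "__main__":
    print(demo())  # -> b'hello'
```

Note that nothing here coordinates simultaneous writers; as the text above says, that responsibility falls on the processes themselves, not on the operating system.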

2.2.9. Memory Management


In this chapter, you will learn about a variety of ways of managing memory along with how they
work. Memory management algorithms range from a primitive bare-machine approach to
paging and segmentation strategies. Each approach has its own benefits
and drawbacks. Selection of a memory management technique for a specific system depends
largely on many factors, especially on the hardware design of the system. Many techniques
require hardware support, and recent designs have closely integrated the hardware and the
operating system.

Basic Hardware
Main memory and the registers built into the processor itself are the only storage
that the CPU can access directly. There are machine instructions that take memory
addresses as arguments, but none that take disk addresses. Therefore, any instructions in
execution, and any data being used by those instructions, must be in one of these directly
accessible storage devices. If the data are not in memory, they must be moved there before the
CPU can operate on them.

Registers that are built into the CPU are generally accessible within one cycle of the CPU clock.
Most CPUs can decode instructions and perform simple operations on register contents
at the rate of one or more operations per clock tick. The same cannot be said of main memory,
which is accessed via a transaction on the memory bus.



Address Binding

Usually, a program resides on a disk as a binary executable file. To be executed, the
program must be brought into memory and placed within a process.
Depending on the memory management in use, the process may be moved between disk and
memory during its execution. The processes on the disk that are waiting to be brought
into main memory for execution form the input queue. The normal procedure is to select
one of the processes in the input queue and to load that process into memory.

As the process is executed, it accesses instructions and data from memory.
Eventually, the process terminates, and its memory space is declared free. Most systems
allow a user process to reside in any part of physical memory. Thus, although the address space
of the computer starts at 00000, the first address of the user process need not be 00000.
This arrangement affects the addresses that the user program can use.

Normally, the binding of instructions and data onto memory addresses can be done at any of the
step given below:

 Compile time: If you know at compile time where the process will reside in memory,
absolute code can be generated.
 Load time: If it is not known at compile time where the process will reside in memory,
the compiler must generate relocatable code. In that case, final binding is delayed until
load time.
 Execution time: If the process can be moved during its execution from one memory
segment to another, binding must be delayed until run time.
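The execution-time case can be illustrated with a toy relocation-register sketch in Python. The `translate` helper and the base/limit values are hypothetical, chosen only to show how a logical address is bound to a physical one at run time.

```python
# Toy sketch of execution-time binding: hardware adds a relocation
# (base) register to every logical address, after checking it against
# a limit register; an out-of-range address traps to the OS.

def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, trapping on overflow."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("trap: address outside the process's space")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte space:
assert translate(0, base=14000, limit=3000) == 14000
assert translate(346, base=14000, limit=3000) == 14346
```

Because the binding happens on every access, the process can be moved during execution by simply changing the base register.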

Virtual Memory

In this chapter, you will gather knowledge about what virtual memory is, how it is
managed within the operating system, and how it works. Virtual memory is a
technique that allows the execution of processes that are not completely in memory. One major
benefit of this scheme is that programs can be larger than physical memory.
Also, virtual memory abstracts main memory into an extremely large, uniform array of storage,
separating logical memory as viewed by the user from physical memory. This technique
frees programmers from concern over memory-storage limitations.



Uses of Virtual Memory

Virtual memory also allows processes to share files easily and to implement shared
memory. Moreover, it provides an efficient mechanism for process creation. Virtual memory
is not easy to implement, however, and may substantially decrease
performance if it is not used carefully.

What is Virtual Address Space (VAS)?


The virtual address space of a process is defined as the logical (or virtual) view of how the
process is stored in memory. Typically, this view is that the process begins at a certain logical
address, say, address 0, and exists in contiguous memory. In fact, however,
physical memory may be organized in page frames, and the physical page
frames assigned to a process may not be contiguous. It is up to the
memory management unit (MMU) to map logical pages to physical page frames in
memory.
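A minimal sketch of the mapping the MMU performs, with an assumed 4 KB page size and a hypothetical page table, might look like this:

```python
PAGE_SIZE = 4096  # assumed page size, for illustration only

# Page table for one process: logical page number -> physical frame number.
# Note that the frames need not be contiguous even though the logical
# pages are.
page_table = {0: 5, 1: 2, 2: 9}

def mmu_translate(logical_addr):
    """Split a logical address into (page, offset); map the page to a frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]     # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

assert mmu_translate(0) == 5 * PAGE_SIZE               # page 0 -> frame 5
assert mmu_translate(PAGE_SIZE + 100) == 2 * PAGE_SIZE + 100
```

The user program sees only contiguous logical addresses; the scattered frame numbers are invisible to it.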

The Concept of Demand Paging

Consider how an executable program might be loaded from disk into memory. One
option is to load the entire program into physical memory at the time of
execution. The problem with this approach is that we may not initially need the
entire program in memory, so memory is occupied unnecessarily.

An alternative is to load pages only when they are needed. This technique is
known as demand paging and is commonly used in virtual memory systems. With
demand-paged virtual memory, pages are loaded only as they are demanded during
program execution; pages that are never accessed are never loaded into physical memory.

A demand-paging system is similar to a paging system with swapping, where processes
reside in secondary memory (typically a disk). When you want to execute a process, you swap it
into memory. Rather than swapping the entire process into memory, however, you can use a
"lazy swapper." A lazy swapper never swaps a page into memory unless that
page is required for execution.



Hardware Required for the Concept of Demand Paging

The hardware required to support demand paging is the same as that required for paging and
swapping:

 Page table: The page table can mark an entry invalid through a valid-invalid bit.
 Secondary memory: Secondary memory holds those pages that are not present in main
memory. It is usually a high-speed disk, also known as the
swap device, and the section of disk used for this purpose is known as swap space.

2.2.10. File System Interface


For most users, the file system is the most visible aspect of an operating
system. It provides the mechanism for storage of and access to both data and programs
of the operating system for all the users of the computer system.

The file system consists of two distinct parts:

 a collection of files, each storing related data, and

 a directory structure, which organizes and provides information about all the files in the
system.

In this chapter, you will learn about the different file attributes and the concept of a file and its
storage, along with operations on files.

File Attributes

A file is named for the convenience of its users and is referred to by its name. A name is usually
a string of characters, like filename.cpp, with an extension that designates the file format. Some
systems (like Linux) distinguish between uppercase and lowercase characters in names, whereas
other systems don't. When a file is named, it becomes independent of the process, the user,
and even the system that created it. For instance, one user might create the file filename.cpp,
and another user might edit that file by specifying its name. The file's owner may write the
file to a compact disc (CD), send it via e-mail, or copy it across a network, and it could still
be called filename.cpp on the destination system.



Fundamental Components of a File

A file's attributes vary from one operating system to another but typically consist of these:

 Name: The symbolic file name is the only information kept in human-readable
form.
 Identifier: This unique tag, usually a number, identifies the file within the file system; it is
the non-human-readable name for the file.
 Type: This information is needed for systems that support different types of files or
formats.
 Location: This information is a pointer to a device and to the location of the file
on that device.
 Size: The current size of the file (in bytes, words, or blocks) and possibly the
maximum allowed size are included in this attribute.
 Protection: Access-control information determines who can do the reading, writing,
executing, and so on.
 Date, time, and user identification: This information may be kept for the creation of the
file, its last modification, and its last use. These data can be useful for
protection, security, and usage monitoring.
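Many of these attributes can be inspected through the stat interface that Unix-like systems expose. The Python sketch below creates a throwaway file and prints a few of its attributes; the file suffix and contents are arbitrary assumptions, and the inode field is Unix-specific.

```python
import os
import stat
import tempfile
import time

# Inspect a file's attributes as stored by the file system. The fields
# printed below map onto the attribute list above: identifier (inode on
# Unix), size, protection (mode bits), and the last-modification time.
with tempfile.NamedTemporaryFile(suffix=".cpp", delete=False) as f:
    f.write(b"int main() { return 0; }\n")
    path = f.name

info = os.stat(path)
print("identifier (inode):", info.st_ino)
print("size in bytes:     ", info.st_size)
print("protection bits:   ", stat.filemode(info.st_mode))
print("last modified:     ", time.ctime(info.st_mtime))
os.remove(path)
```

Which attributes exist, and in what form, varies by operating system, as the text above notes.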

File Operations

A file is an abstract data type. To define a file properly, we need to consider the operations
that can be performed on files. The operating system can provide system calls to create, write,
read, reposition, delete, and truncate files. These are the six basic file operations within an
operating system:

 Creating a file: Two steps are necessary for creating a file. First, space in the file
system must be found for the file. Second, an entry for the new file must be made in the
directory.
 Writing a file: To write to a file, you make a system call specifying both the name of
the file and the information to be written to the file.
 Reading a file: To read from a file, you use a system call that specifies the name of the
file and where in memory the next block of the file should be placed.



 Repositioning inside a file: The directory is searched for the appropriate entry, and the
current-file-position pointer is repositioned to a given value. Repositioning within a file
need not involve any actual I/O. This file operation is also known as a file seek.
 Deleting a file: To delete a file, you search the directory for the named file.
Deleting the file releases all its file space, so that other files can reuse that
space.
 Truncating a file: The user may wish to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
operation lets all attributes remain unchanged, except for the file length, and lets the
user add or edit the file content.
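The six operations above can be walked through in a short Python sketch; the file name notes.txt and its contents are illustrative assumptions. Python's file objects wrap the underlying system calls (open, write, read, lseek, ftruncate, unlink on Unix-like systems).

```python
import os
import tempfile

# Exercise create, write, read, reposition, truncate, and delete on one file.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")

with open(path, "w") as f:          # create + write
    f.write("hello file systems")

with open(path, "r+") as f:
    f.seek(6)                       # reposition ("file seek"): no data I/O yet
    word = f.read(4)                # read from the new position
    f.truncate(5)                   # keep attributes, cut the length to 5 bytes

size = os.path.getsize(path)
os.remove(path)                     # delete: directory entry and space released
print(word, size)                   # -> file 5
```

Notice that the seek itself performs no data transfer; only the subsequent read does, matching the remark above that repositioning need not involve actual I/O.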

Input Output Management

The three major jobs of a computer are input, output, and processing. In many cases, the most
important job is input/output, and the processing is merely incidental. For example, when you
browse a web page or edit a file, your immediate interest is to read or enter some information,
not to compute an answer. The primary role of the operating system in computer input/output
is to manage and organize I/O operations and all I/O devices. In this chapter, you will
learn about the various uses of input/output devices in relation to the operating system.

Overview of Input / Output system

The control of the various devices that are connected to the computer is a key concern of
operating-system designers. Because I/O devices vary so widely in their functionality and
speed (consider a mouse, a hard disk, and a CD-ROM), varied methods are needed for
controlling them. These methods form the I/O subsystem of the kernel, which separates the
rest of the kernel from the complexities of managing I/O devices.

I/O Hardware

Computers operate many kinds of devices. The general categories are storage devices
(disks, tapes), transmission devices (network interface cards, modems), and human-
interface devices (screen, keyboard, etc.).



A device communicates with the computer's operating system by transferring signals over a
cable or even through the air. A peripheral device communicates with the machine through a
connection point, also called a port (one example is a serial port). When devices share a common
set of wires, the connection is called a bus. A bus is a set of wires together with a
rigidly defined protocol that specifies a set of messages that can be sent on the wires.

Operating System Use of I/O Ports

An I/O port usually consists of four registers: (1) status, (2) control, (3) data-in,
and (4) data-out.

 The data-in register is read by the host to get input.
 The data-out register is written by the host to send output.
 The status register holds bits that can be read by the host.
 The control register is written by the host to start a command or to change the
mode of a device.
 The data registers are typically 1 to 4 bytes in size. Some controllers have FIFO
chips that hold several bytes of input or output data, expanding the capacity of the
controller beyond the size of the data register.
Polling
The complete protocol for interaction between the host and a controller can be intricate,
but the basic handshaking notion is simple. Handshaking can be explained with an example.
Assume that 2 bits are used to coordinate the producer-consumer relationship
between the controller and the host. The controller indicates its state through the busy bit in the
status register.

The host writes output through a port, coordinating with the controller by handshaking as follows:

1. The host repeatedly reads the busy bit until that bit becomes clear.
2. The host sets the write bit in the command register and writes a byte into the
data-out register.
3. The host sets the command-ready bit.
4. When the controller notices that the command-ready bit is set, it sets the busy bit.
5. The controller reads the command register and sees the write command. It reads the
data-out register to get the byte and performs the I/O to the device.
6. The controller clears the command-ready bit, clears the error bit in the status
register to indicate that the device I/O succeeded, and clears the busy bit to indicate
that it is finished.
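The steps above can be mimicked in a toy Python simulation, where plain attributes stand in for the controller's registers. The `Controller` class and `host_write` helper are hypothetical names invented for this sketch, not part of any real driver API.

```python
# Toy simulation of the polling handshake. A real controller would
# expose these "registers" through port I/O or memory-mapped addresses.

class Controller:
    def __init__(self):
        self.busy = False
        self.command_ready = False
        self.write = False
        self.data_out = None
        self.device = []            # stands in for the actual output device

    def service(self):
        # Steps 4-6: notice command-ready, go busy, do the I/O, clear the bits.
        if self.command_ready:
            self.busy = True
            self.device.append(self.data_out)   # perform the device I/O
            self.command_ready = False
            self.busy = False                    # done; no error reported

def host_write(ctrl, byte):
    while ctrl.busy:                # step 1: poll the busy bit until clear
        pass
    ctrl.write = True               # step 2: set write bit, load data-out
    ctrl.data_out = byte
    ctrl.command_ready = True       # step 3: set the command-ready bit
    ctrl.service()                  # the controller side now runs

c = Controller()
for b in b"ok":
    host_write(c, b)
print(bytes(c.device))              # -> b'ok'
```

Step 1 is the polling: the host busy-waits on the status register rather than being interrupted, which is why polling wastes CPU cycles when the device is slow.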



Review questions Operating systems

1. What is an operating system?


A. interface between the hardware and application programs
B. collection of programs that manages hardware resources
C. system service provider to the application programs
D. all of the mentioned
2. What is the main function of the command interpreter?
A. to provide the interface between the API and application program
B. to handle the files in the operating system
C. to get and execute the next user-specified command
D. none of the mentioned
3. To access the services of the operating system, the interface is provided by the _____
A. Library
B. System calls
C. Assembly instructions
D. API
4. CPU scheduling is the basis of ___________
A. multiprogramming operating systems
B. larger memory sized systems
C. multiprocessor systems
D. none of the mentioned
5. Which one of the following is not true?
A. kernel remains in the memory during the entire computer session
B. kernel is made of various modules which can not be loaded in running operating
system
C. kernel is the first part of the operating system to load into memory during
booting
D. kernel is the program that constitutes the central core of the operating system
6. Where is the operating system placed in the memory?
A. either low or high memory (depending on the location of interrupt vector)
B. in the low memory
C. in the high memory
D. none of the mentioned
7. If a process fails, most operating systems write the error information to a ______
A. new file
B. another running process
C. log file
D. none of the mentioned
8. Which one of the following is not a real time operating system?
A. RTLinux
B. Palm OS
C. QNX
D. VxWorks



9. What does OS X have?
A. monolithic kernel with modules
B. microkernel
C. monolithic kernel
D. hybrid kernel
10. In operating system, each process has its own __________
A. open files
B. pending alarms, signals, and signal handlers
C. address space and global variables
D. all of the mentioned
11. In a time-sharing operating system, when the time slot assigned to a process is
completed, the process switches from the current state to?
A. Suspended state
B. Terminated state
C. Ready state
D. Blocked state
12. A process is in the "Blocked" state, waiting for some I/O service. When the
service is completed, it goes to the __________
A. Terminated state
B. Suspended state
C. Running state
D. Ready state
13. The FCFS algorithm is particularly troublesome for ____________
A. operating systems
B. multiprocessor systems
C. time sharing systems
D. multiprogramming systems
14. For an effective operating system, when to check for deadlock?
A. every time a resource request is made and at fixed time intervals
B. at fixed time intervals
C. every time a resource request is made
D. none of the mentioned
15. A deadlock avoidance algorithm dynamically examines the __________ to ensure
that a circular wait condition can never exist.
A. operating system
B. resources
C. system storage state
D. resource allocation state
16. Swapping _______ be done when a process has pending I/O, or has to execute I/O
operations only into operating system buffers.
A. must never
B. maybe
C. can
D. must



17. The main memory accommodates ____________
A. cpu
B. user processes
C. operating system
D. all of the mentioned
18. The operating system is responsible for?
A. bad-block recovery
B. booting from disk
C. disk initialization
D. all of the mentioned
19. To obtain better memory utilization, dynamic loading is used. With dynamic loading,
a routine is not loaded until it is called. For implementing dynamic loading ________
A. special support from operating system is essential
B. special support from hardware is required
C. user programs can implement dynamic loading without any special support
from hardware or operating system
D. special support from both hardware and operating system is essential
20. The _________ presents a uniform device-access interface to the I/O subsystem,
much as system calls provide a standard interface between the application and the
operating system.
A. Device drivers
B. I/O systems
C. Devices
D. Buses
21. In real time operating system ____________
A. process scheduling can be done only once
B. all processes have the same priority
C. kernel is not required
D. a task must be serviced by its deadline period
22. In Unix, which system call creates the new process?
A. create
B. fork
C. new
D. none of the mentioned
Answer sheet (Operating Systems)
1. D 6. A 11. C 16. A 21. D

2. C 7. C 12. D 17. C 22. B

3. B 8. B 13. C 18. D

4. A 9. D 14. A 19. C

5. B 10. D 15. D 20. A

