Operating Systems
2.2.1. Introduction
An Operating System (OS) acts as an interface connecting a computer user with the hardware of
the computer. An operating system falls under the category of system software that performs all
the fundamental tasks like file management, memory handling, process management, handling
the input/output, and governing and managing the peripheral devices like disk drives, networking
hardware, printers, etc.
Some popular Operating Systems are Linux, Windows, OS X, Solaris, Chrome OS, etc.
An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware: a set of programs that coordinates all activities among the hardware resources, serves as an interface between the user and the hardware, and controls the execution of all kinds of programs.
Abstraction
o Applications do not need to be tailored for each possible device that might be present on a system.
Arbitration
o The operating system arbitrates access to shared hardware resources among competing applications.
Applications run on top of the operating system, which in turn manages the hardware.
User Interface
o The part of the OS that you interface with.
Kernel
o The core of the OS. It interacts with the BIOS at one end and the UI at the other end.
File Management System
o Organizes and manages files.
Processor: Controls the processes within the computer and carries out its data-processing functions. When there is only one processor, it is termed the central processing unit (CPU), which you must be familiar with.
An Operating System supplies various services to both users and programs. It provides application programs an environment in which to execute, and it provides users the services needed to run programs in a convenient manner.
User Interface
Program Execution
File system manipulation
Input / Output Operations
Communication
Resource Allocation
Error Detection
Accounting
Security and protection
This chapter gives a brief description of the services an operating system usually provides to users and to the programs running within it.
Operating-system interfaces usually come in three forms or types, subdivided by how the user enters commands. These are:
The command-line interface (CLI) deals with text commands and a technique for entering them. In the batch interface (BI), commands and directives that control those commands are placed into files, and the files are executed. The third type is the graphical user interface (GUI): a window system with a pointing device (such as a mouse or trackball) to direct I/O, menus and lists to choose from, and a keyboard to enter text.
The operating system must have the capability to load a program into memory and execute that
program. Furthermore, the program must be able to end its execution, either normally or
abnormally / forcefully.
Programs need to read and write files and directories. The file-handling portion of the operating system also allows users to create and delete files by a specific name (along with an extension), search for a given file, and list file information. Some systems include permissions management for allowing or denying access to files or directories based on file ownership.
A program that is currently executing may require I/O, which may involve a file or another I/O device. For efficiency and protection, users cannot govern I/O devices directly. So the OS provides a means to perform Input / Output operations, that is, read or write operations on any file or device.
Processes often need to exchange information with other processes. Processes executing on the same computer system or on different computer systems can communicate using operating-system support. Communication between two processes can be done using shared memory or via message passing.
Resource Allocation
When multiple jobs run concurrently, resources must be allocated to each of them. Resources include CPU cycles, main memory, file storage, and I/O devices. CPU-scheduling routines determine how best the CPU can be used.
Error Detection
Errors may occur in the CPU, in memory hardware, in I/O devices, and in user programs. For each type of error, the OS takes appropriate action to ensure correct and consistent computing.
Accounting
This service of the operating system keeps track of which users use how much and what kinds of computer resources, for billing or simply to accumulate usage statistics.
Protection involves ensuring that all access to system resources is controlled. To make a system secure, each user must authenticate himself or herself to the system (usually via a login ID and password) before using it.
Category: Desktop. Examples: Windows, OS X, UNIX, Linux, Chrome OS.
A desktop operating system is a complete operating system that works on desktops, laptops, and some tablets.
The Macintosh operating system has earned a reputation for its ease of use; its latest version is OS X. Chrome OS is a Linux-based operating system designed to work primarily with web apps.
The operating system on mobile devices and many consumer electronics is called a mobile
operating system and resides on firmware.
Android is an open source, Linux-based mobile operating system designed by Google for
smartphones and tablets.
Windows Phone, developed by Microsoft, is a proprietary mobile operating system that runs on
some smartphones.
1. Single-user single-tasking
Allows one user to run one program at a time. Example: DOS
2. Single-user multi-tasking
Allows a single user to run two or more programs at the same time. Example: Windows
3. Multi-user multi-tasking
Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
Issues:
Limited memory
Slow processors
Small display screens.
Most features of a typical OS are not included, and the burden falls on the developer.
Emphasis is on I/O operations.
Memory-management and protection features are usually absent.
Example: Contiki OS
Microkernel architecture
Multithreading
Symmetric multiprocessing
Distributed operating systems
Object-oriented design
In the early days, computer systems allowed only one program to be executed at a time. That program therefore had complete control of the system and access to all or most of the system's resources.
The more complex the operating system is, the more it is expected to do on behalf of its users. Even though its main concern is the execution of user programs, it also needs to take care of various system tasks that are better left outside the kernel itself. So a system must
consist of a set of processes: operating system processes, executing different system code and
user processes which will be executing user code. In this chapter, you will learn about the
processes that are being used and managed by the operating system.
What is a Process?
A process is a program in execution, whose execution must progress in a sequential fashion (or according to some priority or scheduling algorithm). In other words, it is an entity that represents the fundamental unit of work assigned to a system.
When a program gets loaded into memory, it is called a process. A process in memory can be divided into four sections. These are:
Heap
Stack
Data
Text
Process Concept
A question that arises while discussing operating systems is what to call all the activities of the CPU. Even on a single-user operating system such as Microsoft Windows, a user may be capable of running several programs at one time: a word processor, web browsers, and an e-mail client. Even when the user can execute only one program at a time, the operating system may need to maintain internal programmed activities such as memory management. In these respects, all such activities are similar, so we call all of them 'processes.'
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Each process may be in one of the following states:
The process model discussed earlier described a process as an executable program having a single thread of control. Most modern operating systems now offer features enabling a process to contain multiple threads of control. This chapter introduces concepts associated with multithreaded computer systems and issues related to multithreaded programming and its effect on operating-system design. You will then learn how Windows XP and Linux maintain threads at the kernel level.
A thread is a stream of execution within the process code, having its own program counter, which keeps track of the next instruction to execute, and its own system registers, which hold its current working variables. Threads are also termed lightweight processes. Threads provide parallelism, a way to improve application performance.
The advantages of multithreaded programming can be categorized into four major headings -
A relationship must exist between user threads and kernel threads. Here are the three common ways of establishing this relationship:
In a single-processor system, only one job can be processed at a time; the rest of the jobs must wait until the CPU is free and can be rescheduled. The aim of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is simple. A process executes until it must wait, typically for the completion of some I/O request.
In a simple operating system, the CPU then just stands idle. All this waiting time is wasted; no
fruitful work can be performed. With multiprogramming, you can use this time to process other
jobs productively.
Whenever the CPU becomes idle, the operating system (OS) must select one of the processes in the ready queue for execution. The selection is performed by the short-term scheduler (also known as the CPU scheduler). The scheduler picks a process from among the processes in memory that are ready to execute and allocates the CPU to that process.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four conditions:
When a process switches from the running state to the waiting state
When a process switches from the running state to the ready state (for example, when an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at the completion of Input / Output)
When a process terminates
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Several different CPU-scheduling algorithms are used within operating systems today. In this chapter, you will get to know some of them.
Terminology
Arrival time (AT): the time at which a process arrives in the ready queue.
Burst time (BT): the time that the CPU actually requires to complete the process.
First-Come, First-Served (FCFS) Scheduling
On the negative side, the average waiting time under the FCFS policy is often quite long.
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of the process‘s next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
The Round Robin (RR) algorithm is the most common of all scheduling algorithms. It uses a time quantum (time slice): the maximum amount of CPU time a process can receive at one stretch before it is paused and the CPU moves to the next process in the queue.
A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. Round Robin scheduling is preemptive. The average waiting time under the RR policy is often long.
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is
given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it
quits before its time quantum expires. The CPU is then given to the next process, process P3.
Once each process has received 1 time quantum, the CPU is returned to process P1 for an
additional time quantum. The resulting RR schedule is as follows
Let‘s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 - 4),
P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is
17/3 = 5.66 milliseconds.
Another class of scheduling algorithms has been created for situations in which processes are
easily classified into different groups. For example, a common division is made between
foreground (interactive) processes and background (batch) processes.
These two types of processes have different response-time requirements and so may have
different scheduling needs. In addition, foreground processes may have priority (externally
defined) over background processes
Thread Scheduling
A kernel thread scheduled onto an available CPU uses system-contention scope (SCS): competition for the CPU takes place among all threads in the system.
IPC methods
Socket: provides point-to-point, two-way communication between two processes.
Deadlocks
System Model
A system consists of a finite number of resources to be distributed among competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Memory space, CPU cycles, directories and files, and I/O devices such as keyboards, printers, and CD/DVD drives are prime examples of resource types. If a system has two CPUs, then the resource type CPU has two instances.
1. Request: If the request cannot be granted immediately (for example, when another process is using the resource), the requesting process must wait until it can obtain the resource.
2. Use: The process can operate on the resource (for example, if the resource is a printer, the process can print on the printer).
3. Release: The process releases the resource (for example, by terminating or exiting).
A deadlock state can occur when the following four circumstances hold simultaneously within a
system:
Mutual exclusion: At least one resource must be held in a non-sharable manner; that is, only a single process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource is released.
Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: The circular-wait condition implies the hold-and-wait condition, so the four conditions are not completely independent; they are interconnected.
Normally you can deal with deadlock situations in one of the three ways mentioned below:
You can use a protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlocked state.
You can let the system enter a deadlocked state, detect it, and then recover.
You can ignore the problem altogether and assume that deadlocks never occur within the system.
In this chapter, you will learn about the various working capabilities of IPC (Inter-process
communication) within an Operating system along with usage. Processes executing concurrently
in the operating system might be either independent processes or cooperating processes. A
process is independent if it cannot be affected by the other processes executing in the system.
There are numerous reasons for providing an environment or situation which allows process co-
operation:
Information sharing: Since some users may be interested in the same piece of information
(for example, a shared file), you must provide a situation for allowing concurrent access
to that information.
Computation speedup: If you want a particular task to run faster, you must break it into sub-tasks, each of which executes in parallel with the others. Note that such a speedup can be achieved only if the computer has multiple processing elements such as CPUs or I/O channels.
Modularity: You may want to build the system in a modular way by dividing the system
functions into split processes or threads.
Convenience: Even a single user may work on many tasks at a time. For example, a user
may be editing, formatting, printing, and compiling in parallel.
There are two fundamental models of interprocess communication: 1. shared memory and 2. message passing.
In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
Shared Memory
Interprocess communication (IPC) using shared memory requires communicating processes to establish a region of shared memory. Typically, the shared-memory region resides within the address space of the process creating the segment. Other processes that wish to communicate using this shared-memory segment must attach it to their own address space.
Note that, normally, the operating system tries to prevent one process from accessing another process's memory. Shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data within the shared areas.
The form of the data and its location are determined by these processes and are not under the control of the operating system. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
Basic Hardware
Main memory and the registers built into the processor itself are the only storage that the CPU can access directly. There are machine instructions that take memory addresses as arguments, but none that take disk addresses. So any instructions being executed, and any data used by those instructions, must be in one of these directly accessible storage devices. If the data are not in memory, they must be moved there before the CPU can operate on them.
Registers that are built into the CPU are accessible within one cycle of the CPU clock. Most CPUs can decode instructions and perform simple operations on register contents at the rate of one or more operations per clock tick. The same cannot be said of main memory, which is accessed via a transaction on the memory bus.
Usually, a program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process. Depending on the memory management in use, the process may be moved between disk and memory during its execution. The processes on disk that are waiting to be brought into main memory for execution form the input queue. The normal procedure is to select one of the processes in the input queue and load it into memory.
As the process executes, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared free. Most systems allow a user process to reside in any part of physical memory. Thus, although the address space of the computer starts at 00000, the first address of the user process need not be 00000. This arrangement affects the addresses that the user program can use.
Normally, the binding of instructions and data onto memory addresses can be done at any of the
step given below:
Compile time: If it is known at compile time where the process will reside in memory, absolute code can be generated.
Load time: If it is not known at compile time where the process will reside in memory, the compiler must generate relocatable code. In that case, final binding is delayed until load time.
Execution time: If the process can be moved during its execution from one memory segment to another, binding must be delayed until run time.
Virtual Memory
In this chapter, you will learn what virtual memory is and how it is managed within the operating system. Virtual memory is a technique that allows the execution of processes that are not completely in memory. One main benefit of this scheme is that programs can be larger than physical memory.
Also, virtual memory abstracts main memory into a very large, uniform array of storage, separating logical memory as viewed by the user from physical memory. This technique frees programmers from concern about memory-storage limitations.
Virtual memory also allows processes to share files easily and to implement shared memory. Moreover, it provides an efficient mechanism for process creation. Virtual memory is not easy to implement, however, and may substantially decrease performance if used carelessly.
Consider how an executable program might be loaded from disk into memory. One option is to load the entire program into physical memory at execution time. The problem with this approach is that we may not initially need the entire program in memory, so memory is occupied unnecessarily.
An alternative is to load pages only as they are needed. This technique is termed demand paging and is commonly used in virtual memory systems. With demand-paged virtual memory, pages are loaded only as they are demanded during program execution; pages that are never accessed are never loaded into physical memory.
A demand-paging scheme is similar to a paging system with swapping, where processes reside in secondary memory (typically a disk). When you want to execute a process, you swap it into memory. Rather than swapping the entire process into memory, however, you can use a "lazy swapper." A lazy swapper never swaps a page into memory unless that page is required for execution.
The hardware required for supporting demand paging is the same that is required for paging and
swapping:
Page table: Page table can mark an entry invalid or unacceptable using a valid-invalid bit.
Secondary memory: Secondary memory retains those pages which are not there in main
memory. The secondary memory is generally a high-speed disk. It is also known as a
swap device, and the segment of disk used for this purpose is termed as swap space.
In this chapter, you will learn about file attributes, the concept of a file and its storage, and the operations performed on files.
File Attributes
A file is named for the convenience of its human users and is referred to by its name. A name is usually a string of characters like filename.cpp, with an extension that designates the file format. Some systems (like Linux) distinguish between uppercase and lowercase characters in names, whereas other systems do not. When a file is given a name, it becomes independent of the process, the user, and even the system that created it. For instance, one user might create the file filename.cpp, and another user might edit that file by specifying its name. The file's owner may write the file to a compact disc (CD), send it via e-mail, or copy it across a network, and it could still be called filename.cpp on the destination system.
A file's attributes vary from one operating system to another but typically consist of these:
Name: Name is the symbolic file name and is the only information kept in human
readable form.
Identifier: This unique tag is a number that identifies the file within the file system; it is the non-human-readable name of the file.
Type: This information is needed for systems which support different types of files or its
format.
Location: This information is a pointer to a device which points to the location of the file
on the device where it is stored.
Size: The current size of the file (in bytes, words, or blocks) and possibly the maximum allowed size are included in this attribute.
Protection: Access-control information establishes who can do the reading, writing,
executing, etc.
Date, Time & user identification: This information may be kept for the creation of the file, its last modification, and its last use. These data can be useful for protection, security, and usage monitoring.
File Operations
A file is an abstract data type. For defining a file properly, we need to consider the operations
that can be performed on files. The operating system can provide system calls to create, write,
read, reposition, delete, and truncate files. There are six basic file operations within an Operating
system. These are:
Creating a file: Two steps are necessary to create a file. First, space must be found for the file in the file system. Second, an entry for the new file must be made in the directory.
Writing a file: To write to a file, you make a system call specifying both the name of the file and the information to be written to it.
Reading a file: To read from a file, you use a system call which specifies the name of the
file and where within memory the next block of the file should be placed.
The three major jobs of a computer are Input, Output, and Processing. In a lot of cases, the most
important job is Input / Output, and the processing is simply incidental. For example, when you
browse a web page or edit a file, your immediate attention is to read or enter some information, not to compute an answer. The primary role of the operating system in computer Input /
Output is to manage and organize I/O operations and all I/O devices. In this chapter, you will
learn about the various uses of input output devices concerning the operating system.
The control of the various devices connected to the computer is a key concern of operating-system designers. Because I/O devices vary so widely in functionality and speed (consider a mouse, a hard disk, and a CD-ROM), varied methods are needed to control them. These methods form the I/O subsystem of the kernel, which separates the rest of the kernel from the complexities of managing I/O devices.
I/O Hardware
Computers operate many kinds of devices. The general categories are storage devices (such as disks and tapes), transmission devices (such as network interface cards and modems), and human-interface devices (such as screens and keyboards).