
Chapter 1: Introduction

Content:
1.1 Introduction
1.2 History Of Operating Systems
1.3 Computer Hardware Review
1.4 What Is An Operating System?
1.5 What Operating Systems Do?
1.6 Computer System Organization
1.7 Computer System Operations
1.8 Computer System Architecture
1.9 Operating System Structure
1.10 Operating-System Operations

Introduction

An operating system is a program that manages a computer's hardware. It also
provides a basis for application programs and acts as an intermediary between
the computer user and the computer hardware. The purpose of an operating
system is to provide an environment in which a user can execute programs in a
convenient and efficient manner. An operating system is software that manages
the computer hardware. The hardware must provide appropriate mechanisms to
ensure the correct operation of the computer system and to prevent user
programs from interfering with the proper operation of the system. Mainframe
operating systems are designed primarily to optimize utilization of hardware.
Personal computer (PC) operating systems support complex games, business
applications, and everything in between. Operating systems for mobile
computers provide an environment in which a user can easily interface with the
computer to execute programs. Thus, some operating systems are designed to be
convenient, others to be efficient, and others to be some combination of the
two.

[Figure: users at the top; below them, system and application programs (compiler, assembler, text editor, database system); below that, the operating system; at the bottom, the computer hardware.]

Figure 1.1 Abstract view of the components of a computer system.

HISTORY OF OPERATING SYSTEMS
Operating systems have been evolving through the years.
- The First Generation (1945-55): Vacuum Tubes.
- The Second Generation (1955-65): Transistors and Batch Systems.
- The Third Generation (1965-1980): ICs and Multiprogramming.
- The Fourth Generation (1980-Present): Personal Computers.
- The Fifth Generation (1990-Present): Mobile Computers.

COMPUTER HARDWARE REVIEW

An operating system is intimately tied to the hardware of the computer it runs
on. It extends the computer's instruction set and manages its resources. The
CPU, memory, and I/O devices are all connected by a system bus and
communicate with one another over it. Modern personal computers have a more
complicated structure, involving multiple buses.

Operating system goals:

- Execute user programs and make solving user problems easier.


- Make the computer system convenient to use.
- Use the computer hardware in an efficient manner.

What is an Operating System?


- A more common definition of the Operating System is the one program
which is running at all times on the computer, usually called the kernel.
Along with the kernel, there are two other types of programs: system
programs, which are associated with the Operating System but are not
necessarily part of the kernel, and application programs, which include
all programs not associated with the Operating System.
- Today's OSes for general purpose and mobile computing also include
middleware - a set of software frameworks that provide additional
services to application developers such as databases, multimedia,
graphics.

What Operating Systems Do?


A computer system has many resources that may be required to solve a
problem: CPU time, memory space, file-storage space, I/O devices, and so
on. The operating system acts as a manager of all those resources.
• OS is a resource allocator
- Manages all resources (I/O, memory, ...) among the running
processes.
- Decides between conflicting requests for efficient and fair
resource use.
• OS is a control program
- Controls execution of programs to prevent errors and improper
use of the computer.
Depends on the point of view:
• User View
- Users want convenience, ease of use and good performance.
- Don't care about resource utilization.
- But a shared computer such as a mainframe or server must keep all
users happy. Multi-user computers share the same resources.

- Embedded systems such as computers in home devices and
automobiles.
• System View

From the computer's point of view, the operating system is designed to
maximize resource utilization, and to ensure that all available CPU time, memory,
and I/O are fairly used and that no individual user takes more than others. In this
context, we can view an operating system as a resource allocator.

Computer System Organization


Organization means: how the computer system components work,
communicate, and exchange data with each other. A computer system consists of
one or more CPUs and device controllers connected through a common bus
providing access to shared memory.
• Computer System Operations
A modern general-purpose computer system consists of one or more CPUs
and a number of device controllers connected through a common bus that
provides access to shared memory. Each device controller is in charge of a
specific type of device (for example, disk drives, audio devices, or video
displays). The CPU and the device controllers can execute in parallel,
competing for memory cycles. To ensure orderly access to the shared
memory, a memory controller synchronizes access to the memory.
[Figure: a CPU and device controllers (for disks, mouse, keyboard, printer, and monitor) connected to shared memory over a common system bus.]
For a computer to start running (for instance, when it is powered up or
rebooted) it needs to have an initial program to run. This initial program, or
bootstrap program, tends to be simple. Typically, it is stored within the
computer hardware in read-only memory (ROM) or electrically erasable
programmable read-only memory (EEPROM), known by the general term
firmware. It initializes all aspects of the system, from CPU registers to
device controllers to memory contents. The bootstrap program must know
how to load the operating system and how to start executing that system. To
accomplish this goal, the bootstrap program must locate the operating-
system kernel and load it into memory.
Once the kernel is loaded and executing, it can start providing services to
the system and its users. Some services are provided outside of the kernel,
by system programs that are loaded into memory at boot time to become
system processes, or system daemons that run the entire time the kernel is
running. Once this phase is complete, the system is fully booted, and the
system waits for some event to occur.

- I/O devices and the CPU can execute concurrently
- Each device controller is in charge of a particular device type
- Each device controller has a local buffer
- Each device controller type has an operating system device driver to
manage it
- CPU moves data from/to main memory to/from local buffers
- I/O is from the device to the local buffer of the controller
- The device controller informs the CPU that it has finished its operation by
causing an interrupt.
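The steps in the bullets above can be sketched as a toy simulation. This is not a real OS API; the class and attribute names (`DeviceController`, `local_buffer`, `interrupt_pending`) are invented purely to illustrate how a controller's local buffer and an interrupt notify the CPU:

```python
# Toy model of interrupt-driven I/O: a device controller fills its local
# buffer, then raises an interrupt; the CPU later copies the buffer into
# main memory. All names are illustrative, not a real OS interface.

class DeviceController:
    def __init__(self, device_name):
        self.device_name = device_name
        self.local_buffer = None
        self.interrupt_pending = False

    def complete_io(self, data):
        # The device finishes an operation: data lands in the local
        # buffer and the controller raises an interrupt to the CPU.
        self.local_buffer = data
        self.interrupt_pending = True

class CPU:
    def __init__(self):
        self.main_memory = {}

    def check_interrupts(self, controllers):
        # Between instructions, the CPU checks for pending interrupts
        # and moves data from a controller's buffer into main memory.
        for c in controllers:
            if c.interrupt_pending:
                self.main_memory[c.device_name] = c.local_buffer
                c.interrupt_pending = False

keyboard = DeviceController("keyboard")
cpu = CPU()
keyboard.complete_io("key 'a' pressed")
cpu.check_interrupts([keyboard])
print(cpu.main_memory["keyboard"])  # -> key 'a' pressed
```

Note how the CPU never talks to the device directly: it only sees the controller's buffer, which is exactly the division of labor the bullets describe.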

• Interrupt Handling and Interrupt Timeline
In digital computers, an interrupt (sometimes referred to as a trap) is a
request for the processor to interrupt currently executing code. If the request
is accepted, the processor will suspend its current activities, save its state, and
execute a function called an interrupt handler (or an interrupt service routine,
ISR) to deal with the event. This interruption is often temporary, allowing the
software to resume normal activities after the interrupt handler finishes.
In computer systems programming, an interrupt handler, also known as
an interrupt service routine or ISR, is a special block of code associated
with a specific interrupt condition.
The occurrence of an event is usually signaled by an interrupt from either
the hardware or the software. Hardware may trigger an interrupt at any time
by sending a signal to the CPU, usually by way of the system bus. Software
may trigger an interrupt by executing a special operation called a system call
(also called a monitor call).
When the CPU is interrupted, it stops what it is doing and immediately
transfers execution to a fixed location.
An interrupt is a signal to the processor emitted by hardware or software
indicating an event that needs immediate attention. Whenever an interrupt
occurs, the controller completes the execution of the current instruction and
starts the execution of an Interrupt Service Routine (ISR) or Interrupt
Handler. The ISR tells the processor or controller what to do when the interrupt
occurs.

[Figure: timeline comparing program execution without interrupts (the main program runs straight through) to execution with interrupts (the main program is suspended while the ISR, the Interrupt Service Routine, runs, then resumes).]

Hardware Interrupt

A hardware interrupt is an electronic alerting signal sent to the processor from


an external device, like a disk controller or an external peripheral. For example,
when we press a key on the keyboard or move the mouse, they trigger hardware
interrupts which cause the processor to read the keystroke or mouse position.

Software Interrupt

A software interrupt is caused either by an exceptional condition or by a special
instruction in the instruction set which causes an interrupt when it is executed
by the processor. For example, if the processor's arithmetic logic unit runs a
command to divide a number by zero, it raises a divide-by-zero exception,
causing the computer to abandon the calculation or display an error message.

[Figure: the interrupt-driven I/O cycle. (1) The device driver initiates I/O; (2) the I/O controller initiates the transfer; (3) the CPU keeps executing, checking for interrupts between instructions; (4) when input is ready, output is complete, or an error occurs, the controller generates an interrupt signal; (5) the CPU, receiving the interrupt, transfers control to the interrupt handler; (6) the interrupt handler processes the data and returns from the interrupt; (7) the CPU resumes processing of the interrupted task.]

Interrupt Service Routine

For every interrupt, there must be an interrupt service routine (ISR),
or interrupt handler. When an interrupt occurs, the microcontroller runs the
interrupt service routine. For every interrupt, there is a fixed location in memory
that holds the address of its interrupt service routine (ISR). The table of memory
locations set aside to hold the addresses of ISRs is called the Interrupt Vector
Table.
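The interrupt vector table just described can be sketched as a table of function references indexed by interrupt number. This is a toy model: the interrupt numbers (0x01, 0x02) and handler names are invented for illustration, not taken from any real architecture:

```python
# Minimal sketch of an interrupt vector table: a fixed table mapping
# interrupt numbers to their interrupt service routines (ISRs).

def keyboard_isr():
    return "handled keyboard interrupt"

def timer_isr():
    return "handled timer interrupt"

# The vector table: index = interrupt number, entry = ISR to run.
interrupt_vector_table = {
    0x01: keyboard_isr,
    0x02: timer_isr,
}

def dispatch(interrupt_number):
    # On an interrupt, the hardware uses the interrupt number to index
    # the table and transfer control to the registered ISR.
    isr = interrupt_vector_table[interrupt_number]
    return isr()

print(dispatch(0x01))  # -> handled keyboard interrupt
```

On real hardware the table holds fixed memory addresses rather than Python functions, but the lookup-then-jump structure is the same.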

[Figure: the CPU alternating over time between executing the user program and processing I/O interrupts, with control transferring to interrupt processing and back.]

Storage Structure
The CPU can load instructions only from memory, so any program to run must
be stored there. General-purpose computers run most of their programs from
rewritable memory, called main memory (also called random-access memory,
or RAM). Main memory commonly is implemented in a semiconductor
technology called dynamic random-access memory (DRAM).
Computers use other forms of memory as well. We have already mentioned
read-only memory (ROM) and electrically erasable programmable read-only
memory (EEPROM). Because ROM cannot be changed, only static programs,
such as the bootstrap program described earlier, are stored there.
All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to
specific memory addresses. The load instruction moves a byte or word from
main memory to an internal register within the CPU, whereas the store
instruction moves the content of a register to main memory. The CPU
automatically loads instructions from main memory for execution.
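The load/store model above can be illustrated with a tiny simulation. The addresses, register names, and values here are made up for the example; real load/store instructions operate on hardware registers and physical or virtual addresses:

```python
# Sketch of the load/store model: the CPU operates only on registers;
# 'load' copies a value from a memory address into a register, and
# 'store' copies a register's content back to memory.

main_memory = {0x1000: 42, 0x1001: 7}   # address -> stored value
registers = {"R0": 0, "R1": 0}

def load(reg, address):
    # Move a byte/word from main memory into a CPU register.
    registers[reg] = main_memory[address]

def store(reg, address):
    # Move the content of a register back to main memory.
    main_memory[address] = registers[reg]

load("R0", 0x1000)          # R0 <- mem[0x1000]
registers["R0"] += 1        # arithmetic happens only in registers
store("R0", 0x1001)         # mem[0x1001] <- R0
print(main_memory[0x1001])  # -> 43
```

The key point the sketch shows is the round trip: memory is never modified in place; data travels through a register first.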
Most computer systems provide secondary storage as an extension of main
memory. The main requirement for secondary storage is that it be able to hold
large quantities of data permanently.
The most common secondary-storage device is a magnetic disk, which
provides storage for both programs and data. Most programs (system and
application) are stored on a disk until they are loaded into memory.
The wide variety of storage systems can be organized in a hierarchy (Figure
below) according to speed and cost. The higher levels are expensive, but they
are fast. As we move down the hierarchy, the cost per bit generally decreases,
whereas the access time generally increases.
The top four levels of memory in the figure below may be constructed using
semiconductor memory. In addition to differing in speed and cost, the various
storage systems are either volatile or nonvolatile. As mentioned earlier, volatile
storage loses its contents when the power to the device is removed.
In the hierarchy shown in Figure below, the storage systems above the solid-
state disk are volatile, whereas those including the solid-state disk and below
are nonvolatile. Solid-state disks have several variants but in general are faster
than magnetic disks and are nonvolatile.

[Figure: the storage hierarchy. Volatile primary storage (including main memory) sits at the top; nonvolatile secondary storage (nonvolatile memory / solid-state disks, hard-disk drives) in the middle; tertiary storage (optical disks, magnetic tapes) at the bottom. Storage capacity increases and access time increases moving down the hierarchy.]

I/O Structure
A large portion of operating system code is dedicated to managing I/O, both
because of its importance to the reliability and performance of a system and
because of the varying nature of the devices. A general-purpose computer
system consists of CPUs and multiple device controllers that are connected
through a common bus. Each device controller is in charge of a specific type of
device. A device controller maintains some local buffer storage and a set of
special-purpose registers. The device controller is responsible for moving the
data between the peripheral devices that it controls and its local buffer storage.


A device controller is a system that handles the incoming and outgoing signals
of the CPU by acting as a bridge between CPU and the I/O devices. A device is
connected to the computer via a plug and socket, and the socket is connected to
a device controller. Device controllers use binary and digital codes. An I/O
device contains mechanical and electrical parts. A device controller is the
electrical part of the I/O device.

[Figure: a thread of execution on the CPU performing the instruction-execution cycle, with instructions and data moving between the CPU, main memory, and devices, and DMA transferring data directly between devices and memory.]

The device controller receives the data from a connected device and stores it
temporarily in some special-purpose registers (i.e., the local buffer) inside the
controller. Then it communicates the data with a device driver. For each device
controller there is an equivalent device driver, which is the standard interface
through which the device controller communicates with the operating system,
via interrupts. A device controller is hardware, whereas a device driver is
software. A device driver is a program that lets the operating system
communicate with specific computer hardware.

Computer System Architecture


A computer system can be organized in a number of different ways, which we
can categorize roughly according to the number of general-purpose processors
used.
• Single-Processor Systems
Until recently, most computer systems used a single processor. On a single-
processor system, there is one main CPU capable of executing a general-
purpose instruction set, including instructions from user processes. Almost all
single-processor systems have other special-purpose processors as well. They
may come in the form of device-specific processors, such as disk, keyboard, and
graphics controllers; or, on mainframes, they may come in the form of more
general-purpose processors, such as I/O processors that move data rapidly
among the components of the system.
All of these special-purpose processors run a limited instruction set and do not
run user processes. Sometimes, they are managed by the operating system, in
that the operating system sends them information about their next task and
monitors their status. For example, a disk-controller microprocessor receives a
sequence of requests from the main CPU and implements its own disk queue
and scheduling algorithm. This arrangement relieves the main CPU of the
overhead of disk scheduling. PCs contain a microprocessor in the keyboard to
convert the keystrokes into codes to be sent to the CPU. In other systems or
circumstances, special-purpose processors are low-level components built into
the hardware. The operating system cannot communicate with these processors;
they do their jobs autonomously. The use of special-purpose microprocessors is

common and does not turn a single-processor system into a multiprocessor. If
there is only one general-purpose CPU, then the system is a single-processor
system.

Multiprocessor Systems
:\lultiprocessor systems (also known as parallel systems or multi.core
systems) have two or more processors in close communication, sharing the
computer bus and sometimes the clock, memory, and peripheral devices.
Multiprocessor systems first appeared prominently appeared in servers and have
since migrated to desktop and laptop systems. Recently, multiple processors
have appeared on mobile devices such as smartphones and tablet computers.
Multiprocessor systems have three main advantages:
1. Increased throughput. By increasing the number of processors, we expect to
get more work done in less time. The speed-up ratio with N processors is not N,
however; rather, it is less than N. When multiple processors cooperate on a task,
a certain amount of overhead is incurred in keeping all the parts working
correctly. This overhead, plus contention for shared resources, lowers the
expected gain from additional processors. Similarly, N programmers working
closely together do not produce N times the amount of work a single
programmer would produce.
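A back-of-the-envelope calculation makes the "speed-up is less than N" point concrete. The 5% per-processor overhead figure below is an arbitrary assumption chosen only for illustration; real overhead depends on the workload and the hardware:

```python
# Illustration that the speed-up with N processors stays below N once
# coordination overhead and contention are counted. The overhead model
# (5% of useful work lost per extra processor) is an invented example.

def speedup(n_processors, overhead_per_cpu=0.05):
    # Each added processor contributes slightly less than a full
    # processor's worth of useful work.
    useful_fraction = 1.0 - overhead_per_cpu * (n_processors - 1)
    return n_processors * max(useful_fraction, 0.0)

for n in (1, 2, 4, 8):
    print(n, round(speedup(n), 2))
# Speed-up grows with N but stays strictly below N for every N > 1.
```

Under this toy model, 2 processors give a speed-up of about 1.9 and 8 processors only about 5.2, which is the qualitative behavior the paragraph describes.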
2. Economy of scale. Multiprocessor systems can cost less than equivalent
multiple single-processor systems, because they can share peripherals, mass
storage, and power supplies. If several programs operate on the same set of data,
it is cheaper to store those data on one disk and to have all the processors share
them than to have many computers with local disks and many copies of the
data.
3. Increased reliability. If functions can be distributed properly among several
processors, then the failure of one processor will not halt the system, only slow
it down. If we have ten processors and one fails, then each of the remaining nine

processors can pick up a share of the work of the failed processor. Thus, the
entire system runs only 10 percent slower, rather than failing altogether.
The multiple-processor systems in use today are of two types. Some systems
use asymmetric multiprocessing, in which each processor is assigned a
specific task. A boss processor controls the system; the other processors either
look to the boss for instruction or have predefined tasks. This scheme defines a
boss-worker relationship. The boss processor schedules and allocates work to

the worker processors.


The most common systems use symmetric multiprocessing (SMP), in which
each processor performs all tasks within the operating system. SMP means that
all processors are peers; no boss-worker relationship exists between processors.
Figure 1.6 illustrates a typical SMP architecture. Notice that each processor has
its own set of registers, as well as a private or local cache. However, all
processors share physical memory.
An example of an SMP system is AIX, a commercial version of UNIX
designed by IBM. An AIX system can be configured to employ dozens of
processors.

[Figure 1.6: a symmetric multiprocessing architecture: several CPUs, each with its own registers and cache, sharing physical memory.]
• Clustered Systems
A clustered system gathers together multiple CPUs. Clustered systems
differ from the multiprocessor systems described in Section 1.3.2 in that they
are composed of two or more individual systems, or nodes, joined together.
Such systems are considered loosely coupled. Each node may be a single-
processor system or a multicore system. The generally accepted definition is
that clustered computers share storage and are closely linked via a local-area
network (LAN) or a faster interconnect.

[Figure: several computers, each attached via an interconnect, sharing a storage-area network.]

Figure 1.8 General structure of a clustered system.

Operating System Structure


We want a clear structure to let us apply an operating system to our particular
needs because operating systems have complex structures. A common approach
is to partition the task into small components, or modules, rather than have one
monolithic system. Each of these modules should be a well-defined portion of
the system, with carefully defined inputs, outputs, and functions. Operating
system structure can be thought of as the strategy for connecting and
incorporating various operating system components within the kernel.
Operating systems are implemented using many types of structures, as will be
discussed below:
1. Monolithic (Simple) Structure

2. Layered Structure

3. Micro-Kernel Structure
4. Exo-Kernel Structure
5. Virtual Machines

1. Monolithic (Simple) Structure

Systems with a simple monolithic structure do not have well-defined structures.
Frequently, such systems started as small, simple, and limited systems and then grew beyond their
original scope. MS-DOS is an example of such a system. It was originally
designed and implemented by a few people who had no idea that it would
become so popular. It was written to provide the most functionality in the least
space, so it was not carefully divided into modules. In the monolithic approach,
the operating system is organized as a hierarchy of layers, each one constructed
upon the one below it; the entire operating system runs as a single program in
kernel mode. The operating system is written as a collection of procedures,
linked together into a single large executable binary program. When this
technique is used, each procedure in the system is free to call any other one, if
the latter provides some useful computation that the former needs. Being able to
call any procedure you want is very efficient, but having thousands of
procedures that can call each other without restriction may also lead to a system
that is unwieldy and difficult to understand. Also, a crash in any of these
procedures will take down the entire operating system.
In MS-DOS, the interfaces and levels of functionality are not well separated.
For instance, application programs are able to access the basic I/O routines to
write directly to the display and disk drives. Such freedom leaves MS-DOS
vulnerable to errant (or malicious) programs, causing entire system crashes
when user programs fail. Of course, MS-DOS was also limited by the hardware.

[Figure: the application program at the top, then the resident system program, then MS-DOS device drivers, then ROM BIOS device drivers.]

MS-DOS layer structure

[Figure: at the top, the users; below, shells and commands, compilers and interpreters, and system libraries; then the system-call interface to the kernel; within the kernel, signals and terminal handling, character I/O and terminal drivers, the file system, swapping and block I/O, disk and tape drivers, CPU scheduling, page replacement, demand paging, and virtual memory; then the kernel interface to the hardware: terminal controllers and terminals, device controllers, disks and tapes, memory controllers, and physical memory.]

Traditional UNIX system structure

Advantages of Simple/Monolithic structure
• It delivers better application performance because of the few
interfaces between the application program and the hardware.
• It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple/Monolithic structure
• The structure is very complicated, as no clear boundaries exist
between modules.
• It does not enforce data hiding in the operating system.

2. Layered Structure

• Another approach is to break the OS into a number of smaller layers, each


of which rests on the layer below it, and relies solely on the services
provided by the next lower layer.
• This approach allows each layer to be developed and debugged
independently, with the assumption that all lower layers have already
been debugged and are trusted to deliver proper services.
• The problem is deciding in what order to place the layers, as no
layer can call upon the services of any higher layer, and so many chicken-
and-egg situations may arise.
• Layered approaches can also be less efficient, as a request for service
from a higher layer has to filter through all lower layers before it reaches
the HW, possibly with significant processing at each step.

[Figure: layers stacked from layer 0 (hardware) at the bottom to layer N (user interface) at the top.]

Layered Approach Structure

A system can be made modular in many ways. One method is the layered
approach, in which the operating system is broken into a number of layers

(levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the
user interface.

Layer  Function
5      The operator
4      User programs
3      Input/output management
2      Operator-process communication
1      Memory and drum management
0      Processor allocation and multiprogramming

Figure 1.25. Structure of the THE operating system.

Layer 1 did the memory management. It allocated space for processes in main
memory.
Layer 2 handled communication between each process and the operator console
(that is, the user).
Layer 3 took care of managing the I/O devices and buffering the information
streams to and from them.
Layer 4 was where the user programs were found. They did not have to worry
about process, memory, console, or I/O management. The system operator
process was located in layer 5.

These layers are so designed that each layer uses the functions of the lower-
level layers. This simplifies the debugging process: if the lower-level layers are
already debugged and an error occurs during debugging, then the error must be on that
layer only, as the lower-level layers have already been debugged.

The main disadvantage of this structure is that at each layer, the data needs to be
modified and passed on, which adds overhead to the system. Moreover, careful
planning of the layers is necessary, as a layer can use only lower-level layers.
UNIX is an example of this structure.

The main advantage of the layered approach is simplicity of construction and
debugging. The layers are selected so that each uses functions (operations) and
services of only lower-level layers. This approach simplifies debugging and
system verification. The first layer can be debugged without any concern for the
rest of the system, because, by definition, it uses only the basic hardware (which
is assumed correct) to implement its functions. Once the first layer is debugged,
its correct functioning can be assumed while the second layer is debugged, and
so on. If an error is found during the debugging of a particular layer, the error
must be on that layer, because the layers below it are already debugged. Thus,
the design and implementation of the system are simplified.
Each layer is implemented only with operations provided by lower-level layers.
A layer does not need to know how these operations are implemented; it needs
to know only what these operations do. Hence, each layer hides the existence of
certain data structures, operations, and hardware from higher-level layers. The
major difficulty with the layered approach involves appropriately defining the
various layers. Because a layer can use only lower-level layers, careful planning
is necessary.
3. Microkernels Structure
We have already seen that as UNIX expanded, the kernel became large and
difficult to manage. In the mid-1980s, researchers at Carnegie Mellon
University developed an operating system called Mach that modularized the
kernel using the microkernel approach. This method structures the operating
system by removing all nonessential components from the kernel and
implementing them as system and user-level programs. The result is a smaller
kernel. There is little consensus regarding which services should remain in the
kernel and which should be implemented in user space. Typically, however,
microkernels provide minimal process and memory management, in addition to
a communication facility.

The main function of the microkernel is to provide communication between the
client program and the various services that are also running in user space.
Communication is provided through message passing. For example, if the
client program wishes to access a file, it must interact with the file server. The
client program and service never interact directly. Rather, they communicate
indirectly by exchanging messages with the microkernel.
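This indirect, message-passing interaction can be sketched as follows. The message format and service names (`file_server`, the `op`/`path` fields) are invented for the example and do not correspond to any real microkernel's API:

```python
# Sketch of indirect communication in a microkernel: the client and the
# file server never call each other; each only exchanges messages with
# the kernel, which routes them to the registered service.

class Microkernel:
    def __init__(self):
        self.services = {}

    def register(self, name, service):
        self.services[name] = service

    def send(self, destination, message):
        # The kernel routes the message to the named service and
        # returns the service's reply to the sender.
        return self.services[destination].handle(message)

class FileServer:
    def __init__(self):
        self.files = {"notes.txt": "hello"}

    def handle(self, message):
        if message["op"] == "read":
            return self.files.get(message["path"], "")

kernel = Microkernel()
kernel.register("file_server", FileServer())

# The client program asks the kernel, not the file server directly.
reply = kernel.send("file_server", {"op": "read", "path": "notes.txt"})
print(reply)  # -> hello
```

Because every request flows through `send`, the kernel stays small: it only routes messages, while the file service itself runs as an ordinary user-space process.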
One benefit of the microkernel approach is that it makes extending the operating
system easier. All new services are added to user space and consequently do not
require modification of the kernel. When the kernel does have to be modified,
the changes tend to be fewer, because the microkernel is a smaller kernel. The
resulting operating system is easier to port from one hardware design to another.
The microkernel also provides more security and reliability, since most services
are running as user rather than kernel processes. If a service fails, the rest of
the operating system remains untouched.
Some contemporary operating systems have used the microkernel approach.

[Figure: microkernel architecture. The application program, file system, and device drivers run in user mode; the microkernel runs in kernel mode on top of the hardware, and the user-mode components communicate by passing messages through the microkernel.]

Unfortunately, the performance of microkernels can suffer due to increased
system-function overhead. Consider the history of Windows NT. The first
release had a layered microkernel organization. This version's performance was
low compared with that of Windows 95. Windows NT 4.0 partially corrected
the performance problem by moving layers from user space to kernel space and
integrating them more closely. By the time Windows XP was designed,
Windows architecture had become more monolithic than microkernel.

4. Microkernels Structure

• The basic idea behind microkernels is to remove all non-essential
services from the kernel, and implement them as system applications
instead, thereby making the kernel as small and efficient as possible.
• Most microkernels provide basic process and memory management, and
message passing between other services, and not much more.
• Security and protection can be enhanced, as most services are performed
in user mode, not kernel mode.
• System expansion can also be easier, because it only involves adding
more system applications, not rebuilding a new kernel.
• Mach was the first and most widely known microkernel, and now forms a
major component of Mac OS X.
• Windows NT was originally microkernel, but suffered from performance
problems relative to Windows 95. NT 4.0 improved performance by
moving more services into the kernel, and now XP is back to being more
monolithic.
• Another microkernel example is QNX, a real-time OS for embedded
systems.

5. Modules Structure

• Modern OS development is object-oriented, with a relatively small core


kernel and a set of modules which can be linked in dynamically. See for
example the Solaris structure, as shown in Figure below.
• Modules are similar to layers in that each subsystem has clearly defined
tasks and interfaces, but any module is free to contact any other module,
eliminating the problems of going through multiple intermediary layers,
as well as the chicken-and-egg problems.
• The kernel is relatively small in this architecture, similar to microkernels,
but the kernel does not have to implement message passing since modules
are free to contact each other directly.

[Figure: the Solaris kernel, with a core surrounded by loadable modules: scheduling classes, file systems, device and bus drivers, loadable system calls, and miscellaneous modules.]

Operating-System Debugging
We have mentioned debugging frequently in this chapter. Here, we take a closer
look. Broadly, debugging is the activity of finding and fixing errors in a system,
both in hardware and in software. Performance problems are considered bugs,
so debugging can also include performance tuning, which seeks to improve
performance by removing processing bottlenecks.

Operating Systems Categories
1. Mainframe Operating Systems
At the high end are the operating systems for mainframes, those room-sized computers still found in major corporate data centers. These computers differ from personal computers in terms of their I/O capacity: a mainframe with 1000 disks and millions of gigabytes of data is not unusual. Mainframes are also used as high-end Web servers, servers for large-scale electronic commerce sites, and servers for business-to-business transactions.
The operating systems for mainframes are heavily oriented toward processing
many jobs at once, most of which need huge amounts of I/O. They typically
offer three kinds of services: batch, transaction processing, and timesharing. A
batch system is one that processes routine jobs without any interactive user
present.
Claims processing in an insurance company or sales reporting for a chain of
stores is typically done in batch mode. Transaction-processing systems handle
large numbers of small requests, for example, check processing at a bank or
airline reservations.
Each unit of work is small, but the system must handle hundreds or thousands
per second. Timesharing systems allow multiple remote users to run jobs on the
computer at once, such as querying a big database. However, mainframe
operating systems are gradually being replaced by UNIX variants such as
Linux.

2. Personal Computer Operating Systems


The next category is the personal computer operating system. Modern ones all
support multiprogramming, often with dozens of programs started up at boot
time. Their job is to provide good support to a single user. They are widely used
for word processing, spreadsheets, games, and Internet access. Common
examples are Linux, Windows 8, Windows 10, and Apple's OS X.
3. Handheld Computer Operating Systems
A handheld computer, originally known as a PDA (Personal Digital
Assistant), is a small computer that can be held in your hand during operation.
Smartphones and tablets are the best-known examples. As we have already
seen, this market is currently dominated by Google's Android and Apple's iOS.
Most of these devices boast multicore CPUs, GPS, cameras and other sensors,
copious amounts of memory, and sophisticated operating systems.

4. Embedded Operating Systems


Embedded systems run on the computers that control devices that are not
generally thought of as computers and which do not accept user-installed
software. Typical examples are microwave ovens, TV sets, cars, DVD
recorders, traditional phones, and MP3 players. The main property which distinguishes embedded systems from handhelds is the certainty that no untrusted software will ever run on them. You cannot download new applications to your microwave oven; all the software is in ROM. This means that there is no need for protection between applications.

5. Sensor-Node Operating Systems


Networks of tiny sensor nodes are being deployed for numerous purposes.
These nodes are tiny computers that communicate with each other and with a
base station using wireless communication. Sensor networks are used to protect
the perimeters of buildings, guard national borders, detect fires in forests,
measure temperature and precipitation for weather forecasting, glean
information about enemy movements on battlefields, and much more. Each
sensor node is a real computer, with a CPU, RAM, ROM, and one or more
environmental sensors. It runs a small, but real operating system, usually one
that is event driven, responding to external events or making measurements

periodically based on an internal clock. The operating system has to be small and simple because the nodes have little RAM, and battery lifetime is a major issue. Also, as with embedded systems, all the programs are loaded in advance.

6. Real-Time Operating Systems


Another type of operating system is the real-time system. These systems are
characterized by having time as a key parameter. For example, in industrial
process-control systems, real-time computers have to collect data about the
production process and use it to control machines in the factory. Often there are
hard deadlines that must be met. For example, if a car is moving down an
assembly line, certain actions must take place at certain instants of time. If, for
example, a welding robot welds too early or too late, the car will be ruined. If
the action absolutely must occur at a certain moment (or within a certain range),
we have a hard real-time system. Many of these are found in industrial process
control, avionics, military, and similar application areas. These systems must
provide absolute guarantees that a certain action will occur by a certain time.
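The notion of a hard deadline can be illustrated with a small earliest-deadline-first (EDF) sketch. The task names and times below are made up for illustration; a real system would use a genuine real-time scheduler rather than this simulation:

```python
def edf_misses(tasks):
    """tasks: list of (name, duration, deadline). Run them to completion
    in deadline order (earliest deadline first) on one CPU, starting at
    time 0, and return the names of tasks that finish after their
    deadline. In a hard real-time system any miss is a failure."""
    now = 0
    misses = []
    for name, duration, deadline in sorted(tasks, key=lambda t: t[2]):
        now += duration                 # task runs to completion
        if now > deadline:
            misses.append(name)
    return misses

# A feasible schedule meets every deadline; an overloaded one cannot.
print(edf_misses([("weld", 2, 3), ("paint", 4, 10)]))   # []
print(edf_misses([("weld", 2, 3), ("move", 2, 3)]))     # ['move']
```

In the second call no ordering can finish both 2-unit tasks by time 3, so a hard real-time designer would have to reject that workload up front rather than discover the miss at run time.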

7. Smart Card Operating Systems


The smallest operating systems run on smart cards, which are credit-card-sized
devices containing a CPU chip. They have very severe processing power and
memory constraints. Some are powered by contacts in the reader into which
they are inserted, but contactless smart cards are inductively powered, which
greatly limits what they can do. Some of them can handle only a single function,
such as electronic payments, but others can handle multiple functions. Often
these are proprietary systems.

Computer Startup

• Bootstrap program is loaded at power-up or reboot:
- Typically stored in ROM or electrically erasable programmable read-only memory (EEPROM), generally known as firmware,
- Initializes all aspects of the system,
- Loads the operating system kernel and starts execution.
• BIOS: Short for Basic Input/Output System, the BIOS is a ROM chip found on the motherboard that allows access to and configuration of the computer at the most basic level; it tests the computer hardware and makes sure no errors exist before loading the operating system. After that, the BIOS locates a Master Boot Record (MBR) to launch the bootloader, which loads the OS. The BIOS program evaluates the system hardware and checks the available boot devices containing an MBR. It then reads the first sector into memory at 0000:7C00H and determines whether the final signature is 55AAH. Next, it transfers control to the MBR to boot the OS. If the final signature does not match, the BIOS looks for additional bootable devices. If no devices are found, the OS does not boot, and the user receives an error message.
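The signature check described above can be sketched in a few lines, here operating on an in-memory fake sector; a real BIOS performs this test in firmware on the sector it has just loaded to 0000:7C00H:

```python
def is_bootable(sector: bytes) -> bool:
    """A valid boot sector is 512 bytes whose last two bytes are
    0x55, 0xAA (read as a little-endian word, the signature 55AAH)."""
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

# A fake MBR: 510 bytes of boot code + partition table, then the signature.
mbr = bytes(510) + b"\x55\xAA"
print(is_bootable(mbr))          # True
print(is_bootable(bytes(512)))   # False: BIOS would try the next boot device
```

When the check fails, the firmware moves on to the next device in the boot order, which is why an unformatted disk produces a "no bootable device" style error rather than a crash.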

• UEFI: Short for Unified Extensible Firmware Interface, UEFI is an update to the traditional BIOS. UEFI supports hard drives larger than 2 TB and allows a faster boot process, so there is less delay between powering on the PC and the operating system loading successfully.


You might also like