
OPERATING SYSTEMS

UNIT - I: Introduction - What Is an Operating System - Operating System Software - A Brief History of Machine Hardware - Types of Operating Systems - Brief History of Operating System Development - Object-Oriented Design

An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system. A
more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application
programs.
An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
a memory management module, I/O programs, and a file system.

History of Operating System


The operating system has been evolving through the years. The following table shows
the history of OS.

Generation   Year         Electronic device used        Types of OS / devices
First        1945-55      Vacuum Tubes                  Plug Boards
Second       1955-65      Transistors                   Batch Systems
Third        1965-80      Integrated Circuits (ICs)     Multiprogramming
Fourth       Since 1980   Large Scale Integration       PC
Characteristics of Operating Systems
Let us now discuss some of the important characteristic features of operating systems:
 Device Management: The operating system keeps track of all the devices. So, it is
also called the Input/Output controller that decides which process gets the device,
when, and for how much time.
 File Management: It keeps track of files, their locations, and their status, allocates and de-allocates file resources, and decides which process gets access to a file.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
 Memory Management: It keeps track of primary memory, such as which part is in use by whom and which part is not in use, and it also allocates memory when a process or program requests it.
 Processor Management: It allocates the processor to a process and then de-
allocates the processor when it is no longer required or the job is done.
 Control on System Performance: It records the delay between a request for a service and the response from the system.
 Security: It prevents unauthorized access to programs and data using passwords or some kind of protection technique.
 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system resources to be used efficiently.
 Ability to Evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.
 Throughput: An OS should be constructed so that it can give maximum throughput (number of tasks completed per unit time).
Functionalities of Operating System
 Resource Management: When multiple users access the system in parallel, the OS works as a resource manager; its responsibility is to provide hardware to the users. This decreases the load on the system.
 Process Management: It includes various tasks like scheduling and termination of processes, done with the help of CPU scheduling algorithms.
 Storage Management: The file system mechanism is used for the management of storage. NTFS, CIFS, NFS, etc. are some file systems. All the data is stored on various tracks of hard disks, which are managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The
operating system has to keep track of how much memory has been used and by
whom. It has to decide which process needs memory space and how much. OS also
has to allocate and deallocate the memory space.
 Security/Privacy Management: Privacy is also provided by the Operating system
using passwords so that unauthorized applications can’t access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to
data.
The operating system as a user interface involves the following layers:

1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system
programs, and application programs. The hardware consists of memory, CPU, ALU, I/O
devices, peripheral devices, and storage devices. The system program consists of
compilers, loaders, editors, OS, etc. The application program consists of business
programs and database programs.

Every computer must have an operating system to run other programs. The operating system coordinates the use of the hardware among the various system programs and application programs for various users. It simply provides an environment within which other programs can do useful work.
The operating system is a set of special programs that run on a computer system that
allows it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the display
screen, and controlling peripheral devices.

Layered Design of Operating System

The extended machine provides operations like context save, dispatching, swapping, and
I/O initiation. The operating system layer is located on top of the extended machine
layer. This arrangement considerably simplifies the coding and testing of OS modules by
separating the algorithm of a function from the implementation of its primitive
operations. It is now easier to test, debug, and modify an OS module than in a monolithic
OS. We say that the lower layer provides an abstraction that is the extended machine.
We call the operating system layer the top layer of the OS.

Purposes and Tasks of Operating Systems


An operating system performs several tasks and serves several purposes, which are described below.

Purposes of an Operating System

 It controls the allocation and use of the computing system's resources among the various users and tasks.
 It provides an interface between the computer hardware and the programmer that simplifies the coding and debugging of application programs.

Tasks of an Operating System

1. Provides the facilities to create and modify programs and data files using an editor.
2. Provides access to the compiler for translating the user program from high-level language to machine language.
3. Provides a loader program to move the compiled program code to the computer's memory for execution.
4. Provides routines that handle the details of I/O programming.
I/O System Management
The module that keeps track of the status of devices is called the I/O traffic controller.
Each I/O device has a device handler that resides in a separate process associated with
that device.
The I/O subsystem consists of
 A memory management component that includes buffering, caching, and spooling.
 A general device driver interface.
Drivers for Specific Hardware Devices
In addition to the drivers required for specific hardware devices, the operating environment relies on system programs such as assemblers, compilers, interpreters, and loaders, which are discussed below.

Assembler

The input to an assembler is an assembly language program. The output is an object program plus information that enables the loader to prepare the object program for execution. At one time, the computer programmer had at his disposal a basic machine that interpreted, through hardware, certain fundamental instructions. He would program this computer by writing a series of ones and zeros (machine language) and placing them into the memory of the machine. Examples of assembly languages include x86, ARM, and MIPS assembly.

Compiler and Interpreter

High-level languages – examples are C, C++, Java, Python, etc. (around 300+ well-known high-level languages) – are processed by compilers and interpreters. A compiler is a program that accepts a source program in a “high-level language” and produces machine code in one go. Some of the compiled languages are FORTRAN, COBOL, C, C++, Rust, and Go. An interpreter is a program that does the same thing but converts high-level code to machine code line-by-line rather than all at once. Examples of interpreted languages are
 Python
 Perl
 Ruby
Loader
A Loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating, and direct-linking. In general, the
loader must load, relocate and link the object program. The loader is a program that
places programs into memory and prepares them for execution. In a simple loading
scheme, the assembler outputs the machine language translation of a program on a
secondary device and a loader places it in the core. The loader places into memory the
machine language version of the user’s program and transfers control to it. Since the
loader program is much smaller than the assembler, this makes more memory (core) available to the user's program.

Components of an Operating Systems


There are two basic components of an Operating System.
 Shell
 Kernel

Shell

The shell is the outermost layer of the operating system, and it handles interaction with the user. The main task of the shell is to manage the interaction between the user and the OS: it accepts input from the user, interprets that input for the OS, and handles the output from the OS. It works as a channel of communication between the user and the OS.

Kernel

The kernel is the core component of the operating system. The rest of the components depend on the kernel for the important services that the operating system provides. The kernel is the primary interface between the operating system and the hardware.
Functions of Kernel
The following functions are to be performed by the Kernel.
 It helps in controlling the System Calls.
 It helps in I/O Management.
 It helps in the management of applications, memory, etc.
Types of Kernel
There are four types of Kernel that are mentioned below.
 Monolithic Kernel
 Microkernel
 Hybrid Kernel
 Exokernel
For more, refer to Kernel in Operating System .
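To make the kernel's role in handling system calls concrete, here is a minimal sketch (assuming a POSIX environment such as Linux) of a user program requesting I/O through the write system call; the program never touches the display hardware itself, the kernel performs the operation on its behalf.

#include <unistd.h>   /* write(): thin wrapper around the kernel's write system call */
#include <string.h>

int main(void) {
    const char *msg = "Hello from user space!\n";

    /* The process requests I/O; the kernel validates the file descriptor,
       performs the device operation, and returns the number of bytes written. */
    ssize_t written = write(STDOUT_FILENO, msg, strlen(msg));

    /* A negative return value means the kernel rejected or failed the request. */
    return (written < 0) ? 1 : 0;
}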
Difference Between 32-Bit and 64-Bit Operating Systems

32-Bit Operating System:
 A 32-bit OS is required for 32-bit processors, which are not capable of running a 64-bit OS.
 It gives comparatively low performance.
 A smaller amount of data is managed in a 32-bit OS compared to a 64-bit OS.
 It can address 2^32 bytes (4 GB) of RAM.

64-Bit Operating System:
 64-bit processors can run either a 32-bit OS or a 64-bit OS.
 It provides highly efficient performance.
 A large amount of data can be handled in a 64-bit OS.
 It can address 2^64 bytes of RAM.
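As a small check of the addressing difference, the following sketch prints the pointer width of the machine it is compiled on: a 32-bit build reports 4 bytes (2^32 addressable bytes), while a 64-bit build reports 8 bytes (2^64 addressable bytes). It is an illustrative snippet, not tied to any particular operating system.

#include <stdio.h>

int main(void) {
    /* Pointer width determines how many bytes the program can address:
       4-byte pointers -> 2^32 bytes, 8-byte pointers -> 2^64 bytes. */
    size_t ptr_bytes = sizeof(void *);
    printf("Pointer size: %zu bytes -> 2^%zu addressable bytes\n",
           ptr_bytes, ptr_bytes * 8);
    return 0;
}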
The fundamental goals of an operating system are:
 Efficient use: Ensure efficient use of a computer’s resources.
 User convenience: Provide convenient methods of using a computer system.
 Non interference: Prevent interference in the activities of its users.
1. Efficient use:
An operating system must ensure efficient use of the fundamental computer system
resources of memory, CPU, and I/O devices such as disks and printers. Poor efficiency
can result if a program does not use a resource allocated to it. Efficient use of resources
can be obtained by monitoring use of resources and performing corrective actions when
necessary. However, monitoring use of resources increases the overhead, which lowers
efficiency of use. In practice, operating systems that emphasize efficient use limit their
overhead by either restricting their focus to efficiency of a few important resources, like
the CPU and the memory, or by not monitoring the use of resources at all, and instead
handling user programs and resources in a manner that guarantees high efficiency.
2. User convenience:
In the early days of computing, user convenience was synonymous with bare necessity—
the mere ability to execute a program written in a higher level language was considered
adequate. Experience with early operating systems led to demands for better service,
which in those days meant only fast response to a user request. Other facets of user
convenience evolved with the use of computers in new fields. Early operating systems
had command-line interfaces, which required a user to type in a command and specify
values of its parameters. Users needed substantial training to learn use of the
commands, which was acceptable because most users were scientists or computer
professionals. However, simpler interfaces were needed to facilitate use of computers by
new classes of users. Hence graphical user interfaces (GUIs) were evolved. These
interfaces used icons on a screen to represent programs and files and interpreted mouse
clicks on the icons and associated menus as commands concerning them. In many ways,
this move can be compared to the spread of car driving skills in the first half of the
twentieth century. Over a period of time, driving became less of a specialty and more of
a skill that could be acquired with limited training and experience.
3. Non-interference:
A computer user can face different kinds of interference in his computational activities.
Execution of his program can be disrupted by actions of other persons, or the OS services
which he wishes to use can be disrupted in a similar manner. The OS prevents such
interference by allocating resources for exclusive use of programs and OS services, and
preventing illegal accesses to resources. Another form of interference concerns programs
and data stored in user files.

Advantages of Operating System


 It helps in managing the data present in the device i.e. Memory Management.
 It helps in making the best use of computer hardware.
 It helps in maintaining the security of the device.
 It helps different applications run efficiently.
Disadvantages of Operating System
 Operating Systems can be difficult for someone to use.
 Some OS are expensive and they require heavy maintenance.
 Operating systems can come under threat if targeted by hackers.

TYPES OF OPERATING SYSTEMS

There are several types of Operating Systems which are mentioned below.

 Batch Operating System


 Multi-Programming System
 Multi-Processing System
 Multi-Tasking Operating System
 Time-Sharing Operating System
 Distributed Operating System
 Network Operating System
 Real-Time Operating System

1. Batch Operating System

This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System


 Multiple users can share the batch systems.
 The idle time for the batch system is very low.
 It is easy to manage large work repeatedly in batch systems.
Disadvantages of Batch Operating System
 The computer operators must be familiar with batch systems.
 Batch systems are hard to debug.
 They are sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
 It is difficult to accurately predict the time required for a job to complete while it is in the queue.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System

A multiprogramming operating system can be simply described as one in which more than one program is present in main memory and any one of them can be kept in execution. This is basically used for better utilization of resources.

Advantages of Multi-Programming Operating System


 Multi Programming increases the Throughput of the System.
 It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
 There is no facility for user interaction with the system while programs are executing.

3. Multi-Processing Operating System

A multi-processing operating system is a type of operating system in which more than one CPU is used for the execution of processes. It improves the throughput of the system.

Multiprocessing

Advantages of Multi-Processing Operating System


 It increases the throughput of the system.
 As it has several processors, if one processor fails, we can continue with another processor.
Disadvantages of Multi-Processing Operating System
 Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.

4. Multi-Tasking Operating System

A multitasking operating system is simply a multiprogramming operating system with the added facility of a round-robin scheduling algorithm. It can run multiple programs simultaneously.
There are two types of Multi-Tasking Systems which are listed below.
 Preemptive Multi-Tasking
 Cooperative Multi-Tasking

Multitasking

Advantages of Multi-Tasking Operating System


 Multiple Programs can be executed simultaneously in Multi-Tasking Operating System.
 It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
 The system may get heated when multiple heavy programs are run at the same time.

5. Time-Sharing Operating Systems

Each task is given some time to execute so that all the tasks work smoothly. Each user
gets the time of the CPU as they use a single system. These systems are also known as
Multitasking Systems. The task can be from a single user or different users also. The time
that each task gets to execute is called the quantum. After this time interval is over, the OS switches over to the next task.
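To make the idea of a quantum concrete, the sketch below simulates round-robin time sharing for a few tasks: each task runs for at most one quantum before the CPU is handed to the next ready task. The task count, burst times, and quantum are made-up values for illustration, not those of any real scheduler.

#include <stdio.h>

#define NTASKS 3
#define QUANTUM 4   /* time units each task may run before the OS switches */

int main(void) {
    int remaining[NTASKS] = {10, 5, 8};   /* hypothetical CPU bursts */
    int time = 0, done = 0;

    while (done < NTASKS) {
        for (int i = 0; i < NTASKS; i++) {
            if (remaining[i] == 0) continue;          /* task already finished */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: task %d runs for %d unit(s)\n", time, i, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {                  /* task completed */
                printf("t=%2d: task %d finished\n", time, i);
                done++;
            }
        }
    }
    return 0;
}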
Time-Sharing OS

Advantages of Time-Sharing OS
 Each task gets an equal opportunity.
 Fewer chances of duplication of software.
 CPU idle time can be reduced.
 Resource Sharing: Time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory, and peripherals, reducing the cost of hardware
and increasing efficiency.
 Improved Productivity: Time-sharing allows users to work concurrently, thereby
reducing the waiting time for their turn to use the computer. This increased
productivity translates to more work getting done in less time.
 Improved User Experience: Time-sharing provides an interactive environment that
allows users to communicate with the computer in real time, providing a better user
experience than batch processing.
Disadvantages of Time-Sharing OS
 Reliability problem.
 One must have to take care of the security and integrity of user programs and data.
 Data communication problem.
 High Overhead: Time-sharing systems have a higher overhead than other operating
systems due to the need for scheduling, context switching, and other overheads that
come with supporting multiple users.
 Complexity: Time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of bugs
and errors.
 Security Risks: With multiple users sharing resources, the risk of security breaches
increases. Time-sharing systems require careful management of user access,
authentication, and authorization to ensure the security of data and software.
Examples of Time-Sharing OS with explanation
 IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first
introduced in 1972. It is still in use today, providing a virtual machine environment
that allows multiple users to run their own instances of operating systems and
applications.
 TSO (Time Sharing Option): TSO is a time-sharing operating system that was first
introduced in the 1960s by IBM for the IBM System/360 mainframe computer. It
allowed multiple users to access the same computer simultaneously, running their own
applications.
 Windows Terminal Services: Windows Terminal Services is a time-sharing operating
system that allows multiple users to access a Windows server remotely. Users can run
their own applications and access shared resources, such as printers and network
storage, in real-time.

6. Distributed Operating System

This type of operating system is a recent advancement in the world of computer technology and is being widely accepted all over the world at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU. These are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size and function. The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected within the network; i.e., remote access is enabled within the devices connected in that network.

Distributed OS

Advantages of Distributed Operating System


 Failure of one will not affect the other network communication, as all systems are
independent of each other.
 Electronic mail increases the data exchange speed.
 Since resources are being shared, computation is highly fast and durable.
 Load on host computer reduces.
 These systems are easily scalable as many systems can be easily added to the
network.
 Delay in data processing reduces.
Disadvantages of Distributed Operating System
 Failure of the main network will stop the entire communication.
 The languages used to establish distributed systems are not well-defined yet.
 These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not well understood yet.
Examples of Distributed Operating Systems are LOCUS, etc.
A distributed OS must tackle the following issues:
 Networking causes delays in the transfer of data between nodes of a distributed system. Such delays may lead to an inconsistent view of data located in different nodes, and make it difficult to know the chronological order in which events occurred in the system.
 Control functions like scheduling, resource allocation, and deadlock detection have to
be performed in several nodes to achieve computation speedup and provide reliable
operation when computers or networking components fail.
 Messages exchanged by processes present in different nodes may travel over public
networks and pass through computer systems that are not controlled by the
distributed operating system. An intruder may exploit this feature to tamper with
messages, or create fake messages to fool the authentication procedure and
masquerade as a user of the system.

7. Network Operating System

These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems
allow shared access to files, printers, security, applications, and other networking
functions over a small private network. One more important aspect of Network Operating
Systems is that all the users are well aware of the underlying configuration, of all other
users within the network, their individual connections, etc. and that’s why these
computers are popularly known as tightly coupled systems .

Network Operating System



Advantages of Network Operating System


 Highly stable centralized servers.
 Security concerns are handled through servers.
 New technologies and hardware up-gradation are easily integrated into the system.
 Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System
 Servers are costly.
 Users have to depend on a central location for most operations.
 Maintenance and updates are required regularly.
Examples of Network Operating Systems are Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

8. Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like
missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
 Hard Real-Time Systems:
Hard Real-Time OSs are meant for applications where time constraints are very strict
and even the shortest possible delay is not acceptable. These systems are built for life-saving applications, such as automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.
For more, refer to the Difference Between Hard Real-Time OS and Soft Real-Time OS .

Real-Time Operating System

Advantages of RTOS
 Maximum Consumption: Maximum utilization of devices and systems, thus more
output from all the resources.
 Task Shifting: The time taken to shift between tasks in these systems is very short. For example, older systems take about 10 microseconds to shift from one task to another, whereas the latest systems take about 3 microseconds.
 Focus on Application: Focus on running applications and less importance on
applications that are in the queue.
 Real-time operating system in the embedded system: Since the size of programs
is small, RTOS can also be used in embedded systems like in transport and others.
 Error Free: These types of systems are designed to be error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on a few applications to avoid errors.
 Uses heavy system resources: Sometimes the system resources required are substantial, and they are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt signals to respond to interrupts as quickly as possible.
 Thread Priority: It is difficult to set thread priorities, as these systems are less prone to switching tasks.
Real-time operating systems are used in scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

Generation of Operating System


Below are four generations of operating systems.
 The First Generation
 The Second Generation
 The Third Generation
 The Fourth Generation

1. The First Generation (1940 to early 1950s)

The first electronic computers of the 1940s were built without an operating system. Early computer users had complete control over the machine and wrote programs in pure machine language for every task. During this generation, a programmer could merely execute and solve basic mathematical calculations, and an operating system was not needed for these computations.

2. The Second Generation (1955 – 1965)

The first operating system, GMOS, was developed in the early 1950s by General Motors for IBM computers. The second-generation operating system was based on a single-stream batch processing system, because it gathered all related jobs into groups or batches and then submitted them to the operating system using punch cards to complete all of them.

3. The Third Generation (1965 – 1980)

In these systems, control is transferred to the operating system upon each job's completion, whether routine or unexpected. The operating system cleans up after each job finishes before reading and starting the subsequent job from a punch card. Large, professionally operated machines known as mainframes were introduced during this period. In the late 1960s, operating system designers were able to create a new operating system capable of multiprogramming: the simultaneous execution of several programs on a single computer. Multiprogramming had to be introduced in order to create operating systems that keep a CPU active at all times by carrying out multiple jobs on a computer at once. With the release of the DEC PDP-1 in 1961, minicomputers saw a new phase of growth and development.

4. The Fourth Generation (1980 – Present Day)

Personal computers grew out of these minicomputers, and the fourth generation of operating systems is linked to the evolution of the personal computer. Nonetheless, third-generation minicomputers and personal computers have many similarities; at that time, minicomputers were only slightly more expensive than personal computers, which were still quite costly.
The development of Microsoft and the Windows operating system was a significant influence on the personal computer. Microsoft was founded in 1975, and Bill Gates and Paul Allen set out to advance personal computing. MS-DOS was released in 1981, but users found its commands complex and difficult to decipher. Windows went on to become the most widely used and well-liked operating system available; Microsoft subsequently released a number of versions, including Windows 95, Windows 98, Windows XP, Windows 7, and Windows 10. Apple's operating systems are another well-known family in addition to Windows.

Evolution of Operating Systems

Operating systems have evolved over the years, going through several changes before reaching their current form. These changes in the operating system are known as the evolution of operating systems. The OS improves itself with the invention of new technology; basically, the OS adds features for new technology, making itself more powerful. Let us see the evolution of the operating system year-wise in detail:
 No OS – (0s to 1940s)
 Batch Processing Systems -(1940s to 1950s)
 Multiprogramming Systems -(1950s to 1960s)
 Time-Sharing Systems -(1960s to 1970s)
 Introduction of GUI -(1970s to 1980s)
 Networked Systems – (1980s to 1990s)
 Mobile Operating Systems – (Late 1990s to Early 2000s)
 AI Integration – (2010s to ongoing)

1. No OS – (0s to 1940s)

As we know, before the 1940s there was no operating system. People lacked an OS in their computer systems, so they had to manually enter instructions for each task in machine language (a 0-and-1 based language). At that time it was very hard for users to implement even a simple task, and it was very time-consuming and not user-friendly, because not everyone had the level of understanding needed for machine language; it required deep understanding.

2. Batch Processing Systems -(1940s to 1950s)

With the passage of time, batch processing systems came onto the market. Users now wrote their programs on punch cards and handed them to the computer operator. The operator then made different batches of similar types of jobs and served each batch (group of jobs) to the CPU one by one. The CPU first executes the jobs of one batch and then moves on to the jobs of the next batch in a sequential manner.

3. Multiprogramming Systems -(1950s to 1960s)

Multiprogramming was the first operating system concept where a real revolution began. It gave the user the facility to load multiple programs into memory, with a specific portion of memory given to each program. When one program is waiting for an I/O operation (which takes a long time), the OS permits the CPU to switch from that program to another program (the first in the ready queue), so that execution can continue without the CPU sitting idle.

4. Time-Sharing Systems -(1960s to 1970s)

Time-sharing systems are an extended version of multiprogramming systems. Here one extra feature was added: to avoid any single program using the CPU for a long time, every program is given access to the CPU after a certain interval of time. Basically, the OS switches from one program to another after a certain interval of time so that every program can get access to the CPU and complete its work.

5. Introduction of GUI -(1970s to 1980s)

With the passage of time, graphical user interfaces (GUIs) arrived. For the first time the OS became more user-friendly and changed the way people interact with computers. A GUI provides visual elements that make the user's interaction with the computer more comfortable: users can simply click on visual elements rather than typing commands. Features of the GUI in Microsoft Windows include icons, menus, and windows.

6. Networked Systems – (1980s to 1990s)

In the 1980s, the popularity of computer networks was at its peak, and a special type of operating system was needed to manage network communication. Operating systems like Novell NetWare and Windows NT were developed to manage network communication, giving users the facility to work in a collaborative environment and making file sharing and remote access very easy.

7. Mobile Operating Systems – (Late 1990s to Early 2000s)

The invention of smartphones created a big revolution in the software industry. To handle the operation of smartphones, special types of operating systems were developed, such as iOS and Android. These operating systems were optimized over time and became more powerful.

8. AI Integration – (2010s to ongoing)

With the growth of time, artificial intelligence came into the picture. Operating systems integrate AI features such as Siri, Google Assistant, and Alexa and have become more powerful and efficient in many ways. These AI features combined with the operating system create entirely new capabilities like voice commands, predictive text, and personalized recommendations.
Note: The list above describes how the OS evolved over time by adding new features, but it does not mean that only new-generation operating systems are in use and the earlier ones are not; according to need, all of these operating systems are still used in the software industry.

Object-Oriented Analysis
Object-Oriented Analysis (OOA) is the first technical activity performed as part of
object-oriented software engineering. OOA introduces new concepts to investigate a
problem. It is based on a set of basic principles, which are as follows:
 The information domain is modeled:
o Let's say you're building a game. OOA helps you figure out all the things you need
to know about the game world – the characters, their features, and how they
interact. It’s like making a map of everything important.
 Behavior is represented:
o OOA also helps you understand what your game characters will do. If a character
jumps when you press a button, OOA helps describe that action. It’s like writing
down a script for each character.
 The function is described:
o Every program has specific tasks or jobs it needs to do. OOA helps you list and
describe these jobs. In our game, it could be tasks like moving characters or
keeping score. It’s like making a to-do list for your software.
 Data, functional, and behavioral models are divided to uncover greater
detail:
o OOA is smart about breaking things into different parts. It splits the job into three
categories: things your game knows (like scores), things your game does (like
jumping), and how things in your game behave (like characters moving around).
This makes it easier to understand.
 Starting Simple, Getting Detailed:
o OOA knows that at first, you just want to understand the big picture. So, it starts
with a simple version of your game or program. Later on, you add more details to
make it work perfectly. It’s like sketching a quick drawing before adding all the
colors and details.
The principles noted above form the foundation for the OOA approach.

Object-Oriented Design
In the object-oriented software development process, the analysis model, which is
initially formed through object-oriented analysis (OOA), undergoes a transformation
during object-oriented design (OOD). This evolution is crucial because it shapes the
analysis model into a detailed design model, essentially serving as a blueprint for
constructing the software.
The outcome of object-oriented design, or OOD, manifests in a design model
characterized by multiple levels of modularity. This modularity is expressed in two key
ways:
 Subsystem Partitioning:
o At a higher level, major components of the system are organized into
subsystems.
o This practice is similar to creating modules at the system level, providing a
structured and organized approach to managing the complexity of the software.
 Object Encapsulation:
o A more granular form of modularity is achieved through the encapsulation of
data manipulation operations into objects. It's like putting specific tasks (or operations) and the data they need into little boxes called "objects."
o Each object does its job neatly and keeps things organized. So, if our game has a
character jumping, we put all the jumping stuff neatly inside an object.
o It’s like having a box for each task, making everything easier to handle and
understand.
Furthermore, as part of the object-oriented design process, it is essential to define
specific aspects:
 Data Organization of Attributes:
o OOD involves specifying how data attributes are organized within the objects.
This includes determining the types of data each object will hold and how they
relate to one another, ensuring a coherent and efficient data structure.
 Procedural Description of Operations:
o OOD requires a procedural description for each operation that an object can
perform. This involves detailing the steps or processes involved in carrying out
specific tasks, ensuring clarity and precision in the implementation of
functionality.
The diagram below shows a design pyramid for object-oriented systems. It has the following four layers.

1. The Subsystem Layer: It represents the subsystem that enables software to achieve
user requirements and implement technical frameworks that meet user needs.
2. The Class and Object Layer: It represents the class hierarchies that enable the
system to develop using generalization and specialization. This layer also represents
each object.
3. The Message Layer: This layer deals with how objects interact with each other. It
includes messages sent between objects, method calls, and the flow of control within
the system.
4. The Responsibilities Layer: It focuses on the responsibilities of individual objects.
This includes defining the behavior of each class, specifying what each object is
responsible for, and how it responds to messages.
Benefits of Object-Oriented Analysis and Design(OOAD)
 Improved modularity: OOAD encourages the creation of small, reusable objects that
can be combined to create more complex systems, improving the modularity and
maintainability of the software.
 Better abstraction: OOAD provides a high-level, abstract representation of a
software system, making it easier to understand and maintain.
 Improved reuse: OOAD encourages the reuse of objects and object-oriented design
patterns, reducing the amount of code that needs to be written and improving the
quality and consistency of the software.
 Improved communication: OOAD provides a common vocabulary and methodology
for software developers, improving communication and collaboration within teams.
 Reusability: OOAD emphasizes the use of reusable components and design patterns,
which can save time and effort in software development by reducing the need to
create new code from scratch.
 Scalability: OOAD can help developers design software systems that are scalable and
can handle changes in user demand and business requirements over time.
 Maintainability: OOAD emphasizes modular design and can help developers create
software systems that are easier to maintain and update over time.
 Flexibility: OOAD can help developers design software systems that are flexible and
can adapt to changing business requirements over time.
 Improved software quality: OOAD emphasizes the use of encapsulation,
inheritance, and polymorphism, which can lead to software systems that are more
reliable, secure, and efficient.

UNIT - II: Early Systems: Single-User Contiguous Scheme - Fixed Partitions - Dynamic Partitions - Best-Fit versus First-Fit Allocation - Deallocation - Relocatable Dynamic Partitions. Virtual Memory: Paged Memory Allocation - Demand Paging - Page Replacement Policies and Concepts - Segmented Memory Allocation - Segmented/Demand Paged Memory Allocation - Virtual Memory - Cache Memory

Fixed (or static) Partitioning in Operating System

Fixed partitioning, also known as static partitioning, is a memory allocation technique used
in operating systems to divide the physical memory into fixed-size partitions or regions,
each assigned to a specific process or user. Each partition is typically allocated at system
boot time and remains dedicated to a specific process until it terminates or releases the
partition.

1. In fixed partitioning, the memory is divided into fixed-size chunks, with each chunk being reserved for a specific process. When a process requests memory, the operating system assigns it to the appropriate partition. The partition sizes are fixed at system boot time (they need not all be equal), and the memory allocation is done at that time.
2. Fixed partitioning has several advantages over other memory allocation techniques.
First, it is simple and easy to implement. Second, it is predictable, meaning the
operating system can ensure a minimum amount of memory for each process. Third, it
can prevent processes from interfering with each other’s memory space, improving the
security and stability of the system.
3. However, fixed partitioning also has some disadvantages. It can lead to internal
fragmentation, where memory in a partition remains unused. This can happen when the
process’s memory requirements are smaller than the partition size, leaving some
memory unused. Additionally, fixed partitioning limits the number of processes that can
run concurrently, as each process requires a dedicated partition.
Overall, fixed partitioning is a useful memory allocation technique in situations where the
number of processes is fixed, and the memory requirements for each process are known in
advance. It is commonly used in embedded systems, real-time systems, and systems with
limited memory resources.
In operating systems, Memory Management is the function responsible for allocating and
managing a computer’s main memory. Memory Management function keeps track of the
status of each memory location, either allocated or free to ensure effective and efficient
use of Primary Memory.

There are two Memory Management Techniques:

1. Contiguous
2. Non-Contiguous
In Contiguous Technique, executing process must be loaded entirely in the main memory.
Contiguous Technique can be divided into:
 Fixed (or static) partitioning
 Variable (or dynamic) partitioning

Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the main
memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed
but the size of each partition may or may not be the same. As it is
a contiguous allocation, hence no spanning is allowed. Here, partitions are made before execution or during system configuration.

As illustrated in the above figure, the first process consumes only 1MB out of its 4MB partition in main memory.
Hence, internal fragmentation in the first block is (4-1) = 3MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 =
7MB.

Suppose process P5 of size 7MB comes. But this process cannot be accommodated in spite
of available free space because of contiguous allocation (as spanning is not allowed).
Hence, 7MB becomes part of External Fragmentation.
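As a small worked example, the following sketch recomputes the internal fragmentation above, assuming the partition sizes (4, 8, 8, 16 MB) and process sizes (1, 7, 7, 14 MB) shown in the figure; these numbers are only illustrative.

#include <stdio.h>

#define NPART 4

int main(void) {
    int partition[NPART] = {4, 8, 8, 16};  /* fixed partition sizes in MB */
    int process[NPART]   = {1, 7, 7, 14};  /* process loaded in each partition, in MB */
    int total_internal = 0;

    for (int i = 0; i < NPART; i++) {
        int waste = partition[i] - process[i];   /* unused space inside the partition */
        printf("Partition %d: %dMB used of %dMB -> internal fragmentation %dMB\n",
               i + 1, process[i], partition[i], waste);
        total_internal += waste;
    }
    printf("Total internal fragmentation: %dMB\n", total_internal);  /* 7MB here */
    return 0;
}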

There are some advantages and disadvantages of fixed partitioning.

Advantages of Fixed Partitioning –


 Easy to implement: The algorithms needed to implement Fixed Partitioning are
straightforward and easy to implement.
 Low overhead: Fixed Partitioning requires minimal overhead, which makes it ideal for
systems with limited resources.
 Predictable: Fixed Partitioning ensures a predictable amount of memory for each
process.
 No external fragmentation: Fixed Partitioning eliminates the problem of external
fragmentation.
 Suitable for systems with a fixed number of processes: Fixed Partitioning is well-
suited for systems with a fixed number of processes and known memory requirements.
 Prevents processes from interfering with each other: Fixed Partitioning ensures
that processes do not interfere with each other’s memory space.
 Efficient use of memory: Fixed Partitioning ensures that memory is used efficiently by
allocating it to fixed-sized partitions.
 Good for batch processing: Fixed Partitioning is ideal for batch processing
environments where the number of processes is fixed.
 Better control over memory allocation: Fixed Partitioning gives the operating
system better control over the allocation of memory.
 Easy to debug: Fixed Partitioning is easy to debug since the size and location of each
process are predetermined.

Disadvantages of Fixed Partitioning –



1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This can cause internal fragmentation.

2. External Fragmentation:
The total unused space (as stated above) of various partitions cannot be used to load
the processes even though there is space available but not in the contiguous form (as
spanning is not allowed).

3. Limit process size:


Process of size greater than the size of the partition in Main Memory cannot be
accommodated. The partition size cannot be varied according to the size of the incoming
process size. Hence, the process size of 32MB in the above-stated example is invalid.

4. Limitation on Degree of Multiprogramming:


Partitions in main memory are made before execution or during system configuration. Main memory is divided into a fixed number of partitions. Suppose there are n partitions in RAM and m is the number of processes; then the condition m <= n must be fulfilled. A number of processes greater than the number of partitions in RAM is invalid in Fixed Partitioning.

Variable (or Dynamic) Partitioning in Operating System

In operating systems, Memory Management is the function responsible for allocating and
managing a computer’s main memory. The memory Management function keeps track of
the status of each memory location, either allocated or free to ensure effective and
efficient use of Primary Memory.
Below are Memory Management Techniques.
 Contiguous
 Non-Contiguous
In the Contiguous Technique, the executing process must be loaded entirely in the main
memory. The contiguous Technique can be divided into:
 Fixed (static) partitioning
 Variable (dynamic) partitioning

What is Variable (Dynamic) Partitioning?


It is a part of the Contiguous allocation technique. It is used to alleviate the problem faced
by Fixed Partitioning. In contrast with fixed partitioning, partitions are not made before the
execution or during system configuration. Various features associated with variable
Partitioning-
 Initially, RAM is empty and partitions are made during the run-time according to the
process’s need instead of partitioning during system configuration.
 The size of the partition will be equal to the incoming process.
 The partition size varies according to the need of the process so that internal
fragmentation can be avoided to ensure efficient utilization of RAM.
 The number of partitions in RAM is not fixed and depends on the number of incoming
processes and the Main Memory’s size.

Best-Fit Allocation in Operating System


INTRODUCTION:
Best-Fit Allocation is a memory allocation technique used in operating systems to allocate
memory to a process. In Best-Fit, the operating system searches through the list of free
blocks of memory to find the block that is closest in size to the memory request from the
process. Once a suitable block is found, the operating system splits the block into two
parts: the portion that will be allocated to the process, and the remaining free block.
Advantages of Best-Fit Allocation include improved memory utilization, as it allocates the
smallest block of memory that is sufficient to accommodate the memory request from the
process. Additionally, Best-Fit can also help to reduce memory fragmentation, as it tends to
allocate smaller blocks of memory that are less likely to become fragmented.
Disadvantages of Best-Fit Allocation include increased computational overhead, as the
search for the best-fit block of memory can be time-consuming and requires a more
complex search algorithm. Additionally, Best-Fit may also result in increased
fragmentation, as it may leave smaller blocks of memory scattered throughout the memory
space.
Overall, Best-Fit Allocation is a widely used memory allocation technique in operating
systems, but its effectiveness may vary depending on the specifics of the system and the
workload being executed.
For both fixed and dynamic memory allocation schemes, the operating system must keep a list of each memory location, noting which are free and which are busy. Then, as new jobs come into the system, the free partitions must be allocated.
These partitions may be allocated in 4 ways:

1. First-Fit Memory Allocation


2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
4. Next-Fit Memory Allocation
These are Contiguous memory allocation techniques.
Best-Fit Memory Allocation:
This method keeps the free/busy list in order by size – smallest to largest. In this method,
the operating system first searches the whole of the memory according to the size of the
given job and allocates it to the closest-fitting free partition in the memory, making it able
to use memory efficiently. Here the jobs are in the order from smallest job to largest job.
As illustrated in the above figure, the operating system first searches throughout the memory and allocates the job to the smallest possible memory partition that fits, making the memory allocation efficient.

Advantages of Best-Fit Allocation :

 Memory Efficient: The operating system allocates the job the minimum possible space in memory, making memory management very efficient.
 It is the best method for preventing memory from being wasted.
 Improved memory utilization
 Reduced memory fragmentation
 Minimizes external fragmentation

Disadvantages of Best-Fit Allocation :

 It is a Slow Process. Checking the whole memory for each job makes the working of the
operating system very slow. It takes a lot of time to complete the work.
 Increased computational overhead
 May lead to increased internal fragmentation
 Can result in slow memory allocation times.

Best-fit allocation is a memory allocation algorithm used in operating systems to allocate memory to processes. In this algorithm, the operating system searches for the smallest free block of memory that is big enough to accommodate the process being allocated memory.
Here is a brief overview of the best-fit allocation algorithm:

1. The operating system maintains a list of all free memory blocks available in the system.
2. When a process requests memory, the operating system searches the list for the
smallest free block of memory that is large enough to accommodate the process.
3. If a suitable block is found, the process is allocated memory from that block.
4. If no suitable block is found, the operating system can either wait until a suitable block
becomes available or request additional memory from the system.
5. The best-fit allocation algorithm has the advantage of minimizing external
fragmentation, as it searches for the smallest free block of memory that can
accommodate a process. However, it can also lead to more internal fragmentation, as
processes may not use the entire memory block allocated to them.
Overall, the best-fit allocation algorithm can be an effective way to allocate memory in an
operating system, but it is important to balance the advantages and disadvantages of this
approach with other allocation algorithms such as first-fit, next-fit, and worst-fit.
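A minimal sketch of the best-fit search is given below: it scans a list of free hole sizes and picks the smallest hole that can still satisfy the request. The hole sizes and the request are made-up values, and a real allocator would additionally split the chosen hole and update the free list.

#include <stdio.h>

#define NHOLES 5

/* Return the index of the smallest free hole >= request, or -1 if none fits. */
int best_fit(const int hole[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (hole[i] >= request && (best == -1 || hole[i] < hole[best]))
            best = i;
    }
    return best;
}

int main(void) {
    int hole[NHOLES] = {100, 500, 200, 300, 600};  /* free block sizes in KB (illustrative) */
    int request = 212;                             /* process memory request in KB */

    int idx = best_fit(hole, NHOLES, request);
    if (idx >= 0)
        printf("Best fit: hole %d of %dKB for a %dKB request\n", idx, hole[idx], request);
    else
        printf("No hole large enough for %dKB\n", request);
    return 0;
}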

Dynamic Partitioning

Advantages of Variable(Dynamic) Partitioning


 No Internal Fragmentation: In variable Partitioning, space in the main memory is
allocated strictly according to the need of the process, hence there is no case of internal
fragmentation. There will be no unused space left in the partition.
 No restriction on the Degree of Multiprogramming: More processes can be
accommodated due to the absence of internal fragmentation. A process can be loaded
until the memory is empty.
 No Limitation on the Size of the Process: In Fixed partitioning, the process with a
size greater than the size of the largest partition could not be loaded and the process
cannot be divided, as division is invalid in the contiguous allocation technique. Here, in variable partitioning, the process size is not restricted, since the partition size is decided according to the process size.

Disadvantages of Variable(Dynamic) Partitioning


 Difficult Implementation: Implementing variable Partitioning is difficult as compared
to Fixed Partitioning as it involves the allocation of memory during run-time rather than
during system configuration.
 External Fragmentation: There will be external fragmentation despite the absence of
internal fragmentation. For example, suppose in the above example- process P1(2MB)
and process P3(1MB) completed their execution. Hence two spaces are left i.e. 2MB and
1MB. Let’s suppose process P5 of size 3MB comes. The space in memory cannot be
allocated as no spanning is allowed in contiguous allocation. The rule says that the
process must be continuously present in the main memory to get executed. Hence it
results in External Fragmentation.

No Internal Fragmentation

Now P5 of size 3MB cannot be accommodated despite the required space being available in total, because no spanning is allowed in contiguous allocation.
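The sketch below expresses this situation numerically: the total free space is enough for the request, but no single hole is, which is exactly external fragmentation. The hole sizes (2MB and 1MB) and the 3MB request are taken from the example above.

#include <stdio.h>

int main(void) {
    int hole[] = {2, 1};          /* free holes left after P1 and P3 finished, in MB */
    int nholes = 2;
    int request = 3;              /* incoming process P5, in MB */

    int total = 0, largest = 0;
    for (int i = 0; i < nholes; i++) {
        total += hole[i];
        if (hole[i] > largest) largest = hole[i];
    }

    /* Enough memory in total, but not in one contiguous piece -> external fragmentation. */
    if (total >= request && largest < request)
        printf("External fragmentation: %dMB free in total, but largest hole is only %dMB\n",
               total, largest);
    else if (largest >= request)
        printf("Request fits in a single hole\n");
    else
        printf("Not enough free memory at all\n");
    return 0;
}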

Key Points On Variable (Dynamic) Partitioning in Operating Systems
 Variable (or dynamic) partitioning is a memory allocation technique that allows memory
partitions to be created and resized dynamically as needed.
 The operating system maintains a table of free memory blocks or holes, each of which
represents a potential partition. When a process requests memory, the operating
system searches the table for a suitable hole that can accommodate the requested
amount of memory.
 Dynamic partitioning reduces internal fragmentation by allocating memory more
efficiently, allows multiple processes to share the same memory space, and is flexible in
accommodating processes with varying memory requirements.
 However, dynamic partitioning can also lead to external fragmentation and requires
more complex memory management algorithms, which can make it slower than fixed
partitioning.
 Understanding dynamic partitioning is essential for operating system design and
implementation, as well as for system-level programming.

First-Fit Memory Allocation: This method keeps the free/busy list of jobs organized by memory location, from low-order to high-order memory. In this method, the first job claims the first available memory block with space greater than or equal to its size. The operating system doesn't search for the most appropriate partition; it just allocates the job to the nearest memory partition available with sufficient size.

As illustrated above, the system assigns J1 the nearest partition in memory. As a result, no partition with sufficient space is available for J3, and it is placed in the waiting list. The processor does not check whether the size of the partition allocated to the job is much larger than the size of the job; it just allocates the memory. As a result, a lot of memory is wasted, and many jobs may not get space in memory and would have to wait for another job to complete.

Advantages of First-Fit Allocation in Operating Systems:

1. Simple and efficient search algorithm


2. Minimizes memory fragmentation
3. Fast allocation of memory

Disadvantages of First-Fit Allocation in Operating Systems:

1. Poor performance in highly fragmented memory


2. May lead to poor memory utilization
3. May allocate larger blocks of memory than required.
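For comparison with best-fit, here is a minimal sketch of the first-fit search: the free list is scanned in address order and the first hole that is large enough is chosen, without looking for a closer fit. The hole sizes and the request are again made-up values for illustration.

#include <stdio.h>

#define NHOLES 5

/* Return the index of the first free hole >= request, or -1 if none fits. */
int first_fit(const int hole[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (hole[i] >= request)
            return i;           /* stop at the first hole that is big enough */
    }
    return -1;
}

int main(void) {
    int hole[NHOLES] = {100, 500, 200, 300, 600};  /* free block sizes in KB (illustrative) */
    int request = 212;                             /* process memory request in KB */

    int idx = first_fit(hole, NHOLES, request);
    if (idx >= 0)
        printf("First fit: hole %d of %dKB for a %dKB request\n", idx, hole[idx], request);
    else
        printf("No hole large enough for %dKB\n", request);
    return 0;
}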

Best-Fit Memory Allocation: As described earlier, this method keeps the free/busy list ordered by size (smallest to largest) and allocates each job to the closest-fitting free partition.

Deallocation:
Memory deallocation, which is the process of releasing memory that was previously allocated for use by a program, has several advantages, including:
1. Efficient use of resources: By deallocating memory that is no longer needed, a
program can free up resources for other processes or for future use. This can help to
improve the overall performance of the system by reducing memory usage and minimizing
the risk of running out of memory.
2. Avoiding memory leaks: If memory is not deallocated properly, it can result in
memory leaks, where memory is allocated but not released. Over time, this can lead to a gradual depletion of available memory, which can cause a program to slow down or crash.
By deallocating memory when it is no longer needed, a program can avoid these issues
and ensure that memory is used efficiently.
3. Simplifying memory management: By deallocating memory when it is no longer
needed, a program can simplify memory management and reduce the risk of memory-
related errors. This can make the code easier to read, maintain, and debug.
4. Reducing memory fragmentation: Frequent memory allocation and deallocation
can lead to memory fragmentation, where small blocks of memory become scattered
throughout the memory space. This can make it difficult to allocate contiguous blocks of
memory for larger data structures, and can also lead to increased overhead when
managing memory. By deallocating memory when it is no longer needed, a program can
reduce the risk of memory fragmentation and improve memory usage efficiency.
5. Avoiding dangling pointers: If memory is deallocated but a pointer to that memory still exists, the pointer becomes a dangling pointer. Dereferencing a dangling pointer can lead to undefined behavior or segmentation faults. By deallocating memory when it is no longer needed, and discarding pointers to it, a program can avoid the risk of creating dangling pointers and improve the reliability and stability of the code.
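As a small C illustration of careful deallocation (generic code, not specific to any operating system), the sketch below frees heap memory once it is no longer needed and then clears the pointer so it cannot later be used as a dangling pointer.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Allocate a small buffer from the heap. */
    int *data = malloc(10 * sizeof *data);
    if (data == NULL)
        return 1;                      /* allocation failed */

    for (int i = 0; i < 10; i++)
        data[i] = i * i;
    printf("data[3] = %d\n", data[3]);

    /* Deallocate when no longer needed so the memory can be reused... */
    free(data);
    /* ...and clear the pointer so it cannot be dereferenced as a dangling pointer. */
    data = NULL;

    if (data == NULL)
        printf("Pointer cleared; no dangling reference remains.\n");
    return 0;
}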
