OS NOTES
Generations of computers and operating systems:
Generation   Period       Technology                  Key Development
First        1945-55      Vacuum Tubes                Plug Boards
Second       1955-65      Transistors                 Batch Systems
Third        1965-80      Integrated Circuits (IC)    Multiprogramming
Fourth       Since 1980   Large Scale Integration     Personal Computers (PC)
Characteristics of Operating Systems
Let us now discuss some of the important characteristic features of operating systems:
Device Management: The operating system keeps track of all the devices. So, it is
also called the Input/Output controller that decides which process gets the device,
when, and for how much time.
File Management: It keeps track of files and directories, allocates and de-allocates file resources, and decides which process gets access to them.
Job Accounting: It keeps track of time and resources used by various jobs or users.
Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
Memory Management: It keeps track of the primary memory, such as what part of it is in use by whom and what part is not in use. It also allocates memory when a process or program requests it.
Processor Management: It allocates the processor to a process and then de-
allocates the processor when it is no longer required or the job is done.
Control over System Performance: It records the delays between a request for a service and the system's response.
A computer system can be viewed as four layers:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system programs, and application programs. The hardware consists of the memory, CPU, ALU, I/O devices, peripheral devices, and storage devices. The system programs consist of compilers, loaders, editors, the OS itself, etc. The application programs consist of business programs and database programs.
Every computer must have an operating system to run other programs. The operating
system coordinates the use of the hardware among the various system programs and
application programs for various users. It simply provides an environment within which
other programs can do useful work.
The operating system is a set of special programs that run on a computer system and allow it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the display
screen, and controlling peripheral devices.
The extended machine provides operations like context save, dispatching, swapping, and
I/O initiation. The operating system layer is located on top of the extended machine
layer. This arrangement considerably simplifies the coding and testing of OS modules by
separating the algorithm of a function from the implementation of its primitive
operations. It is now easier to test, debug, and modify an OS module than in a monolithic
OS. We say that the lower layer provides an abstraction called the extended machine, and the operating system layer forms the top layer of the OS.
It controls the allocation and use of the computing system's resources among the various users and tasks.
It provides an interface between the computer hardware and the programmer that simplifies coding and debugging of application programs and makes them feasible.
The operating system provides these services:
1. Provides facilities to create and modify programs and data files using an editor.
2. Provides access to compilers for translating the user program from a high-level language to machine language.
3. Provides a loader program to move the compiled program code into the computer's memory for execution.
4. Provides routines that handle the details of I/O programming.
I/O System Management
The module that keeps track of the status of devices is called the I/O traffic controller.
Each I/O device has a device handler that resides in a separate process associated with
that device.
The I/O subsystem consists of:
A memory-management component that includes buffering, caching, and spooling.
A general device-driver interface.
Drivers for specific hardware devices.
The following subsections discuss related system software: assemblers, compilers and interpreters, and loaders.
Assemblers, Compilers, and Interpreters
An assembler is a program that translates an assembly-language program into machine code. High-level languages, such as C, C++, Java, and Python (there are around 300+ well-known high-level languages), are processed by compilers and interpreters. A compiler is a program that accepts a source program in a high-level language and produces machine code in one go. Some compiled languages are FORTRAN, COBOL, C, C++, Rust, and Go. An interpreter is a program that does the same thing but converts high-level code to machine code line by line rather than all at once. Examples of interpreted languages are:
Python
Perl
Ruby
Loader
A loader is a routine that loads an object program and prepares it for execution. There are various loading schemes: absolute, relocating, and direct-linking. In general, the loader must load, relocate, and link the object program. The loader is the program that places programs into memory and prepares them for execution. In a simple loading scheme, the assembler outputs the machine-language translation of a program on a secondary device, and a loader places it in main memory. The loader places the machine-language version of the user's program into memory and transfers control to it. Since the loader program is much smaller than the assembler, this makes more memory available to the user's program.
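As a minimal sketch of the load-and-relocate idea (the ObjectProgram structure and its relocation table are invented for illustration, not a real object-file format), consider the following C++ program:

#include <cstdint>
#include <iostream>
#include <vector>

// Toy "object program": machine words plus a relocation table listing
// which words hold addresses that must be adjusted at load time.
struct ObjectProgram {
    std::vector<uint32_t> words;          // code/data, addresses assume base 0
    std::vector<std::size_t> relocTable;  // indices of address-holding words
};

// A relocating loader copies the program to its load area (load), then
// adds the base address to every word named in the relocation table
// (relocate); a direct-linking loader would also patch external symbols.
std::vector<uint32_t> loadAndRelocate(const ObjectProgram& obj, uint32_t base) {
    std::vector<uint32_t> image = obj.words;  // the "load" step
    for (std::size_t idx : obj.relocTable)
        image[idx] += base;                   // the "relocate" step
    return image;                             // ready to transfer control to
}

int main() {
    // Word 1 holds address 2 (relative to base 0), so it needs relocation.
    ObjectProgram obj{{0xA000, 2, 0xB000}, {1}};
    std::vector<uint32_t> image = loadAndRelocate(obj, 0x4000);
    std::cout << std::hex << image[1] << "\n";  // prints 4002
}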
Shell
Shell is the outermost layer of the operating system, and it handles the interaction with the user. The main task of the shell is to manage this interaction: it reads the user's input, interprets it for the OS, and handles the output from the OS back to the user. It thus works as the medium of communication between the user and the OS.
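A minimal sketch of this read-interpret-execute loop, written in C++ against the POSIX fork/exec/wait calls (the mysh> prompt is made up, and quoting, pipes, and built-in commands are omitted):

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    char line[256];
    while (true) {
        std::printf("mysh> ");                  // prompt the user
        if (!std::fgets(line, sizeof line, stdin)) break;  // EOF ends the shell
        line[std::strcspn(line, "\n")] = '\0';  // strip the trailing newline
        if (line[0] == '\0') continue;          // ignore empty input

        // Interpret the input: split it into whitespace-separated arguments.
        char* argv[32];
        int argc = 0;
        for (char* tok = std::strtok(line, " "); tok != nullptr && argc < 31;
             tok = std::strtok(nullptr, " "))
            argv[argc++] = tok;
        argv[argc] = nullptr;

        pid_t pid = fork();             // ask the OS for a child process
        if (pid == 0) {
            execvp(argv[0], argv);      // child becomes the requested program
            std::perror("exec failed"); // reached only if exec fails
            std::exit(1);
        }
        waitpid(pid, nullptr, 0);       // shell waits, then prompts again
    }
    return 0;
}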
Kernel
The kernel is the core component of the operating system. The rest of the components depend on the kernel for the important services that the operating system provides. The kernel is the primary interface between the operating system and the hardware.
Functions of Kernel
The following functions are performed by the kernel (a brief example follows this list).
It helps in controlling the System Calls.
It helps in I/O Management.
It helps in the management of applications, memory, etc.
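For instance, even a simple output request from a user program reaches the hardware only by crossing into the kernel through a system call; the POSIX write() wrapper makes this visible:

#include <unistd.h>   // POSIX wrapper around the kernel's system calls
#include <cstring>

int main() {
    const char msg[] = "hello from user space\n";
    // File descriptor 1 is standard output. write() traps into the kernel,
    // which performs the actual device I/O on the program's behalf.
    write(STDOUT_FILENO, msg, std::strlen(msg));
    return 0;
}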
Types of Kernel
There are four types of Kernel that are mentioned below.
Monolithic Kernel
Microkernel
Hybrid Kernel
Exokernel
Goals of an Operating System
1. Efficient use: An OS must use the computer's resources efficiently. It may do so by monitoring the use of resources such as the CPU and the memory, or by not monitoring the use of resources at all, and instead handling user programs and resources in a manner that guarantees high efficiency.
2. User convenience:
In the early days of computing, user convenience was synonymous with bare necessity—
the mere ability to execute a program written in a higher level language was considered
adequate. Experience with early operating systems led to demands for better service,
which in those days meant only fast response to a user request. Other facets of user
convenience evolved with the use of computers in new fields. Early operating systems
had command-line interfaces, which required a user to type in a command and specify
values of its parameters. Users needed substantial training to learn use of the
commands, which was acceptable because most users were scientists or computer
professionals. However, simpler interfaces were needed to facilitate use of computers by
new classes of users. Hence, graphical user interfaces (GUIs) evolved. These
interfaces used icons on a screen to represent programs and files and interpreted mouse
clicks on the icons and associated menus as commands concerning them. In many ways,
this move can be compared to the spread of car driving skills in the first half of the
twentieth century. Over a period of time, driving became less of a specialty and more of
a skill that could be acquired with limited training and experience.
3. Non-interference:
A computer user can face different kinds of interference in his computational activities.
Execution of his program can be disrupted by actions of other persons, or the OS services
which he wishes to use can be disrupted in a similar manner. The OS prevents such
interference by allocating resources for exclusive use of programs and OS services, and
preventing illegal accesses to resources. Another form of interference concerns programs
and data stored in user files.
Types of Operating Systems
There are several types of operating systems, discussed below.
Batch Operating System
This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.
Multi-Processing Operating System
A multi-processing operating system is one in which more than one CPU is used for the execution of processes. It improves the throughput of the system.
Advantages of Multi-Processing Operating System
As it has several processors, if one processor fails, the system can proceed with another processor.
Disadvantages of Multi-Processing Operating System
Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
Time-Sharing Operating System (Multitasking)
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of the CPU's time, as they all use a single system. These systems are also known as multitasking systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches to the next task (the sketch below simulates this).
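A minimal C++ simulation of this quantum-based switching (the task names and the 4-unit quantum are arbitrary illustrative values, not any particular scheduler's):

#include <algorithm>
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Simulates time-sharing: each task runs for at most one quantum,
// then the "OS" switches to the next task in the ready queue.
struct Task {
    std::string name;
    int remaining;  // time units of work left
};

int main() {
    const int quantum = 4;  // hypothetical time slice
    std::queue<Task> ready;
    for (const Task& t : std::vector<Task>{{"T1", 7}, {"T2", 4}, {"T3", 10}})
        ready.push(t);

    while (!ready.empty()) {
        Task t = ready.front();
        ready.pop();
        int run = std::min(quantum, t.remaining);  // run one quantum at most
        t.remaining -= run;
        std::cout << t.name << " runs for " << run << " units\n";
        if (t.remaining > 0)
            ready.push(t);   // unfinished: go to the back of the ready queue
        else
            std::cout << t.name << " finished\n";
    }
}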
Advantages of Time-Sharing OS
Each task gets an equal opportunity.
Fewer chances of duplication of software.
CPU idle time can be reduced.
Resource Sharing: Time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory, and peripherals, reducing the cost of hardware
and increasing efficiency.
Improved Productivity: Time-sharing allows users to work concurrently, thereby
reducing the waiting time for their turn to use the computer. This increased
productivity translates to more work getting done in less time.
Improved User Experience: Time-sharing provides an interactive environment that
allows users to communicate with the computer in real time, providing a better user
experience than batch processing.
Disadvantages of Time-Sharing OS
Reliability problems.
One must take care of the security and integrity of user programs and data.
Data communication problems.
High Overhead: Time-sharing systems have a higher overhead than other operating
systems due to the need for scheduling, context switching, and other overheads that
come with supporting multiple users.
Complexity: Time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of bugs
and errors.
Security Risks: With multiple users sharing resources, the risk of security breaches
increases. Time-sharing systems require careful management of user access,
authentication, and authorization to ensure the security of data and software.
Examples of Time-Sharing Operating Systems
IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first
introduced in 1972. It is still in use today, providing a virtual machine environment
that allows multiple users to run their own instances of operating systems and
applications.
TSO (Time Sharing Option): TSO is a time-sharing operating system that was first
introduced in the 1960s by IBM for the IBM System/360 mainframe computer. It
allowed multiple users to access the same computer simultaneously, running their own
applications.
Windows Terminal Services: Windows Terminal Services is a time-sharing operating
system that allows multiple users to access a Windows server remotely. Users can run
their own applications and access shared resources, such as printers and network
storage, in real-time.
Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. These types of operating systems allow shared access to files, printers, security, applications, and other networking functions over a small private network. One more important aspect of network operating systems is that all the users are well aware of the underlying configuration of the network, of all other users, and of their individual connections, which is why these computers are popularly known as tightly coupled systems.
Real-Time Operating System (RTOS)
These types of OSs serve real-time systems, in which the time interval required to process and respond to inputs is very small. This time interval is called the response time.
Real-time systems are used when the time requirements are very strict, as in missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
Hard Real-Time Systems:
Hard real-time OSs are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable. These systems are built for life-saving purposes, like automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.
Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.
Advantages of RTOS
Maximum Utilization: Maximum utilization of devices and systems, thus more output from all the resources.
Task Shifting: The time assigned for shifting between tasks in these systems is very short. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.
Focus on Applications: These systems focus on running applications, giving less importance to applications waiting in the queue.
Use in embedded systems: Since program sizes are small, an RTOS can also be used in embedded systems, such as in transport and other domains.
Error-Free Operation: These types of systems are designed to minimize errors.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS
Limited Tasks: Very few tasks run at the same time, and the system concentrates on only a few applications in order to avoid errors.
Heavy use of system resources: These systems sometimes require costly, specialized system resources.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals in order to respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priorities, as these systems rarely switch between tasks.
Real-time operating systems are used in applications such as scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
History of Operating Systems
The first electrical computers, built in the 1940s, did not include an operating system. Early computer users had complete control over the machine and wrote programs in pure machine language for every task. In that generation, a programmer could merely execute and solve basic mathematical calculations, and an operating system was not needed for these computations.
The first operating system, GMOS, was developed in the early 1950s by General Motors for an IBM computer.
The second-generation operating system was based on single-stream batch processing: it gathered all similar jobs into groups, or batches, and then submitted them to the operating system using punch cards to be finished one after another on the machine. Control is transferred to the operating system upon each job's completion, whether routine or unexpected. The operating system cleans up after each job finishes before reading and starting the subsequent job from a punch card. Large, professionally operated machines known as mainframes were introduced after that. In the late 1960s, operating system designers were able to create a new kind of operating system capable of multiprogramming, the apparently simultaneous execution of several tasks on a single computer.
Multiprogramming had to be introduced to create operating systems that keep a CPU active at all times by carrying out multiple jobs on a computer at once. With the release of the DEC PDP-1 in 1961, the minicomputers of the third generation saw a new phase of growth and development.
These PDPs led to the fourth generation: personal computers. Generation IV (1980-present) is linked to the evolution of the personal computer. Nonetheless, third-generation minicomputers and personal computers have many similarities. At that time, minicomputers were highly expensive, and personal computers cost only a fraction as much.
The development of Microsoft and the Windows operating system was a significant influence in the creation of personal computers. Microsoft was founded in 1975 by Bill Gates and Paul Allen, who had the idea of advancing personal computing. MS-DOS was released in 1981, but users found it extremely challenging to decipher its complex commands. The first Windows operating system followed in 1985, and Windows is now one of the most widely used operating systems available. Microsoft went on to release a number of Windows versions, including Windows 95, Windows 98, Windows XP, Windows 7, and Windows 10, which the majority of Windows users currently run. Apple's macOS is another well-known operating system in addition to Windows.
Evolution of Operating Systems
1. No OS (up to the 1940s)
As noted above, before the 1940s there was no OS. Computer systems lacked operating systems, so users had to manually type the instructions for each task in machine language (a binary, 0/1-based language). At that time it was very hard for users to implement even a simple task, and it was very time-consuming and not user-friendly, because not everyone had the deep understanding that machine language required.
2. Batch Processing Systems
As time went on, batch processing systems came onto the market. Users now had the facility to write their programs on punch cards and hand them to the computer operator. The operator made different batches of similar types of jobs and then served each batch (group of jobs) to the CPU one by one: the CPU first executed the jobs of one batch and then jumped to the jobs of the next batch in a sequential manner.
3. Multiprogramming and Multitasking
Multiprogramming was where the actual revolution of operating systems began. It gave users the facility to load multiple programs into memory, with a specific portion of memory given to each program. When one program is waiting for an I/O operation (which takes a long time), the OS lets the CPU switch from that program to another one (the first in the ready queue), so that execution continues without the CPU sitting idle. Multitasking took this further: the OS switches from one program to another after a certain interval of time, so that every program can get access to the CPU and complete its work.
4. Graphical User Interfaces
With the growth of time, graphical user interfaces (GUIs) arrived. For the first time the OS became truly user-friendly and changed the way people interact with computers. A GUI gives the computer system visual elements, which made users' interaction with the computer more comfortable: users can simply click on visual elements rather than typing commands. Icons, menus, and windows in Microsoft Windows are examples of GUI features.
5. AI Integration
With the growth of time, artificial intelligence came into the picture. Operating systems integrated AI features such as Siri, Google Assistant, and Alexa, becoming more powerful and efficient in many ways. These AI features combined with the operating system create entirely new capabilities, like voice commands, predictive text, and personalized recommendations.
Note: The stages above describe how the OS evolved over time by gaining new features, but this does not mean that only the newest generation of OSs is in use and earlier OSs are not; according to need, all of these OSs are still used in the software industry.
Object-Oriented Analysis
Object-Oriented Analysis (OOA) is the first technical activity performed as part of
object-oriented software engineering. OOA introduces new concepts to investigate a
problem. It is based on a set of basic principles, which are as follows:
The information domain is modeled:
o Let's say you're building a game. OOA helps you figure out all the things you need
to know about the game world – the characters, their features, and how they
interact. It’s like making a map of everything important.
Behavior is represented:
o OOA also helps you understand what your game characters will do. If a character
jumps when you press a button, OOA helps describe that action. It’s like writing
down a script for each character.
The function is described:
o Every program has specific tasks or jobs it needs to do. OOA helps you list and
describe these jobs. In our game, it could be tasks like moving characters or
keeping score. It’s like making a to-do list for your software.
Data, functional, and behavioral models are divided to uncover greater
detail:
o OOA is smart about breaking things into different parts. It splits the job into three
categories: things your game knows (like scores), things your game does (like
jumping), and how things in your game behave (like characters moving around).
This makes it easier to understand.
Starting Simple, Getting Detailed:
o OOA knows that at first, you just want to understand the big picture. So, it starts
with a simple version of your game or program. Later on, you add more details to
make it work perfectly. It’s like sketching a quick drawing before adding all the
colors and details.
The principles noted above form the foundation of the OOA approach.
Object-Oriented Design
In the object-oriented software development process, the analysis model, which is
initially formed through object-oriented analysis (OOA), undergoes a transformation
during object-oriented design (OOD). This evolution is crucial because it shapes the
analysis model into a detailed design model, essentially serving as a blueprint for
constructing the software.
The outcome of object-oriented design, or OOD, manifests in a design model
characterized by multiple levels of modularity. This modularity is expressed in two key
ways:
Subsystem Partitioning:
o At a higher level, major components of the system are organized into
subsystems.
o This practice is similar to creating modules at the system level, providing a
structured and organized approach to managing the complexity of the software.
Object Encapsulation:
o A more granular form of modularity is achieved through the encapsulation of data manipulation operations into objects. It’s like putting specific tasks (or operations) and the data they need into little boxes called “objects.”
o Each object does its job neatly and keeps things organized. So, if our game has a character jumping, we put all the jumping stuff neatly inside an object (see the sketch after this list).
o It’s like having a box for each task, making everything easier to handle and understand.
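A small C++ sketch of this encapsulation idea, using an invented Character class for the game example:

#include <iostream>

// Encapsulation: the character's jumping data (height, state) and the
// operations that use that data live together inside one object.
class Character {
public:
    explicit Character(double jumpHeight) : jumpHeight_(jumpHeight) {}

    void jump() {                        // behavior packaged with its data
        if (!airborne_) {
            airborne_ = true;
            std::cout << "jumped " << jumpHeight_ << " units\n";
        }
    }
    void land() { airborne_ = false; }   // state changes stay inside the box

private:
    double jumpHeight_;      // attribute hidden from the rest of the program
    bool airborne_ = false;  // internal state other code never touches
};

int main() {
    Character hero(2.5);
    hero.jump();  // callers send a "message"; the internals stay hidden
    hero.land();
}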
Furthermore, as part of the object-oriented design process, it is essential to define
specific aspects:
Data Organization of Attributes:
o OOD involves specifying how data attributes are organized within the objects.
This includes determining the types of data each object will hold and how they
relate to one another, ensuring a coherent and efficient data structure.
Procedural Description of Operations:
o OOD requires a procedural description for each operation that an object can
perform. This involves detailing the steps or processes involved in carrying out
specific tasks, ensuring clarity and precision in the implementation of
functionality.
The design pyramid for object-oriented systems has the following four layers.
1. The Subsystem Layer: It represents the subsystem that enables software to achieve
user requirements and implement technical frameworks that meet user needs.
2. The Class and Object Layer: It represents the class hierarchies that enable the
system to develop using generalization and specialization. This layer also represents
each object.
3. The Message Layer: This layer deals with how objects interact with each other. It
includes messages sent between objects, method calls, and the flow of control within
the system.
4. The Responsibilities Layer: It focuses on the responsibilities of individual objects.
This includes defining the behavior of each class, specifying what each object is
responsible for, and how it responds to messages.
Benefits of Object-Oriented Analysis and Design (OOAD)
Improved modularity: OOAD encourages the creation of small, reusable objects that
can be combined to create more complex systems, improving the modularity and
maintainability of the software.
Better abstraction: OOAD provides a high-level, abstract representation of a
software system, making it easier to understand and maintain.
Improved reuse: OOAD encourages the reuse of objects and object-oriented design
patterns, reducing the amount of code that needs to be written and improving the
quality and consistency of the software.
Improved communication: OOAD provides a common vocabulary and methodology
for software developers, improving communication and collaboration within teams.
Reusability: OOAD emphasizes the use of reusable components and design patterns,
which can save time and effort in software development by reducing the need to
create new code from scratch.
Scalability: OOAD can help developers design software systems that are scalable and
can handle changes in user demand and business requirements over time.
Maintainability: OOAD emphasizes modular design and can help developers create
software systems that are easier to maintain and update over time.
Flexibility: OOAD can help developers design software systems that are flexible and
can adapt to changing business requirements over time.
Improved software quality: OOAD emphasizes the use of encapsulation,
inheritance, and polymorphism, which can lead to software systems that are more
reliable, secure, and efficient.
Fixed partitioning, also known as static partitioning, is a memory allocation technique used
in operating systems to divide the physical memory into fixed-size partitions or regions,
each assigned to a specific process or user. Each partition is typically allocated at system
boot time and remains dedicated to a specific process until it terminates or releases the
partition.
1. In fixed partitioning, the memory is divided into fixed-size chunks, with each chunk being reserved for a specific process. When a process requests memory, the operating system assigns it an appropriate partition. In the simplest scheme all partitions are the same size (unequal fixed sizes are also possible, as described later), and the memory allocation is done at system boot time.
2. Fixed partitioning has several advantages over other memory allocation techniques.
First, it is simple and easy to implement. Second, it is predictable, meaning the
operating system can ensure a minimum amount of memory for each process. Third, it
can prevent processes from interfering with each other’s memory space, improving the
security and stability of the system.
3. However, fixed partitioning also has some disadvantages. It can lead to internal
fragmentation, where memory in a partition remains unused. This can happen when the
process’s memory requirements are smaller than the partition size, leaving some
memory unused. Additionally, fixed partitioning limits the number of processes that can
run concurrently, as each process requires a dedicated partition.
Overall, fixed partitioning is a useful memory allocation technique in situations where the
number of processes is fixed, and the memory requirements for each process are known in
advance. It is commonly used in embedded systems, real-time systems, and systems with
limited memory resources.
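A minimal C++ sketch of fixed-partition allocation (the 4/8/8/16 MB partition sizes are the illustrative values used in the fragmentation example later in these notes):

#include <iostream>
#include <vector>

// Fixed partitioning: partition sizes are set once (at "boot time") and a
// process occupies a whole partition or is not admitted.
struct Partition {
    int size;    // in MB, fixed at boot time
    int usedBy;  // process id, or -1 if free
};

// Give the process the first free partition large enough for it.
int allocate(std::vector<Partition>& parts, int pid, int needMB) {
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (parts[i].usedBy == -1 && parts[i].size >= needMB) {
            parts[i].usedBy = pid;  // the whole partition is consumed
            return static_cast<int>(i);
        }
    }
    return -1;  // no free partition is large enough: the process must wait
}

int main() {
    std::vector<Partition> parts{{4, -1}, {8, -1}, {8, -1}, {16, -1}};
    std::cout << "P1 (3 MB) -> partition " << allocate(parts, 1, 3) << "\n";  // 0
    std::cout << "P2 (7 MB) -> partition " << allocate(parts, 2, 7) << "\n";  // 1
}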
Memory Management
In operating systems, memory management is the function responsible for allocating and managing a computer's main memory. The memory management function keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory. There are two memory management techniques:
1. Contiguous
2. Non-Contiguous
In the contiguous technique, the executing process must be loaded entirely into main memory. The contiguous technique can be divided into:
Fixed (or static) partitioning
Variable (or dynamic) partitioning
Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the main
memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed
but the size of each partition may or may not be the same. As it is a contiguous allocation, no spanning is allowed. Here, partitions are made before execution or during system configuration.
Consider four partitions of 4 MB, 8 MB, 8 MB, and 16 MB holding processes of 1 MB, 7 MB, 7 MB, and 14 MB. The first process consumes only 1 MB of its 4 MB partition in main memory.
Hence, internal fragmentation in the first block is (4-1) = 3 MB.
Sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14) = 3+1+1+2 = 7 MB.
Suppose a process P5 of size 7 MB arrives. This process cannot be accommodated in spite of 7 MB of available free space, because of contiguous allocation (spanning is not allowed). Hence, that 7 MB becomes part of external fragmentation.
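A short C++ sketch that reproduces this arithmetic:

#include <iostream>
#include <vector>

int main() {
    // Partition sizes and the processes placed in them, as in the example.
    std::vector<int> partition{4, 8, 8, 16};  // MB
    std::vector<int> process  {1, 7, 7, 14};  // MB, one process per partition

    int internal = 0;
    for (std::size_t i = 0; i < partition.size(); ++i)
        internal += partition[i] - process[i];  // unused slack in each block

    std::cout << "internal fragmentation = " << internal << " MB\n";  // 7 MB
    // A new 7 MB process still cannot run: the 7 MB of free space is not
    // contiguous, and spanning is not allowed, so it counts as external
    // fragmentation.
}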
1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This can cause internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) across the various partitions cannot be used to load a process, even though space is available, because it is not contiguous (spanning is not allowed).
Dynamic Partitioning
In dynamic partitioning, partitions are created at run time and sized to fit each incoming process. Even so, a process larger than any single free hole cannot be allocated, as no spanning is allowed in contiguous allocation. The rule says that the process must be continuously present in the main memory to get executed. Hence dynamic partitioning still results in External Fragmentation.
Advantage: No Internal Fragmentation, since each partition exactly fits its process.
First-Fit Memory Allocation
This method keeps the free/busy list of jobs organized by memory location, from low-ordered to high-ordered memory. In this method, the first job claims the first available block of memory whose space is greater than or equal to its size. The operating system doesn't search for the most appropriate partition; it just allocates the job to the nearest available partition of sufficient size (see the sketch below).
For example, the system assigns J1 the nearest suitable partition in memory. As a result, no partition with sufficient space may be available for a later job J3, which is then placed in the waiting list. The processor ignores whether the partition allocated to a job is much larger than the job; it just allocates the memory. As a result, a lot of memory is wasted, and many jobs may not get space in memory and would have to wait for another job to complete.
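A minimal C++ sketch of first-fit allocation (block and job sizes are invented for illustration):

#include <iostream>
#include <vector>

struct Block { int size; bool free; };

// First-fit: scan the list from low memory upward and take the first
// free block that is big enough; do not look for a better one.
int firstFit(std::vector<Block>& mem, int need) {
    for (std::size_t i = 0; i < mem.size(); ++i) {
        if (mem[i].free && mem[i].size >= need) {
            mem[i].free = false;
            return static_cast<int>(i);
        }
    }
    return -1;  // job must wait for a block to be released
}

int main() {
    std::vector<Block> mem{{100, true}, {500, true}, {200, true}, {300, true}};
    std::cout << "J1 (212) -> block " << firstFit(mem, 212) << "\n";  // block 1
    std::cout << "J2 (112) -> block " << firstFit(mem, 112) << "\n";  // block 2
    std::cout << "J3 (426) -> block " << firstFit(mem, 426) << "\n";  // -1: waits
}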
Best-Fit Memory Allocation
In this method, the operating system searches throughout the memory and allocates the job to the smallest memory partition that is still large enough, making the memory allocation efficient.
Advantages of Best-Fit Allocation
Memory efficient: the operating system allocates the job the minimum possible space in memory, making memory management very efficient.
It is the best method for saving memory from getting wasted.
Improved memory utilization.
Reduced memory fragmentation.
Minimizes external fragmentation.
Disadvantages of Best-Fit Allocation
It is a slow process: checking the whole memory for each job makes the operating system very slow, and it takes a lot of time to complete the work.
Increased computational overhead.
May lead to increased internal fragmentation.
Can result in slow memory allocation times.
The best-fit algorithm works as follows (a code sketch follows the steps):
1. The operating system maintains a list of all free memory blocks available in the system.
2. When a process requests memory, the operating system searches the list for the smallest free block of memory that is large enough to accommodate the process.
3. If a suitable block is found, the process is allocated memory from that block.
4. If no suitable block is found, the operating system can either wait until a suitable block
becomes available or request additional memory from the system.
5. The best-fit allocation algorithm has the advantage of minimizing external fragmentation, as it searches for the smallest free block of memory that can accommodate a process. However, it can also lead to more internal fragmentation, as processes may not use the entire memory block allocated to them.
Overall, the best-fit allocation algorithm can be an effective way to allocate memory in an operating system, but it is important to balance the advantages and disadvantages of this approach against other allocation algorithms such as first-fit, next-fit, and worst-fit.
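A minimal C++ sketch of best-fit allocation over the same kind of free list (sizes again invented); unlike first-fit, it scans every block before choosing:

#include <iostream>
#include <vector>

struct Block { int size; bool free; };

// Best-fit: scan the whole list and choose the smallest free block that
// is still large enough, minimizing the leftover slack.
int bestFit(std::vector<Block>& mem, int need) {
    int best = -1;
    for (std::size_t i = 0; i < mem.size(); ++i) {
        if (mem[i].free && mem[i].size >= need &&
            (best == -1 || mem[i].size < mem[best].size))
            best = static_cast<int>(i);
    }
    if (best != -1) mem[best].free = false;  // allocate the tightest fit
    return best;  // -1: the job must wait
}

int main() {
    std::vector<Block> mem{{100, true}, {500, true}, {200, true}, {300, true}};
    // First-fit would give J1 the 500 block; best-fit picks the 200 block.
    std::cout << "J1 (150) -> block " << bestFit(mem, 150) << "\n";  // block 2
    std::cout << "J2 (450) -> block " << bestFit(mem, 450) << "\n";  // block 1
}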
Deallocation:
Memory deallocation, which is the process of releasing memory that was previously allocated for use by a program, has several advantages, including:
1. Efficient use of resources: By deallocating memory that is no longer needed, a
program can free up resources for other processes or for future use. This can help to
improve the overall performance of the system by reducing memory usage and minimizing
the risk of running out of memory.
2. Avoiding memory leaks: If memory is not deallocated properly, it can result in memory leaks, where memory is allocated but never released. Over time, this can lead to a gradual depletion of available memory, which can cause a program to slow down or crash.
By deallocating memory when it is no longer needed, a program can avoid these issues
and ensure that memory is used efficiently.
3. Simplifying memory management: By deallocating memory when it is no longer
needed, a program can simplify memory management and reduce the risk of memory-
related errors. This can make the code easier to read, maintain, and debug.
4. Reducing memory fragmentation: Frequent memory allocation and deallocation
can lead to memory fragmentation, where small blocks of memory become scattered
throughout the memory space. This can make it difficult to allocate contiguous blocks of
memory for larger data structures, and can also lead to increased overhead when
managing memory. By deallocating memory when it is no longer needed, a program can
reduce the risk of memory fragmentation and improve memory usage efficiency.
5. Avoiding dangling pointers: If memory is deallocated but a pointer to that memory still exists, the pointer becomes a dangling pointer. Dereferencing a dangling pointer can lead to undefined behavior or segmentation faults. By deallocating memory when it is no longer needed, and clearing the pointer afterwards, a program can avoid the risk of creating dangling pointers and improve the reliability and stability of the code (see the sketch below).
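A short C++ sketch of safe deallocation (malloc/free are used to mirror the discussion above; idiomatic C++ would normally prefer new/delete or smart pointers, which deallocate automatically):

#include <cstdio>
#include <cstdlib>

int main() {
    // Allocate a block of memory for 100 integers.
    int* data = static_cast<int*>(std::malloc(100 * sizeof(int)));
    if (data == nullptr) return 1;  // allocation can fail

    data[0] = 42;                   // ... use the memory ...
    std::printf("%d\n", data[0]);

    std::free(data);                // deallocate: prevents a memory leak
    data = nullptr;                 // clear the pointer: it can no longer
                                    // be used as a dangling pointer
    if (data != nullptr) {          // later code can test before dereferencing
        std::printf("%d\n", data[0]);
    }
    return 0;
}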