Jai Project 1
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
SUBMITTED
2024-2025
Accredited by NAAC with A+ Grade, Recognized under Sec. 2(f) of the UGC Act, 1956
Approved by AICTE and Affiliated to Jawaharlal Nehru Technological University,
Hyderabad
CERTIFICATE
This is to certify that the Lab Project entitled “DYNAMIC MEMORY ALLOCATOR” is the
bonafide work carried out by CH. Mani Deekshith (23J21A1211), J. Ananth
Kumar (23J21A1216), and P. Jaya Prakash (23J21A1237) of II B.Tech IT, under our guidance
and supervision. The Lab Project Report is submitted to Joginpally BR Engineering College
(UGC Autonomous), Hyderabad, in partial fulfilment of the requirements for the award of the
degree of Bachelor of Technology in Information Technology during the academic year
2024-2025.
Principal
ACKNOWLEDGEMENT
We express our heartfelt thanks to our Principal, Dr. B Venkata Ramana Reddy, for his
spontaneous encouragement in completing the Lab Project.
We are truly thankful to our HOD, Dr. T. Sheshagiri, for his timely and much-needed
valuable guidance throughout our Lab Project.
We would like to express our sincere gratitude to our LAB PROJECT INTERNAL GUIDE,
Dr. Shaik Aleem Basha, who has guided and supported us through every stage of the Lab
Project.
DECLARATION
We hereby declare that our Lab Project entitled “DYNAMIC MEMORY ALLOCATOR”
is the work done during the academic year 2024-2025, II Year, I Semester, and that our Lab
Project is submitted in partial fulfilment of the requirements for the award of the B.Tech
degree in Information Technology from JOGINPALLY B.R. ENGINEERING COLLEGE
(UGC Autonomous), Hyderabad.
ABSTRACT
A Dynamic Memory Allocator is a system component responsible for managing memory
allocation and deallocation during program execution. Unlike static memory allocation,
which assigns fixed memory at compile time, dynamic memory allocation enables efficient
and flexible memory utilization at runtime. This is particularly useful in applications with
unpredictable memory requirements, such as operating systems, embedded systems, and
high-performance computing.
This paper discusses the fundamental principles of dynamic memory allocation, including
heap management, fragmentation, and garbage collection techniques. It explores common
implementations like malloc/free, buddy system, slab allocation, and memory pools,
analyzing their advantages and trade-offs. Performance optimization strategies such as
best-fit, first-fit, and worst-fit algorithms are also examined.
Efficient memory management is crucial for system performance, reducing memory
fragmentation, and improving execution speed. Future advancements in dynamic memory
allocation involve hardware-assisted techniques, intelligent compaction methods, and
machine learning-based predictive allocation.
This study provides insights into the design and implementation of dynamic memory
allocators, highlighting challenges and solutions for optimized memory management in
modern computing systems.
CONTENTS
I. Cover Page
   Abstract
II. Introduction
  II.1 Basic Concepts of the Dynamic Memory Allocator
    II.1.1 Fragmentation
    II.1.2 Internal Fragmentation
    II.1.3 External Fragmentation
    II.1.4 Total Fragmentation
  II.2 Execution Time
3. Memory Allocation Techniques
  3.1 Linked List Techniques
    3.1.1 First Fit
    3.1.2 Next Fit
    3.1.3 Best Fit
    3.1.4 Worst Fit
  3.2 Buddy System
    3.2.1 Binary Buddy System
    3.2.2 Fibonacci Buddy System
    3.2.3 Weighted Buddy System
4. Simulation and Performance Analysis
5. Algorithm
6. Code
7. Test Cases
8. Experimental Results/Outputs
9. Conclusion
10. References
I. INTRODUCTION
Software-based dynamic memory allocation techniques built on various data structures fall
into three categories: Bitmap, Linked List techniques, and Buddy Systems.
Bitmap implementations use a bitmap in which each bit represents a portion of memory and
indicates whether that portion is free or occupied.
The most popular dynamic memory allocation methods are the Linked List techniques. The
First Fit, Best Fit, Worst Fit and Next Fit linked list techniques are well known and frequently
used. Although the memory usage of these algorithms is good, the list structure reduces their
running speed: each memory request (the malloc function in the C programming language)
causes the list to be searched sequentially. The memory manager scans the list until it finds a
block that is big enough. The hole is then broken into two pieces, one for the process and one
for the unused memory.
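The search-and-split behaviour described above can be sketched in C. The following is only an
illustrative sketch, not the allocator implemented in this project; the Block header, the
MIN_SPLIT threshold and the function name first_fit_alloc are all assumptions introduced here.

#include <stddef.h>

// Hypothetical header for a block on a singly linked free list.
typedef struct Block {
    size_t size;          // usable bytes in this free block
    struct Block *next;   // next free block in the list
} Block;

#define MIN_SPLIT (sizeof(Block) + 16)   // smallest leftover worth keeping (assumed)

// First fit: scan the list from the head until a block big enough is found;
// the hole is then broken into two pieces, one for the request and one that
// stays on the free list. Returns NULL if nothing fits.
void *first_fit_alloc(Block **free_list, size_t request) {
    Block **link = free_list;
    for (Block *b = *free_list; b != NULL; link = &b->next, b = b->next) {
        if (b->size < request)
            continue;
        if (b->size - request >= MIN_SPLIT) {
            // Split: carve the request off the front, keep the rest free.
            Block *rest = (Block *)((char *)(b + 1) + request);
            rest->size = b->size - request - sizeof(Block);
            rest->next = b->next;
            *link = rest;
            b->size = request;
        } else {
            // Too small to split: hand out the whole hole.
            *link = b->next;
        }
        return (void *)(b + 1);   // usable memory starts right after the header
    }
    return NULL;
}

Splitting only when the remainder is at least MIN_SPLIT avoids leaving slivers on the list that
are too small to ever satisfy a request.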
Dynamic memory allocation is a technique that enables programs to allocate and manage
memory at runtime, allowing flexible and efficient use of system resources and reliable
operation. A dynamic memory allocator is the software component responsible for managing
the allocation and deallocation of memory blocks, ensuring that memory is used efficiently
and minimizing memory-related errors.
Background
Dynamic memory allocation involves allocating memory blocks of varying sizes
from a larger memory pool. The allocator must ensure that memory blocks are allocated
efficiently, minimizing fragmentation and reducing the risk of memory-related errors.
Dynamic memory allocators are used in a wide range of applications, from operating
systems and embedded systems to high-performance computing and cloud computing.
An allocator must respond immediately to a request for space, deciding which block of memory
to allocate. It can deal only with memory that is free, and it can choose only where in free
memory to place the requested block.
2.1.1 FRAGMENTATION
Fragmentation is one of the most important problems an allocator encounters, because it
prevents memory from being used effectively. Fragmentation is classified as internal or
external [6]. For example, if a 100-byte request is satisfied with a 128-byte block, the 28
unused bytes inside the block are internal fragmentation, whereas free blocks that are
individually too small to satisfy any request constitute external fragmentation.
The techniques examined fall into three categories: Bitmap; Linked List techniques (First Fit,
Best Fit, Next Fit, Worst Fit); and Buddy Systems (Binary, Fibonacci, Weighted).
In spite of its simplicity, the main problem with the bitmap is that when it has been decided
to bring a process of k units into memory, the memory manager must search the bitmap for a
run of k consecutive 0 bits. Searching a bitmap for such a run is a slow operation, so in
practice bitmaps are not often used.
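To illustrate why this search is slow, the sketch below scans a bitmap for k consecutive free
units. It is an illustration only, not code from this project: one bit per allocation unit with 0
meaning free is assumed, and the function name find_k_free_units is invented here.

#include <stddef.h>

// Returns the index of the first run of k consecutive free units,
// or -1 if no such run exists. The linear scan over every bit is
// exactly what makes bitmap allocation slow for large memories.
long find_k_free_units(const unsigned char *map, size_t total_units, size_t k) {
    size_t run = 0;
    for (size_t i = 0; i < total_units; i++) {
        int occupied = (map[i / 8] >> (i % 8)) & 1;   // 0 = free, 1 = occupied
        if (!occupied) {
            if (++run == k)
                return (long)(i - k + 1);   // start index of the free run
        } else {
            run = 0;                        // run broken by an occupied unit
        }
    }
    return -1;   // no run of k consecutive 0 bits found
}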
Figure 1: (a) A part of memory with five processes and 3 holes. The tick marks show
the memory allocation units. The shaded regions (0 in the bit map) are free. (b) The
corresponding bit map
In the next fit technique, the position where the last allocation was made is remembered and
the next search begins from there. Thanks to this feature, small and awkward memory holes do
not accumulate at the beginning of the list as they do with the first fit technique.
This strategy is expected to reduce the number of unused holes. In practice, however, it does
not, and the first fit algorithm may be more successful in this respect: after an allocation is
performed, the remainder may be quite small and perhaps unusable for the next, larger request.
A little later, memory contains many small blocks, and the total unused area is greater than
with first fit.
Although the buddy system is faster, it has a very important disadvantage: buddy systems are
inefficient in memory usage. While the linked list techniques suffer only from external
fragmentation, buddy systems suffer from both internal and external fragmentation. The basic
reason for this problem is that allocated block sizes are powers of two (in the binary buddy
system). Therefore, the request size is first rounded up to the smallest power of two (for the
binary buddy), and only then is the allocation performed.
The working mechanism of the binary buddy system is as follows. We start with the entire block
of size 2^U. When a request of size S is made: if 2^(U-1) < S <= 2^U, allocate the entire block
of size 2^U; otherwise, split the block into two buddies, each of size 2^(U-1). If
2^(U-2) < S <= 2^(U-1), allocate one of the two buddies; otherwise one of the two buddies is
split in half again. This process is repeated until the smallest block greater than or equal
to S is generated [7].
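The rounding and repeated halving can be sketched as follows. This is a simplified illustration
only, not a full buddy allocator: it merely computes which power-of-two block size the binary
buddy scheme would hand out, assuming the heap starts as a single block of size 2^U; both
function names are introduced here.

#include <stddef.h>

// Round a request of size s up to the smallest power of two >= s, as the
// binary buddy system does before allocating. The rounded-up slack is the
// source of the scheme's internal fragmentation.
size_t round_up_pow2(size_t s) {
    size_t p = 1;
    while (p < s)
        p <<= 1;
    return p;
}

// Conceptual split step: starting from one block of size 2^U ("total"),
// keep halving until the smallest block that still fits the request remains.
size_t buddy_block_size(size_t total /* 2^U */, size_t request) {
    size_t need = round_up_pow2(request);
    size_t block = total;
    while (block / 2 >= need)   // split into two buddies of half the size
        block /= 2;
    return block;               // smallest power-of-two block >= request
}

For example, buddy_block_size(1024, 100) first rounds 100 up to 128 and then halves 1024 down
to a 128-unit block, leaving 28 units of internal fragmentation inside it.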
Figure 3 shows how the binary buddy system maps onto a binary tree. Striped circles represent
blocks that have been split for allocation, white circles are leaf nodes representing free
blocks, and gray circles are nodes representing used memory chunks.
The Fibonacci buddy system uses block sizes drawn from the Fibonacci series, in which each
element is the sum of the two preceding ones, so a block of size F(n) is split into two buddies
of sizes F(n-1) and F(n-2). According to this definition, the elements of the Fibonacci series
are: 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, ...
Figure 4 shows sample tree structure of Fibonacci Buddy systems.
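To make the size series concrete, the small sketch below prints the Fibonacci block sizes such
a system could use up to a given heap size. It is illustrative only; the heap size of 1000
units is an arbitrary assumption.

#include <stdio.h>

// Print the Fibonacci block sizes up to an assumed heap size. A block of
// size F(n) splits into buddies of sizes F(n-1) and F(n-2).
int main(void) {
    unsigned long heap_size = 1000;   // assumed heap size in allocation units
    unsigned long a = 1, b = 2;       // consecutive Fibonacci numbers
    printf("1");
    while (b <= heap_size) {
        printf(", %lu", b);
        unsigned long next = a + b;   // F(n) = F(n-1) + F(n-2)
        a = b;
        b = next;
    }
    printf("\n");
    return 0;
}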
As shown in figure 5a, if the block being split has size 2^(k+2), it is split into blocks of
sizes 3*2^k and 2^k. If the block has size 3*2^k (figure 5b), it is split into blocks of sizes
2^(k+1) and 2^k [3*2^k -> (2^(k+1), 2^k)].
Allocation requests in the simulation are generated using random numbers in the program.
Internal and external fragmentation is calculated after each allocation, and the total
fragmentation is computed from the average internal and external values over 100 steps.
Allocation and deallocation times cannot be measured precisely with the classic time functions
in C++, because each operation completes in a very short time. To solve this problem, the
high-resolution timing facilities of the Windows API were used.
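The report does not name the exact Windows API routine; one common choice, assumed here, is
the high-resolution performance counter. A minimal timing sketch using it:

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

// Time a single allocation with QueryPerformanceCounter (illustrative only;
// a real measurement would repeat the operation many times and average).
int main(void) {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&start);
    void *p = malloc(256);              // the operation being measured
    QueryPerformanceCounter(&end);
    free(p);

    double micros = (double)(end.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("Allocation took about %.3f microseconds\n", micros);
    return 0;
}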
Buddy systems suffer from both internal and external fragmentation, whereas the other
techniques exhibit only external fragmentation. The internal and external fragmentation values
of the binary and weighted buddy systems are shown graphically in figure 7.
As shown in the figure, the weighted buddy system suffers less from internal fragmentation
than the binary buddy system; in contrast, with respect to external fragmentation the binary
buddy is more successful. Another result obtained from this figure is that the maximum number
of allocatable blocks is directly proportional to external fragmentation.
In this study, although results were obtained for all techniques with respect to the
performance criteria, graphs are drawn only for the best method in each category. In this way,
the performance of the categorically classified methods is evaluated.
In figure 8, with respect to the fragmentation criterion, first fit is the best technique and
binary buddy is the worst. The bitmap method behaves very similarly to first fit, because its
bit search proceeds in the same way as a first fit search.
Figure 9 shows that, for allocation time, the best-performing technique is the binary buddy
and the worst is the bitmap. Searching a tree structure can be done in less time than searching
a list structure, so buddy systems have the best performance with respect to allocation
duration. The bitmap technique, in contrast, allocates quickly only for small memory sizes,
since it searches bits and does not split blocks; the larger the memory, the slower the
bitmap's allocation time.
In figure 10, with respect to the deallocation duration criterion, the most successful
technique is the bitmap: deallocation is performed in a very short time simply by inverting the
corresponding bits (from 1 to 0).
5. ALGORITHM
For a dynamic memory allocator project, you can implement algorithms like First-Fit, Best-Fit,
or Worst-Fit, which manage memory allocation and deallocation by searching for suitable free
blocks and splitting/merging them as needed.
Here's a breakdown of the key concepts and algorithms:
1. Understanding Dynamic Memory Allocation:
Heap:
Dynamic memory allocation deals with the heap, a region of memory where you can request
and release memory during program execution.
Allocation:
The allocator finds free blocks of memory large enough to satisfy the request and returns a
pointer to the beginning of that block.
Deallocation:
When memory is no longer needed, the allocator marks the block as free, potentially merging it
with adjacent free blocks.
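A minimal sketch of this merging (coalescing) step follows. It is not code from this project:
the Chunk header, the assumption that the header sits directly in front of the payload, and the
forward-only merge are all simplifications introduced here.

#include <stddef.h>

// Hypothetical block header, assumed to sit directly in front of the
// payload returned to the user; blocks are linked in address order.
typedef struct Chunk {
    size_t size;          // payload size in bytes
    int    is_free;       // 1 if the block is currently free
    struct Chunk *next;   // next block in address order (NULL at the end)
} Chunk;

// Deallocation sketch: mark the block free, then merge it with the next
// block if that one is also free (forward coalescing only, for brevity).
void my_free(void *ptr) {
    if (ptr == NULL)
        return;
    Chunk *c = (Chunk *)ptr - 1;   // step back from the payload to the header
    c->is_free = 1;
    if (c->next != NULL && c->next->is_free) {
        // Absorb the neighbour: its header and payload join this block.
        c->size += sizeof(Chunk) + c->next->size;
        c->next = c->next->next;
    }
}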
2. Common Algorithms:
First-Fit:
Searches for the first free block that's large enough.
Simple to implement but can lead to fragmentation.
Best-Fit:
Searches for the smallest free block that's large enough.
Can reduce external fragmentation but might increase internal fragmentation.
Worst-Fit:
Searches for the largest free block.
Can lead to large free blocks being used for smaller requests, potentially wasting space. (A
sketch of the best-fit and worst-fit searches follows this list.)
Next-Fit:
Similar to first-fit, but starts searching from the point where the last allocation was made.
Can lead to better memory utilization than first-fit, but still suffers from fragmentation.
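A minimal sketch of the best-fit and worst-fit searches over a free list, using the same
hypothetical Block header as the earlier first-fit sketch (both function names are introduced
here for illustration):

#include <stddef.h>

// Hypothetical free-list node (same layout as in the first-fit sketch).
typedef struct Block {
    size_t size;
    struct Block *next;
} Block;

// Best fit: scan the whole list and remember the smallest block that is
// still large enough. (First fit would stop at the first match instead.)
Block *best_fit(Block *free_list, size_t request) {
    Block *best = NULL;
    for (Block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;               // NULL if no block can satisfy the request
}

// Worst fit differs only in the comparison: it keeps the largest candidate.
Block *worst_fit(Block *free_list, size_t request) {
    Block *worst = NULL;
    for (Block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request && (worst == NULL || b->size > worst->size))
            worst = b;
    return worst;
}

Both policies must scan the entire list before choosing a block, which is why they are
typically slower per request than first fit.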
6. CODE
#include <stdio.h>

typedef struct {
    int process_id;
    int arrival_time;
    int burst_time;
    int completion_time;
    int turn_around_time;
    int waiting_time;
} Process;

// Function to calculate and display waiting and turn-around times
void calculateTimes(Process *processes, int n) {
    int total_waiting_time = 0;
    int total_turn_around_time = 0;

    // Calculate completion time, turn-around time, and waiting time for each process
    processes[0].completion_time = processes[0].arrival_time + processes[0].burst_time;
    processes[0].turn_around_time = processes[0].completion_time - processes[0].arrival_time;
    processes[0].waiting_time = processes[0].turn_around_time - processes[0].burst_time;
    total_waiting_time = processes[0].waiting_time;
    total_turn_around_time = processes[0].turn_around_time;

    for (int i = 1; i < n; i++) {
        // If the next process has not arrived yet, the CPU stays idle until it does
        int start = processes[i - 1].completion_time;
        if (processes[i].arrival_time > start)
            start = processes[i].arrival_time;
        processes[i].completion_time = start + processes[i].burst_time;
        processes[i].turn_around_time = processes[i].completion_time - processes[i].arrival_time;
        processes[i].waiting_time = processes[i].turn_around_time - processes[i].burst_time;
        total_waiting_time += processes[i].waiting_time;
        total_turn_around_time += processes[i].turn_around_time;
    }

    // Display results
    printf("PID\tArrival\tBurst\tCompletion\tTurnaround\tWaiting\n");
    for (int i = 0; i < n; i++) {
        printf("%d\t%d\t%d\t%d\t\t%d\t\t%d\n",
               processes[i].process_id, processes[i].arrival_time,
               processes[i].burst_time, processes[i].completion_time,
               processes[i].turn_around_time, processes[i].waiting_time);
    }
    printf("Average waiting time     : %.2f\n", (float)total_waiting_time / n);
    printf("Average turn-around time : %.2f\n", (float)total_turn_around_time / n);
}

int main(void) {
    Process processes[] = {
        {1, 0, 5, 0, 0, 0},
        {2, 1, 3, 0, 0, 0},
        {3, 2, 8, 0, 0, 0},
    };
    calculateTimes(processes, 3);
    return 0;
}
7. TEST CASES
8. EXPERIMENTAL RESULTS/OUTPUTS
9. CONCLUSION
As presented in the performance analysis, first fit gives the best fragmentation rate while
binary buddy has the worst. According to these results, no single technique performs best
across all the performance criteria.
Consequently, it cannot be said that the basic memory allocation techniques are entirely
successful. To minimize the fragmentation problem and to allocate faster, new memory
allocation techniques can be implemented with hardware support.
It is suggested that, by using new and different data structures, faster and more efficient
memory allocators can be achieved.
10. REFERENCES
Zorn B., July 1993, The measured cost of conservative garbage collection, Software: Practice
and Experience, Vol. 23, No. 7, 733-756.
Shen K.K. and Peterson J.L., Oct. 1974, A weighted buddy method for dynamic storage
allocation, Communications of the ACM, Vol. 17, No. 10, 558-562.
Barkley R.E. and Lee T.P., Dec. 1989, A Lazy Buddy System Bounded by Two Coalescing Delays
per Class, Proc. 12th Symposium on Operating Systems Principles, Vol. 23, No. 5, 167-176.
Hirschberg D.S., Oct. 1973, A class of dynamic memory allocation algorithms, Communications
of the ACM, Vol. 16, No. 10, 615-618.
Tanenbaum A.S. and Woodhull A.S., 1997, Operating Systems Design and Implementation,
Prentice Hall, Portland, ISBN 0-13-638677-6.
Wilson P.R., Johnstone M.S., Neely M. and Boles D., September 1995, Dynamic Storage
Allocation: A Survey and Critical Review, Proc. International Workshop on Memory Management.