
Introduction to Data Structure


 A computer is an electronic machine used for data processing and manipulation.
 When a programmer collects such data for processing, it must be stored in the computer's main memory.
 In order to make the computer work on the data, we need to know
o how data is represented in the computer,
o how data is accessed, and
o how to solve the problem step by step.
 For doing this task we use data structures.

What is Data Structure?


 Data structure is a representation of the logical relationship existing between individual elements of
data.

 Data Structure is a particular way of storing and organizing data in the memory of the computer so that
these data can easily be retrieved and efficiently utilized in the future when required.
 Data Structure is a way of organizing all data items that considers not only the elements stored but also
their relationship to each other.
 We can also define data structure as a mathematical or logical model of a particular organization of
data items.
 The representation of a particular data structure in the main memory of a computer is called the storage
structure.
 The corresponding storage representation in auxiliary memory is called the file structure.
 A data structure can thus be defined as the way of storing and manipulating data in an organized form so
that it can be used efficiently.
 Data Structure mainly specifies the following four things
o Organization of Data
o Accessing methods
o Degree of associativity
o Processing alternatives for information
 Algorithm + Data Structure = Program
 The study of data structures covers the following points
o Amount of memory required to store the data.
o Amount of time required to process the data.
o Representation of data in memory.
o Operations performed on that data.


Classification of Data Structure

DATA STRUCTURE
o PRIMITIVE: INTEGER, FLOATING POINT, CHARACTER, POINTER
o NON-PRIMITIVE: ARRAY, LIST, FILE
 LINEAR LIST: STACK, QUEUE
 NON-LINEAR LIST: TREE, GRAPH

Data Structures are normally classified into two broad categories

1. Primitive Data Structure

2. Non-primitive data Structure

Data types
A particular kind of data item, as defined by the values it can take, the programming language used, or
the operations that can be performed on it.

Primitive Data Structure


 Primitive data structures are basic structures and are directly operated upon by machine instructions.
 Primitive data structures have different representations on different computers.
 Integers, floats, character and pointers are examples of primitive data structures.
 These data types are available in most programming languages as built in type.
o Integer: A data type that holds whole numbers, i.e. values without a fractional part.
o Float: A data type used for storing fractional (real) numbers.
o Character: A data type used for storing character values.


o Pointer: A variable that holds the memory address of another variable is called a pointer.
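As a minimal illustration in C (the variable names here are chosen only for the example), a pointer simply stores the address of another variable:

#include <stdio.h>

int main(void)
{
    int count = 10;      /* an ordinary integer variable          */
    int *p = &count;     /* pointer holding the address of count  */

    printf("value = %d, address = %p\n", *p, (void *)p);
    return 0;
}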

Non primitive Data Type


 These are more sophisticated data structures.
 These are derived from primitive data structures.
 Non-primitive data structures emphasize the structuring of a group of homogeneous or heterogeneous
data items.
 Examples of non-primitive data types are Array, List and File.
 A non-primitive data type is further divided into Linear and Non-Linear data structures.
o Array: An array is a fixed-size sequenced collection of elements of the same data type.
o List: An ordered set containing a variable number of elements is called a list.
o File: A file is a collection of logically related information. It can be viewed as a large list of records
consisting of various fields.

Linear data structures


 A data structure is said to be linear if its elements are connected in a linear fashion, either logically or by
being stored in sequential memory locations.
 There are two ways to represent a linear data structure in memory,
o Static memory allocation
o Dynamic memory allocation
 The possible operations on the linear data structure are: Traversal, Insertion, Deletion, Searching, Sorting
and Merging.
 Examples of Linear Data Structure are Stack and Queue.
 Stack: Stack is a data structure in which insertion and deletion operations are performed at one end only.
o The insertion operation is referred to as ‘PUSH’ and deletion operation is referred to as ‘POP’ operation.
o Stack is also called as Last in First out (LIFO) data structure.
 Queue: The data structure which permits insertion at one end and deletion at the other end is known as a
Queue.
o The end at which deletion occurs is known as the FRONT end, and the end at which insertion occurs is
known as the REAR end.
o Queue is also called as First in First out (FIFO) data structure.

Nonlinear data structures


 Nonlinear data structures are those data structures in which data items are not arranged in a sequence.
 Examples of Non-linear Data Structure are Tree and Graph.
 Tree: A tree can be defined as finite set of data items (nodes) in which data items are arranged in branches
and sub branches according to requirement.
o Trees represent the hierarchical relationship between various elements.
o A tree consists of nodes connected by edges; each node is represented by a circle and each edge by a line
connecting two circles.


 Graph: Graph is a collection of nodes (Information) and connecting edges (Logical relation) between nodes.
o A tree can be viewed as a restricted graph.
o Graphs have many types:
 Un-directed Graph
 Directed Graph
 Mixed Graph
 Multi Graph
 Simple Graph
 Null Graph
 Weighted Graph

Difference between Linear and Non Linear Data Structure


Linear Data Structure                                    Non-Linear Data Structure
Every item is related to its previous and next item.     Every item is attached to many other items.
Data is arranged in a linear sequence.                   Data is not arranged in a sequence.
Data items can be traversed in a single run.             Data cannot be traversed in a single run.
E.g. arrays, stacks, linked lists, queues.               E.g. trees, graphs.
Implementation is easy.                                  Implementation is difficult.

Operation on Data Structures


Design of efficient data structure must take operations to be performed on the data structures into account. The
most commonly used operations on data structure are broadly categorized into following types

1. Create
The create operation results in reserving memory for program elements. This can be done by a declaration
statement. Creation of a data structure may take place either at compile-time or at run-time. The malloc()
function of the C language is used for run-time creation.

2. Destroy
The destroy operation releases the memory space allocated for a specified data structure. The free() function
of the C language is used to destroy a data structure. (A short C sketch of the create and destroy operations
is given after this list.)

3. Selection
Selection operation deals with accessing a particular data within a data structure.


4. Updation
It updates or modifies the data in the data structure.

5. Searching
It finds whether a desired data item is present in the list of data items; it may also find the locations of all
elements that satisfy certain conditions.

6. Sorting
Sorting is a process of arranging all data items in a data structure in a particular order, say for example,
either in ascending order or in descending order.

7. Merging
Merging is a process of combining the data items of two different sorted lists into a single sorted list.

8. Splitting
Splitting is a process of partitioning a single list into multiple lists.

9. Traversal
Traversal is a process of visiting each and every node of a list in a systematic manner.
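Relating to the Create and Destroy operations above, the following is a minimal C sketch (the array size and values are arbitrary) that reserves memory with malloc(), uses it, and releases it with free():

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 5;

    /* Create: reserve memory for n integers at run-time */
    int *arr = (int *)malloc(n * sizeof(int));
    if (arr == NULL)
        return 1;                /* allocation failed */

    for (int i = 0; i < n; i++)  /* Updation: store values       */
        arr[i] = i * 10;

    for (int i = 0; i < n; i++)  /* Traversal: visit each element */
        printf("%d ", arr[i]);
    printf("\n");

    free(arr);                   /* Destroy: release the memory   */
    return 0;
}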

Performance Analysis and Measurement (Time and space analysis of algorithms)


Algorithm

 An essential aspect to data structures is algorithms.

 Data structures are implemented using algorithms.

 An algorithm is a procedure that you can write as a C function or program, or in any other language.

 An algorithm states explicitly how the data will be manipulated.

Algorithm Efficiency

 Some algorithms are more efficient than others. We would prefer to choose an efficient algorithm, so it
would be nice to have metrics for comparing algorithm efficiency.

 The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the
amount of data the algorithm must process.

 Usually there are natural units for the domain and range of this function. There are two main complexity
measures of the efficiency of an algorithm

 Time complexity

 Time Complexity is a function describing the amount of time an algorithm takes in terms of the
amount of input to the algorithm.


 "Time" can mean the number of memory accesses performed, the number of comparisons between
integers, the number of times some inner loop is executed, or some other natural unit related to the
amount of real time the algorithm will take.

 Space complexity

 Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of
the amount of input to the algorithm.

 We often speak of "extra" memory needed, not counting the memory needed to store the input itself.
Again, we use natural (but fixed-length) units to measure this.

 We can use bytes, but it's easier to use, say, number of integers used, number of fixed-sized structures,
etc. In the end, the function we come up with will be independent of the actual number of bytes needed
to represent the unit.

 Space complexity is sometimes ignored because the space used is minimal and/or obvious, but
sometimes it becomes as important an issue as time.

Worst Case Analysis


In the worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know the
case that causes the maximum number of operations to be executed. For linear search, the worst case happens
when the element to be searched is not present in the array. When x is not present, the search() function
compares it with all the elements of the array one by one. Therefore, the worst case time complexity of linear
search would be O(n).

Average Case Analysis


In average case analysis, we take all possible inputs, calculate the computing time for each of them, sum all
the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the
distribution of cases. For the linear search problem, let us assume that all cases (including the case where x is
not present) are uniformly distributed. So we sum the costs of all the cases and divide the sum by (n+1), which
works out to an average case time complexity of Θ(n).

Best Case Analysis


In the best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the
case that causes the minimum number of operations to be executed. In the linear search problem, the best case
occurs when x is present at the first location. The number of operations in the best case is constant (not
dependent on n), so the time complexity in the best case would be O(1).
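All three cases can be seen in a plain linear search. The following is a small C sketch; the function and variable names are chosen only for this illustration:

#include <stdio.h>

/* Linear search: returns the index of x in arr[0..n-1], or -1 if absent.
   Best case:  x is at index 0              -> O(1)
   Worst case: x is absent (n comparisons)  -> O(n) */
int linear_search(const int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    int n = sizeof(a) / sizeof(a[0]);

    printf("index of 7 = %d\n", linear_search(a, n, 7));  /* best case  */
    printf("index of 4 = %d\n", linear_search(a, n, 4));  /* worst case */
    return 0;
}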


Asymptotic Notations
The asymptotic notations commonly used for describing the running time complexity of an algorithm are
given below:

o Big oh Notation (O)
o Omega Notation (Ω)
o Theta Notation (θ)

Big oh Notation (O)


o Big O notation is an asymptotic notation that measures the performance of an algorithm by simply
providing the order of growth of the function.
o This notation provides an upper bound on a function, which ensures that the function never grows faster
than this upper bound.
o It is the formal way to express the upper bound of an algorithm's running time.
o It measures the worst case of time complexity, i.e., the longest amount of time the algorithm can take to
complete its operation.

Formally:

If f(n) and g(n) are the two functions defined for positive integers,

then f(n) = O(g(n)) (read "f(n) is big oh of g(n)" or "f(n) is of the order of g(n)") if there exist constants
c > 0 and n0 such that:

f(n) <= c.g(n) for all n >= n0

This implies that f(n) does not grow faster than g(n), or in other words g(n) is an upper bound on the
function f(n). In this case, we are calculating the growth rate of the function, which eventually gives the
worst-case time complexity, i.e., how badly an algorithm can perform.

Let's understand through examples

Example 1: f(n)=2n+3 , g(n)=n

Now, we have to find Is f(n)=O(g(n))?

To check f(n)=O(g(n)), it must satisfy the given condition:

f(n)<=c.g(n)

First, we will replace f(n) by 2n+3 and g(n) by n.

2n+3 <= c.n

Let's assume c=5, n=1 then

2*1+3<=5*1

5<=5

For n=1, the above condition is true.

If n=2

2*2+3<=5*2

7<=10

For n=2, the above condition is true.

In fact 2n+3 <= 5n holds for every n >= 1, so with c = 5 and n0 = 1 we have f(n) = O(g(n)); c.g(n) is an
upper bound on f(n).



Omega Notation (Ω)


o It basically describes the best-case scenario which is opposite to the big o notation.
o It is the formal way to represent the lower bound of an algorithm's running time. It measures the best
amount of time an algorithm can possibly take to complete or the best-case time complexity.
o It determines what is the fastest time that an algorithm can run.

If we want to state that an algorithm takes at least a certain amount of time, without giving an upper
bound, we use big-Ω notation, i.e. the Greek letter "omega". It is used to bound the growth of the running
time from below for large input sizes.

If f(n) and g(n) are the two functions defined for positive integers,

then f(n) = Ω(g(n)) (read "f(n) is Omega of g(n)") if there exist constants c > 0 and n0 such that:

f(n) >= c.g(n) for all n >= n0

Let's consider a simple example.


If f(n) = 2n+3, g(n) = n,

Is f(n)= Ω (g(n))?

It must satisfy the condition:

f(n)>=c.g(n)

To check the above condition, we first replace f(n) by 2n+3 and g(n) by n.

2n+3>=c*n

Suppose c=1

2n+3 >= n (This inequality holds for every value of n starting from 1).

Therefore, it is proved that f(n) = 2n+3 is big omega of g(n) = n, with c = 1 and n0 = 1.


Theta Notation (θ)


o The theta notation mainly describes the average case scenarios.
o It represents the realistic time complexity of an algorithm. Every time, an algorithm does not perform
worst or best, in real-world problems, algorithms mainly fluctuate between the worst-case and best-case,
and this gives us the average case of the algorithm.
o Big theta is mainly used when the value of worst-case and the best-case is same.
o It is the formal way to express both the upper bound and lower bound of an algorithm running time.

Let's understand the big theta notation mathematically:

Let f(n) and g(n) be functions of n, where n is the number of steps required to execute the program. Then

f(n) = θ(g(n))

holds if and only if there exist positive constants c1, c2 and n0 such that

c1.g(n) <= f(n) <= c2.g(n) for all n >= n0

i.e., the function f(n) is bounded between a lower limit c1.g(n) and an upper limit c2.g(n).


Let's consider the same example where


f(n)=2n+3
g(n)=n

As c1.g(n) should be less than or equal to f(n), c1 can be taken as 1, whereas c2.g(n) should be greater than
or equal to f(n), so c2 can be taken as 5. Here c1.g(n) is the lower limit of f(n) while c2.g(n) is the upper
limit of f(n).

c1.g(n)<=f(n)<=c2.g(n)

Replace g(n) by n and f(n) by 2n+3

c1.n <=2n+3<=c2.n

if c1=1, c2=5, n=1

1*1 <= 2*1+3 <= 5*1

1 <= 5 <= 5 // for n=1, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

If n=2


1*2 <= 2*2+3 <= 5*2

2 <= 7 <= 10 // for n=2, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

Therefore, for every value of n >= 1 the condition c1.g(n)<=f(n)<=c2.g(n) is satisfied. Hence, it is proved
that f(n) is big theta of g(n). So, this is the average-case scenario which provides the realistic time
complexity.

What is an Algorithm?
An algorithm is a process or a set of rules required to perform calculations or some other problem-
solving operations especially by a computer.

It is a finite set of instructions which are carried out in a specific order to perform a specific task.

It is not the complete program or code; it is just the solution (logic) of a problem, which can be represented
either informally as a flowchart or as pseudocode.

Characteristics of an Algorithm
The following are the characteristics of an algorithm:

o Input: An algorithm takes zero or more input values.
o Output: An algorithm produces one or more outputs at the end.
o Unambiguity: An algorithm should be unambiguous which means that the instructions in an algorithm
should be clear and simple.
o Finiteness: An algorithm should have finiteness. Here, finiteness means that the algorithm should contain
a limited number of instructions, i.e., the instructions should be countable.
o Effectiveness: An algorithm should be effective as each instruction in an algorithm affects the overall
process.
o Language independent: An algorithm must be language-independent so that the instructions in an
algorithm can be implemented in any of the languages with the same output.

Linear Data Structure

Explain Array in detail


One Dimensional Array
 The simplest data structure that makes use of a computed address to locate its elements is the one-
dimensional array or vector; a set of memory locations is sequentially allocated to the vector.
 A vector's size is fixed and therefore it requires a fixed number of memory locations.
 Consider a vector A whose subscript lower bound is "one":
o L0 is the address of the first word allocated to the first element of vector A.
o C words are allocated for each element or node.
o The address of Ai is given by the equation Loc(Ai) = L0 + C * (i - 1).
 In the more general case of a vector A whose subscript lower bound is some value b, the location of Ai is
given by Loc(Ai) = L0 + C * (i - b).

Two Dimensional Array


 Two dimensional arrays are also called table or matrix, two dimensional arrays have two subscripts
 Two dimensional array in which elements are stored column by column is called as column major matrix
 Two dimensional array in which elements are stored row by row is called as row major matrix
 First subscript denotes number of rows and second subscript denotes the number of columns
 A two dimensional array consisting of two rows and four columns (as in the column major figure below) is
stored sequentially by columns: A[1,1], A[2,1], A[1,2], A[2,2], A[1,3], A[2,3], A[1,4], A[2,4]
 The address of element A[i, j] of this array can be obtained by the expression Loc(A[i, j]) = L0 + (j-1)*2 + (i-1)
 In general, for a two dimensional array (stored column by column) consisting of n rows and m columns, the
address of element A[i, j] is given by Loc(A[i, j]) = L0 + (j-1)*n + (i - 1)
 In a row major matrix, the array can be generalized to arbitrary lower and upper bounds in its subscripts;
assume that b1 ≤ i ≤ u1 and b2 ≤ j ≤ u2

Row major matrix (subscripts from (b1, b2) to (u1, u2)):
    [1,1] [1,2] [1,3] ... [1,m]
    [2,1] [2,2] [2,3] ... [2,m]
    ...
    [n,1] [n,2] [n,3] ... [n,m]

Column major matrix (2 rows x 4 columns example):
            col 1   col 2   col 3   col 4
    row 1   [1,1]   [1,2]   [1,3]   [1,4]
    row 2   [2,1]   [2,2]   [2,3]   [2,4]

No of columns = m = u2 - b2 + 1

 For row major matrix : Loc (A [ i , j ]) = L0 + ( i – b1 ) *(u2-b2+1) + (j-b2)
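The addressing formulas above can be written directly as small C helper functions. This is only a sketch: the function names and sample base address are invented for illustration, and the element size C is kept explicit (the 2-D formulas in the notes assume one word per element):

#include <stdio.h>

/* Address of A[i] in a 1-D array with lower bound b:
   Loc(A[i]) = L0 + C * (i - b)                                  */
long loc_1d(long L0, long C, long i, long b)
{
    return L0 + C * (i - b);
}

/* Address of A[i][j] in a row-major 2-D array with subscript bounds
   b1..u1 (rows) and b2..u2 (columns) and element size C:
   Loc(A[i][j]) = L0 + C * ((i - b1) * (u2 - b2 + 1) + (j - b2))  */
long loc_2d_row_major(long L0, long C, long i, long j,
                      long b1, long b2, long u2)
{
    return L0 + C * ((i - b1) * (u2 - b2 + 1) + (j - b2));
}

int main(void)
{
    /* element size 4 bytes, base address 1000, bounds 1..3 x 1..4 */
    printf("%ld\n", loc_1d(1000, 4, 5, 1));                    /* 1016 */
    printf("%ld\n", loc_2d_row_major(1000, 4, 2, 3, 1, 1, 4)); /* 1024 */
    return 0;
}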


Representation of an array

Consider an array arr of 10 elements:

o Indexing starts with 0, so the first element is arr[0].
o The array's length is 10, which means we can store 10 elements.
o Each element in the array can be accessed via its index.

Why are arrays required?


o Sorting and searching a value in an array is easier.
o Arrays are best to process multiple values quickly and easily.
o Arrays are good for storing multiple values in a single variable - In computer programming, most
cases require storing a large number of data of a similar type. To store such an amount of data, we need
to define a large number of variables. It would be very difficult to remember the names of all the variables
while writing the programs. Instead of naming all the variables with a different name, it is better to define
an array and store all the elements into it.

Memory allocation of an array


As stated above, all the data elements of an array are stored at contiguous locations in the main memory.
The name of the array represents the base address or the address of the first element in the main
memory. Each element of the array is represented by proper indexing.

We can define the indexing of an array in the below ways -

1. 0 (zero-based indexing): The first element of the array will be arr[0].


2. 1 (one-based indexing): The first element of the array will be arr[1].
3. n (n - based indexing): The first element of the array can reside at any random index number.


As an example, consider the memory allocation of an array arr of size 5 that follows the 0-based indexing
approach. Suppose the base address of the array is 100; this is the address of arr[0]. If the size of the data
type used is 4 bytes, each element takes 4 bytes in memory, so arr[1] is at address 104, arr[2] at 108, and so on.
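A short C sketch can confirm this contiguous layout by printing the address of each element; the exact addresses and the 4-byte spacing depend on the machine and compiler:

#include <stdio.h>

int main(void)
{
    int arr[5] = {10, 20, 30, 40, 50};

    /* each address differs from the previous one by sizeof(int),
       showing that the array elements are stored contiguously    */
    for (int i = 0; i < 5; i++)
        printf("arr[%d] = %d at address %p\n", i, arr[i], (void *)&arr[i]);

    return 0;
}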

Advantages of Array
o Array provides the single name for the group of variables of the same type. Therefore, it is easy to
remember the name of all the elements of an array.
o Traversing an array is a very simple process; we just need to increment the base address of the array in
order to visit each element one by one.
o Any element in the array can be directly accessed by using the index.

Disadvantages of Array
o Arrays are homogeneous, which means that only elements of the same data type can be stored in them.
o Arrays use static memory allocation, i.e. the size of an array cannot be altered after it is declared.
o There will be wastage of memory if we store fewer elements than the declared size.

Applications of Array
1. Symbol Manipulation (matrix representation of polynomial equation)
2. Sparse Matrix

Symbol Manipulation using Array


 We can use array for different kind of operations in polynomial equation such as addition, subtraction,
division, differentiation etc…
 We are interested in finding suitable representation for polynomial so that different operations like
addition, subtraction etc… can be performed in efficient manner
 Array can be used to represent Polynomial equation

 Matrix Representation of a Polynomial equation
The entry in row x^i and column y^j of the matrix stores the coefficient of the term x^i y^j:

          1      Y      Y2     Y3     Y4
   1      1      Y      Y2     Y3     Y4
   X      X      XY     XY2    XY3    XY4
   X2     X2     X2Y    X2Y2   X2Y3   X2Y4
   X3     X3     X3Y    X3Y2   X3Y3   X3Y4
   X4     X4     X4Y    X4Y2   X4Y3   X4Y4

e.g. 2x2 + 5xy + y2 is represented in matrix form as:

          1   Y   Y2  Y3  Y4
   1      0   0   1   0   0
   X      0   5   0   0   0
   X2     2   0   0   0   0
   X3     0   0   0   0   0
   X4     0   0   0   0   0

e.g. x2 + 3xy + y2 + y - x is represented in matrix form as:

          1   Y   Y2  Y3  Y4
   1      0   1   1   0   0
   X     -1   3   0   0   0
   X2     1   0   0   0   0
   X3     0   0   0   0   0
   X4     0   0   0   0   0


 Once we have an algorithm for converting a polynomial equation to its array representation, and another
algorithm for converting the array back to a polynomial equation, the different operations on the array
(matrix) will correspond to operations on the polynomial equation, as sketched below.
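A possible C sketch of this idea follows, assuming powers 0..4 of x and y as in the tables above; the type and function names are invented for illustration. Polynomial addition then reduces to element-wise addition of the coefficient matrices:

#include <stdio.h>

#define DEG 5   /* powers 0..4 of x and y, as in the tables above */

/* coeff[i][j] holds the coefficient of x^i * y^j */
typedef struct {
    int coeff[DEG][DEG];
} Poly;

/* polynomial addition is element-wise addition of the matrices */
Poly poly_add(Poly a, Poly b)
{
    Poly r = {0};
    for (int i = 0; i < DEG; i++)
        for (int j = 0; j < DEG; j++)
            r.coeff[i][j] = a.coeff[i][j] + b.coeff[i][j];
    return r;
}

int main(void)
{
    Poly p = {0}, q = {0};

    /* p(x,y) = 2x^2 + 5xy + y^2 */
    p.coeff[2][0] = 2;  p.coeff[1][1] = 5;  p.coeff[0][2] = 1;

    /* q(x,y) = x^2 + 3xy + y^2 + y - x */
    q.coeff[2][0] = 1;  q.coeff[1][1] = 3;  q.coeff[0][2] = 1;
    q.coeff[0][1] = 1;  q.coeff[1][0] = -1;

    Poly s = poly_add(p, q);
    printf("coefficient of x^2 in the sum = %d\n", s.coeff[2][0]); /* 3 */
    return 0;
}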

What is sparse matrix? Explain


 An mXn matrix is said to be sparse if “many” of its elements are zero.
 A matrix that is not sparse is called a dense matrix.
 We can devise a simple representation scheme whose space requirement equals the size of the non-
zero elements.
 Example:-
o The non-zero entries of a sparse matrix may be mapped into a linear list in row-major order.
o For example the non-zero entries of 4X8 matrix of below fig.(a) in row major order are 2, 1, 6, 7,
3, 9, 8, 4, 5

0 0 0 2 0 0 1 0
0 6 0 0 7 0 0 3
0 0 0 9 0 8 0 0
0 4 5 0 0 0 0 0
Fig (a) 4 x 8 matrix

Terms 0 1 2 3 4 5 6 7 8
Row 1 1 2 2 2 3 3 4 4
Column 4 7 2 5 8 4 6 2 3
Value 2 1 6 7 3 9 8 4 5
Fig (b) Linear Representation of above matrix
 To construct the matrix structure we need to record
(a) the original row and column of each non-zero entry, and
(b) the number of rows and columns in the matrix.
 So each element of the array into which the sparse matrix is mapped needs to have three fields: row,
column and value.
 A corresponding amount of time is saved by creating the linear list representation instead of initializing a
full two dimensional array.

0 0 6 0 9 0 0
A= 2 0 0 7 8 0 4
10 0 0 0 0 0 0
0 0 12 0 0 0 0
0 0 0 0 0 0 0
0 0 0 3 0 0 5

 Here from 6X7=42 elements, only 10 are non zero. A[1,3]=6, A[1,5]=9, A[2,1]=2, A[2,4]=7, A[2,5]=8,
A[2,7]=4, A[3,1]=10, A[4,3]=12, A[6,4]=3, A[6,7]=5.
 One basic method for storing such a sparse matrix is to store the non-zero elements in a one dimensional
array and to identify each array element with its row and column indices, as in fig (c).

ROW COLUMN A

1 1 3 6
2 1 5 9

3 2 1 2
4 2 4 7
5 2 5 8
6 2 7 4
7 3 1 10
8 4 3 12
9 6 4 3
10 6 7 5
Fig (c )

COLUMN / A table (index : column no, value):
 1 : 3, 6     2 : 5, 9     3 : 1, 2     4 : 4, 7      5 : 5, 8
 6 : 7, 4     7 : 1, 10    8 : 3, 12    9 : 4, 3     10 : 7, 5

ROW vector (row no : index of the first column entry for that row):
 1 : 1     2 : 3     3 : 7     4 : 8     5 : 0     6 : 9

Fig (d)

 A more efficient representation in terms of storage requirement and access time to the rows of the matrix
is shown in fig (d). The row vector is changed so that its ith element is the index of the first of the column
entries for the elements in row i of the matrix.
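The triplet representation of fig (c) can be sketched in C as follows; the struct and function names are assumptions made for this example, and the matrix used is the 6x7 matrix A from above:

#include <stdio.h>

#define ROWS 6
#define COLS 7
#define MAX_TERMS 50

/* one non-zero entry: its original row, column and value (as in fig (c)) */
typedef struct {
    int row, col, value;
} Term;

/* scan the matrix in row-major order and record every non-zero entry */
int to_triplets(int m[ROWS][COLS], Term t[])
{
    int k = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (m[i][j] != 0) {
                t[k].row = i + 1;      /* 1-based indices, as in the notes */
                t[k].col = j + 1;
                t[k].value = m[i][j];
                k++;
            }
    return k;                          /* number of non-zero terms */
}

int main(void)
{
    int A[ROWS][COLS] = {
        {0, 0, 6, 0, 9, 0, 0},
        {2, 0, 0, 7, 8, 0, 4},
        {10,0, 0, 0, 0, 0, 0},
        {0, 0,12, 0, 0, 0, 0},
        {0, 0, 0, 0, 0, 0, 0},
        {0, 0, 0, 3, 0, 0, 5}
    };
    Term t[MAX_TERMS];
    int n = to_triplets(A, t);

    for (int k = 0; k < n; k++)
        printf("%d %d %d\n", t[k].row, t[k].col, t[k].value);
    return 0;
}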


Linked Representation of Sparse matrix


A typical node representing a non-zero element has four fields:

Row Number | Column Number | Value | Pointer to Next Node

For the matrix A above, the linked representation is the following chain of nodes (row, column, value):

(1, 3, 6) -> (1, 5, 9) -> (2, 1, 2) -> (2, 4, 7) -> (2, 5, 8) -> (2, 7, 4) -> (3, 1, 10) -> (4, 3, 12) ->
(6, 4, 3) -> (6, 7, 5) -> NULL
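A minimal C sketch of this linked representation is given below; the node type and helper function are named here only for illustration:

#include <stdio.h>
#include <stdlib.h>

/* node of the linked representation: row, column, value, next pointer */
typedef struct SparseNode {
    int row, col, value;
    struct SparseNode *next;
} SparseNode;

/* append one non-zero entry at the end of the list */
SparseNode *append(SparseNode *head, int row, int col, int value)
{
    SparseNode *node = malloc(sizeof *node);
    node->row = row;  node->col = col;  node->value = value;
    node->next = NULL;

    if (head == NULL)
        return node;

    SparseNode *p = head;
    while (p->next != NULL)
        p = p->next;
    p->next = node;
    return head;
}

int main(void)
{
    SparseNode *list = NULL;
    list = append(list, 1, 3, 6);
    list = append(list, 1, 5, 9);
    list = append(list, 2, 1, 2);

    for (SparseNode *p = list; p != NULL; p = p->next)
        printf("(%d, %d, %d) -> ", p->row, p->col, p->value);
    printf("NULL\n");
    return 0;
}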

What is a Stack?
o A Stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle.
o A stack has one end, whereas a queue has two ends (front and rear). A stack contains only one pointer,
the top pointer, which points to the topmost element of the stack.
o Whenever an element is added to the stack, it is added on the top of the stack, and an element can be
deleted only from the top of the stack.
o In other words, a stack can be defined as a container in which insertion and deletion can be done from
one end only, known as the top of the stack.

Some key points related to stack


o It is called a stack because it behaves like a real-world stack, e.g. a pile of books.
o A Stack is an abstract data type with a pre-defined capacity, which means that it can store the elements
of a limited size.
o It is a data structure that follows some order to insert and delete the elements, and that order can be LIFO
or FILO.

Working of Stack
Stack works on the LIFO pattern. Consider a stack with five memory blocks; the size of the stack is therefore 5.


Suppose we want to store elements in a stack, and let's assume that the stack is initially empty. We take a
stack of size 5 and push the elements one by one until the stack becomes full.

Once five elements have been pushed, the stack is full and no more elements can be entered. The stack gets
filled up from the bottom to the top.

When we perform a delete operation on the stack, there is only one way for entry and exit, as the other
end is closed. The stack follows the LIFO pattern, which means that the value entered first will be removed
last. For example, if the value 5 is entered first, it will be removed only after the deletion of all the other
elements.

Standard Stack Operations


The following are some common operations implemented on the stack:

o push(): When we insert an element in a stack then the operation is known as a push. If the stack is full
then the overflow condition occurs.
o pop(): When we delete an element from the stack, the operation is known as a pop. If the stack is empty
means that no element exists in the stack, this state is known as an underflow state.
o isEmpty(): It determines whether the stack is empty or not.
o isFull(): It determines whether the stack is full or not.
o peek(): It returns the topmost element of the stack without removing it.

o count(): It returns the total number of elements available in a stack.


o change(): It changes the element at the given position.
o display(): It prints all the elements available in the stack.

PUSH operation
The steps involved in the PUSH operation is given below:

o Before inserting an element in a stack, we check whether the stack is full.


o If we try to insert the element in a stack, and the stack is full, then the overflow condition occurs.
o When we initialize a stack, we set the value of top as -1 to check that the stack is empty.
o When the new element is pushed in a stack, first, the value of the top gets incremented,
i.e., top=top+1, and the element will be placed at the new position of the top.
o The elements will be inserted until we reach the max size of the stack.


Algorithm : PUSH (S, TOP, X)


 This procedure inserts an element x to the top of a stack which is represented by a vector S containing N
elements with a pointer TOP denoting the top element in the stack.

1. [Check for stack overflow]


If TOP ≥ MAX -1
Then write (‘STACK OVERFLOW’)
Return
2. [Increment TOP]
TOP ←TOP + 1
3. [Insert Element]
S[TOP] ←X
4. [Finished]
Return

POP operation
The steps involved in the POP operation is given below:

o Before deleting the element from the stack, we check whether the stack is empty.
o If we try to delete the element from the empty stack, then the underflow condition occurs.
o If the stack is not empty, we first access the element which is pointed by the top
o Once the pop operation is performed, the top is decremented by 1, i.e., top=top-1.

Algorithm : POP (S, TOP)


 This function removes the top element from a stack which is represented by a vector S and returns this
element. TOP is a pointer to the top element of the stack.

1. [Check for underflow of stack]


If TOP = -1
Then Write (‘STACK UNDERFLOW ON POP’)
Take action in response to underflow
Return
2. [Decrement Pointer]
TOP ← TOP – 1
3. [Return former top element of stack]
Return (S[TOP + 1])
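The PUSH and POP algorithms above translate almost directly into C. The following is a sketch using a global array and a TOP index initialized to -1; the names and the stack size MAX are chosen for the example:

#include <stdio.h>

#define MAX 5

int S[MAX];       /* the stack            */
int TOP = -1;     /* -1 means empty stack */

void push(int x)
{
    if (TOP >= MAX - 1) {          /* step 1: check for overflow  */
        printf("STACK OVERFLOW\n");
        return;
    }
    TOP = TOP + 1;                 /* step 2: increment TOP       */
    S[TOP] = x;                    /* step 3: insert the element  */
}

int pop(void)
{
    if (TOP == -1) {               /* step 1: check for underflow */
        printf("STACK UNDERFLOW ON POP\n");
        return -1;
    }
    TOP = TOP - 1;                 /* step 2: decrement TOP       */
    return S[TOP + 1];             /* step 3: return former top   */
}

int main(void)
{
    push(10);
    push(20);
    printf("%d\n", pop());   /* prints 20 */
    printf("%d\n", pop());   /* prints 10 */
    return 0;
}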


Applications of Stack
The following are the applications of the stack:

o Balancing of symbols: A stack is used for checking that symbols are balanced. For example, consider the following program:

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello";
    cout << "javaTpoint";
    return 0;
}

As we know, each program has matching opening and closing braces; when an opening brace comes, we push
it onto a stack, and when the corresponding closing brace appears, we pop the opening brace from the stack.
Therefore, at the end of the program the stack should be empty. If any symbol is left in the stack, it means
that there is a syntax error in the program. (A small C checker sketch is given after this list of applications.)

o String reversal: Stack is also used for reversing a string. For example, we want to reverse a "javaTpoint"
string, so we can achieve this with the help of a stack.
First, we push all the characters of the string in a stack until we reach the null character.


After pushing all the characters, we start taking out the character one by one until we reach the bottom
of the stack.
o UNDO/REDO: It can also be used for performing UNDO/REDO operations. For example, we have an editor
in which we write 'a', then 'b', and then 'c'; therefore, the text written in an editor is abc. So, there are three
states, a, ab, and abc, which are stored in a stack. There would be two stacks in which one stack shows
UNDO state, and the other shows REDO state.
If we want to perform UNDO operation, and want to achieve 'ab' state, then we implement pop operation.
o Recursion: The recursion means that the function is calling itself again. To maintain the previous states,
the compiler creates a system stack in which all the previous records of the function are maintained.
o DFS(Depth First Search): This search is implemented on a Graph, and Graph uses the stack data structure.
o Backtracking: Suppose we have to create a path to solve a maze problem. If we are moving along a particular
path and realize that we have taken a wrong way, then in order to return to the beginning of the path and
create a new path, we use the stack data structure.
o Expression conversion: Stack can also be used for expression conversion. This is one of the most
important applications of stack. The list of the expression conversion is given below:
o Infix to prefix
o Infix to postfix
o Prefix to infix
o Prefix to postfix
o Postfix to infix

o Memory management: The stack manages the memory. The memory is assigned in the contiguous
memory blocks. The memory is known as stack memory as all the variables are assigned in a function call
stack memory. The memory size assigned to the program is known to the compiler. When the function is
created, all its variables are assigned in the stack memory. When the function completed its execution, all
the variables assigned in the stack are released.
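Returning to the "Balancing of symbols" application above, a small checker can be sketched in C as follows; the function name and the fixed-size character stack are assumptions made for this example:

#include <stdio.h>
#include <string.h>

/* returns 1 if every (, [ and { in the string has a matching closer */
int is_balanced(const char *code)
{
    char stack[256];
    int top = -1;

    for (size_t i = 0; i < strlen(code); i++) {
        char c = code[i];
        if (c == '(' || c == '[' || c == '{') {
            stack[++top] = c;                 /* push opening symbol   */
        } else if (c == ')' || c == ']' || c == '}') {
            if (top == -1)
                return 0;                     /* closer with no opener */
            char open = stack[top--];         /* pop and match         */
            if ((c == ')' && open != '(') ||
                (c == ']' && open != '[') ||
                (c == '}' && open != '{'))
                return 0;
        }
    }
    return top == -1;                         /* stack must end empty  */
}

int main(void)
{
    printf("%d\n", is_balanced("int main() { printf(\"Hello\"); }"));  /* 1 */
    printf("%d\n", is_balanced("int main() { printf(\"Hello\"; }"));   /* 0 */
    return 0;
}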


Tower of Hanoi
o The Tower of Hanoi is a mathematical puzzle.
o It consists of three poles and a number of disks of different sizes which can slide onto
any pole.
o The puzzle starts with the disks in a neat stack in ascending order of size on one pole,
the smallest at the top, thus making a conical shape.
o The objective of the puzzle is to move all the disks from one pole (say ‘source pole’) to
another pole (say ‘destination pole’) with the help of the third pole (say auxiliary pole).
o A total of 2^n - 1 moves is required for n disks.
o The puzzle has the following two rules:
1. You can’t place a larger disk onto a smaller disk
2. Only one disk can be moved at a time

Example:
Let us understand with a simple example with 3 disks:
So, total number of moves required = 7

The three poles are labelled S (source), A (auxiliary) and D (destination).

When i = 1, (i % 3 == 1) legal movement between ‘S’ and ‘D’

When i = 2, (i % 3 == 2) legal movement between ‘S’ and ‘A’

When i = 3, (i % 3 == 0) legal movement between ‘A’ and ‘D’



When i = 4, (i % 3 == 1) legal movement between ‘S’ and ‘D’

When i = 5, (i % 3 == 2) legal movement between ‘S’ and ‘A’

When i = 6, (i % 3 == 0) legal movement between ‘A’ and ‘D’

When i = 7, (i % 3 == 1) legal movement between ‘S’ and ‘D’

So, after all these moves the destination pole contains all the disks in order of size.
After observing the above iterations, we can see that after a disk other than the smallest disk is moved,
the next disk to be moved must be the smallest disk, because it is the top disk resting on the spare pole
and there is no other choice of disk to move.
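The recursive solution can also be sketched in C as below; the function name and pole labels are chosen for this example. For n = 3 it prints the 2^3 - 1 = 7 moves discussed above:

#include <stdio.h>

/* move n disks from pole 'source' to pole 'dest' using pole 'aux' */
void hanoi(int n, char source, char aux, char dest)
{
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", source, dest);
        return;
    }
    hanoi(n - 1, source, dest, aux);   /* park n-1 disks on the auxiliary pole     */
    printf("Move disk %d from %c to %c\n", n, source, dest);
    hanoi(n - 1, aux, source, dest);   /* bring the n-1 disks onto the largest one */
}

int main(void)
{
    hanoi(3, 'S', 'A', 'D');           /* 2^3 - 1 = 7 moves */
    return 0;
}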


Recursion Using Stack with Example



A function that calls itself is called a recursive function and this technique is called recursion. A recursive
function will call itself until a final call that does not require a call to itself is made. It takes advantage of the
system stack to temporarily store the calling function’s return address and local variables.

There are two major cases in any recursive solution. They are

 Base case in which the problem is simple enough to be solved directly without the need for any more calls
to the same function
 Recursive case
 The problem at hand is initially subdivided into simpler sub-parts.
 The function calls itself again, but this time with sub-parts of the problem obtained in the first step.
 The final result is obtained by combining the solutions of simpler sub-parts.
A recursive function is said to be well-defined if it has the following two properties:

 There must be a base criterion for which the function does not call itself.
 Each time the function calls itself, it must move closer to the base criterion.
Factorial Function

To illustrate recursive functions, consider calculating the factorial of an integer.

The product of the positive integers from 1 to n is called n factorial, denoted by n!. To find n!, multiply
the number by the factorial of the number that is one less than it. That is,

n! = n * (n - 1)!
Assume we need to find the value of 5!. Then

5! = 5 * 4!, where 4! = 4 * 3!

Therefore,

5! = 5 * 4 * 3!
Expanding further, we get

5! = 5 * 4 * 3 * 2 * 1!
We can now write a recursive function that computes the factorial of a number. Here the base case is
when

n = 1
, because the result will be 1 as
1! = 1
. The recursive case of the factorial function will call itself, but with a smaller value of n, as
factorial(n) = n factorial (n–1).

Working of the factorial function using recursion

Example: Factorial of a given number

#include <stdio.h>

int Fact(int);

int main()
{
    int num, val;

    //read a number from the user
    printf("Enter the number: ");
    scanf("%d", &num);

    //call the Fact function
    val = Fact(num);

    //print the result
    printf("Factorial of %d = %d", num, val);
    return 0;
}

//function to calculate factorial recursively
int Fact(int n)
{
    if (n <= 1)          //base case (also guards n == 0)
        return 1;
    else
        return (n * Fact(n - 1));
}
Output

Enter the number: 5


Factorial of 5 = 120

Write an algorithm to find factorial of given no using recursion


Algorithm: FACTORIAL

Given integer N, this algorithm computes the factorial of N. Stack A is used to store an activation record associated
with each recursive call. Each activation record contains the current value of N and the current return address
RET_ADDR. TEMP_REC is also a record which contains two variables, PARAM and ADDRESS. TOP is a pointer to the
top element of stack A. Initially the return address is set to the main calling address and PARAM is set to the
initial value N.


1. [Save N and return address]
CALL PUSH (A, TOP, TEMP_REC)
2. [Is the base criterion found?]
If N = 0
then FACTORIAL ← 1
GO TO Step 4
Else PARAM ← N - 1
ADDRESS ← Step 3
GO TO Step 1
3. [Calculate N!]
FACTORIAL ← N * FACTORIAL
4. [Restore previous N and return address]
TEMP_REC ← POP(A, TOP)
(i.e. N ← PARAM, RET_ADDR ← ADDRESS)
GO TO ADDRESS
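To mirror this idea in C without relying on the system stack, one can keep an explicit stack of the saved values of N. This simplified sketch stores only the parameter values (not return addresses), and the names are illustrative:

#include <stdio.h>

/* factorial computed with an explicit stack: push the successive values
   of N on the way "down", then multiply them while popping on the way "up" */
long factorial_with_stack(int n)
{
    int stack[64];
    int top = -1;

    while (n > 0) {          /* save N and continue with N-1          */
        stack[++top] = n;
        n = n - 1;
    }

    long result = 1;         /* base case: 0! = 1                     */
    while (top >= 0)         /* unwind: multiply the saved values     */
        result *= stack[top--];

    return result;
}

int main(void)
{
    printf("5! = %ld\n", factorial_with_stack(5));   /* 120 */
    return 0;
}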

Give difference between recursion and iteration


Iteration                                              Recursion
A problem is converted into a series of steps that     Recursion is like piling all of those steps on top of
are finished one at a time, one after another.         each other and then squashing them all into the
                                                       solution.
With iteration, each step clearly leads on to the      In recursion, each step replicates itself at a smaller
next, like stepping stones across a river.             scale, so that all of them combined together
                                                       eventually solve the problem.
Any iterative problem can be solved recursively.       Not every recursive problem can be solved by iteration.
It does not use a stack.                               It uses a stack.

