
DATA STRUCTURES II IT

UNIT-II

Queues: Queue abstract data type - Array implementation – circular queue - linked list
implementation of queues – priority queues – double ended queues – multiple stacks and
queues - application.

QUEUE ADT
A queue is an ordered collection of items from which items may be deleted at one end,
called the front, and into which items may be inserted at the other end, called the rear of
the queue.
Queue Model
The basic operations on a queue are enqueue, which inserts an element at the end
of the list (called the rear), and dequeue, which deletes (and returns) the element at the
start of the list (known as the front). Figure shows the abstract model of a queue.

Model of a queue

Example:
Front
A B C
Rear

In this example, A is at the front of the queue and C is at the rear. An element has been
deleted from the front of the queue. i.e.,
Front
B C
Rear

and items can be inserted at the rear of the queue. i.e.,


Front
B C D
Rear
The first element inserted into a queue is the first element to be removed. For this reason,
a queue is sometimes called a FIFO list. (First In First Out)
Operations involved in a queue:
1. Create a queue.
2. Check whether a queue is empty.
3. Check whether the queue is full.
4. Add an item at the rear of the queue.
5. Remove item from front of queue.
6. Read the front of queue.
7. Print the entire queue.

IMPLEMENTATION:
The simplest way to represent a queue is to use a one-dimensional array; in addition we
need two variables, front and rear, to hold the positions within the array of the first and
last elements of the queue.

Array Implementation of Queues


 As with stacks, any list implementation is legal for queues. Like stacks, both the
linked list and array implementations give fast O(1) running times for every
operation.
 For each queue data structure, keep an array, QUEUE[], and the positions q_front
and q_rear, which represent the ends of the queue.

 Keep track of the number of elements that are actually in the queue, q_size. All
this information is part of one structure; except for the queue routines themselves,
no routine should ever access these fields directly.
 The operations should be clear. To enqueue an element x, increment q_size and
q_rear, then set QUEUE[q_rear] = x. To dequeue an element, set the return value
to QUEUE[q_front], decrement q_size, and then increment q_front.

OPERATIONS:
INSERT OPERATION:
The insert operation inserts an element into the queue. A queue is a dynamic structure that
is constantly allowed to grow and shrink and thus changes its size, but here we represent
the queue by means of an array. An attempt to insert an element into the queue when the
array is full causes an overflow. The insert operation involves:
1. Check whether the array is full before attempting to insert another
element. If so, halt execution.
2. Increment the rear pointer.
3. Insert the element at the rear of the queue.

Structure Definition
struct queue_record
{
    int queue_size;      /* capacity of the queue */
    int front;
    int rear;
    int *queuearray;
};
typedef struct queue_record *queue;

queue create_queue(unsigned int max_elements)
{
    queue q;

    q = (queue)malloc(sizeof(struct queue_record));
    if (q == NULL)
        printf("out of space");
    q->queuearray = (int *)malloc(sizeof(int) * max_elements);
    if (q->queuearray == NULL)
        printf("out of space");
    q->rear = -1;
    q->front = 0;
    q->queue_size = max_elements;
    return q;
}

Function to insert an element into queue


It is possible to insert an element into the queue only at the rear end. The rear
pointer is incremented and the element is inserted in the last position.
template<class element_type>
void queue<element_type>::enqueue(const element_type &x)
{
    if (is_full())
        error("FULL QUEUE");
    else
    {
        Q_size++;
        Increment(Q_rear);          /* advance the rear index */
        Q_array[Q_rear] = x;
    }
}
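The Increment routine used above is not defined in these notes; one common choice, assumed here purely for illustration, advances the index and wraps it back to 0 past the last slot, so that the same enqueue code also serves a circular queue.

/* Assumed helper (not from the notes): advance an index, wrapping to 0
   past the last slot so the array can be reused circularly. */
static void increment(int *index, int capacity)
{
    (*index)++;
    if (*index == capacity)
        *index = 0;
}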

Function to delete an element from the queue


It is possible to remove an element from the front of the queue. This is done by
moving the front pointer to the next element.
template<class element_type>
element_type queue<element_type>::dequeue()
{
    if (is_empty())
        error("EMPTY QUEUE");
    Q_size--;
    return Q_array[Q_front++];      /* return the front element and advance the front index */
}
Function to check whether the queue is empty or not
This function is_empty() checks whether the queue is empty or not. It returns true
if the queue is empty, that is, when the number of stored elements Q_size is 0.
int is_empty()
{
    return Q_size == 0;
}
Function to return the first element from the queue
It checks whether the queue is empty. If it is not empty, it returns the element at
the front of the queue
int front(queue q)
{
    if (q->front > q->rear)          /* empty: front has passed rear */
    {
        printf("EMPTY QUEUE");
        return -1;
    }
    return q->queuearray[q->front];
}

Function to return and remove the first element from the queue
It checks whether the queue is empty. If it is not empty, it returns the element and
then removes the element at the front of the queue.
int frontanddelete(queue q)
{
    if (q->front > q->rear)          /* empty: front has passed rear */
    {
        printf("EMPTY QUEUE");
        return -1;
    }
    return q->queuearray[q->front++];
}
Function to check whether the queue is full or not
This function is_full() checks whether the queue is full or not. It returns true if the
queue is full, that is, when the number of stored elements has reached the capacity of the array.

int is_full()
{
    return Q_size == Full_queue;
}

Function to empty the queue


If the queue is not empty, the elements in the queue are removed until the queue
becomes empty. In other words, the queue exists but the queue is empty.
template<class element_type>
void queue<element_type>::make_empty()
{
    Q_front = 0;
    Q_rear = -1;      /* matches the initial state set up in create_queue */
    Q_size = 0;
}
Function to dispose a queue
This function not only frees the memory allocated for the queue (array) but also the
structure containing the details of the queue.
void dispose_queue( queue q )
{
if( q != NULL )
{
free( q->queuearray );
free( q );
}
}
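As a quick illustration of how the C-style routines above fit together, here is a small driver; the in-line enqueue steps (increment rear, then store at the rear) are an assumption of this sketch, since the notes give enqueue only in the C++ template form.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    queue q = create_queue(10);

    q->queuearray[++q->rear] = 5;        /* enqueue 5 */
    q->queuearray[++q->rear] = 7;        /* enqueue 7 */

    printf("%d\n", frontanddelete(q));   /* prints 5: first in, first out */
    printf("%d\n", frontanddelete(q));   /* prints 7 */

    dispose_queue(q);
    return 0;
}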

APPLICATIONS OF QUEUES
 Numerous applications of queue structure are known in computer science. One
major application of queue is in simulation.
 Another important application of queue is observed in implementation of various
aspects of operating system.
 Multiprogramming environment uses several queues to control various programs.

 And, of course, queues are very much useful to implement various algorithms.
For example, various scheduling algorithms are known to use varieties of queue
structures.

General applications
There are several algorithms that use queues to give efficient running times.
 When jobs are submitted to a printer, they are arranged in order of arrival.
Thus, essentially, jobs sent to a line printer are placed on a queue. We say
essentially a queue, because jobs can be killed; this amounts to a deletion from
the middle of the queue, which is a violation of the strict definition.
 Virtually every real-life line is (supposed to be) a queue. For instance, lines at
ticket counters are queues, because service is first-come first-served.
 Another example concerns computer networks. There are many network setups
of personal computers in which the disk is attached to one machine, known as the
file server. Users on other machines are given access to files on a first-come first-
served basis, so the data structure is a queue.
 Calls to large companies are generally placed on a queue when all operators
are busy.
 In large universities, where resources are limited, students must sign a
waiting list if all terminals are occupied. The student who has been at a terminal
the longest is forced off first, and the student who has been waiting the longest is
the next user to be allowed on.
 A whole branch of mathematics, known as queueing theory, deals with
computing, probabilistically, how long users expect to wait on a line, how
long the line gets, and other such questions. The answer depends on how
frequently users arrive to the line and how long it takes to process a user once the
user is served. Both of these parameters are given as probability distribution
functions. In simple cases, an answer can be computed analytically.

 An example of an easy case would be a phone line with one operator. If the
operator is busy, callers are placed on a waiting line (up to some maximum limit).
This problem is important for businesses, because studies have shown that people
are quick to hang up the phone.

(a) Simulation
 Simulation is the modeling of a real-life problem; in other words, it is the model of a
real-life situation in the form of a computer program.
 The main objective of a simulation program is to study the real-life situation
under the control of the various parameters which affect the real problem; this is a
research interest of system analysts and operations research scientists.
 Based on the results of simulation, the actual problem can be solved in an optimized
way.
 Another advantage of simulation is the ability to experiment with dangerous areas. For
example, areas such as military operations are safer to simulate than to field test;
simulation is free from risk as well as inexpensive.
 Simulation is a classical area where queues can be applied. Before discussing the
simulation model, let us study a few terms related to it.
 Any process or situation that is to be simulated is called a system. A system is a
collection of interconnected objects which accepts zero or more inputs and
produces at least one output, as in Figure (a).
 For example, a computer program is a system where the instructions are the
interconnected objects, the initialization values are the inputs, and the results
produced during execution are the output.
 Similarly, a ticket reservation counter is also a system. Note that a system can be
composed of one or more smaller system(s).

 A system can be divided into different types as shown in Figure(b).


 A system is discrete if the input/output parameters are of discrete values.
 For example, customers arriving at a ticket reservation counter form a discrete
system, whereas water flowing through a pipe to a reservoir, or emanating from it,
is an example of a continuous system; here the parameters are of continuous type.
 A system is deterministic if, from a given set of inputs and the initial condition of the
system, the final outcome can be predicted. For example, a program to calculate the
factorial of an integer is a deterministic system.
 On the other hand, a stochastic system is based on randomness; its behaviour
cannot be predicted beforehand. As another example, the number of customers waiting
in front of a ticket reservation counter at any instant cannot be forecast. There
may be some systems which are an intermixing of both deterministic and stochastic behaviour.
 Having looked at the variant types of systems, let us define the various kinds of
simulation models.
 There are two kinds of simulation model: event-driven simulation and time-driven
simulation; these are distinguished by how the state of a system changes.

 In time-driven simulation, the system changes its state with the change of
time, whereas in event-driven simulation the system changes its state whenever a new
event arrives at the system or exits from the system.
 Now let us consider a system, its model for simulation study and then application
of queues in it.
 Consider a system as a ticket selling centre. There are two kinds of tickets
available, namely T1 and T2, which customers are to purchase, and two counters, C1
and C2 (Figure).
 Also assume that the times required to issue a ticket of type T1 and T2 are t1 and t2
respectively. Two queues, Q1 and Q2, are possible for the counters C1 and C2
respectively.
 With this description of the system, two models are proposed:

Model 1
 Any counter can issue both types of tickets.
 A customer, on arrival, goes to the queue which has the lesser number of
customers; if both are equally crowded, the customer joins Q1, the queue of counter C1.

Model 2
 The two counters are earmarked, say, C1 for selling T1 only and C2 for selling T2 only.
 A customer on arrival goes to either queue Q1 or Q2 depending on the ticket; that
is, customers for T1 join Q1 and those for T2 join Q2.

 To simplify the simulation model, the following assumptions are made:


1. Queue lengths are infinite.
2. One customer in a queue is allowed to buy one ticket only.
3. Let λ1 and λ2 be the mean arrival rates of customers for tickets T1 and T2
   respectively. The values of λ1 and λ2 will be provided by the system analyst.
4. Let us assume that the arrival of customers at the centre follows the Poisson
   distribution, which gives the probability function

       P(t) = 1 - e^(-λt)

   where P(t) is the probability that the next customer arrives by time t and λ is the
   mean arrival rate. Thus, if N1 and N2 are the total populations of customers for
   tickets T1 and T2 in a day, then

       N1 P(t) = N1 (1 - e^(-λ1 t))

   is the number of customers who have arrived at the centre for ticket T1 by time t, and

       N2 P(t) = N2 (1 - e^(-λ2 t))

   is the number of customers who have arrived for ticket T2 by time t.
5. A clock is maintained with an initial value (to dictate the opening and closing of
   the counters, when a counter is made available to a customer, etc.).

With these assumptions, the objective of the simulation model is to study the
performance of the system under various conditions.

Average queuing time



It is defined as the average time that a customer for a ticket Ti (i = 1, 2) spends in the
queue Qi; this time includes the service time ti (i = 1, 2), that is, the time to issue a ticket.

Average queue length


It is the average length of the queue Qi (i = 1, 2) over a day.

Total service time


It is the time that a counter Ci(i= 1,2) remains busy over a day.

With these basic assumptions and definitions, the proposed simulation model may
be termed a discrete, deterministic, time-driven simulation model.
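A minimal sketch of how such a time-driven simulation loop might look for Model 1 is given below; the tick length, the arrival rule, the constants and all names are assumptions made purely for illustration, not part of the model description above.

#include <stdio.h>

#define TICKS        480   /* length of the simulated day, in clock ticks */
#define ARRIVAL_GAP  2     /* one customer arrives every 2 ticks          */
#define SERVICE_TIME 3     /* ticks needed to issue one ticket            */

int main(void)
{
    int q1 = 0, q2 = 0;            /* current lengths of queues Q1 and Q2  */
    int busy1 = 0, busy2 = 0;      /* remaining service ticks at C1 and C2 */
    long total_len = 0;            /* accumulated queue length             */

    for (int t = 0; t < TICKS; t++) {
        if (t % ARRIVAL_GAP == 0) {            /* a customer arrives         */
            if (q1 <= q2) q1++; else q2++;     /* join the shorter queue     */
        }
        if (busy1 == 0 && q1 > 0) { q1--; busy1 = SERVICE_TIME; }  /* C1 starts a ticket */
        if (busy2 == 0 && q2 > 0) { q2--; busy2 = SERVICE_TIME; }  /* C2 starts a ticket */
        if (busy1 > 0) busy1--;
        if (busy2 > 0) busy2--;
        total_len += q1 + q2;                  /* accumulate for the average */
    }
    printf("average queue length over the day = %.2f\n",
           (double)total_len / TICKS);
    return 0;
}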

(b) CPU scheduling in multiprogramming environment


 In a multiprogramming environment, a single CPU has to serve more than one
program simultaneously.
 Consider a multiprogramming environment where possible jobs to the CPU are
categorized into three groups:
1. Interrupts to be serviced. Variety of devices and terminals are connected with
the CPU and they may interrupt at any moment to get a service from it.
2. Interactive users to be serviced. These are mainly student's programs in
various terminals under execution.
3. Batch jobs to be serviced.
 These are long term jobs mainly from the non-interactive users, where all the
inputs are fed when jobs are submitted; simulation programs, and jobs to print
documents are of this kind.
 Here the problem is to schedule all sorts of jobs so that the required level of
performance of the environment will be attained.

 One way to implement complex scheduling is to classify the work load according
to its characteristics and to maintain separate process queues.
 As far as this environment is concerned, we can maintain three queues, as depicted in
the figure.

 This approach is often called multi-level queues scheduling. Processes will be
assigned to their respective queues.
 The CPU will then service the processes as per the priority of the queues. In the
simplest strategy, absolute priority, processes from the highest-priority queue (for
example, system processes) are serviced until that queue becomes empty.
 Then the CPU switches to the queue of interactive processes, which has
medium priority, and so on.
 A lower-priority process may, of course, be pre-empted by a higher-priority
arrival in one of the upper-level queues.
 The multi-level queues strategy is a general discipline but has some drawbacks. The
main drawback is that when the arrival rate of processes in the higher-priority queues is
very high, the processes in the lower-priority queues may starve for a long time.
 One way to solve this problem is to time slice between the queues: each
queue gets a certain portion of the CPU time.

 Another possibility is known as the multi-level feedback queue strategy. Normally, in
the multi-level queue strategy, processes are permanently assigned to a queue upon
entry to the system and do not move among queues.
 The multi-level feedback queue strategy, on the contrary, allows a process to move
between queues. The idea is to separate out processes with different CPU burst
characteristics.
 If a process uses too much CPU time (that is, a long-running process), it will be moved
to a lower-priority queue.
 Similarly, a process which waits too long in a lower-priority queue may be moved
to a higher-priority queue. For example, consider a multi-level feedback queue
strategy with three queues Q1, Q2 and Q3.

 A process entering the system is put in queue Q1. A process in Q1 is given a time
quantum τ of 10 ms, say; if it does not finish within this time, it is moved to the tail
of queue Q2.
 If Q1 is empty, the process at the front of queue Q2 is given a time quantum τ of
20 ms, say.
 If it does not complete within this time quantum, it is pre-empted and put into
queue Q3.
 Processes in queue Q3 are serviced only when queues Q1 and Q2 are empty.
 Thus, with this strategy, CPU first executes all processes in queue Q1.
 Only when Q1 is empty it will execute all processes in queue Q2. Similarly,
processes in queue Q3 will only be executed if queues Q1 and Q2 are empty.

 A process which arrives in queue Q1 will pre-empt a process in queue Q2 or Q3.


 It can be observed that this strategy gives the highest priority to any process with
a CPU burst of 10 ms or less. Processes which need more than 10 ms, but no more
than 20 ms, are also served quickly, that is, they get the next highest priority after
the shorter processes.
 Longer processes automatically sink to queue Q3. From Q3, processes are
served on a first-come first-served (FCFS) basis, and a process that has been waiting
too long (as decided by the scheduler) may be moved to the tail of queue Q1. A sketch
of this movement between the queues is given below.
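The following sketch shows only the demotion rule for the three-queue example above; the 10 ms and 20 ms quanta come from the example, while the data layout, function names and the driver are assumptions made for illustration.

#include <stdio.h>

#define Q1_QUANTUM 10    /* ms, highest-priority queue */
#define Q2_QUANTUM 20    /* ms, medium-priority queue  */

struct process {
    int remaining_ms;    /* CPU time the process still needs   */
    int level;           /* 1, 2 or 3: which queue it is in    */
};

/* Run p once in its current queue; return 1 if it finished, 0 otherwise. */
int run_once(struct process *p)
{
    int quantum = (p->level == 1) ? Q1_QUANTUM :
                  (p->level == 2) ? Q2_QUANTUM : p->remaining_ms;  /* Q3 is FCFS */
    if (p->remaining_ms <= quantum) {        /* finishes within its quantum */
        p->remaining_ms = 0;
        return 1;
    }
    p->remaining_ms -= quantum;              /* used the full quantum ...   */
    if (p->level < 3)
        p->level++;                          /* ... so demote to next queue */
    return 0;
}

int main(void)
{
    struct process p = { 35, 1 };            /* a 35 ms CPU burst entering Q1 */
    while (!run_once(&p))
        printf("pre-empted, moved to queue Q%d\n", p.level);
    printf("finished in queue Q%d\n", p.level);   /* prints: finished in queue Q3 */
    return 0;
}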

(c) Round Robin Algorithm


 Round robin (RR) algorithm is a well-known scheduling algorithm and is
designed especially for time sharing systems.
 A circular queue can be used to implement such an algorithm.
 Suppose there are n processes P1, P2, ..., Pn required to be served by the CPU.
 Different processes require different execution times. Suppose the sequence of
process arrivals is according to their subscripts, that is, P1 arrives before P2
and, in general, Pi comes after Pi-1 for 1 < i ≤ n.
 The RR algorithm first decides a small unit of time, called a time quantum or time
slice, τ.
 A time quantum is generally from 10 to 100 milliseconds. The CPU starts service with
P1; P1 gets the CPU for τ units of time.
 Afterwards the CPU switches to P2 and so on. When the CPU reaches the end of the time
quantum of Pn, it returns to P1 and the same process is repeated.
 Now, during time sharing, if a process finishes its execution before the finishing
of its time quantum, the process then simply releases the CPU and the next
process waiting will get the CPU immediately.

 The advantage of this kind of scheduling is a reduction of the average turnaround
time (though this is not necessarily true always). The turnaround time of a process is the
time of its completion minus the time of its arrival.
 In time-sharing systems any process may arrive at any instant of time.
 Generally, all the processes currently under execution are maintained in a queue;
when a process finishes its execution, it is deleted from the queue.
 For the implementation of the RR scheduling algorithm, a circular queue is the best
choice. It may be noted that this is not strictly a circular queue, because here a process,
when it completes its execution, must be deleted from the queue, and not
necessarily from the front of the queue but from any position in it.
 Except for this, it follows all the properties of a queue, that is, the process which
comes first gets its turn first.
 Implementation of the RR algorithm using a circular queue is straightforward, as
shown in the sketch below.
 A variable-sized circular queue is used; the size of the queue at any instant is decided
by the number of processes in execution at that instant.
 Another mechanism is necessary: whenever a process is deleted, to fill the space
of the deleted process, all the processes preceding it must be squeezed up,
starting from the front pointer.
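A minimal sketch of RR scheduling using a circular array as the ready queue follows; the queue size, the quantum and the burst times are assumptions for illustration, and instead of squeezing the queue (as described above), an unfinished process is simply re-enqueued at the rear.

#include <stdio.h>

#define MAX      10
#define QUANTUM  4

int queue[MAX];                /* circular queue of process indices */
int front = 0, rear = -1, count = 0;

void enqueue(int p) { rear = (rear + 1) % MAX; queue[rear] = p; count++; }
int  dequeue(void)  { int p = queue[front]; front = (front + 1) % MAX; count--; return p; }

int main(void)
{
    int burst[] = { 6, 3, 8 };             /* remaining CPU time of P1..P3 */
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++)
        enqueue(i);

    while (count > 0) {
        int p = dequeue();
        int slice = burst[p] < QUANTUM ? burst[p] : QUANTUM;
        clock += slice;                    /* CPU runs p for one time slice */
        burst[p] -= slice;
        if (burst[p] > 0)
            enqueue(p);                    /* not finished: back to the rear */
        else
            printf("P%d finishes at time %d\n", p + 1, clock);
    }
    return 0;
}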

PRIORITY QUEUES
A queue in which we are able to insert items or remove items from any position based on
some property (such as priority of the task to be processed ) is often referred to as a
priority queue.
Fig: A priority queue of jobs waiting to use a computer.

Job:       R1  R2  ...  Ri-1 | Q1  Q2  ...  Qj-1 | B1  B2  ...  Bk-1 | ...
Priority:   1   1  ...    1  |  2   2  ...    2  |  3   3  ...    3  | ...

Priorities of 1, 2 and 3 have been attached to jobs of type real-time, on-line and batch,
respectively. Therefore, if a job is initiated with priority i, it is inserted immediately at the
end of the list of other jobs with priority i, for i = 1, 2 or 3. In this example, jobs are
always removed from the front of the queue. (In general, this is not a necessary restriction
on a priority queue.)
A priority queue can be conceptualized as a series of queues representing situations in
which it is known a priori what priorities are associated with queue items.
Priority 1:

R1 R2 ......................... Ri-1
…….

Priority 2:

Q1 Q2 …………. Qj-1 ……….

Priority 3:

B1 B2 ……….. Bk-1 ……….

The above figure shows how a single priority queue can be visualized as three separate
queues, each exhibiting strictly FIFO behaviour. Elements in the second queue are
removed only when the first queue is empty, and elements from the third queue are removed only
when the first and second queues are empty. This separation of a single priority queue
into a series of queues also suggests an efficient storage representation of a priority
queue. When elements are inserted, they are always added at the end of one of the queues
as determined by the priority. Alternatively, if a single sequential storage structure is used
for the priority queue, then insertion may mean that the new element must be placed in
the middle of the structure. This can require the movement of several items. It is therefore better
to split the priority queue into several queues, each having its own storage structure.

 A priority queue is another variation of queue structure. Here, each element has
been assigned a value, called priority of the element, and an element can be
inserted or deleted not only at the ends but at any position on the queue.

 With this structure, an element of priority pi may be deleted before an element


which is at front. Similarly, insertion of an element is based on its priority, that is,
instead of adding it after the rear it may be inserted at an intermediate position
dictated by its priority value.

 Priority queue is a data structure that is not a queue as per the definition: priority
queue does not strictly follow First-In First-Out (FIFO) principle which is the
basic principle of a queue.

 Nevertheless, the name is now firmly associated with this particular data type.
However, there are various models of priority queue known in different
applications. Consider a particular model of priority queue.

1. An element of higher priority is processed before any element of lower


priority.

2. Two elements with the same priority are processed according to the order
in which they were added to the queue.

 Here, process means two basic operations namely insertion or deletion. There are
various ways of implementing the structure of a priority queue. These are

a. Using a simple/ circular array

b. Multi-queue implementation

c. Using a double linked list, and

d. Using heap tree.

Priority queue using an array


 With this representation, an array can be maintained to hold the item and its
priority value.

 The element can be inserted at the REAR end as usual. The deletion operation
will then be performed in either of the following two ways.

a. Starting from the FRONT pointer, traverse the array for the element of the
highest priority. Delete this element from the queue. If this is not the front-most
element, shift all the trailing elements one position each to fill up the vacant
position. This implementation, however, is very inefficient as it involves searching
the queue for the highest-priority element and shifting the trailing elements after
the deletion.

A better implementation is as follows

b. Add the elements at the REAR end as earlier. Using a stable sorting
algorithm, sort the elements of the queue so that the highest-priority element
is at the FRONT end. When a deletion is required, delete from the FRONT
end only.

The second implementation is comparatively better than the first one; here the
only burden is to sort the elements.
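One simple way to realise the second approach is to slide each new element back from the REAR until it sits behind all elements of higher or equal priority (a stable insertion step), so that the highest-priority element is always at the FRONT and deletion is always from the FRONT. The sketch below assumes that a smaller number means a higher priority; all names and sizes are illustrative assumptions.

#include <stdio.h>

#define MAX 50

struct item { int value; int priority; };

struct item pq[MAX];
int size = 0;

void pq_insert(int value, int priority)
{
    if (size == MAX) { printf("queue full\n"); return; }
    int i = size - 1;
    /* shift items of strictly lower priority one place towards the rear;
       using ">" (not ">=") keeps the insertion stable for equal priorities */
    while (i >= 0 && pq[i].priority > priority) {
        pq[i + 1] = pq[i];
        i--;
    }
    pq[i + 1].value = value;
    pq[i + 1].priority = priority;
    size++;
}

int pq_delete(void)              /* remove the highest-priority item from the FRONT */
{
    if (size == 0) { printf("queue empty\n"); return -1; }
    int v = pq[0].value;
    for (int i = 1; i < size; i++)   /* close the gap at the front */
        pq[i - 1] = pq[i];
    size--;
    return v;
}

int main(void)
{
    pq_insert(10, 2); pq_insert(20, 1); pq_insert(30, 2);
    printf("%d\n", pq_delete());   /* 20 (priority 1)                 */
    printf("%d\n", pq_delete());   /* 10 (priority 2, inserted first) */
    printf("%d\n", pq_delete());   /* 30                              */
    return 0;
}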

Multi-queue implementation
 This implementation assumes N different priority values. For each priority Pi
there are two pointers Fi and Ri corresponding to FRONT and REAR pointer
respectively. The elements between Fi and Ri are all of equal priority value Pi, fig
represents a view of such structure.

 With this representation, an element with priority value Pi will consult Fi for
deletion and Ri for insertion. But this implementation is associated with a number
of difficulties.

1. It may lead to a huge amount of shifting in order to make room for an item to be
inserted.

2. A large number of pointers are involved when the range of priority values
is large.

 In addition to the above, there are two other techniques to represent a multi-queue,
which are shown below.

 It is clear from fig (a) that for each priority value a simple queue is to be
maintained. An element will be added into a particular queue depending on its
priority value.

 The representation shown in fig (b) is in some ways better than the multi-queue with
multiple queues. Here one can get rid of maintaining several FRONT and REAR
pointers for the several queues. The multi-queue with multiple queues has
one advantage: one can have different queues of arbitrary length. In some
applications it is seen that the number of occurrences of elements with some
priority value is much larger than for other values, thus demanding a queue of
larger size.

Linked list representation of priority queue


 This representation assumes the node structure as shown in the figure.
 LLINK and RLINK are the two usual link fields, DATA is to store the actual content and
PRIORITY is to store the priority value of the item.
 FRONT and REAR are two pointers pointing to the first and last nodes in the queue,
respectively.
 All the nodes are kept in sorted order according to the priority values of the items in
the nodes.
 With this structure, to delete an item having priority p, the list is searched
starting from the node under the pointer REAR, and the first node found with
priority = p is deleted.
 Similarly, to insert a node containing an item, the search begins from the node
under the pointer q->next (the first node of the list), and the node is inserted before the
first node found with priority value p, or, if no such node is found, before the node with
the next priority value.

Linked List implementation of priority Queue


#include <stdio.h>
#include <stdlib.h>

struct node
{
    int priority;
    int info;
    struct node *next;
};
struct node *q;     /* header (dummy) node; q->next points at the first element */

void insert(int added_item, int item_priority)
{
    struct node *tmp, *p;

    tmp = (struct node *)malloc(sizeof(struct node));
    if (tmp == NULL)
    {
        printf("out of space");
        return;
    }
    tmp->info = added_item;
    tmp->priority = item_priority;

    /* new node goes at the front if the list is empty or it has the best priority */
    if (q->next == NULL || item_priority < q->next->priority)
    {
        tmp->next = q->next;
        q->next = tmp;
    }
    else
    {
        p = q;
        while (p->next != NULL && p->next->priority <= item_priority)
            p = p->next;
        tmp->next = p->next;
        p->next = tmp;
    }
}

void del()
{
    struct node *tmp;

    if (q->next == NULL)
        printf("Queue Underflow");
    else
    {
        tmp = q->next;          /* highest-priority node is at the front */
        q->next = tmp->next;
        free(tmp);
    }
}
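A small driver for the insert() and del() routines above; the allocation of the dummy header node q and the sample values are assumptions added for this sketch.

int main(void)
{
    q = (struct node *)malloc(sizeof(struct node));
    q->next = NULL;                /* empty priority queue               */

    insert(10, 2);                 /* value 10, priority 2               */
    insert(20, 1);                 /* value 20, priority 1               */
    insert(30, 2);                 /* same priority as 10, added later   */

    del();                         /* removes 20 (highest priority)      */
    del();                         /* removes 10 (added before 30)       */
    return 0;
}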

IMPLEMENTATION OF DEQUEUE

The Deque is a short form of DOUBLE-ENDED QUEUE and defines a data structure in
which items can be added or deleted at either the front or the rear, but no changes can be
made elsewhere in the list. Thus a Deque is a generalization of both a stack and a
queue.

Fig shows the representation of a Deque:

  Insertion/Deletion →  [ 12 | 53 | 61 | 24 | 19 ]  ← Insertion/Deletion
                          Front                Rear
Inserting Element:

Position : End of Deque

Steps to be Followed:-
1. Check whether the Deque is full or not .
2. Check whether the element is the first to be added.
a. If it is true then set the values of front and rear to zero.
3. Insert the given element.
Position : Beginning of Deque
Steps to be Followed:-
1. Check whether the Deque is full or not.
2. Check whether the element is the first to be added.
   a. If it is true, then set the values of front and rear to zero.
   b. If false, then do the following steps:
      1. To shift the element(s) to the right, first obtain the total
         number of elements present in the Deque.
      2. Next obtain the position of the last element in the Deque,
         i.e., the element at the rear end.
      3. Insert the given element in the 0th position.
Deleting Element:
Position : Beginning of Deque
Steps to be Followed:-
1. Check whether the Deque is empty or not.
2. An element is removed from the front position.
3. The index of the next element is stored in the front pointer,
   so the value of front is incremented by one.
Position : End of Deque
Steps to be Followed:-
1. Check whether the Deque is empty or not.
2. An element is removed from the end position.
3. The index of the previous element is stored in the rear pointer,
   so the value of rear is decremented by one.
Consider a situation when the value of rear has reached MAX - 1 (the maximum size of the
array) but the value of front is greater than zero, i.e., the front pointer is pointing at some
non-zero location. It is then not possible to insert an element at the rear end directly, even
though free space exists towards the front; to make room, the existing elements would have
to be shifted to the left.
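A hedged sketch of inserting at the beginning of an array deque by shifting the existing elements one place to the right, as in the steps listed earlier; the array name, its size, and the convention that elements occupy positions 0..rear are assumptions made for this sketch.

#include <stdio.h>

#define MAX 20

int deque[MAX];
int rear = -1;                        /* elements occupy 0..rear; -1 means empty */

void insert_front(int item)
{
    if (rear == MAX - 1) {            /* no room at the right end */
        printf("deque is full\n");
        return;
    }
    for (int i = rear; i >= 0; i--)   /* shift every element one place right */
        deque[i + 1] = deque[i];
    deque[0] = item;                  /* the new element goes into the 0th position */
    rear++;
}

int main(void)
{
    insert_front(7);
    insert_front(5);                          /* deque now holds: 5 7 */
    printf("%d %d\n", deque[0], deque[1]);    /* prints 5 7 */
    return 0;
}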

There are two variations of a Deque. These are:


 Input – Restricted Deque
 Output – Restricted Deque
Input – Restricted Deque:
An input-restricted Deque restricts the insertion of elements to one end only, but the
deletion of elements can be done at both ends of the Deque.

              [ 12 | 53 | 61 | 24 | 19 ]
  Deletion ←  Front               Rear  → Deletion
                                  Rear  ← Insertion

Fig shows the representation of input – Restricted Deque


Output – Restricted Deque:
An output-restricted Deque restricts the deletion of elements to one end only, but the
insertion of elements can be done at both ends of the Deque.

               [ 12 | 53 | 61 | 24 | 19 ]
  Insertion →  Front               Rear  ← Insertion
  Deletion  ←  Front
Fig shows the representation of output – Restricted Deque

 Allocate the memory location using the malloc function; if the queue is empty
then the front and the rear point to the same element.
 For the deletion operation, check if the queue has only one element; if so, perform
the deletion. If the queue contains more elements, then perform the deletion accordingly.
 For the display operation, use a while loop and check whether x->next != NULL; if so,
print the elements.
 For the display operation, if rear == NULL then print that the queue is empty.

Program:

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};
struct node *front = NULL, *rear = NULL, *x;

void enqfro(int item)        /* insert at the front end */
{
    x = (struct node *)malloc(sizeof(struct node));
    x->data = item;
    if (front == NULL)       /* empty deque: new node is both front and rear */
    {
        x->next = NULL;
        front = x;
        rear = x;
    }
    else
    {
        x->next = front;     /* link the new node before the current front */
        front = x;
    }
}

void deqrea()                /* delete from the rear end */
{
    struct node *p;
    if (rear == NULL)
        printf("Queue is empty");
    else if (front == rear)  /* only one node in the deque */
    {
        printf("%d", rear->data);
        free(rear);
        front = rear = NULL;
    }
    else
    {
        p = front;
        while (p->next != rear)   /* find the node just before rear */
            p = p->next;
        printf("%d", rear->data);
        free(rear);
        rear = p;
        rear->next = NULL;
    }
}

void enqrea()                /* insert at the rear end */
{
    x = (struct node *)malloc(sizeof(struct node));
    printf("\nenter the data");
    scanf("%d", &x->data);
    x->next = NULL;
    if (rear == NULL)        /* empty deque */
    {
        rear = x;
        front = x;
    }
    else
    {
        rear->next = x;      /* link after the current rear */
        rear = x;
    }
}

void deqfro()                /* delete from the front end */
{
    if (front == NULL)
        printf("Queue is empty");
    else
    {
        x = front;
        printf("deleted element is %d", x->data);
        front = x->next;
        if (front == NULL)
            rear = NULL;     /* deque became empty */
        free(x);
    }
}

void display2()              /* display from front to rear */
{
    if (front == NULL)
        printf("queue is empty");
    else
    {
        x = front;
        while (x != NULL)
        {
            printf("%d ", x->data);
            x = x->next;
        }
    }
}

/* Printing from rear to front cannot follow the next links backwards in a
   singly linked list, so a small recursive helper is used here. */
void printrev(struct node *p)
{
    if (p == NULL)
        return;
    printrev(p->next);       /* print the later nodes first */
    printf("%d ", p->data);
}

void display1()              /* display from rear to front */
{
    if (rear == NULL)
        printf("queue is empty");
    else
        printrev(front);
}
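A small driver for the deque routines above; it is an assumption added for illustration and is not part of the original listing.

int main(void)
{
    enqfro(10);      /* deque: 10                                       */
    enqfro(20);      /* deque: 20 10                                    */
    enqrea();        /* reads a value, say 30: deque becomes 20 10 30   */
    display2();      /* prints the deque from front to rear             */
    deqfro();        /* removes 20 from the front                       */
    deqrea();        /* removes 30 from the rear                        */
    display1();      /* prints the remaining deque from rear to front   */
    return 0;
}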

MULTIPLE STACKS AND QUEUES:


Till now, we have been concerned only with the sequential data representation of a single
stack or a single queue in the memory of a computer using array.

Let us now consider the data representation needed for several stacks and queues. Here also
we are going to use an array v(1:m) for sequential mapping.

Consider the case where we have two stacks to represent. Here we can use v(1) for the bottom-most
element of stack 1 and v(m) for the corresponding element of stack 2. Stack 1 can
grow towards v(m) and stack 2 towards v(1), so that the available space is efficiently
utilized.

We cannot use the same procedure for representing more than two stacks, because in a
one-dimensional array we have only two fixed points, v(1) and v(m), at which a
bottom-most element can be anchored.

In this case, when more than two stacks, say n, are to be represented sequentially, we can
initially divide the available memory v(1:m) into n segments and allocate one of these
segments to each of the n stacks. This division of v(1:m) into segments may be done in
proportion to the expected sizes of the various stacks, if the sizes are known.

In the absence of such information, v(1:m) may be divided into equal segments. For each
stack i we shall use B(i) to represent a position one less than the position in v of the
bottom-most element of that stack, and T(i), 1 ≤ i ≤ n, will point to the topmost element of
stack i. We use the boundary condition B(i) = T(i) if the ith stack is empty.

If the ith stack grows at lower memory indexes than the (i+1)st stack, then with equal
initial segments we have the initial values

    B(i) = T(i) = ⌊m/n⌋ (i - 1),   1 ≤ i ≤ n

as shown in the following figure.


  1            ⌊m/n⌋          2⌊m/n⌋                         m
  B(1)          B(2)           B(3)           ...          B(n+1)
  T(1)          T(2)           T(3)

FIG: Initial configuration for n stacks in v(1:m). All stacks are empty and the memory is
divided into equal segments.
Stack i, 1 ≤ i ≤ n, can grow from B(i)+1 up to B(i+1) before it catches up with the (i+1)st
stack. Moreover, B(n+1) = m.

Using this definition, the add and delete algorithms become:

procedure ADD(i, X)
// add element X to the ith stack, 1 ≤ i ≤ n //
    if T(i) = B(i+1) then [call STACK_FULL(i); return]
    T(i) ← T(i) + 1
    V(T(i)) ← X
end ADD

procedure DELETE(i, X)
// delete the topmost element of stack i //
    if T(i) = B(i) then [call STACK_EMPTY(i); return]
    X ← V(T(i))
    T(i) ← T(i) - 1
end DELETE
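A hedged C sketch of the same n-stacks-in-one-array scheme is given below; the array names, the fixed sizes and the simplified error handling are assumptions, and the repacking normally done by STACK_FULL (described next) is reduced here to a message.

#include <stdio.h>

#define M 12              /* total memory cells              */
#define N 3               /* number of stacks sharing them   */

int v[M + 1];             /* v[1..M], keeping the 1-based indexing of the notes */
int B[N + 2], T[N + 1];

void init(void)
{
    for (int i = 1; i <= N; i++)
        B[i] = T[i] = (M / N) * (i - 1);   /* equal initial segments */
    B[N + 1] = M;
}

void add(int i, int x)
{
    if (T[i] == B[i + 1]) { printf("stack %d full\n", i); return; }
    v[++T[i]] = x;        /* advance the top of stack i and store x there */
}

int del(int i)
{
    if (T[i] == B[i]) { printf("stack %d empty\n", i); return -1; }
    return v[T[i]--];     /* return the top of stack i and move the top down */
}

int main(void)
{
    init();
    add(1, 7); add(2, 8); add(3, 9);
    printf("%d %d %d\n", del(1), del(2), del(3));   /* prints 7 8 9 */
    return 0;
}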

The ADD/DELETE scheme above works well for only one or two stacks, since the STACK_FULL
condition in algorithm ADD does not imply that all m locations of v are in use. In fact,
there may be a lot of unused space between stacks j and j+1 for some 1 ≤ j ≤ n with j ≠ i,
as shown in the following figure:

  B(1) T(1)   B(2) T(2)   ...   B(i)  T(i) = B(i+1)   T(i+1)  ...  T(j)  B(j+1)  ...  B(n+1)

Fig: Configuration when stack i meets stack i+1, but there is still free space elsewhere in v.

The procedure STACK_FULL(i) should therefore determine whether there is any free
space elsewhere in v and, if so, shift the stacks around so as to make some of this free
space available to the ith stack. The primary objective of algorithm STACK_FULL is to
permit the adding of elements to a stack so long as there is some free space anywhere in v.
PROCEDURE:
a) Determine the least j, i < j ≤ n, such that there is free space between stacks j and j+1,
   that is, T(j) < B(j+1). If there is such a j, then move stacks i+1, i+2, ..., j one position to
   the right, thereby creating space between stacks i and i+1.
b) If there is no j as in (a), then look to the left of stack i. Find the largest j such that
   1 ≤ j < i and there is space between stacks j and j+1, that is, T(j) < B(j+1). If there is
   such a j, then move stacks j+1, j+2, ..., i one position to the left, creating free space between
   stacks i and i+1. If there is no j satisfying either the condition of (a) or of (b), then all m
   spaces of v are utilized and there is no free space left.

LINKED QUEUE:
In a queue, items are deleted from the front and inserted at the rear. Let a
pointer to the first element of the list represent the front of the queue, and another pointer
to the last element of the list represent the rear of the queue, as shown in the
following figure. ALGORITHM: REFER CLASS NOTES
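Since the algorithm itself is left to the class notes, a minimal sketch of the linked queue described above is given here; the node layout and the names are assumptions made for this sketch.

#include <stdio.h>
#include <stdlib.h>

struct qnode {
    int data;
    struct qnode *next;
};

struct qnode *qfront = NULL, *qrear = NULL;

void enqueue(int x)
{
    struct qnode *p = malloc(sizeof *p);
    p->data = x;
    p->next = NULL;
    if (qrear == NULL)          /* empty queue: the new node is both ends */
        qfront = qrear = p;
    else {
        qrear->next = p;        /* append after the current rear          */
        qrear = p;
    }
}

int dequeue(void)
{
    if (qfront == NULL) { printf("queue underflow\n"); return -1; }
    struct qnode *p = qfront;
    int x = p->data;
    qfront = p->next;           /* advance front past the removed node    */
    if (qfront == NULL)
        qrear = NULL;           /* the queue became empty                 */
    free(p);
    return x;
}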

APPLICATIONS OF A LIST
 In order to store and process data, linked lists are very popular data structures.
 This type of data structure holds certain advantages over arrays.

Demerits of arrays and Merits of linked lists


 First, in the case of an array, data are stored in contiguous memory locations, so
insertion and deletion operations are quite time consuming: on insertion of a new
element at a desired location, all the trailing elements must be shifted down;
similarly, on deletion, in order to fill the location of the deleted element, all the
trailing elements must be shifted upwards. In a linked list, however, it is a matter of
only a change of pointers.
 Second, an array is based on static allocation of memory: the amount of memory
required for an array must be known beforehand, and once it is allocated its size
cannot be expanded. This is why, for an array, the general practice is to allocate
much more memory than will actually be used, which is simply a wastage of memory
space. This problem does not arise with a linked list. A linked list uses a dynamic
memory management scheme; memory allocation is decided during run time, as and
when required. Also, if memory is no longer required, it can be returned to the free
storage pool, so that another module or program can utilize it.
 Third, a program using an array may not execute even though the required amount of
memory is available, because the free memory is dispersed rather than contiguous. As
linked structures do not necessarily require data to be stored in adjacent memory
locations, a program of that kind, if written using a linked list, can still be executed.

Demerits of linked lists


 However, there are obviously some disadvantages: one is the pointer business.
Pointers, if not managed carefully, may lead to serious errors in execution.

 Next, linked lists consume more space than the space for the actual data, as the links
among the nodes have to be maintained.

Five examples are provided that use linked lists.


 The first is a simple way to represent single-variable polynomials.
 The second is a method to sort in linear time, for some special cases.
 Thirdly, linked lists might be used to keep track of course registration at a
university.
 The next application is the representation of sparse matrix
 The last application is Dynamic Storage management

The Polynomial ADT

 An abstract data type for single-variable polynomials (with nonnegative


exponents) can be defined by using a list.
 If most of the coefficients ai are nonzero, use a simple array to store the
coefficients. Routines can be written to perform addition, subtraction,
multiplication, differentiation, and other operations on these polynomials. In this
case, use the type declarations below.
 Two possibilities are addition and multiplication.
 Ignoring the time to initialize the output polynomials to zero, the running time of
the multiplication routine is proportional to the product of the degrees of the two
input polynomials. This is adequate for dense polynomials, where most of the
terms are present, but if p1(x) = 10x^1000 + 5x^14 + 1 and p2(x) = 3x^1990 - 2x^1492 + 11x
+ 5, then the running time is likely to be unacceptable.
 Most of the time is spent multiplying zeros and stepping through what amounts to
nonexistent parts of the input polynomials. This is always undesirable.
typedef struct
{
int coeff_array[ MAX_DEGREE+1 ];
unsigned int high_power;
} *POLYNOMIAL;
Type declarations for array implementation of the polynomial ADT
 An alternative is to use a singly linked list. Each term in the polynomial is
contained in one cell, and the cells are sorted in decreasing order of exponents.
For instance, the linked lists in represent p1(x) and p2(x).

APPLICATION OF LINKED LIST:

POLYNOMIAL ADDITION:

Polynomial addition is represented as

    A(x) = a_m x^(e_m) + a_(m-1) x^(e_(m-1)) + ... + a_1 x^(e_1)

where the a_i are non-zero coefficients with exponents e_i such that e_m > e_(m-1) > ... > e_2 > e_1 ≥ 0.
Each term will be represented by a node. A node will be of fixed size, having three fields
which hold the coefficient and exponent of a term plus a pointer to the next term.

| COEF | EXP | LINK |

For instance, the polynomial A would be stored as shown in the figure, while B would look
like the figure that follows.



In order to add two polynomials together, we examine their terms starting at the nodes
pointed to by A and B. Two pointers, p and q, are used to move along the terms of A and B.
If the exponents of the two terms are equal, then the coefficients are added and a
new term is created for the result.
If the exponent of the current term of A is less than the exponent of the current term of B, then a
duplicate of the term of B is created and attached to C, and the pointer q is advanced to the
next term.
A similar action is taken on A if EXP(p) > EXP(q). If one polynomial gets exhausted, we
exit the loop, and the remaining terms of the other can be copied directly into the result.

The following figure illustrates this addition procedure on the polynomials A and B given
above.

Each time a new node is generated, its COEF and EXP fields are set and it is appended
to the end of the list C. In order to avoid having to search for the last node in C each time
a new node is added, we keep a pointer d which points to the current last node in C.

Procedure PADD specifies the algorithm to add two polynomials. PADD makes use of a
subroutine ATTACH, which creates a new node and appends it to the end of C. In order to
avoid extra checks, C is initially given a single node with no values, which is
deleted at the end of the algorithm.

procedure ATTACH(C, E, d)
// create a new term with COEF = C and EXP = E and attach it to the node pointed at by d //
    call GETNODE(I)
    EXP(I) ← E
    COEF(I) ← C
    LINK(d) ← I        // attach this node to the end of the list //
    d ← I              // move pointer d to the new last node //
end ATTACH
procedure PADD(A, B, C)
// polynomials A and B, represented as singly linked lists, are summed to form the new list named C //
1.  p ← A; q ← B                               // p, q point to the next terms of A and B //
2.  call GETNODE(C); d ← C                     // initial node for C, returned later //
3.  while p ≠ 0 and q ≠ 0 do                   // while there are more terms in A and B //
4.    case
5.      :EXP(p) = EXP(q):                      // equal exponents //
6.         x ← COEF(p) + COEF(q)
7.         if x ≠ 0 then call ATTACH(x, EXP(p), d)
8.         p ← LINK(p); q ← LINK(q)            // advance to the next terms //
9.      :EXP(p) < EXP(q):
10.        call ATTACH(COEF(q), EXP(q), d)     // copy the term of B //
11.        q ← LINK(q)
12.      :else:
13.        call ATTACH(COEF(p), EXP(p), d)     // copy the term of A //
14.        p ← LINK(p)
15.    end
16. end
17. while p ≠ 0 do                             // copy the remaining terms of A //
18.   call ATTACH(COEF(p), EXP(p), d)
19.   p ← LINK(p)
20. end
21. while q ≠ 0 do                             // copy the remaining terms of B //
22.   call ATTACH(COEF(q), EXP(q), d)
23.   q ← LINK(q)
24. end
25. LINK(d) ← 0; t ← C; C ← LINK(C)
26. call RET(t)                                // return the extra initial node to free storage //
27. end PADD
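For comparison with the pseudocode above, here is a hedged C rendering of ATTACH and PADD; the node layout, the use of malloc in place of GETNODE/RET, and all names are assumptions made for this sketch.

#include <stdio.h>
#include <stdlib.h>

struct term {
    int coef;
    int exp;
    struct term *link;
};

/* Append a new term (coef, exp) after node d; return the new last node. */
struct term *attach(int coef, int exp, struct term *d)
{
    struct term *t = malloc(sizeof *t);
    t->coef = coef;
    t->exp = exp;
    t->link = NULL;
    d->link = t;
    return t;
}

/* Return C = A + B.  A and B point at the first (highest-exponent) term. */
struct term *padd(struct term *a, struct term *b)
{
    struct term head = { 0, 0, NULL };   /* extra initial node, dropped at the end */
    struct term *d = &head;

    while (a != NULL && b != NULL) {
        if (a->exp == b->exp) {                    /* equal exponents      */
            int x = a->coef + b->coef;
            if (x != 0)
                d = attach(x, a->exp, d);
            a = a->link;  b = b->link;
        } else if (a->exp < b->exp) {              /* copy the term of B   */
            d = attach(b->coef, b->exp, d);
            b = b->link;
        } else {                                   /* copy the term of A   */
            d = attach(a->coef, a->exp, d);
            a = a->link;
        }
    }
    for (; a != NULL; a = a->link) d = attach(a->coef, a->exp, d);  /* rest of A */
    for (; b != NULL; b = b->link) d = attach(b->coef, b->exp, d);  /* rest of B */
    return head.link;
}

int main(void)
{
    /* A = 3x^4 + 2x^2,  B = 5x^2 + 7 */
    struct term b0 = { 7, 0, NULL };
    struct term b1 = { 5, 2, &b0 };
    struct term a0 = { 2, 2, NULL };
    struct term a1 = { 3, 4, &a0 };

    for (struct term *t = padd(&a1, &b1); t != NULL; t = t->link)
        printf("%+dx^%d ", t->coef, t->exp);    /* prints +3x^4 +7x^2 +7x^0 */
    printf("\n");
    return 0;
}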

COMPUTING TIME:
In order to carry out a computing time analysis, it is first necessary to determine which
operations contribute to the cost of this algorithm. There are several cost measures:

1. Coefficient additions (COEF + COEF)
2. Exponent comparisons (EXP)
3. Creation of new nodes for C (GETNODE)

Let us assume that each of these operations, if done once, takes a single unit of time.
The total time taken by algorithm PADD is then determined by the number of times
these operations are performed.
This number clearly depends on how many terms are present in the polynomials A and B.
Assume that A and B have m and n terms respectively. Then:

1. The number of coefficient additions can vary as 0 ≤ coefficient additions ≤ min{m, n}.
   The lower bound is achieved when none of the exponents are equal, while the upper
   bound is achieved when the exponents of one of the polynomials are a subset of the
   exponents of the other.
2. As for the exponent comparisons, one comparison is made on each iteration of the
   main while loop (lines 3-16). On each iteration either p or q or both move to the next
   term in their respective polynomials. Since the total number of terms is m + n, the
   number of iterations, and hence the number of exponent comparisons, is bounded by m + n.
3. The maximum number of terms in C is m + n, and so no more than m + n new nodes
   are created, apart from the initial node of C.

Hence, the maximum number of executions of any of the statements in PADD is bounded
above by m + n. Therefore, the computing time is O(m + n).

SPARSE MATRIX:
We have already seen one method of storing sparse matrices, i.e., by using a 3-tuple for
each element.
As matrix operations such as addition, subtraction and multiplication are performed, the
number of non-zero terms in the matrix changes, so terms need to be added or deleted. That
requires a lot of data movement in a static storage scheme.
An improvement over this would be to store the non-zero elements as a linked list of
3-tuples instead of using an array.
In this representation, each column of a sparse matrix will be represented by a
circularly linked list with a head node. In addition, each row will also be a circularly
linked list with a head node. Each node in the structure other than a head node
represents a non-zero term of the matrix A and is made up of 5 fields:
ROW, COL, DOWN, RIGHT and VALUE.
Here, the DOWN field is used to link to the next non-zero element in the same
column, while the RIGHT field is used to link to the next non-zero element in the
same row.
Thus, if aij ≠ 0, then there will be a node with VALUE field aij, ROW field i and COL field
j; this node will be linked into the circular linked list for row i and also into the circular
linked list for column j. It will therefore be a member of two lists at the same time.

In order to avoid having nodes of two different sizes in the system, we shall assume the head
nodes to be configured exactly like the nodes being used to represent the non-zero terms of
the sparse matrix. The ROW and COL fields of head nodes are set to zero, assuming
that valid index values are > 0.
FIG 2 shows the structure obtained for the 6*7 matrix A of FIG 1:

0 0 11 0 0 13 0
12 0 0 0 0 0 14

0 –4 0 0 0 –8 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 –9 0 0 0 0 0
FIG1: 6*7 sparse Matrix A

The head nodes are marked M1-M7. As can be seen from the figure, the VALUE field of
the head node of each column list is used to link to the next head node, while the
DOWN field links to the first non-zero term in that column, or to itself if there is no
non-zero term in that column. This leaves the RIGHT field of a column head node
unutilised; since the RIGHT field is what the row head nodes use, it is possible to use the
same node as the head node for row i and for column i.
The head nodes themselves, linked through the VALUE field, form a circular linked list
with a head node pointed to by A. This head node for the list of row and column head
nodes contains the dimensions of the matrix,
i.e., ROW(A) is the number of rows in the matrix and COL(A) is the number of columns.

If we wish to represent an n*m sparse matrix with r non-zero terms, then the number of
nodes needed is r + max{n, m} + 1. For the 6*7 matrix above, with 7 non-zero terms, this
is 7 + max{6, 7} + 1 = 15 nodes.

OPERATION ON SPARSE MATRIX:


Let us now look at an algorithm to read in a sparse matrix and set up its linked list
representation.

procedure MREAD(A)
1.  read(n, m, r)                                   // n rows, m columns, r non-zero terms //
2.  p ← max{m, n}
3.  for i ← 1 to p do                               // get p head nodes for rows and columns //
4.    call GETNODE(x); HDNODE(i) ← x
5.    ROW(x) ← COL(x) ← 0
6.    RIGHT(x) ← VALUE(x) ← x                       // these fields point to the node itself //
7.  end
8.  current_row ← 1; LAST ← HDNODE(1)
9.  for i ← 1 to r do
10.   read(rrow, ccol, val)                         // get the next triple //
11.   if rrow > current_row                         // current row is done; close it and begin another //
12.   then [RIGHT(LAST) ← HDNODE(current_row)
13.         current_row ← rrow; LAST ← HDNODE(rrow)]    // LAST points to the rightmost node of the row //
14.   call GETNODE(x)
15.   ROW(x) ← rrow; COL(x) ← ccol; VALUE(x) ← val      // store the triple in a new node //
16.   RIGHT(LAST) ← x; LAST ← x                         // link into the row list //
17.   DOWN(VALUE(HDNODE(ccol))) ← x                     // link into the column list //
18.   VALUE(HDNODE(ccol)) ← x
19. end
20. if r ≠ 0 then RIGHT(LAST) ← HDNODE(current_row)     // close the last row //
21. for i ← 1 to m do                               // close all column lists //
22.   DOWN(VALUE(HDNODE(i))) ← HDNODE(i)
23. end
24. call GETNODE(A); ROW(A) ← n; COL(A) ← m         // set up the head node of the matrix //
25. for i ← 1 to p-1 do VALUE(HDNODE(i)) ← HDNODE(i+1)
26. end                                             // the head nodes are linked through the VALUE field //
27. if p = 0 then VALUE(A) ← A
28. else [VALUE(HDNODE(p)) ← A
29.       VALUE(A) ← HDNODE(1)]
30. end MREAD
Here n is the number of rows, m the number of columns, and r the number of non-zero
terms; the terms are given as input by r triples of the form (i, j, aij). The triples are
ordered by rows and, within each row, by columns.

TRIPLES FOR THE GIVEN 6*7 SPARSE MATRIX:

    i    j    aij
    6    7    7        (the first triple gives n, m and r)
    1    3    11
    1    6    13
    2    1    12
    2    7    14
    3    2    -4
    3    6    -8
    6    2    -9

HDNODE is an array whose size is the larger dimension of the given matrix; HDNODE(i)
is a pointer to the head node for column i, and hence also for row i. Algorithm MREAD
proceeds by first setting up all the head nodes and then setting up each row list,
simultaneously building the column lists. The VALUE field of head node i is initially
used to keep track of the last node in column i.

ANALYSIS OF ALGORITHM MREAD:


Since GETNODE work in a constant amount of time ,all the head node may be set
up in o(Max{n,m})time.Each non-zero term can be set up in constant amount of time
using for loop say 0(r) time.The rest of the algorithm takes 0(max{m,n})time.The total
time is therefore 0(m+n+r).
