Advanced Data Structures

1. A data structure organizes and stores data for access by algorithms. Common linear structures include arrays, stacks, queues, and linked lists, while non-linear structures include trees and graphs. 2. Linked lists allow efficient insertion and deletion by using pointers to change which node comes next. 3. Infix-to-postfix conversion uses a stack to handle operator precedence, while postfix evaluation uses a stack to perform operations in the proper order.

Uploaded by

Neha Rai
Copyright
© Attribution Non-Commercial (BY-NC)

1. What is a data structure? Discuss briefly on types of data structures.

Ans: A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose so that it can be accessed and worked with in appropriate ways. In computer programming, a data structure may be selected or designed to store data for the purpose of working on it with various algorithms. Data structures can be divided into two types: linear data structures and non-linear data structures.

Linear data structure: when data is stored in memory in linear or sequential form, the structure is called a linear data structure. Examples of linear data structures include the array, stack, queue, and linked list.

Non-linear data structure: when data is stored in memory in dispersed or non-sequential order, the structure is called a non-linear data structure. Examples of non-linear data structures include trees, graphs, and files.

2. Explain the insertion and deletion operations of a linked list in detail.

Ans: In the insertion algorithm, ITEM contains the new data to be added to the list. The steps are:
1) Check whether space is available in the AVAIL list. If AVAIL = NULL, the algorithm prints the message OVERFLOW.
2) Remove the first node from the AVAIL list. Using the variable N to keep track of the location of the new node, this step can be implemented by the pair of assignments N := AVAIL, AVAIL := LINK[AVAIL].
3) Copy the new data into the new node: DATA[N] := ITEM.
Figure 3.7 shows how the last two steps are implemented in the free-storage list.

Figure 3.7: Obtaining a new node from the AVAIL list

Deletion is the process of removing a node from the linked list. A node is deleted just by a pointer change. Suppose a node N has to be deleted from between nodes A and B; figures 3.9 (a) and (b) explain this.

The next pointer field of node A now points to node B, where node N previously pointed. Node N is then returned to the free-storage list: its next pointer is set to point where AVAIL previously pointed, and AVAIL is set to point to N.
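As a minimal sketch of the pointer changes described above (in Python, which the text does not prescribe, using object references in place of the AVAIL free-storage list), insertion and deletion might look like this:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def insert_after(node, item):
    """Insert a new node holding `item` immediately after `node`."""
    new = Node(item)          # corresponds to taking a node from AVAIL
    new.next = node.next      # new node points where `node` pointed
    node.next = new           # `node` now points to the new node
    return new

def delete_after(node):
    """Delete the node that follows `node` (node N between A and B)."""
    victim = node.next
    if victim is not None:
        node.next = victim.next   # A now points to B; N is unlinked
    return victim                 # in C, this node would return to AVAIL

# build A -> B, insert N between them, then delete it again
a = Node("A")
b = insert_after(a, "B")
n = insert_after(a, "N")          # A -> N -> B
delete_after(a)                   # back to A -> B
```

In Python the "return to AVAIL" step is implicit: once no reference points at the deleted node, the runtime reclaims it.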

3. Discuss the process of Infix to postfix conversion. Also explain Postfix evaluation. Ans:

Algorithm for conversion of infix to postfix


1. Initialize a STACK as empty.
2. Scan the infix string from left to right. For each character read:
   Operand: append it to the postfix string.
   Operator: if the stack is empty, push the operator onto the stack. Otherwise, while the operator on top of the stack has precedence higher than or equal to the current operator, pop it and append it to the postfix string; then push the current operator onto the stack.
   Left parenthesis: push it onto the STACK.
   Right parenthesis: pop and append everything until the left parenthesis (which is discarded) is reached or the STACK is empty.
3. Repeat step 2 until all characters of the infix string have been read.
4. If the STACK is not empty, pop the top of the stack and append it to the postfix string; repeat until the STACK is empty.
5. Return the postfix string.
6. Exit.
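The steps above can be sketched as follows (a Python illustration, not part of the original text; it assumes single-character tokens and treats all operators as left-associative):

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def infix_to_postfix(expr):
    """Convert a single-character-token infix string to postfix."""
    stack, out = [], []
    for ch in expr:
        if ch.isalnum():                     # operand: append to output
            out.append(ch)
        elif ch == "(":                      # left parenthesis: push
            stack.append(ch)
        elif ch == ")":                      # pop until the matching "("
            while stack and stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()                      # discard the "("
        elif ch in PREC:                     # operator
            while (stack and stack[-1] != "("
                   and PREC[stack[-1]] >= PREC[ch]):
                out.append(stack.pop())
            stack.append(ch)
    while stack:                             # step 4: flush the stack
        out.append(stack.pop())
    return "".join(out)
```

For example, `infix_to_postfix("a+b*c")` yields `"abc*+"`, and `infix_to_postfix("(a+b)*c")` yields `"ab+c*"`.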

Algorithm: Evaluation of Postfix Expression.

1. Add a right parenthesis ) at the end of X. [This acts as a sentinel.]
2. Scan X from left to right and repeat steps 3 and 4 for each element of X until the sentinel ) is encountered.
3. If an operand is encountered, push it onto STACK.
4. If an operator op is encountered, then:
   (a) Remove the two top elements of STACK, where A is the top element and B is the next-to-top element.
   (b) Evaluate B op A.
   (c) Push the result of (b) back onto STACK.
   [End of If structure.] [End of step 2 loop.]
5. Set VALUE equal to the top element of STACK.
6. Exit.
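A minimal sketch of this evaluator (in Python, assuming single-digit operands; note the operand order in step 4, B op A, where A is the top of the stack):

```python
def eval_postfix(expr):
    """Evaluate a postfix expression of single-digit operands."""
    stack = []
    for ch in expr:
        if ch.isdigit():
            stack.append(int(ch))        # operand: push onto STACK
        else:
            a = stack.pop()              # A: top element
            b = stack.pop()              # B: next-to-top element
            if ch == "+":
                stack.append(b + a)      # evaluate B op A, push result
            elif ch == "-":
                stack.append(b - a)
            elif ch == "*":
                stack.append(b * a)
            elif ch == "/":
                stack.append(b / a)
    return stack[-1]                     # step 5: VALUE = top of STACK
```

For example, `eval_postfix("23*4+")` returns 10, and `eval_postfix("562-*")` returns 20.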

4. Explain the different types of traversal on a binary tree. Ans: A full traversal produces a linear order for the information in a tree. This linear order may be familiar and useful.

Inorder traversal
Inorder traversal prints a binary search tree in sorted order. The traversal starts with the leftmost subtree, then prints the parent node, and then proceeds to the right subtree. 1) Traverse the left subtree 2) Visit the root node 3) Traverse the right subtree

Postorder traversal
In postorder traversal, traversal starts with the leftmost subtree, then proceeds to the right subtree, and then prints the parent of those nodes. 1) Traverse the left subtree 2) Traverse the right subtree 3) Visit the root node


Preorder traversal
In preorder traversal, we start with the root node, then proceed through the left subtree, and then through the right subtree. Figure 5.8(iii) shows the traversal pattern using preorder traversal. 1) Visit the root node 2) Traverse the left subtree 3) Traverse the right subtree
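The three traversals above differ only in where the root is visited relative to its subtrees. A compact sketch (in Python, which the text does not prescribe), run on a small binary search tree:

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    # left subtree, root, right subtree -> sorted order for a BST
    return inorder(node.left) + [node.value] + inorder(node.right) if node else []

def preorder(node):
    # root first, then left subtree, then right subtree
    return [node.value] + preorder(node.left) + preorder(node.right) if node else []

def postorder(node):
    # both subtrees first, root last
    return postorder(node.left) + postorder(node.right) + [node.value] if node else []

# a small binary search tree:      4
#                                 / \
#                                2   6
#                               / \
#                              1   3
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(6))
```

Here `inorder(root)` gives [1, 2, 3, 4, 6] (sorted, as the text notes), `preorder(root)` gives [4, 2, 1, 3, 6], and `postorder(root)` gives [1, 3, 2, 6, 4].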

Breadth first traversal


Trees can also be traversed in level-order, where we visit every node on a level before going to a lower level. This is also called breadth-first traversal. Level-order traversal uses a queue to produce the traversal pattern. Initially the queue is empty; the root node is inserted. The root is then removed from the queue and printed, and its left and right children are inserted into the queue. This process continues until the queue is empty.

5. Explain the breadth-first and depth-first search algorithms.

Ans: In graph theory, breadth-first search (BFS) is a strategy for searching a graph when the search is limited to essentially two operations: (a) visit and inspect a node of the graph; (b) gain access to the nodes that neighbour the currently visited node. BFS begins at a root node and inspects all of its neighbouring nodes. Then, for each of those neighbours in turn, it inspects their unvisited neighbours, and so on (compare this with depth-first search). The algorithm uses a queue data structure to store intermediate results as it traverses the graph, as follows:
1. Enqueue the root node.
2. Dequeue a node and examine it. If the element sought is found in this node, quit the search and return a result. Otherwise, enqueue any successors (the direct child nodes) that have not yet been discovered.

3. If the queue is empty, every node in the graph has been examined; quit the search and return "not found".
4. If the queue is not empty, repeat from step 2.
Note: using a stack instead of a queue would turn this algorithm into a depth-first search. Breadth-first search can be used to solve many problems in graph theory, for example finding all nodes within one connected component, and copying collections (Cheney's algorithm).
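The queue-based steps above can be sketched as follows (a Python illustration with an assumed adjacency-dict graph, not part of the original text):

```python
from collections import deque

def bfs(graph, root, target):
    """Breadth-first search; `graph` maps each node to its neighbours."""
    queue = deque([root])            # step 1: enqueue the root
    visited = {root}
    while queue:                     # step 3: empty queue -> not found
        node = queue.popleft()       # step 2: dequeue and examine
        if node == target:
            return True
        for nbr in graph[node]:      # enqueue undiscovered successors
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return False                     # every reachable node was examined

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

With this graph, `bfs(g, "A", "D")` finds the target; replacing the deque with a list used as a stack would, as the note says, turn this into a depth-first search.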

BFS can also be used to find the shortest path between two nodes and to test whether a graph is bipartite. Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking. A version of depth-first search was investigated in the 19th century by the French mathematician Charles Trémaux as a strategy for solving mazes.

A convenient description of a depth first search of a graph is in terms of a spanning tree of the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges.
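The "explore as far as possible, then backtrack" behaviour can be sketched with recursion, where the call stack plays the role of the explicit stack (a Python illustration with an assumed adjacency-dict graph, not part of the original text):

```python
def dfs(graph, node, target, visited=None):
    """Depth-first search: follow one branch fully before backtracking."""
    if visited is None:
        visited = set()
    if node == target:
        return True
    visited.add(node)
    for nbr in graph[node]:
        # recurse into each unvisited neighbour; returning from the
        # call is the "backtracking" step
        if nbr not in visited and dfs(graph, nbr, target, visited):
            return True
    return False

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Starting at "A", this sketch descends A, B, D before ever considering C, whereas BFS would examine B and C before D.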

Vertex orderings
It is also possible to use depth-first search to linearly order the vertices of the original graph (or tree). There are three common ways of doing this: A preordering is a list of the vertices in the order that they were first visited by the depth-first search algorithm. This is a compact and natural way of describing the progress of the search. A preordering of an expression tree gives the expression in Polish notation. A postordering is a list of the vertices in the order that they were last visited by the algorithm. A postordering of an expression tree gives the expression in reverse Polish notation. A reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the opposite order of their last visit. Note that reverse postordering is not the same as preordering.

6. Explain the meaning of dynamic storage management. Also explain the concept of storage release.

Ans: Dynamic storage management is concerned with two basic operations:


Allocate a given number of bytes.
Free a previously allocated block.
There are two general approaches to dynamic storage allocation:
Stack allocation (hierarchical): restricted, but simple and efficient.
Heap allocation: more general, but less efficient and more difficult to implement.

Stack Allocation
A stack can be used when memory allocation and freeing are partially predictable: memory is freed in the opposite order from allocation. If alloc(A), then alloc(B), then alloc(C), the releases will be free(C), then free(B), then free(A). Example: procedure calls, where X calls Y, which calls Y again. Stacks are also useful for many other things: tree traversal, expression evaluation, top-down recursive-descent parsers, etc. A stack-based organization keeps all the free space together in one place.

Heap Allocation
Heap allocation must be used when allocation and release are unpredictable. Memory then consists of allocated areas and free areas (or holes), and we inevitably end up with lots of holes.

Goal: reuse the space in holes to keep the number of holes small and their sizes large. Fragmentation is the inefficient use of memory due to holes that are too small to be useful; stack allocation is perfect in this respect, since there is only one hole. Typically, heap allocation schemes use a free list to keep track of the storage that is not in use. Algorithms differ in how they manage the free list:
Best fit: keep a linked list of free blocks; on each allocation, search the whole list and choose the block that comes closest to matching the needs of the allocation, saving the excess for later. During release operations, merge adjacent free blocks.
First fit: just scan the list for the first hole that is large enough, free the excess, and also merge on releases. Most first-fit implementations are rotating first fit. Best fit is not necessarily better than first fit: first fit tends to leave "average"-size holes, while best fit tends to leave some very large holes and some very small ones, and the very small ones cannot easily be used. Problem: over time, holes tend to fragment, approaching the size of the smallest objects allocated, which makes it hard to allocate large objects.
Bit map: used for managing storage that comes in fixed-size chunks (e.g. disk blocks, or 32-byte chunks). Keep a large array of bits, one per chunk: if a bit is 0, the chunk is in use; if it is 1, the chunk is free.
Pools: keep a separate linked list for each popular size. Allocation is fast and there is no fragmentation.
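A toy first-fit free list can be simulated in a few lines (a Python sketch assuming a 100-byte heap; real allocators work on raw memory and store free-list links inside the holes themselves):

```python
# free list of (offset, size) holes, kept sorted by offset
free_list = [(0, 100)]

def alloc(size):
    """First fit: take the first hole large enough; save the excess."""
    for i, (off, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                     # exact fit
            else:
                free_list[i] = (off + size, hole - size)
            return off
    return None                                      # no hole big enough

def free(off, size):
    """Release a block and merge it with adjacent holes."""
    free_list.append((off, size))
    free_list.sort()
    merged = [free_list[0]]
    for o, s in free_list[1:]:
        lo, ls = merged[-1]
        if lo + ls == o:                             # adjacent: coalesce
            merged[-1] = (lo, ls + s)
        else:
            merged.append((o, s))
    free_list[:] = merged

a = alloc(30)      # block at offset 0
b = alloc(20)      # block at offset 30
free(a, 30)
free(b, 20)        # merging leaves one hole covering the whole heap
```

After both releases the free list is back to a single hole, illustrating why merging on release keeps holes large.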


Storage Reclamation
How do we know when dynamically allocated memory can be freed? Reclamation is easy when a chunk is only used in one place, but hard when information is shared: a chunk cannot be recycled until all of the sharers are finished with it. Sharing is indicated by the presence of pointers to the data; without a pointer, the data cannot be accessed (or even found). There are two problems in reclamation:
Dangling pointers: storage must not be recycled while it is still being used.
Memory leaks: storage gets "lost" because no one freed it, even though it can never be used again.
Reference counts: keep a count of the number of outstanding pointers to each chunk of memory; when the count becomes zero, free the chunk. Examples: Smalltalk, file descriptors in Unix. This works fine for hierarchical structures.
Garbage collection: storage is not freed explicitly (using a free operation) but implicitly: pointers are simply deleted. When the system needs storage, it searches through all of the pointers (it must be able to find them all!) and collects things that are no longer used. If structures are circular, this is the only way to reclaim the space. Most garbage collectors also compact memory, moving objects to coalesce all free space. This makes life easier for the application programmer, but garbage collectors are incredibly difficult to program and debug, especially if compaction is also done. Examples: many modern languages, such as Java.
How does garbage collection work? The collector must be able to find all objects, and all pointers to objects.
Pass 1 (mark): go through all statically allocated and procedure-local variables looking for pointers; mark each object pointed to, and recursively mark all objects it points to. The compiler has to cooperate by saving information about where the pointers are within structures.
Pass 2 (sweep): go through all objects and free up those that are not marked.
Garbage collection is often expensive: 10-20% of all CPU time in systems that use it.
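The mark and sweep passes can be sketched as follows (a Python illustration with hypothetical `Obj`, `mark`, and `sweep` names; a real collector works on raw heap objects with compiler-provided pointer maps):

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []        # pointers to other objects
        self.marked = False

def mark(obj):
    """Pass 1: recursively mark every object reachable from obj."""
    if not obj.marked:
        obj.marked = True
        for r in obj.refs:
            mark(r)

def sweep(heap):
    """Pass 2: keep marked objects, collect the rest, clear the marks."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False      # reset for the next collection cycle
    return live

# a and b are reachable from the root; c and d form an unreachable cycle
a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs.append(b)
c.refs.append(d)
d.refs.append(c)              # a cycle: reference counts alone never
                              # reach zero here, so they cannot free it

mark(a)                       # a is the root
heap = sweep([a, b, c, d])    # only a and b survive
```

The c/d cycle is exactly the case the text mentions: reference counting leaks it, while mark-and-sweep reclaims it because neither node is reachable from the root.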

