
Chapter 4

Dynamic Programming and


Traversal Techniques
Dynamic Programming
 Dynamic programming, like the divide-and-conquer method, solves problems by
combining the solutions to subproblems. (“Programming” in this context refers to a
tabular method, not to writing computer code.)
 As we saw in Chapter 2, divide-and-conquer algorithms partition the problem into
disjoint subproblems, solve the subproblems recursively, and then combine their
solutions to solve the original problem.
 In contrast, dynamic programming applies when the subproblems overlap—that is, when
subproblems share sub-subproblems. In this context, a divide-and-conquer algorithm
does more work than necessary, repeatedly solving the common sub-subproblems.
 A dynamic-programming algorithm solves each sub-subproblem just once and then saves
its answer in a table, thereby avoiding the work of recomputing the answer every time
it solves each sub-subproblem.
 To save the results of subproblems in dynamic programming, either the tabulation or
the memoization technique can be used.
 To grasp the concepts of tabulation and memoization, let us first consider the most
common example in programming, the Fibonacci series. (Try to draw its recursion
tree.)
Memoization
 Also known as the top-down approach, memoization implements the DP algorithm by solving the highest-level
subproblem first and then recursively solving the next sub-problem, and so on.
 Suppose there are two sub-problems, sub-problem A and sub-problem B. When sub-problem B is solved
recursively, it can reuse the solution of sub-problem A, which has already been computed. Since A and all the sub-
problems are memoized, this avoids re-expanding the entire recursion tree generated by B and saves computation time.
 Having a global array for memoization, the Fibonacci algorithm can be modified as:
int memo[n+1];
for (i = 0; i <= n; i++)
    memo[i] = -1;    // -1 marks "not yet computed"

int fibo(int n) {
    if (memo[n] != -1)
        return memo[n];
    if (n <= 1)
        return n;
    return memo[n] = fibo(n-2) + fibo(n-1);
}
Tabulation
 Also known as the bottom-up approach, tabulation implements the DP algorithm by solving the lowest-
level sub-problem first. The solution to the lowest-level sub-problem then helps to solve the next-level
sub-problem, and so forth.
 The sub-problems are solved iteratively, in order, until all of them are solved. This approach saves
time whenever a sub-problem needs the solution of a sub-problem that has already been solved.
 Having a global array for tabulation, the Fibonacci algorithm can be modified as:
int memo[n+1];

int fibo(int n) {
    memo[0] = 0; memo[1] = 1;
    for (i = 2; i <= n; i++)
        memo[i] = memo[i-2] + memo[i-1];
    return memo[n];
}
Multistage graphs, all pairs shortest
paths
 A Multistage graph is a directed, weighted graph in
which the nodes can be divided into a set of stages
such that all edges are from a stage to next stage
only (In other words there is no edge between
vertices of same stage and from a vertex of
current stage to previous stage).
 The vertices of a multistage graph are divided into
n disjoint subsets S = {S1, S2, S3, ..., Sn},
where S1 is the source and Sn is the
sink (destination). The cardinality of S1 and Sn is
equal to 1, i.e., |S1| = |Sn| = 1.
We are given a multistage graph, a source, and a
destination, and we need to find the shortest path from
source to destination. By convention, we consider the
source to be at stage 1 and the destination at the last stage.
Following is an example graph we will consider:
Now there are various strategies we can apply:
 The Brute force method of finding all possible paths between Source and Destination
and then finding the minimum. That’s the WORST possible strategy.
 Dijkstra’s Algorithm of Single Source shortest paths. This method will find shortest
paths from source to all other nodes which is not required in this case. So it will take a
lot of time and it doesn’t even use the SPECIAL feature that this MULTI-STAGE graph
has.
 Simple Greedy Method – At each node, choose the shortest outgoing path. If we apply
this approach to the example graph given above, we get the solution 1 + 4 + 18 = 23.
But a quick look at the graph will show much shorter paths available than 23. So the
greedy method fails!
 The best option is dynamic programming. So we need to find the optimal sub-structure,
the recursive equations, and the overlapping sub-problems.
Optimal binary search trees
 In computer science, an optimal binary search tree (optimal BST),
sometimes called a weight-balanced binary tree, is a binary search tree
which provides the smallest possible search time (or expected search time)
for a given sequence of accesses (or access probabilities). Optimal BSTs are
generally divided into two types: static and dynamic.
 In the static optimality problem, the tree cannot be modified after it has
been constructed. In this case, there exists some particular layout of the
nodes of the tree which provides the smallest expected search time for the
given access probabilities. Various algorithms exist to construct or
approximate the statically optimal tree given the information on the access
probabilities of the elements.
In the dynamic optimality problem, the tree can be modified at any time,
typically by permitting tree rotations. The tree is considered to have a
cursor starting at the root which it can move or use to perform
modifications. In this case, there exists some minimal-cost sequence of
these operations which causes the cursor to visit every node in the target
access sequence in order. The splay tree is conjectured to have a
constant competitive ratio compared to the dynamically optimal tree in all
cases, though this has not yet been proven.
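For the static optimality problem, the standard dynamic-programming recurrence can be sketched as follows. This is a minimal illustration assuming the keys are already sorted and only the access probabilities `p[i]` of successful searches are given; the function name and the O(n³) formulation are assumptions, not from the text:

```python
def optimal_bst_cost(p):
    """p[i] = access probability of key i (keys assumed sorted).
    e[i][j] = minimum expected search cost of a BST over keys i..j."""
    n = len(p)
    # Prefix sums so that the total weight of keys i..j is available in O(1).
    pre = [0.0] * (n + 1)
    for i in range(n):
        pre[i + 1] = pre[i] + p[i]

    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        e[i][i] = p[i]                      # single-key tree

    for length in range(2, n + 1):          # solve longer ranges from shorter ones
        for i in range(n - length + 1):
            j = i + length - 1
            w = pre[j + 1] - pre[i]         # every key in i..j gets one level deeper
            best = float("inf")
            for r in range(i, j + 1):       # try every key as the root
                left = e[i][r - 1] if r > i else 0.0
                right = e[r + 1][j] if r < j else 0.0
                best = min(best, left + right + w)
            e[i][j] = best
    return e[0][n - 1]
```

Each subrange of keys is solved once and reused, which is the overlapping-subproblem structure that makes the tabulation worthwhile.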
0/1 Knapsack
 A knapsack is a bag carried on the back or over
the shoulder, used especially by people who go walking or climbing for
carrying food, clothes, etc.
 Knapsack basically means a bag of given capacity. We want to pack n
items in our luggage, where the i-th item is worth vi dollars and weighs wi pounds.
We take as valuable a load as possible, but its total weight cannot exceed W pounds;
vi, wi, and W are integers. We are given a list (array) of weights and their corresponding values.
 In 0-1 Knapsack, items cannot be broken which means the thief should take
the item as a whole or should leave it. This is reason behind calling it as 0-1
Knapsack. Hence, in case of 0-1 Knapsack, the value of xi can be either 0 or
1, where other constraints remain the same.
What is 0/1 knapsack problem?
Knapsack Problem. Definition - What does Knapsack Problem mean? The knapsack
problem is an optimization problem used to illustrate both problem and solution. It
derives its name from a scenario where one is constrained in the number of items
that can be placed inside a fixed-size knapsack.
Fractional Knapsack: the fractional knapsack problem can be solved by a
greedy strategy, whereas the 0/1 problem cannot; the 0/1 problem requires the
dynamic programming approach. In the 0/1 problem an item cannot be broken, which
means the thief should take the item as a whole or should leave it. That is why it
is called the 0/1 knapsack problem: each item is either taken or not taken.
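A minimal tabulation sketch of the 0/1 knapsack, assuming integer weights and capacity `W`; the function name and the one-dimensional table (a space optimization of the usual n×W table) are illustrative choices:

```python
def knapsack_01(values, weights, W):
    """dp[w] = best total value achievable with capacity w."""
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once
        # (the 0/1 constraint: take the whole item or leave it).
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]
```

Iterating capacities upward instead would allow an item to be reused, turning this into the unbounded knapsack; the downward loop is what enforces the 0/1 choice.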
Reliability design
 The reliability of a product is strongly influenced by decisions made during
the design process.
 The key message here is that reliability occurs at the point of decision. Each time
someone makes a decision, selects a component, chooses a material, or assumes
a use profile, the eventual product reliability takes shape.
 Design for Reliability, DfR, is about making good decisions across the
organization concerning reliability.
The Design for Reliability process
Travelling salesman problem
 Travelling salesman problem is not new for delivery-based businesses. Its
recent expansion has insisted that industry experts find optimal solutions in
order to facilitate delivery operations.  
 The major challenge is to find the most efficient routes for performing multi-
stop deliveries. Without the shortest routes, your delivery agent will take
more time to reach the final destination. Sometimes problems may arise if
you have multiple route options but fail to recognize the efficient one. 
 Eventually, the travelling salesman problem would cost your time and result in
late deliveries. So, before it becomes an irreparable issue for your business,
let us understand the travelling salesman problem and find optimal solutions.
What is the Travelling Salesman
Problem (TSP)?
The Travelling Salesman Problem (TSP) is a combinatorial problem that deals with
finding the shortest and most efficient route to follow for reaching a list of specific
destinations.
It is a common algorithmic problem in the field of delivery operations that might
hamper the multiple-delivery process and result in financial loss. TSP arises when
you have multiple routes available but choosing the minimum-cost path is really hard for
you or a travelling person.
This is where most traveling people or computer scientists spend more time calculating
the least distance to reach the location. Unfortunately, they end up extending delivery
time and face consequences. 
However, TSP can be eliminated by determining the optimized path using the
approximate algorithms or automated processes. On that note, let us find approximate
solutions for the rising Travelling Salesman Problem (TSP).
What are Some Popular Solutions to Travelling Salesman Problem?
These are some of the near-optimal solutions to find the shortest route to a
combinatorial optimization problem.
1. Nearest Neighbor Algorithm
The Nearest Neighbor Method is probably the most basic TSP heuristic. The method
followed by this algorithm states that the driver must start with visiting the nearest
destination. Once all the cities in the loop are covered, the driver can head back to the
starting point.
Solving TSP using this method requires the user to choose a city at random and then
move on to the closest unvisited city, and so on. Once all the cities on the map are
covered, you must return to the city you started from.
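The nearest-neighbor heuristic described above can be sketched as follows; the distance-matrix representation and the function name are assumptions for illustration:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: always move to the closest unvisited city.
    dist is a hypothetical symmetric distance matrix."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        # Pick the closest city not yet on the tour.
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[cur][c])
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)   # head back to the starting point
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, cost
```

Like the greedy method for multistage graphs earlier in the chapter, this is fast but not guaranteed optimal; it only gives a near-optimal tour.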
2. The Branch & Bound Method
The Branch & Bound method follows the technique of breaking one
problem into several little chunks of problems. So, it solves a series of
problems. Each of these sub-problems may have multiple solutions. The
solution you choose for one problem may have an effect on the solutions
of subsequent sub-problems.
3. The Brute Force Approach
The Brute Force Approach takes into consideration every possible
permutation of routes and picks the minimum-cost one. First,
calculate the total number of routes. Draw and list all the possible routes that
you get from the calculation. The distance of each route must be calculated,
and the shortest route will be the most optimal solution.
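A brute-force sketch, assuming a distance matrix and fixing the start city so that rotations of the same tour are not counted twice; the function name is an assumption:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Enumerate every permutation of the remaining cities, keep the cheapest tour."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost
```

This always finds the optimum but examines (n-1)! tours, which is why it is only usable for a handful of cities.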
Game trees
 Understanding the game tree
To better understand the game tree, it can be thought of as a technique for
analyzing adversarial games, which determines the actions that a player takes to
win the game. In game theory, a game tree is a directed graph whose nodes are
positions in a game (e.g., the arrangement of the pieces in a board game) and
whose edges are moves (e.g., to move pieces from one position on a board to
another).
The complete game tree for a game is the game tree starting at the initial
position and containing all possible moves from each position; the complete tree
is the same tree as that obtained from the extensive-form game representation.
To be more specific, the complete game tree is a standard representation of the
game in game theory, and it can clearly express many important aspects: for example,
the sequence of actions that players may take, their choices at each decision point,
the information about other players' actions available when each player makes a
decision, and the payoffs of all possible game outcomes.
The first two plies of the game tree for tic-tac-toe.
The diagram shows the first two levels, or plies, in the game tree for tic-tac-toe. The rotations and reflections of
positions are equivalent, so the first player has three choices of move: in the center, at the edge, or in the corner.
The second player has two choices for the reply if the first player played in the center, otherwise five choices. And
so on.

The number of leaf nodes in the complete game tree is the number of possible different ways the game can be
played. For example, the game tree for tic-tac-toe has 255,168 leaf nodes.
Game trees are important in artificial intelligence because one way to pick the
best move in a game is to search the game tree using any of numerous 
tree search algorithms, combined with minimax-like rules to prune the tree. The
game tree for tic-tac-toe is easily searchable, but the complete game trees for
larger games like chess are much too large to search. Instead, a 
chess-playing program searches a partial game tree: typically as many plies
from the current position as it can search in the time available. Except for the
case of "pathological" game trees[3] (which seem to be quite rare in practice),
increasing the search depth (i.e., the number of plies searched) generally
improves the chance of picking the best move.
Two-person games can also be represented as and-or trees. For the first player
to win a game, there must exist a winning move for all moves of the second
player. This is represented in the and-or tree by using disjunction to represent
the first player's alternative moves and using conjunction to represent all of the
second player's moves.
Solving game trees
Deterministic algorithm version
With a complete game tree, it is possible to "solve" the game – that is to say, find a
sequence of moves that either the first or second player can follow that will
guarantee the best possible outcome for that player (usually a win or a tie). The 
deterministic algorithm (which is generally called backward induction or 
retrograde analysis) can be described recursively as follows.
1. Color the final ply of the game tree so that all wins for player 1 are colored one
way (Blue in the diagram), all wins for player 2 are colored another way (Red in
the diagram), and all ties are colored a third way (Grey in the diagram).
2. Look at the next ply up. If the player to move at a node has an immediately
lower node colored for them, color this node for that player as well. If all
immediately lower nodes are colored for the opposing player, color this node
for the opposing player. Otherwise, color this node a tie.
3. Repeat for each ply, moving upwards, until all nodes are colored. The color of the
root node will determine the nature of the game.
The diagram shows a game tree for an arbitrary game, colored using the above
algorithm.
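The coloring procedure above can be sketched recursively. The node representation (nested dicts with hypothetical `player`, `children`, and `winner` keys) is an assumption for illustration:

```python
def backward_induction(node):
    """Return the outcome ('P1', 'P2', or 'Tie') under optimal play.
    A leaf is {'winner': ...}; an internal node is
    {'player': 'P1' or 'P2', 'children': [...]}."""
    if "winner" in node:                 # step 1: leaves are already "colored"
        return node["winner"]
    results = [backward_induction(c) for c in node["children"]]
    mover = node["player"]
    if mover in results:                 # some child is a win for the mover
        return mover
    if "Tie" in results:                 # otherwise settle for a tie if possible
        return "Tie"
    # every move loses: the opponent wins
    return "P2" if mover == "P1" else "P1"
```

The recursion colors each subtree exactly once from the leaves upward, so the value returned for the root is the "color" of the whole game.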
It is usually possible to solve a game (in this technical sense of "solve") using only
a subset of the game tree, since in many games a move need not be analyzed if
there is another move that is better for the same player (for example 
alpha-beta pruning can be used in many deterministic games).
Any subtree that can be used to solve the game is known as a decision tree, and
the sizes of decision trees of various shapes are used as measures of 
game complexity.
Randomized algorithms version
Randomized algorithms can be used in solving game trees. There are two main
advantages in this type of implementation: speed and practicality. Whereas a
deterministic version of solving game trees can be done in O(n), the following
randomized algorithm has an expected run time of Θ(n^0.792) if every node in the
game tree has degree 2. Moreover, it is practical because randomized algorithms
are capable of "foiling an enemy", meaning an opponent cannot beat the system
of game trees by knowing the algorithm used to solve the game tree because the
order of solving is random.
The following is an implementation of the randomized game-tree solution
algorithm:

def gt_eval_rand(u) -> bool:
    """Returns True if this node evaluates to a win, otherwise False"""
    if u.leaf:
        return u.win
    else:
        random_children = (gt_eval_rand(child)
                           for child in random_order(u.children))
        if u.op == "OR":
            return any(random_children)
        if u.op == "AND":
            return all(random_children)
The algorithm makes use of the idea of "short-circuiting": if the root node is
considered an "OR" operator, then once one True is found, the root is
classified as True; conversely, if the root node is considered an "AND"
operator then once one False is found, the root is classified as False.
Disconnected components
Depth first search
 Depth-first search (DFS) is an algorithm for traversing or searching tree or 
graph data structures. The algorithm starts at the root node (selecting some
arbitrary node as the root node in the case of a graph) and explores as far as
possible along each branch before backtracking. Extra memory, usually a
stack, is needed to keep track of the nodes discovered so far along a specified
branch which helps in backtracking of the graph.
 A version of depth-first search was investigated in the 19th century by French
mathematician Charles Pierre Trémaux[1] as a strategy for solving mazes.
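An iterative DFS sketch using an explicit stack, as described above; the adjacency-list representation and the function name are assumptions:

```python
def dfs(graph, root):
    """Iterative depth-first search; returns vertices in visit order.
    graph is a hypothetical dict mapping each vertex to its neighbor list."""
    visited, order = set(), []
    stack = [root]                       # the extra memory used for backtracking
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Push neighbors in reverse so the first-listed neighbor is explored first.
        for u in reversed(graph.get(v, [])):
            if u not in visited:
                stack.append(u)
    return order
```

The `visited` set is what prevents the algorithm from re-entering a cycle; dropping it reproduces the non-terminating behavior discussed later in this section.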
Properties
The time and space analysis of DFS differs according to its application area. In
theoretical computer science, DFS is typically used to traverse an entire graph, and
takes time O(|V| + |E|), where |V| is the number of vertices and |E| the number of
edges. This is linear in the size of the graph. In these applications it also uses
space O(|V|) in the worst case to store the stack of vertices on the
current search path as well as the set of already-visited vertices. Thus, in this
setting, the time and space bounds are the same as for breadth-first search and the
choice of which of these two algorithms to use depends less on their complexity
and more on the different properties of the vertex orderings the two algorithms
produce.
For applications of DFS in relation to specific domains, such as searching for
solutions in artificial intelligence or web-crawling, the graph to be traversed is often
either too large to visit in its entirety or infinite (DFS may suffer from 
non-termination).
In such cases, search is only performed to a limited depth; due to limited
resources, such as memory or disk space, one typically does not use data
structures to keep track of the set of all previously visited vertices. When search is
performed to a limited depth, the time is still linear in terms of the number of
expanded vertices and edges (although this number is not the same as the size of
the entire graph because some vertices may be searched more than once and
others not at all) but the space complexity of this variant of DFS is only
proportional to the depth limit, and as a result, is much smaller than the space
needed for searching to the same depth using breadth-first search. For such
applications, DFS also lends itself much better to heuristic methods for choosing a
likely-looking branch. When an appropriate depth limit is not known a priori, 
iterative deepening depth-first search applies DFS repeatedly with a sequence of
increasing limits. In the artificial intelligence mode of analysis, with a 
branching factor greater than one, iterative deepening increases the running time
by only a constant factor over the case in which the correct depth limit is known
due to the geometric growth of the number of nodes per level.
DFS may also be used to collect a sample of graph nodes. However, incomplete
DFS, similarly to incomplete BFS, is biased towards nodes of high degree.
Animated example of a depth-first search
For the following graph:

a depth-first search starting at the node A, assuming that the left edges in the
shown graph are chosen before right edges, and assuming the search
remembers previously visited nodes and will not repeat them (since this is a
small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The
edges traversed in this search form a Trémaux tree, a structure with important
applications in graph theory. Performing the same search without remembering
previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A,
B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or
G.
Iterative deepening is one technique to avoid this infinite loop and would reach
all nodes.
Output of a depth-first search
The four types of edges defined by a spanning tree

The result of a depth-first search of a graph can be conveniently described in
terms of a spanning tree of the vertices reached during the search. Based on
this spanning tree, the edges of the original graph can be divided into three
classes: forward edges, which point from a node of the tree to one of its
descendants, back edges, which point from a node to one of its ancestors,
and cross edges, which do neither. Sometimes tree edges, edges which
belong to the spanning tree itself, are classified separately from forward edges.
If the original graph is undirected then all of its edges are tree edges or back
edges.
