Ada Unit 3

Dynamic Programming is a programming technique that efficiently solves problems with overlapping subproblems and optimal substructure by storing solutions to subproblems for future reference. It can be implemented using memoization (top-down approach) or tabulation (bottom-up approach). This unit also discusses the 0-1 Knapsack problem and Multistage Graphs, highlighting their algorithms and applications in optimization.

Dynamic Programming

Dynamic Programming is a technique in computer programming that helps to efficiently solve a class
of problems that have overlapping subproblems and optimal substructure property.

If a problem can be divided into subproblems, which in turn are divided into smaller subproblems, and if these subproblems overlap, then the solutions to these subproblems can be saved for future reference instead of being recomputed. This improves efficiency, and this method of solving a problem is referred to as dynamic programming.

Such problems involve repeatedly calculating the value of the same subproblems to find the
optimum solution.

Dynamic Programming Example

Let's find the Fibonacci sequence up to the 5th term. The Fibonacci series is a sequence of numbers in which each number is the sum of the two preceding ones. For example: 0, 1, 1, 2, 3. Here, each number is the sum of the two preceding numbers.

Algorithm

Let n be the number of terms.

1. If n = 0, return 0.

2. If n = 1, return 1.

3. Else, return the sum of the two preceding terms.

We are calculating the Fibonacci sequence up to the 5th term.

1. The first term is 0.

2. The second term is 1.

3. The third term is the sum of 0 (from step 1) and 1 (from step 2), which is 1.

4. The fourth term is the sum of the third term (from step 3) and the second term (from step 2), i.e. 1 + 1 = 2.

5. The fifth term is the sum of the fourth term (from step 4) and the third term (from step 3), i.e. 2 + 1 = 3.

Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have used the results of the previous steps as shown below. This is called the dynamic programming approach.

F(0) = 0

F(1) = 1

F(2) = F(1) + F(0)

F(3) = F(2) + F(1)


F(4) = F(3) + F(2)

How Dynamic Programming Works

Dynamic programming works by storing the result of subproblems so that when their solutions are
required, they are at hand and we do not need to recalculate them.

This technique of storing the value of subproblems is called memoization. By saving the values in the
array, we save time for computations of sub-problems we have already come across.

var m = map(0 → 0, 1 → 1)
function fib(n)
    if key n is not in map m
        m[n] = fib(n − 1) + fib(n − 2)
    return m[n]
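To make this concrete, here is a minimal runnable Python sketch of the same memoized Fibonacci; the dictionary m plays the role of the map in the pseudocode above (an illustrative translation, not code from the original source):

m = {0: 0, 1: 1}   # base cases, mirroring map(0 → 0, 1 → 1)

def fib(n):
    if n not in m:                       # compute only if not cached
        m[n] = fib(n - 1) + fib(n - 2)   # store the subproblem's solution
    return m[n]

print(fib(4))  # 3, the 5th term of 0, 1, 1, 2, 3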

Dynamic programming by memoization is a top-down approach to dynamic programming. By reversing the direction in which the algorithm works, i.e. by starting from the base case and working towards the solution, we can also implement dynamic programming in a bottom-up manner.

function fib(n)
    if n = 0
        return 0
    else
        var prevFib = 0, currFib = 1
        repeat n − 1 times
            var newFib = prevFib + currFib
            prevFib = currFib
            currFib = newFib
        return currFib
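The same bottom-up logic can be sketched in Python as follows (again an illustrative translation of the pseudocode, with names of our own choosing):

def fib(n):
    if n == 0:
        return 0
    prev_fib, curr_fib = 0, 1            # F(0) and F(1)
    for _ in range(n - 1):               # repeat n − 1 times
        prev_fib, curr_fib = curr_fib, prev_fib + curr_fib
    return curr_fib

print(fib(4))  # 3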

Dynamic Programming

Dynamic Programming is a method for designing algorithms.

An algorithm designed with Dynamic Programming divides the problem into subproblems, finds
solutions to the subproblems, and puts them together to form a complete solution to the problem
we want to solve.

To design an algorithm for a problem using Dynamic Programming, the problem we want to solve
must have these two properties:

 Overlapping Subproblems: Means that the problem can be broken down into smaller
subproblems, where the solutions to the subproblems are overlapping. Having subproblems
that are overlapping means that the solution to one subproblem is part of the solution to
another subproblem.

 Optimal Substructure: Means that the complete solution to a problem can be constructed
from the solutions of its smaller subproblems. So not only must the problem have
overlapping subproblems, the substructure must also be optimal so that there is a way to
piece the solutions to the subproblems together to form the complete solution.

Techniques Used in Dynamic Programming

It might be difficult to design an algorithm using Dynamic Programming, but the concept of Dynamic Programming is actually not that hard: solve the problem, but since the subproblems are overlapping, do it in a smart way so that a specific subproblem only needs to be solved once.

To be able to use solutions to previously solved subproblems in Dynamic Programming, the previously found solutions must be stored somehow, and that can be done using memoization or tabulation.

Memoization is a technique used in Dynamic Programming, where the solution is found recursively. As the algorithm runs, solutions to subproblems are stored, and before trying to compute the solution to a subproblem, it checks first to see if that solution has already been computed, to avoid doing the same computation more than once.

The memoization technique is called "top-down" because the initial function call is for the main problem, and it results in new function calls for solving smaller and smaller subproblems.

Tabulation is a technique used in Dynamic Programming, where solutions to the overlapping subproblems are stored in a table (array), starting with the most basic subproblems.

The tabulation technique is not recursive, and it is called "bottom-up" because of the way the final solution is built up by solving the most basic subproblems first. Since the most basic subproblem solutions are stored in the table first, when solving a subproblem later that relies on previous subproblems, the algorithm can just pick these solutions right from the table, with no need to compute them again.

To get a better sense of how memoization is "top-down" and tabulation is "bottom-up", consider computing F(10): the tabulation approach starts at the bottom by solving F(0) first and building upward, while the memoization approach starts at the top with F(10) and breaks it into smaller and smaller subproblems from there.
0-1 Knapsack Problem

Unlike in the fractional knapsack problem, items are always taken whole, without using any fractional part of them. Either an item is added to the knapsack or it is not. That is why this method is known as the 0-1 Knapsack problem.

Hence, in the case of 0-1 Knapsack, the value of xi can be either 0 or 1, where the other constraints remain the same.

0-1 Knapsack cannot be solved by the greedy approach, because the greedy approach does not guarantee an optimal solution here, even though in some instances it may happen to produce one.

0-1 Knapsack Algorithm

Problem Statement − A thief is robbing a store and can carry a maximal weight of W in his knapsack. There are n items; the weight of the ith item is wi and the profit of selecting it is pi. What items should the thief take?

Let i be the highest-numbered item in an optimal solution S for W dollars. Then S′ = S − {i} is an optimal solution for W − wi dollars, and the value of the solution S is vi plus the value of the sub-problem.

We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, …, i and the maximum weight w.

The algorithm takes the following inputs

 The maximum weight W

 The number of items n

 The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

The set of items to take can be deduced from the table, starting at c[n, w] and tracing backwards
where the optimal values came from.

If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi].

Dynamic-0-1-knapsack (v, w, n, W)
    for w = 0 to W do
        c[0, w] = 0
    for i = 1 to n do
        c[i, 0] = 0
        for w = 1 to W do
            if wi ≤ w then
                if vi + c[i-1, w-wi] > c[i-1, w] then
                    c[i, w] = vi + c[i-1, w-wi]
                else
                    c[i, w] = c[i-1, w]
            else
                c[i, w] = c[i-1, w]
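A runnable Python sketch of this tabulation, including the backward trace described above, might look like the following (the variable names and 0-based indexing are our own choices):

def knapsack(v, w, W):
    # v[i], w[i]: value and weight of item i+1; W: knapsack capacity
    n = len(v)
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(1, W + 1):
            if w[i - 1] <= cap:    # item fits: take the better of skip/include
                c[i][cap] = max(c[i - 1][cap],
                                v[i - 1] + c[i - 1][cap - w[i - 1]])
            else:                  # item does not fit: skip it
                c[i][cap] = c[i - 1][cap]
    # Trace back from c[n][W] to recover the chosen items.
    items, cap = [], W
    for i in range(n, 0, -1):
        if c[i][cap] != c[i - 1][cap]:   # item i is part of the solution
            items.append(i)
            cap -= w[i - 1]
    return c[n][W], sorted(items)

# Items A-D from the example below: profits 2, 4, 7, 10; weights 1, 3, 5, 7.
print(knapsack([2, 4, 7, 10], [1, 3, 5, 7], 8))  # (12, [1, 4]): items A and D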

The following example illustrates this algorithm.

Example

Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the following
table.

Item A B C D

Profit 2 4 7 10

Weight 1 3 5 7

Solution

Using a greedy approach on this 0-1 knapsack instance, the weight that gets stored in the knapsack would be A + B = 4 with a profit of 2 + 4 = 6. But that solution would not be the optimal solution.

Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.

Step 1

Construct a table with the items (along with their respective weights and profits) as rows and the knapsack capacities 0 through the maximum weight W as columns.

Each cell stores the maximum cumulative profit achievable using the items considered so far whose total weight does not exceed the capacity of that column.

We add zeroes to the 0th row and 0th column: with no items considered, the profit is 0; and if the maximum weight of the knapsack is 0, then no item can be added into the knapsack.

The remaining values are filled with the maximum profit achievable with respect to the items and the weight per column that can be stored in the knapsack.

The formula to store the profit values is:

c[i, w] = max {c[i−1, w], c[i−1, w − w[i]] + P[i]}

By computing all the values using the formula, the table obtained would be:

Item \ w   0   1   2   3   4   5   6   7   8
(none)     0   0   0   0   0   0   0   0   0
A          0   2   2   2   2   2   2   2   2
B          0   2   2   4   6   6   6   6   6
C          0   2   2   4   6   7   9   9   11
D          0   2   2   4   6   7   9   10  12

To find the items to be added to the knapsack, recognize the maximum profit in the table and identify the items that make it up; in this example, it is the items with weights {1, 7}, i.e. items A and D.

The optimal solution is {1, 7} with a maximum profit of 12.

Top-Down Approach

In a plain recursive solution, there are a lot of duplicate calls for the same state ‘N’ and ‘W’, which leads to exponential complexity; for large ‘N’ and ‘W’, such an algorithm becomes impractically slow.

We can avoid recomputing a state by using an array and storing each state's result in it. If we encounter the same state again during execution, we return the stored value from the array instead of recomputing it. This approach drastically decreases the time complexity.
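A minimal Python sketch of this top-down memoized version (the cache decorator and names are our own assumptions; the source gives no code for this step):

from functools import lru_cache

def knapsack_topdown(v, w, W):
    @lru_cache(maxsize=None)             # memoize each (i, cap) state
    def best(i, cap):
        if i == len(v) or cap == 0:      # no items left or no capacity
            return 0
        skip = best(i + 1, cap)          # exclude item i
        if w[i] <= cap:                  # include item i if it fits
            return max(skip, v[i] + best(i + 1, cap - w[i]))
        return skip
    return best(0, W)

print(knapsack_topdown((2, 4, 7, 10), (1, 3, 5, 7), 8))  # 12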

Algorithm Complexity:

Time Complexity: O(N * W)


There will be ‘N * W’ calls in the recursive function; as we are using a 2D array, and a value already present in this array is never recomputed, there are at most N * W distinct states. Therefore the overall time complexity is O(N * W).

Space Complexity: O(N * W)

As we are using a 2D array of size ‘N * W’, the overall space complexity is O(N * W).

📘 Step-by-Step Explanation

1. Create a 2D array dp[n+1][W+1], initialized with 0.

2. Traverse each item i (from 1 to n).

3. For each weight w (from 1 to W):

o If the weight of the current item w[i-1] ≤ w, we have two choices:

 Include the item: add its value to the best value achievable with the remaining capacity (w - w[i-1]).

 Exclude the item: skip it, keep the old value.

o Choose the maximum of the two.

4. The final answer is dp[n][W] — the best value for n items and full capacity W.

Multistage Graph

A multistage graph G = (V, E) is a directed graph where the vertices are partitioned into k (where k > 1) disjoint subsets S = {s1, s2, …, sk} such that if edge (u, v) is in E, then u ∈ si and v ∈ si+1 for some subsets in the partition, and |s1| = |sk| = 1.

The vertex s ∈ s1 is called the source and the vertex t ∈ sk is called the sink.

G is usually assumed to be a weighted graph. In this graph, the cost of an edge (i, j) is represented by c(i, j). Hence, the cost of a path from source s to sink t is the sum of the costs of the edges in this path.

The multistage graph problem is finding the path with minimum cost from source s to sink t.

Consider the following example to understand the concept of multistage graph.

According to the formula Cost(i, j) = min {c(j, l) + Cost(i + 1, l)}, taken over all vertices l in stage i + 1 adjacent to j, we calculate Cost(i, j) using the following steps.

Step 1: Cost (K-2, j)

In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three options to choose the minimum cost at this step.

Cost(3, 4) = min {c(4, 7) + Cost(7, 9),c(4, 8) + Cost(8, 9)} = 7

Cost(3, 5) = min {c(5, 7) + Cost(7, 9),c(5, 8) + Cost(8, 9)} = 5

Cost(3, 6) = min {c(6, 7) + Cost(7, 9),c(6, 8) + Cost(8, 9)} = 5

Step 2: Cost (K-3, j)

Two nodes are selected as j because at stage k - 3 = 2 there are two nodes, 2 and 3. So, the value i = 2
and j = 2 and 3.

Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9), c(2, 6) + Cost(6, 8) + Cost(8, 9)} = 8

Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9), c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10

Step 3: Cost (K-4, j)

Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9), c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9)} = 12

The remaining option, c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9), evaluates to 13.

Hence, the path having the minimum cost is 1 → 3 → 5 → 8 → 9.
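Since the original figure is not reproduced here, the following Python sketch uses a small hypothetical multistage graph (our own edge weights, not the ones from the example above) just to show the backward dynamic programming pass:

# vertex -> {successor: edge cost}; vertices are numbered so that every
# edge goes from one stage to the next, with vertex 1 the source
edges = {
    1: {2: 4, 3: 2},
    2: {4: 2, 6: 1},
    3: {4: 7, 5: 5, 6: 6},
    4: {7: 5, 8: 4},
    5: {7: 3, 8: 2},
    6: {7: 6, 8: 3},
    7: {9: 3},
    8: {9: 2},
}
sink = 9
cost, nxt = {sink: 0}, {}
for v in sorted(edges, reverse=True):    # process later stages first
    nxt[v] = min(edges[v], key=lambda u: edges[v][u] + cost[u])
    cost[v] = edges[v][nxt[v]] + cost[nxt[v]]

path, v = [1], 1
while v != sink:                         # follow the stored decisions
    v = nxt[v]
    path.append(v)
print(cost[1], path)                     # minimum cost and one optimal path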

What is the Multistage Graph?

A multistage graph is a directed graph where the nodes are divided into multiple stages, and edges
only exist between nodes of consecutive stages. Each edge is assigned a weight representing the cost
or distance associated with transitioning from one stage to the next. Multistage graphs are
commonly used to model optimization problems involving a sequence of decisions or stages, where
the goal is to minimize or maximize some objective function while traversing the graph from the
initial stage to the final stage.

In a multistage graph, the first stage typically represents the initial state or starting point of the
problem, while the last stage represents the final state or goal. Intermediate stages represent
intermediate states or decision points where choices must be made to progress towards the final
goal. The edges in the graph represent transitions between stages, and the weights on these edges
represent the costs or benefits associated with making those transitions.

The Multistage Graph Problem has several practical and theoretical applications in computer
science and engineering, especially in fields where optimal decision-making across multiple stages
is crucial. Below are the most important applications, explained in detail:

✅ Applications of Multistage Graphs

🔸 1. Route Optimization in Transportation


Multistage graphs are commonly used in navigation systems, such as GPS route planning. The
stages represent waypoints (e.g., cities or checkpoints), and the edges represent roads or paths
with associated travel times or distances.

 Example: Finding the shortest route from your home to your destination while passing
through several required checkpoints or cities.

 The algorithm finds the path with the minimum total distance or time through all stages.

🔸 2. Compiler Design (Code Optimization)

In compiler design, code generation and optimization can be treated as a multistage problem. Each
stage might represent different intermediate representations or optimizations.

 Example: Converting high-level code to efficient machine code while minimizing instruction
count or execution time.

 Each decision (like choosing a register or instruction) can affect the next stage, so optimal
sequences are computed using dynamic programming over a multistage graph.

🔸 3. Project Scheduling (PERT/CPM)

In Project Management, tasks are often dependent on previous tasks and are performed in stages.
Each task is a node, and dependencies form edges.

 Multistage graphs can help compute the minimum time to complete a project, identify
critical paths, and manage task dependencies.

 This is useful in PERT (Program Evaluation and Review Technique) and CPM (Critical Path
Method).

🔸 4. Speech Recognition and Natural Language Processing (NLP)

Multistage graphs are used in Hidden Markov Models (HMMs) and Viterbi algorithms to find the
most probable sequence of words or phonemes in a sentence.

 Each stage corresponds to a word or time step, and the graph represents possible
transitions.

 The goal is to find the most probable path through all stages based on acoustic or linguistic
models.

🔸 5. Network Routing and Packet Transmission

Multistage graphs are used in network protocols to find the shortest or most efficient path for data
to travel from a source to destination through intermediate routers.

 Each stage can represent a network layer or router level.


 The algorithm ensures least-cost path with considerations like bandwidth, latency, and hop
count.

🔸 6. Decision-Making Systems

Many AI-based decision support systems use multistage graphs to model sequences of decisions,
where each decision affects future choices.

 Example: Business investment planning, where each stage represents a quarter or year,
and decisions must be optimized across all stages for maximum return.

🔸 7. Multistage Manufacturing or Production Systems

In manufacturing systems, production often happens in multiple stages (raw materials → components → assembly → packaging).

 Each stage has options with costs and outputs.

 Multistage graphs help in minimizing total production cost or maximizing efficiency.

🔸 8. Game Development and AI

In game AI, decision trees or state graphs can be modeled as multistage graphs where each move
leads to a new stage of the game.

 Optimal strategies can be computed by treating the game states as nodes and transitions
as possible actions with scores.

🧠 Summary Table

Application Area            Use Case
GPS / Maps                  Shortest route with stops
Compiler Design             Optimized code generation
Project Management          Task scheduling and critical path analysis
NLP / Speech Recognition    Word prediction and sequence decoding
Network Routing             Efficient data packet delivery
Decision Systems            Planning over multiple time horizons
Manufacturing               Cost-effective production sequencing
Game AI                     Move optimization across game stages


Reliability Design Using Dynamic Programming (DP) is a method to maximize the reliability of a
system (usually composed of multiple components) under cost constraints by selecting optimal
combinations of components — such as series or parallel redundant components.

🧠 Problem Definition

Given:

 A system with n components

 For each component, multiple choices of subcomponents with:

o Reliability r[i][j]

o Cost c[i][j]

o Each component can have one or more redundant elements (parallel design)

Goal:

Maximize the overall system reliability under a total budget constraint using dynamic
programming.

🔄 System Reliability Models

1. Series System

 Total Reliability:

R_total = R1 × R2 × … × Rn

 If any one component fails, the system fails.

2. Parallel System (Redundancy)

 Total Reliability of k parallel units:

R = 1 − (1 − r)^k

 Reliability improves with more redundant components.
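For example, a single unit with r = 0.9 duplicated to k = 2 parallel units gives R = 1 − (1 − 0.9)² = 0.99, and k = 3 units give R = 1 − (0.1)³ = 0.999.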

🎯 Dynamic Programming Approach

Let’s assume:

 B = Total budget

 n = Number of system components

 Each component has multiple redundant choices (1 or more units)

 dp[i][b] = Maximum reliability achievable for first i components with budget b


📋 Steps in DP Formulation

1. Initialization:

dp[0][b] = 1 (neutral for multiplication)

2. Recurrence Relation:
For each component i = 1 to n, for each budget b = 0 to B:

o Try adding different k copies (redundancy level) of a unit:

 Cost: k × c[i][j]

 Reliability: R = 1 − (1 − r[i][j])^k

 Update:

dp[i][b] = max(dp[i][b], dp[i−1][b − cost] × R)

3. Answer:

dp[n][B] = maximum achievable system reliability with budget B

🧮 Example

Suppose:

 Budget B = 10

 2 Components

 Choices:

o Component 1: (R=0.90, Cost=2), (R=0.95, Cost=3)

o Component 2: (R=0.80, Cost=2), (R=0.85, Cost=3)

Try all combinations of component choice + redundancy that fit budget.
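A minimal Python sketch of this search for the example above (the enumeration strategy and names are our own assumptions; it tries every option and redundancy level that fits the budget):

def max_reliability(choices, budget):
    # choices[i]: list of (reliability, cost) options for component i
    dp = [1.0] * (budget + 1)            # reliability before any component
    for options in choices:
        new_dp = [0.0] * (budget + 1)    # each component needs >= 1 unit
        for b in range(budget + 1):
            for r, c in options:
                k = 1
                while k * c <= b:        # try k redundant copies
                    rel = 1 - (1 - r) ** k           # parallel reliability
                    cand = dp[b - k * c] * rel       # series with the rest
                    new_dp[b] = max(new_dp[b], cand)
                    k += 1
        dp = new_dp
    return dp[budget]

choices = [[(0.90, 2), (0.95, 3)],       # component 1 options
           [(0.80, 2), (0.85, 3)]]       # component 2 options
print(max_reliability(choices, 10))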

⏱ Time Complexity

O(n · B · m · k)

 n = number of components

 B = budget

 m = number of choices per component

 k = max number of redundant units tried (bounded by budget)

✅ Applications

 Fault-tolerant embedded systems


 Network reliability planning

 Satellite and aerospace system design

 Industrial automation and safety systems

The Floyd-Warshall algorithm is a graph algorithm that is deployed to find the shortest path between
all the vertices present in a weighted graph. This algorithm is different from other shortest path
algorithms; to describe it simply, this algorithm uses each vertex in the graph as a pivot to check if it
provides the shortest way to travel from one point to another.

The Floyd-Warshall algorithm works on both directed and undirected weighted graphs, as long as the graphs do not contain any negative cycles, i.e. cycles in which the sum of the edge weights is negative.

Since the algorithm deals with overlapping sub-problems (the paths found with earlier pivot vertices are stored and reused in later steps), it uses the dynamic programming approach.

Floyd-Warshall algorithm is one of the methods in All-pairs shortest path algorithms and it is solved
using the Adjacency Matrix representation of graphs.

Floyd-Warshall Algorithm

Consider a graph, G = {V, E} where V is the set of all vertices present in the graph and E is the set of
all the edges in the graph. The graph, G, is represented in the form of an adjacency matrix, A, that
contains all the weights of every edge connecting two vertices.

Algorithm

Step 1 − Construct an adjacency matrix A with all the costs of edges present in the graph. If there is
no path between two vertices, mark the value as ∞.

Step 2 − Derive another adjacency matrix A1 from A, keeping the first row and first column of the original adjacency matrix intact in A1. For the remaining values, say A1[i,j]: if A[i,j] > A[i,k] + A[k,j], then replace A1[i,j] with A[i,k] + A[k,j]; otherwise, do not change the value. Here, in this step, k = 1 (the first vertex acting as pivot).

Step 3 − Repeat Step 2 for all the vertices in the graph by changing the k value for every pivot vertex
until the final matrix is achieved.

Step 4 − The final adjacency matrix obtained is the final solution with all the shortest paths.

Pseudocode

Floyd-Warshall(w, n) {                   // w: weights, n: number of vertices
    for i = 1 to n do                    // initialize, D(0) = [wij]
        for j = 1 to n do
            d[i, j] = w[i, j]
    for k = 1 to n do                    // compute D(k) from D(k-1)
        for i = 1 to n do
            for j = 1 to n do
                if (d[i, k] + d[k, j] < d[i, j])
                    d[i, j] = d[i, k] + d[k, j]
    return d[1..n, 1..n]
}
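An equivalent runnable Python sketch, applied to the example matrix worked out below (float('inf') stands for ∞; the names are our own):

INF = float('inf')

def floyd_warshall(w):
    # w: n x n matrix, w[i][j] = edge weight, or INF if no edge
    n = len(w)
    d = [row[:] for row in w]            # D(0) = [wij]
    for k in range(n):                   # each vertex acts as pivot once
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:   # shorter path via k?
                    d[i][j] = d[i][k] + d[k][j]
    return d

A = [[0, INF, 3, INF, 2],
     [5, 0, INF, INF, INF],
     [INF, 1, 0, 2, INF],
     [6, INF, 4, 0, 5],
     [INF, 7, INF, 3, 0]]
for row in floyd_warshall(A):
    print(row)                           # prints the final matrix A5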

Example

Consider the directed weighted graph G = {V, E} given by the adjacency matrix in Step 1 below (the original figure is omitted). Find the shortest paths between all the vertices of the graph using the Floyd-Warshall algorithm.

Solution

Step 1

Construct an adjacency matrix A with all the distances as values (∞ marks a missing edge).

A  =
    0    ∞    3    ∞    2
    5    0    ∞    ∞    ∞
    ∞    1    0    2    ∞
    6    ∞    4    0    5
    ∞    7    ∞    3    0

Step 2

Considering the above matrix as the input, derive the matrix A1 by keeping the first row and first column intact. Take k = 1 and replace every other value A[i,j] by A[i,1] + A[1,j] whenever that sum is smaller.

A1 =
    0    ∞    3    ∞    2
    5    0    8    ∞    7
    ∞    1    0    2    ∞
    6    ∞    4    0    5
    ∞    7    ∞    3    0

Step 3

Considering A1 as the input, derive the matrix A2 with pivot k = 2, replacing A1[i,j] by A1[i,2] + A1[2,j] whenever that sum is smaller.

A2 =
    0    ∞    3    ∞    2
    5    0    8    ∞    7
    6    1    0    2    8
    6    ∞    4    0    5
    12   7    15   3    0

Step 4

Similarly, derive A3 from A2 with pivot k = 3.

A3 =
    0    4    3    5    2
    5    0    8    10   7
    6    1    0    2    8
    6    5    4    0    5
    12   7    15   3    0

Step 5

Derive A4 from A3 with pivot k = 4.

A4 =
    0    4    3    5    2
    5    0    8    10   7
    6    1    0    2    7
    6    5    4    0    5
    9    7    7    3    0

Step 6

Finally, derive A5 from A4 with pivot k = 5. No entry improves, so A5 = A4, and this final matrix contains the shortest distances between all pairs of vertices.

A5 =
    0    4    3    5    2
    5    0    8    10   7
    6    1    0    2    7
    6    5    4    0    5
    9    7    7    3    0

Analysis

From the pseudocode above, the Floyd-Warshall algorithm operates using three for loops to find the shortest distance between all pairs of vertices within a graph. Therefore, the time complexity of the Floyd-Warshall algorithm is O(n³), where n is the number of vertices in the graph. The space complexity of the algorithm is O(n²).

Floyd-Warshall Algorithm is an algorithm for finding the shortest path between all the pairs of
vertices in a weighted graph. This algorithm works for both the directed and undirected weighted
graphs. But, it does not work for the graphs with negative cycles (where the sum of the edges in a
cycle is negative).

A weighted graph is a graph in which each edge has a numerical value associated with it.

The Floyd-Warshall algorithm is also called Floyd's algorithm, the Roy-Floyd algorithm, the Roy-Warshall algorithm, or the WFI algorithm.

This algorithm follows the dynamic programming approach to find the shortest paths.

How the Floyd-Warshall Algorithm Works

Let the given graph be as shown below. [Figure: initial graph]

Follow the steps below to find the shortest path between all the pairs of vertices.

1. Create a matrix A0 of dimension n*n where n is the number of vertices. The row and the column are indexed as i and j respectively. i and j are the vertices of the graph.

Each cell A[i][j] is filled with the distance from the ith vertex to the jth vertex. If there is no path from the ith vertex to the jth vertex, the cell is left as infinity.

[Figure: each cell filled with the distance between the ith and jth vertex]

2. Now, create a matrix A1 using matrix A0. The elements in the first column and the first row are left as they are. The remaining cells are filled in the following way.

Let k be the intermediate vertex in the shortest path from source to destination. In this step, k is the first vertex. A[i][j] is filled with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] + A[k][j]).

That is, if the direct distance from the source to the destination is greater than the path through the vertex k, then the cell is filled with A[i][k] + A[k][j].

[Figure: calculating the distance from the source vertex to the destination vertex through vertex 1]

For example: for A1[2, 4], the direct distance from vertex 2 to 4 is 4, and the sum of the distances from vertex 2 to 4 through vertex 1 (i.e. from vertex 2 to 1 and from vertex 1 to 4) is 7. Since 4 < 7, A1[2, 4] is filled with 4.
3. Similarly, A2 is created using A1. The elements in the second column and the second row are left as they are. In this step, k is the second vertex (i.e. vertex 2). The remaining steps are the same as in step 2.

[Figure: calculating the distance from the source vertex to the destination vertex through vertex 2]

4. Similarly, A3 and A4 are also created.

[Figure: calculating the distance through vertex 3]
[Figure: calculating the distance through vertex 4]

5. A4 gives the shortest path between each pair of vertices.

Floyd-Warshall Algorithm

n = no of vertices
A = matrix of dimension n*n

for k = 1 to n
    for i = 1 to n
        for j = 1 to n
            A^k[i, j] = min(A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j])
return A

🔹 1. All-Pairs Shortest Path (APSP) in Networks

Floyd-Warshall is ideal for computing the shortest paths between every pair of routers or nodes in a communication network.

 Example: In an internet backbone, find the minimum transmission delay between any two servers.

🔹 2. Routing Table Construction

Used in link-state routing protocols like OSPF (Open Shortest Path First) to build complete routing
tables efficiently.

 Each node calculates the best path to all other nodes using this algorithm.

🔹 3. Transitive Closure of a Graph

If you just want to check reachability (i.e., is there a path from node A to B?), you can modify Floyd-Warshall to compute the transitive closure, as sketched below.

 Useful in compiler optimizations, where you check if a variable is accessible in certain scopes.
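A sketch of this reachability variant (our own code: min/+ is replaced by or/and over booleans):

def transitive_closure(adj):
    # adj: n x n 0/1 matrix; result[i][j] is True iff j is reachable from i
    n = len(adj)
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach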

🔹 4. Detecting Negative Cycles

You can detect the presence of a negative-weight cycle if dist[i][i] < 0 for any vertex i after running
the algorithm.

 Important in financial systems and currency arbitrage to detect infeasible transaction loops.
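Assuming dist is the distance matrix produced by the algorithm, the check is a one-liner (an illustrative sketch):

def has_negative_cycle(dist):
    # a negative cycle exists iff some vertex reaches itself at negative cost
    return any(dist[i][i] < 0 for i in range(len(dist)))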

🔹 5. Map Navigation Systems

Used when precomputing distances between all locations in a city map or game world where fast
lookup of path lengths is needed.

 Ideal when the graph is relatively small (few hundred nodes), but many path queries are
required.

🔹 6. Game Theory / AI Path Planning

In multi-agent systems or games, precomputing paths between all locations allows for fast decision-
making for AI characters.
