
DAA MODULE 3

SECOND PART
GREEDY ALGORITHM
A greedy algorithm is an approach for solving a problem by selecting the best option available at
the moment. It does not worry about whether the current best result will bring the overall optimal
result. The algorithm never reverses an earlier decision, even if the choice turns out to be wrong.
It works in a top-down approach.
This algorithm may not produce the best result for every problem, because it always makes the
locally best choice in the hope of producing a globally best result. The algorithm can be used with
any problem that has the following properties:
1. Greedy Choice Property
If an optimal solution to the problem can be found by making the best choice at each step, without
reconsidering previous steps once they are chosen, the problem can be solved using a greedy
approach. This property is called the greedy choice property.
2. Optimal Substructure
If the overall optimal solution to the problem is composed of optimal solutions to its subproblems,
then the problem can be solved using a greedy approach. This property is called optimal
substructure.
Elements of the Greedy strategy
A greedy algorithm obtains an optimal solution to a problem by making a
sequence of choices. At each decision point in the algorithm, the choice that
seems best at that moment is chosen. This algorithm has the following steps:

1. Determine the optimal substructure of the problem.

2. Develop a recursive solution.

3. Prove that at any stage of the recursion, one of the optimal choices is the greedy
choice.

4. Convert the recursive algorithm to an iterative algorithm.


Advantages of Greedy Approach
● The greedy approach is easy to implement.
● It typically has lower time complexity than exhaustive approaches.
● Greedy algorithms can be used to find optimal solutions, or solutions close to
optimal for hard problems.
Drawback of Greedy Approach
The greedy algorithm doesn't always produce the optimal solution. This is the major
disadvantage of the approach.

All greedy algorithms follow a basic structure:

1. Declare an empty result set.
2. At each step, make the greedy choice; if the choice is feasible, add it to the
result.
3. Return the result.
Greedy Algorithm

1. To begin with, the solution set (containing answers) is empty.

2. At each step, an item is added to the solution set until a solution is reached.
3. If the solution set is feasible, the current item is kept.

4. Else, the item is rejected and never considered again.
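
As a concrete illustration of this loop, here is a minimal Python sketch using coin change with
illustrative, canonical denominations (for which the greedy choice happens to be optimal); the
function name and example values are our own, not from the original notes.

# Greedy loop: start with an empty solution set, repeatedly make the
# locally best feasible choice, and never reconsider a rejected item.
def greedy_coin_change(amount, denominations=(25, 10, 5, 1)):
    solution = []                                     # 1. empty solution set
    for coin in sorted(denominations, reverse=True):  # greedy order: largest coin first
        while amount >= coin:                         # 2-3. keep the item while feasible
            solution.append(coin)
            amount -= coin
        # 4. once a coin no longer fits, it is never considered again
    return solution

print(greedy_coin_change(63))  # [25, 25, 10, 1, 1, 1]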


Characteristics of Greedy approach:
● There is an ordered list of resources (profit, cost, value, etc.).
● The maximum of all the resources (max profit, max value, etc.) is taken.
● For example, in the fractional knapsack problem, the item with the maximum value/weight
ratio is taken first, according to the available capacity (see the sketch below).
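
A hedged sketch of that idea, assuming the usual fractional knapsack formulation; the example
values, weights, and capacity are illustrative.

def fractional_knapsack(values, weights, capacity):
    # Take items in decreasing value-per-unit-weight order.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:            # the whole item fits
            total += value
            capacity -= weight
        else:                             # take only the fraction that fits, then stop
            total += value * capacity / weight
            break
    return total

print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0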
Applications of Greedy Algorithms:
● Finding an optimal solution (Activity selection, Fractional Knapsack, Job Sequencing,
Huffman Coding).
● Finding close to the optimal solution for NP-Hard problems like TSP.
Standard Greedy Algorithms:
● Prim’s Algorithm
● Kruskal’s Algorithm
● Dijkstra’s Algorithm
Difference between the Greedy Algorithm and the Divide and Conquer Algorithm:

1. Divide and conquer is used to obtain a solution to the given problem; it does not aim for the
optimal solution. The greedy method is used to obtain an optimal solution to the given problem.
2. In divide and conquer, the problem is divided into small subproblems. These subproblems are
solved independently, and finally all the solutions to the subproblems are combined to get the
solution to the given problem.
In the greedy method, a set of feasible solutions is generated, and the one picked is the optimal
solution.
3. Divide and conquer is less efficient and slower because it is recursive in nature.
The greedy method is comparatively efficient and faster as it is iterative in nature.
4. Divide and conquer may generate duplicate solutions.
In the greedy method, the optimal solution is generated without revisiting previously generated
solutions, thus avoiding recomputation.
5. Divide and conquer algorithms mostly run in polynomial time.
Greedy algorithms also run in polynomial time, but typically take less time than divide and conquer.
6. Examples of divide and conquer: Merge Sort, Quick Sort, Strassen matrix multiplication.
Examples of the greedy method: fractional knapsack problem, job sequencing problem, activity selection problem.
BACKTRACKING
Backtracking can be defined as a general algorithmic technique that considers
searching every possible combination in order to solve a computational
problem.

Backtracking is a problem-solving algorithmic technique that involves finding a
solution incrementally by trying different options and undoing them if they
lead to a dead end.

It is commonly used in situations where you need to explore multiple
possibilities to solve a problem, like searching for a path in a maze or solving
puzzles like Sudoku. When a dead end is reached, the algorithm backtracks
to the previous decision point and explores a different path until a solution is
found or all possibilities have been exhausted.
BACKTRACKING ALGORITHM (pseudocode)
Backtrack(s)
    if s is not a solution
        return false
    if s is a new solution
        add s to list of solutions
    backtrack(expand s)
In any backtracking algorithm, the algorithm seeks a path to a feasible solution that
includes some intermediate checkpoints. If the checkpoints do not lead to a viable
solution, the algorithm can return to a checkpoint and take another path to find a
solution.
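
To make the skeleton concrete, here is a minimal runnable Python rendering of the Backtrack(s)
pseudocode above, applied to an illustrative enumeration problem (generating all permutations of
a list); the function and variable names are our own.

def backtrack(partial, remaining, solutions):
    if not remaining:                         # s is a complete solution
        solutions.append(list(partial))
        return
    for i, item in enumerate(remaining):      # expand s with every candidate
        partial.append(item)                  # choose
        backtrack(partial, remaining[:i] + remaining[i + 1:], solutions)
        partial.pop()                         # undo the choice (backtrack)

solutions = []
backtrack([], [1, 2, 3], solutions)
print(solutions)  # all 6 permutations of [1, 2, 3]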
Types of Backtracking Problems
Problems associated with backtracking can be categorized into 3 categories:
● Decision Problems: Here, we search for a feasible solution.
● Optimization Problems: For this type, we search for the best solution.
● Enumeration Problems: We find the set of all possible feasible solutions to the
problems of this type.
Backtracking considers searching every possible combination in order to solve an
optimization problem. It uses a depth-first approach.
Basic Elements:
● Candidate: A candidate is a potential choice or element that can be added to the
current solution.
● Solution: The solution is a valid and complete configuration that satisfies all problem
constraints.
● Partial Solution: A partial solution is an intermediate or incomplete configuration
being constructed during the backtracking process.
● Decision Space: The decision space is the set of all possible candidates or choices
at each decision point.
● Decision Point: A decision point is a specific step in the algorithm where a candidate
is chosen and added to the partial solution.
● Feasible Solution: A feasible solution is a partial or complete solution that adheres
to all constraints.
● Dead End: A dead end occurs when a partial solution cannot be extended without
violating constraints.
● Backtrack: Backtracking involves undoing previous decisions and returning to a
prior decision point.
● Search Space: The search space includes all possible combinations of candidates
and choices.
● Optimal Solution: In optimization problems, the optimal solution is the best possible
solution.
The algorithm is as follows:
● Step 1: Return success if the current point is a viable solution.
● Step 2: Otherwise, if all paths have been exhausted (i.e., the current point is
an endpoint), return failure because there is no feasible solution.
● Step 3: If the current point is not an endpoint, backtrack and explore other
points, then repeat the preceding steps.
Applications of Backtracking:
N-Queens problem, Rat in a Maze problem, Knight's Tour problem, Sudoku solver,
graph coloring problems, network routing and congestion control,
decryption, text justification.
Sum of Subset problem using Backtracking
In the sum of subsets problem, we are given some positive numbers wᵢ, 1 ≤ i ≤ n, called weights,
and a value m, and the problem is to find all the subsets of the wᵢ whose sums are m.
For example, if n = 4, (w₁, w₂, w₃, w₄) = (11, 13, 24, 7) and m = 31, then the subsets for which the
sum is 31 are (11, 13, 7) and (24, 7). We can also represent the solution vector by giving
the indexes, i.e., (1, 2, 4) and (3, 4). This way of representing the solution vector is called
variable size tuple formulation. But we use fixed size tuple formulation to create the
backtracking solution, in which xᵢ = 1 if weight wᵢ is selected and xᵢ = 0 otherwise. So the solution
vectors using fixed size tuples for the example are (1, 1, 0, 1) and (0, 0, 1, 1).
The bounding functions we use in this problem are $\sum_{i=1}^{k} w_i x_i + \sum_{i=k+1}^{n} w_i \ge m$
and $\sum_{i=1}^{k} w_i x_i + w_{k+1} \le m$. The first condition prunes a node when even adding all
the remaining weights cannot reach m; the second prunes a node when adding the next weight would
already exceed m (assuming the weights are considered in non-decreasing order).
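
Below is a hedged Python sketch of this backtracking scheme with both bounding tests. The weights
are sorted into non-decreasing order first, as the bounding argument requires, so the fixed-size
tuples it prints are relative to the sorted order; all names are illustrative.

def sum_of_subsets(weights, m):
    weights = sorted(weights)          # non-decreasing order for the bounds
    n, solutions = len(weights), []

    def backtrack(k, chosen, current, remaining):
        if current == m:               # subset found; pad x-vector with zeros
            solutions.append(chosen[:] + [0] * (n - k))
            return
        if k == n:
            return
        # Bounding: prune if m can no longer be reached, or if even the
        # smallest remaining weight would overshoot m.
        if current + remaining < m or current + weights[k] > m:
            return
        chosen.append(1)               # x_k = 1: take w_k
        backtrack(k + 1, chosen, current + weights[k], remaining - weights[k])
        chosen.pop()
        chosen.append(0)               # x_k = 0: skip w_k
        backtrack(k + 1, chosen, current, remaining - weights[k])
        chosen.pop()

    backtrack(0, [], 0, sum(weights))
    return solutions

# Example from the text: weights (11, 13, 24, 7), m = 31.
# Prints [[1, 1, 1, 0], [1, 0, 0, 1]] w.r.t. sorted weights [7, 11, 13, 24],
# i.e. the subsets {7, 11, 13} and {7, 24}.
print(sum_of_subsets([11, 13, 24, 7], 31))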
DYNAMIC PROGRAMMING
Dynamic programming is a computer programming technique in which an algorithmic
problem is first broken down into subproblems, the results of the subproblems are
saved, and the saved results are then combined to find the overall solution, which
usually involves finding a maximum or minimum value.
It is a method of mathematical optimization as well as a methodology for computer
programming. It applies to problems that can be broken down into overlapping
subproblems or that have optimal substructure.
Difference between divide & conquer and Dynamic
programming

Divide and conquer algorithms partition the problem into independent
subproblems, solve the subproblems recursively, and combine their solutions to
solve the original problem.

In contrast, dynamic programming is applicable when the subproblems are not
independent.

A dynamic programming algorithm solves every subproblem just once and then
saves its answer in a table, thereby avoiding the work of recomputing the answer
every time the subproblem is encountered.

It is applicable to optimization problems that have many solutions.

Development of a dynamic programming algorithm is broken into a sequence of four
steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
● The problem should be divisible into smaller overlapping
subproblems.
● The final optimum solution can be achieved by using optimum solutions of
smaller subproblems.
● Dynamic algorithms use memoization.
Techniques to solve Dynamic Programming Problems:

1. Top-Down (Memoization):
Break down the given problem in order to begin solving it. If you see that a
subproblem has already been solved, return the saved answer. If it hasn't been
solved, solve it and save the answer. This is usually easy to think of and very
intuitive. It is referred to as memoization.
2. Bottom-Up (Tabulation):
Analyze the problem, see in what order the subproblems are solved, and work
your way up from the trivial subproblem to the given problem. This process
ensures that the subproblems are solved before the main problem. It is referred
to as tabulation, or classical dynamic programming.
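
As a concrete illustration of the two techniques, here is a small Python sketch on Fibonacci
numbers, the standard example of overlapping subproblems; both versions run in O(n) instead of
the exponential time of plain recursion. The function names are our own.

def fib_memo(n, cache={}):
    # Top-down: solve recursively, saving each answer the first time.
    # The default-argument dict persists across calls and acts as the table.
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

def fib_tab(n):
    # Bottom-up: fill the table from the trivial cases upward.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040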
Dynamic programming is used to solve optimization problems. It is used to solve
many real-life problems such as:
(i) Making change problem
(ii) Knapsack problem
(iii) Optimal binary search tree
(iv) Matrix chain multiplication
(v) Assembly-line scheduling
(vi) Longest common subsequence
(vii) Dijkstra's algorithm

Elements of Dynamic Programming

Overlapping subproblems
When the answers to the same subproblem are needed more than once to solve
the main problem, we say that the subproblems overlap. In such problems,
solutions are stored in a table so they can be reused instead of
recalculated. The recursive program for the Fibonacci numbers has several
subproblems that overlap, but a binary search doesn't have any subproblems that
overlap.

Optimal substructure
The optimal substructure property of a problem says that you can find the best
answer to the problem by taking the best solutions to its subproblems and putting
them together. Most of the time, recursion explains how these optimal
substructures work.
Memoization
In computing, memoization is used to speed up computer programs by eliminating
the repetitive computation of results, and by avoiding repeated calls to functions
that process the same input.
Memoization is a specific form of caching that is used in dynamic programming.
The purpose of caching is to improve the performance of our programs and keep
data accessible that can be used later. It basically stores the previously calculated
result of the subproblem and uses the stored result for the same subproblem.
We can use the memoization technique wherever previously calculated results can
be reused. It is mostly applied in the context of recursion, especially for
problems that involve overlapping subproblems, as shown below.
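
In Python, for instance, this caching pattern is available off the shelf through
functools.lru_cache, which memoizes any pure function of hashable arguments; the Fibonacci
function below is only an illustration.

from functools import lru_cache

@lru_cache(maxsize=None)   # memoize: each distinct n is computed only once
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, reached with only 51 distinct calls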
Longest Common Subsequence Algorithm
The longest common subsequence problem is finding the longest subsequence that
is present in both of the given sequences.
But before we understand the problem, let us understand what the term
subsequence means −
Let us consider a sequence S = <s1, s2, s3, s4, …, sn>. Another sequence Z =
<z1, z2, z3, …, zm> is called a subsequence of S, if and only if it can be
derived from S by deleting some elements. In simple words, a subsequence
consists of elements that need not be consecutive in the sequence but must
appear in the same relative order.
Common Subsequence
Suppose, X and Y are two sequences over a finite set of elements. We can say
that Z is a common subsequence of X and Y, if Z is a subsequence of both X and
Y.
Longest Common Subsequence
If a set of sequences are given, the longest common subsequence problem is to
find a common subsequence of all the sequences that is of maximal length.
Naive Method
Let X be a sequence of length m and Y a sequence of length n. Check for every
subsequence of X whether it is a subsequence of Y, and return the longest
common subsequence found.
There are 2ᵐ subsequences of X. Testing whether a sequence is a
subsequence of Y takes O(n) time. Thus, the naive algorithm would take O(n·2ᵐ)
time.
Longest Common Subsequence Algorithm
Let X = <x1, x2, x3, …, xm> and Y = <y1, y2, y3, …, yn> be the sequences. To compute the
length of the longest common subsequence, the following algorithm is used.
Step 1 − Construct a table of size (m + 1) × (n + 1), where m = size of sequence X and
n = size of sequence Y. The rows in the table represent the elements in sequence X
and the columns represent the elements in sequence Y.
Step 2 − The zeroth row and column must be filled with zeroes. The remaining
values are filled in based on different cases, by maintaining a counter value.
Step 3 − Once the table is filled, backtrack from the last value in the table.
Backtracking here is done by tracing the path along which the counter was incremented.
Step 4 − The longest common subsequence is obtained by noting the elements on the
traced path.

● Case 1 − If the counter encounters a common element in both the X and Y sequences,
increment the counter by 1, i.e., T[i, j] = T[i-1, j-1] + 1.
● Case 2 − If the counter does not encounter a common element in the X and Y
sequences at T[i, j], take the maximum of T[i-1, j] and T[i, j-1] to fill in
T[i, j].
Example

In this example, we have two strings, X = BACDB and Y = BDCB, and we want to find
the longest common subsequence.

Following the algorithm, we need to compute two tables: C, holding the lengths, and
B, holding the backtracking directions.

Given m = length of X = 5 and n = length of Y = 4.

Constructing the LCS Tables

In the C table, the zeroth row and column are filled with zeroes. The remaining
values are filled by incrementing and choosing the maximum values according to
the algorithm.
Algorithm: LCS-Length-Table-Formulation (X, Y)
m := length(X)
n := length(Y)
for i = 0 to m do
    C[i, 0] := 0
for j = 0 to n do
    C[0, j] := 0
for i = 1 to m do
    for j = 1 to n do
        if xi = yj
            C[i, j] := C[i - 1, j - 1] + 1
            B[i, j] := '↖'
        else
            if C[i - 1, j] ≥ C[i, j - 1]
                C[i, j] := C[i - 1, j]
                B[i, j] := '↑'
            else
                C[i, j] := C[i, j - 1]
                B[i, j] := '←'
return C and B
Algorithm: Print-LCS (B, X, i, j)
if i = 0 or j = 0
    return
if B[i, j] = '↖'
    Print-LCS(B, X, i-1, j-1)
    Print(xi)
else if B[i, j] = '↑'
    Print-LCS(B, X, i-1, j)
else
    Print-LCS(B, X, i, j-1)
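
The same algorithm translates directly into runnable Python. This sketch builds the length table
C and then backtracks through it (without a separate direction table B, using the same tie-breaking
rule) to recover one LCS; on the example strings it prints BCB, one LCS of length 3.

def lcs(X, Y):
    m, n = len(X), len(Y)
    # C[i][j] = length of the LCS of X[:i] and Y[:j]; row/column 0 stay 0.
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    # Backtrack from C[m][n], tracing where each increment came from.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # diagonal move: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif C[i - 1][j] >= C[i][j - 1]:  # move up
            i -= 1
        else:                             # move left
            j -= 1
    return ''.join(reversed(out))

print(lcs("BACDB", "BDCB"))  # BCB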
Branch and Bound Algorithm

Introduction to Branch and Bound Technique

The Branch and Bound technique is a problem-solving strategy, most
commonly used in optimization problems, where the goal is to minimize (or
maximize) a certain value. The optimized solution is obtained by means of a state space
tree (a state space tree is a tree where the solution is constructed by adding
elements one by one, starting from the root; note that the root contains no
element). This method is best used for combinatorial problems with
exponential time complexity, since it provides a more efficient way to solve such
problems.
Branch and bound is an algorithm design paradigm which is generally used for solving
combinatorial optimization problems. These problems are typically exponential in
time complexity and may require exploring all possible permutations in the worst case. The
branch and bound technique solves these problems relatively quickly.

Example to show the working of the Branch and Bound Algorithm

Let us consider the 0/1 Knapsack problem to understand Branch and Bound.

There are many algorithms by which the knapsack problem can be solved:

● DP solution for 0/1 Knapsack
● Backtracking solution for 0/1 Knapsack
Different search techniques in branch and bound:
Branch and bound algorithms incorporate different search techniques to traverse the state space
tree. Different search techniques used in B&B are listed below:
1. LC search
2. BFS
3. DFS
1. LC search (Least Cost Search):
It uses a heuristic cost function to compute the bound values at each node. Nodes are added
to the list of live nodes as soon as they are generated.
The node with the least value of the cost function is selected as the next E-node.
2. BFS (Breadth First Search):
It is also known as FIFO search.
It maintains the list of live nodes in first-in-first-out order, i.e., in a queue. The live nodes are
searched in FIFO order to make them the next E-nodes.
3. DFS (Depth First Search):
It is also known as LIFO search.
It maintains the list of live nodes in last-in-first-out order, i.e., in a stack.
The live nodes are searched in LIFO order to make them the next E-nodes.
● It is appropriate to use a branch and bound approach if the given problem is a discrete
optimization problem. Discrete optimization refers to problems in which the variables belong
to a discrete set. Examples of such problems include 0-1 Integer Programming and
Network Flow problems.
● Branch and bound also works well for combinatorial optimization problems. An
optimization problem is optimized by finding the maximum or minimum of its
objective function. Combinatorial optimization problems include Boolean
Satisfiability and Integer Linear Programming.
Basic Concepts of Branch and Bound:
● Generation of a state space tree:
As in the case of backtracking, B&B generates a state space tree to efficiently search the
solution space of a given problem instance.
In B&B, all children of an E-node in the state space tree are produced before any live node
is converted into an E-node. Thus, the E-node remains an E-node until it becomes a dead
node.
● Evaluation of a candidate solution:
Unlike backtracking, B&B needs additional elements to evaluate a candidate solution:
1. A way to assign a bound on the best value of the given criterion function to each
node in the state space tree, computed by considering how the partial solution given
by that node can be extended with further components.
2. The best value of the criterion function obtained so far, which is used for pruning:
for a maximization problem, a node is pruned when its upper bound does not exceed
the best value found so far, and symmetrically for a minimization problem.
● A feasible solution is defined by the problem states that satisfy all the given
constraints.
● An optimal solution is a feasible solution which produces the best value of a given
objective function.
● Bounding function:
It optimizes the search for a solution vector in the solution space of a given problem
instance.
KNAPSACK PROBLEM
Given N items, where each item has some weight and profit associated with it, and
a bag with capacity W (i.e., the bag can hold at most W weight in it),
the task is to put the items into the bag such that the sum of profits associated
with them is the maximum possible.
Note: The constraint here is that we can either put an item completely into the bag or
not put it at all (it is not possible to put part of an item into the bag).

Given N items with weights W[0..n-1], values V[0..n-1] and a knapsack with
capacity C, select the items such that:

1. The sum of weights taken into the knapsack is less than or equal to C.
2. The sum of values of the items in the knapsack is maximum among all the
possible combinations.
Input: N = 4, C = 15, V[]= {10, 10, 12, 18}, W[]= {2, 4, 6, 9}
Output:
Items taken into the knapsack are
1101
Maximum profit is 38
Explanation:
1 in the output indicates that the item is included in the knapsack while 0 indicates that the item is excluded.
Since the maximum possible cost allowed is 15, the ways to select items are:
(1 1 0 1) -> Cost = 2 + 4 + 9 = 15, Profit = 10 + 10 + 18 = 38.
(0 0 1 1) -> Cost = 6 + 9 = 15, Profit = 12 + 18 = 30
(1 1 1 0) -> Cost = 2 + 4 + 6 = 12, Profit = 32
Hence, maximum profit possible within a cost of 15 is 38.
Input: N = 4, C = 21, V[]= {18, 20, 14, 18}, W[]= {6, 3, 5, 9}
Output:
Items taken into the knapsack are
1101
Maximum profit is 56
Explanation:
Cost = 6 + 3 + 9 = 18
Profit = 18 + 20 + 18 = 56
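
Tying the ideas above together, here is a hedged Python sketch of a least-cost (best-first)
branch and bound solver for the 0/1 knapsack problem. The upper bound at each node comes from
the fractional (greedy) relaxation of the remaining items; all names are our own, and on the
first example above it returns the expected maximum profit of 38.

import heapq

def knapsack_branch_and_bound(values, weights, capacity):
    n = len(values)
    # Sort items by value/weight ratio (descending) so the fractional
    # relaxation can serve as an upper bound at each node.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(level, profit, weight):
        # Upper bound: fill the leftover capacity greedily, allowing a
        # fraction of the first item that does not fully fit.
        if weight >= capacity:
            return 0
        b, cap, j = profit, capacity - weight, level
        while j < n and w[j] <= cap:
            cap -= w[j]
            b += v[j]
            j += 1
        if j < n:
            b += v[j] * cap / w[j]
        return b

    best = 0
    # Least-cost search: a max-heap on the bound, simulated by negation.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, profit, weight)
    while heap:
        neg_b, level, profit, weight = heapq.heappop(heap)
        if -neg_b <= best or level == n:  # prune: bound no better than incumbent
            continue
        # Branch 1: include item `level`.
        new_w, new_p = weight + w[level], profit + v[level]
        if new_w <= capacity:
            best = max(best, new_p)
            heapq.heappush(heap, (-bound(level + 1, new_p, new_w), level + 1, new_p, new_w))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, profit, weight), level + 1, profit, weight))
    return best

# First example from the text: expected maximum profit is 38.
print(knapsack_branch_and_bound([10, 10, 12, 18], [2, 4, 6, 9], 15))  # 38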
