
UNIT – 4 : DYNAMIC PROGRAMMING

Dynamic Programming:
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results so that they do not have to be computed again. The property that an optimal solution to the overall problem can be built from optimal solutions to its subproblems is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, that is, problems in which we are trying to find the minimum or the maximum value of some objective. Dynamic programming is guaranteed to find an optimal solution to such a problem if one exists.
In short, dynamic programming is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing the solutions to avoid repeated computation.
One of the most basic, classic examples of this process is computing the Fibonacci sequence, as in the sketch below.
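As a minimal illustration (not part of the original notes), the following Python snippet computes Fibonacci numbers with memoization; the naive recursive version would re-solve the same subproblems an exponential number of times, while the memoized version solves each subproblem only once.

# Memoized (top-down) Fibonacci: each fib(i) is computed only once.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))                      # 102334155, reached with O(n) distinct calls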

Characteristics of Dynamic Programming Algorithm:


• In general, dynamic programming (DP) is one of the most powerful techniques for
solving a certain class of problems.
• The approach can be formulated in an elegant way, the thinking process is simple, and
the coding part is usually straightforward.
• Essentially, the idea is simple: after solving a subproblem for a given input, save the
result for future use so that it never has to be re-solved; in short, "remember your past".
• It is a big hint for DP if the given problem can be broken up into smaller sub-
problems, and these smaller subproblems can be divided into still smaller ones, and
in this process, you see some overlapping subproblems.
• Additionally, the optimal solutions to the subproblems contribute to the optimal
solution of the given problem (referred to as the Optimal Substructure Property).
• The solutions to the subproblems are stored in a table or array (memoization) or in
a bottom-up manner (tabulation) to avoid redundant computation.
• The solution to the problem can be constructed from the solutions to the
subproblems.
• Dynamic programming can be implemented using a recursive algorithm, where the
solutions to subproblems are found recursively, or using an iterative algorithm,
where the solutions are found by working through the subproblems in a specific
order.
Principles of Dynamic Programming:
1. Optimal Substructure: A problem exhibits optimal substructure if its optimal
solution can be constructed efficiently from the optimal solutions of its subproblems.
This allows you to break the problem into smaller, manageable components.
2. Overlapping Subproblems: Unlike divide-and-conquer approaches, which solve
subproblems independently, dynamic programming takes advantage of the fact that the
same subproblems are often solved multiple times. DP solves each subproblem once
and stores the result (usually in a table) to avoid redundant computations.
3. Memoization (Top-Down): This is a technique where you solve the problem by
recursively breaking it down into subproblems and storing the results of solved
subproblems. Whenever the same subproblem is encountered again, the stored result is
used instead of recalculating it.
4. Tabulation (Bottom-Up): In this approach, you solve the subproblems in a
systematic order, typically starting from the smallest subproblems and iteratively
solving larger ones using the results of the smaller ones. The results are stored in a
table, and the solution is built up step by step (a tabulated sketch follows after this list).
5. State Definition: Define a state that represents the problem at a specific stage. The
state usually encapsulates the information needed to compute the solution to that
subproblem, such as indices, remaining capacity, or remaining budget.
6. Recurrence Relation: Establish a recurrence relation that expresses the solution to
the problem as a function of the solutions to its subproblems. This recurrence relation
is the heart of the dynamic programming solution.
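As a hedged bottom-up sketch (the function name fib_tab is my own), the tabulated counterpart of the memoized Fibonacci example above fills a table from the smallest subproblem upwards:

# Tabulated (bottom-up) Fibonacci: fill a table from the smallest subproblem up.
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)      # dp[i] holds the i-th Fibonacci number
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_tab(40))          # 102334155, the same answer as the memoized version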

Applications:
Rod Cutting Problem:
The Rod Cutting Problem is a classic example of dynamic programming. Given a rod
of length n and a table of prices that contains the prices of all pieces of length smaller
than or equal to n, the goal is to determine the maximum revenue obtainable by cutting
the rod and selling the pieces.
Problem Definition:
• Input:
o n: The length of the rod.
o price[]: An array of prices where price[i] denotes the price of a rod of length i+1.
• Output: The maximum revenue obtainable by cutting up the rod and selling the
pieces.
Approach:
1. Brute Force: Consider all possible ways to cut the rod and calculate the revenue for
each cut. This leads to an exponential time complexity as it involves solving the
same subproblems multiple times.
2. Dynamic Programming: Break down the problem into smaller subproblems, store
the solutions to these subproblems, and build up the final solution efficiently.

Optimal Substructure:
If the best way to cut a rod of length n involves cutting off a piece of length i, then the
problem reduces to finding the best way to cut a rod of length n-i.

Dynamic Programming Solution:


Bottom-Up Approach:
• Create an array dp[] where dp[i] stores the maximum revenue obtainable for a rod
of length i.
• Initialize dp[0] = 0 since a rod of length 0 has no value.
• For each length i (from 1 to n), compute dp[i] by considering all possible first cuts
of length j (1 ≤ j ≤ i) and using the formula:
dp[i] = max over j = 1 to i of ( price[j − 1] + dp[i − j] )
• Finally, dp[n] will contain the maximum revenue for a rod of length n.

Example:
Given a rod of length 4 and price array price[] = {1, 5, 8, 9}:
• Some possible cuts and the corresponding revenues:
o Cut the rod into lengths (1, 1, 1, 1): Revenue = 1+1+1+1 = 4
o Cut the rod into lengths (2, 2): Revenue = 5+5 = 10
o Cut the rod into lengths (3, 1): Revenue = 8+1 = 9
o Cut the rod into lengths (4): Revenue = 9
The maximum revenue is 10, achieved by cutting the rod into two pieces of length 2.
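A minimal bottom-up sketch of this procedure in Python (the array name dp and the loop structure are my own, not from the notes):

# Bottom-up rod cutting: dp[i] = best revenue obtainable from a rod of length i.
def rod_cutting(price, n):
    dp = [0] * (n + 1)                  # dp[0] = 0: a rod of length 0 has no value
    for i in range(1, n + 1):
        for j in range(1, i + 1):       # the first piece has length j
            dp[i] = max(dp[i], price[j - 1] + dp[i - j])
    return dp[n]

print(rod_cutting([1, 5, 8, 9], 4))     # 10, matching the example above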
Floyd-Warshall Algorithm:
The Floyd Warshall Algorithm is an all-pairs shortest path algorithm, unlike Dijkstra
and Bellman-Ford, which are single-source shortest path algorithms. The algorithm
works for both directed and undirected weighted graphs, but it does not work for graphs
with negative cycles (where the sum of the edge weights in a cycle is negative). It
follows a dynamic programming approach, considering every possible intermediate
node in order to calculate the shortest distance between every pair of nodes.
Idea Behind Floyd Warshall Algorithm:
Suppose we have a graph G[][] with N vertices numbered 1 to N. We have to compute
a shortestPathMatrix[][] where shortestPathMatrix[i][j] represents the shortest path
between vertices i and j.
The shortest path from i to j may pass through some number of intermediate nodes.
The idea behind the Floyd Warshall algorithm is to treat each vertex from 1 to N as an
intermediate node, one by one; this is the optimal substructure property exploited by
the algorithm.

Steps of the Floyd Warshall Algorithm:


• As a first step, initialize the solution matrix to be the same as the input graph matrix.
• Then update the solution matrix by considering all vertices, one after another, as
intermediate vertices.
• The idea is to pick the vertices one by one and update all shortest paths which
include the picked vertex as an intermediate vertex.
• When we pick vertex number k as an intermediate vertex, we already have
considered vertices {0, 1, 2, .. k-1} as intermediate vertices.
• For every pair (i, j) of the source and destination vertices respectively, there are two
possible cases.
o k is not an intermediate vertex in shortest path from i to j. We keep the value
of dist[i][j] as it is.
o k is an intermediate vertex in shortest path from i to j. We update the value
of dist[i][j] as dist[i][k] + dist[k][j], if dist[i][j] > dist[i][k] + dist[k][j]
Pseudo-Code of Floyd Warshall Algorithm:
For k = 0 to n – 1
    For i = 0 to n – 1
        For j = 0 to n – 1
            Distance[i, j] = min(Distance[i, j], Distance[i, k] + Distance[k, j])
where i = source node, j = destination node, k = intermediate node
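A compact Python sketch of the triple loop above (the INF constant and the adjacency-matrix input format are my own assumptions, not from the notes):

INF = float('inf')

def floyd_warshall(graph):
    # graph is an n x n matrix; graph[i][j] is the edge weight, INF if no edge.
    n = len(graph)
    dist = [row[:] for row in graph]          # start from the input matrix
    for k in range(n):                        # intermediate vertex
        for i in range(n):                    # source vertex
            for j in range(n):                # destination vertex
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist                               # all-pairs shortest distances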

Illustration of Floyd Warshall Algorithm:


Suppose we have a weighted graph with five vertices A, B, C, D and E.

Step 1: Initialize the Distance[][] matrix using the input graph such that Distance[i][j] =
weight of the edge from i to j, and Distance[i][j] = Infinity if there is no edge from i to j.

Step 2: Treat node A as an intermediate node and recalculate Distance[][] for every
{i, j} node pair using the formula:
Distance[i][j] = minimum(Distance[i][j], Distance[i][A] + Distance[A][j])

Step 3: Treat node B as an intermediate node and recalculate Distance[][] for every
{i, j} node pair using the formula:
Distance[i][j] = minimum(Distance[i][j], Distance[i][B] + Distance[B][j])

Step 4: Treat node C as an intermediate node and recalculate Distance[][] for every
{i, j} node pair using the formula:
Distance[i][j] = minimum(Distance[i][j], Distance[i][C] + Distance[C][j])

Step 5: Treat node D as an intermediate node and recalculate Distance[][] for every
{i, j} node pair using the formula:
Distance[i][j] = minimum(Distance[i][j], Distance[i][D] + Distance[D][j])

Step 6: Treat node E as an intermediate node and recalculate Distance[][] for every
{i, j} node pair using the formula:
Distance[i][j] = minimum(Distance[i][j], Distance[i][E] + Distance[E][j])

Step 7: Since all the nodes have been treated as an intermediate node, we can now return
the updated Distance[][] matrix as our answer matrix.

Complexity Analysis of Floyd Warshall Algorithm:


• Time Complexity: O(V³), where V is the number of vertices in the graph; the
algorithm runs three nested loops, each of size V.
• Auxiliary Space: O(V²), to create a 2-D matrix that stores the shortest distance for
each pair of nodes.
Real World Applications of Floyd-Warshall Algorithm:
• In computer networking, the algorithm can be used to find the shortest path between
all pairs of nodes in a network. This is termed as network routing.
• Flight Connectivity In the aviation industry to find the shortest path between the
airports.
• GIS(Geographic Information Systems) applications often involve analyzing
spatial data, such as road networks, to find the shortest paths between locations.
• Kleene’s algorithm, which is a generalization of Floyd Warshall, can be used to find
a regular expression for a regular language.

Matrix Chain Multiplication Algorithm:


Matrix Chain Multiplication is an algorithm that is applied to determine the lowest-cost
way of multiplying a chain of matrices. The actual multiplication is done using the
standard method, which requires that the number of columns in the first matrix be equal
to the number of rows in the second matrix. Hence, multiple scalar multiplications must
be performed to compute the product.
To illustrate, consider four matrices A, B, C and D to be multiplied. Since matrix
multiplication is associative, the product can be parenthesized in several ways while
still using the standard matrix multiplication. For instance, there are five ways to
multiply the four matrices given above −
• (A(B(CD)))
• (A((BC)D))
• ((AB)(CD))
• ((A(BC))D)
• (((AB)C)D)

Now, let the sizes of matrices A, B, C and D be l × m, m × n, n × p and p × q
respectively. The number of scalar multiplications performed depends on the order in
which the chain is parenthesized. Suppose the values of l, m, n, p, q are 5, 10, 15, 20, 25
respectively. Then the cost of (A(B(CD))) is (15 × 20 × 25) + (10 × 15 × 25) +
(5 × 10 × 25) = 7,500 + 3,750 + 1,250 = 12,500 scalar multiplications, whereas the cost
of (A((BC)D)) is (10 × 15 × 20) + (10 × 20 × 25) + (5 × 10 × 25) = 3,000 + 5,000 +
1,250 = 9,250.
So, the dynamic programming approach to matrix chain multiplication is adopted in
order to find the parenthesization with the lowest cost.
The matrix chain multiplication algorithm only determines the minimum-cost order of
multiplication. Therefore, the input taken by the algorithm is the sequence of matrices
(their dimensions), while the output achieved is the lowest-cost parenthesization.
Algorithm:
• Count the number of parenthesizations. The number of ways in which a chain of n
matrices can be fully parenthesized, P(n), is given by
P(n) = 1, if n = 1
P(n) = Σ P(k) · P(n − k) for k = 1 to n − 1, if n ≥ 2
• Next, the optimal substructure must be devised as the first step of the dynamic
programming approach so that the final product achieved is optimal. In matrix chain
multiplication, the optimal substructure is found by dividing the sequence of
matrices A[i…j] into two parts A[i…k] and A[k+1…j]. The split point k must be
chosen so that each part is itself parenthesized optimally.
• Find the lowest-cost parenthesization of the sequence of matrices by constructing a
cost table and a corresponding table of k values, using the following formula, where
matrix Ai has dimensions p(i−1) × p(i):
m[i, j] = 0, if i = j
m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k+1, j] + p(i−1) · p(k) · p(j) ), if i < j
• Once the lowest cost is found, print the corresponding parenthesization as the output.
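A minimal Python sketch of this cost recurrence (the list dims holds p(0)…p(n), so matrix Ai has dimensions dims[i−1] × dims[i]; the variable names are mine):

# dims = [p0, p1, ..., pn]; matrix A_i has dimensions dims[i-1] x dims[i].
def matrix_chain_order(dims):
    n = len(dims) - 1                       # number of matrices in the chain
    # m[i][j] = minimum scalar multiplications to compute A_i ... A_j (1-indexed).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

# The four matrices from the example above: 5x10, 10x15, 15x20 and 20x25.
print(matrix_chain_order([5, 10, 15, 20, 25]))   # 4750, the cheapest of the five orders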
Travelling Salesman Problem:
Given a set of cities and the distance between every pair of cities, the problem is to find
the shortest possible route that visits every city exactly once and returns to the starting
point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle
problem is to find if there exists a tour that visits every city exactly once. Here we know
that Hamiltonian Tour exists (because the graph is complete) and in fact, many such
tours exist, the problem is to find a minimum weight Hamiltonian Cycle.

For example, consider a graph on four cities in which one TSP tour is 1-2-4-3-1 with
cost 10 + 25 + 30 + 15 = 80. The problem is a famous NP-hard problem; no
polynomial-time solution is known for it. The following are different solutions for the
traveling salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of the remaining cities.
3) Calculate the cost of every permutation and keep track of the minimum cost
permutation.
4) Return the permutation with minimum cost.
Dynamic Programming Approach:
Let us consider a graph G = (V,E), where V is a set of cities and E is a set of weighted
edges. An edge e(u, v) represents that vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.
Suppose we have started at city 1 and after visiting some cities now we are in city j.
Hence, this is a partial tour. We certainly need to know j, since this will determine which
cities are most convenient to visit next. We also need to know all the cities visited so
far, so that we don't repeat any of them. Hence, this is an appropriate sub-problem.
For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the
length of the shortest path visiting each node in S exactly once, starting at 1 and ending
at j.
When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and
end at j, so we should select the second-to-last city i in such a way that the total length
is minimized:
C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S, i ≠ j }

Algorithm: Traveling-Salesman-Problem
C({1}, 1) = 0
for s = 2 to n do
    for all subsets S ⊆ {1, 2, 3, …, n} of size s containing 1
        C(S, 1) = ∞
        for all j ∈ S and j ≠ 1
            C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }
return min over j ≠ 1 of C({1, 2, 3, …, n}, j) + d(j, 1)

Analysis:
There are at most 2ⁿ·n sub-problems, and each one takes linear time to solve.
Therefore, the total running time is O(2ⁿ·n²).
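A bitmask dynamic-programming sketch in Python of the recurrence above (city indices here are 0-based, with city 0 as the starting city; the names dist, C and mask are mine):

# Held-Karp style TSP: C[mask][j] = shortest path that starts at city 0, visits
# exactly the cities in 'mask' (city 0 always included), and ends at city j.
def tsp(dist):
    n = len(dist)
    INF = float('inf')
    C = [[INF] * n for _ in range(1 << n)]
    C[1][0] = 0                                  # only city 0 visited, cost 0
    for mask in range(1 << n):
        for j in range(n):
            if C[mask][j] == INF or not (mask >> j) & 1:
                continue
            for k in range(n):                   # try extending the path to city k
                if (mask >> k) & 1:
                    continue                     # k already visited
                new_mask = mask | (1 << k)
                cost = C[mask][j] + dist[j][k]
                if cost < C[new_mask][k]:
                    C[new_mask][k] = cost
    full = (1 << n) - 1
    return min(C[full][j] + dist[j][0] for j in range(1, n))

dist = [[0, 10, 15, 20],
        [5, 0, 9, 10],
        [6, 13, 0, 12],
        [8, 8, 9, 0]]
print(tsp(dist))    # 35 for the distance table used in the example below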

Example:
In the following example, we will illustrate the steps to solve the travelling salesman
problem.
For this example, the distances between the cities are given by the following table (the
entry in row i, column j is d(i, j)):
        1    2    3    4
   1    0   10   15   20
   2    5    0    9   10
   3    6   13    0   12
   4    8    8    9    0
Starting from C({1, 2, 3, 4}, 1), we get the minimum value for d[1, 2]. When s = 3,
select the path from 1 to 2 (cost 10), then go backwards. When s = 2, we get the
minimum value for d[4, 2]; select the path from 2 to 4 (cost 10), then go backwards.
When s = 1, we again get the minimum value for d[4, 2], but cities 2 and 4 are already
selected, so we select d[4, 3] instead (the two possible values are 15 for d[2, 3] and for
d[4, 3], but the last node of the path is 4). Select the path from 4 to 3 (cost 9), then go to
the s = φ step, where we get the minimum value for d[3, 1] (cost 6).
The resulting tour is 1 → 2 → 4 → 3 → 1 with total cost 10 + 10 + 9 + 6 = 35.

Longest Common Subsequence (LCS):


The longest common subsequence (LCS) is defined as the longest subsequence that is
common to all the given sequences, where the elements of the subsequence are not
required to occupy consecutive positions within the original sequences.
If S1 and S2 are the two given sequences, then Z is a common subsequence
of S1 and S2 if Z is a subsequence of both S1 and S2. Furthermore, the elements of Z
must be picked at strictly increasing positions (indices) in both S1 and S2; that is, the
indices of the chosen elements must appear in ascending order.
For example, if S1 = {B, C, D, A, A, C, D}, then {A, D, B} cannot be a subsequence
of S1, since the order of the elements is not preserved (the indices are not strictly
increasing).
Let us understand LCS with an example.
If
S1 = {B, C, D, A, A, C, D}
S2 = {A, C, D, B, A, C}
Then, common subsequences are {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A, C},
{C, D}, ...
Among these subsequences, {C, D, A, C} is the longest common subsequence. We are
going to find this longest common subsequence using dynamic programming.
Using Dynamic Programming to find the LCS:
Let us take two sequences, X and Y, of lengths n and m respectively.
The following steps are followed for finding the longest common subsequence.
1. Create a table of dimension (n+1) × (m+1), where n and m are the lengths
of X and Y respectively. The first row and the first column are filled with zeros.

2. Fill each cell of the table using the following logic.


3. If the characters corresponding to the current row and current column match,
then fill the current cell by adding one to the diagonal element. Point an arrow to the
diagonal cell.
4. Else, take the maximum of the values in the previous column and previous row
for filling the current cell. Point an arrow to the cell with the maximum value. If they
are equal, point to either of them.
5. Step 2 is repeated until the table is filled.

6. The value in the last row and the last column is the length of the longest common
subsequence.

7. In order to find the longest common subsequence, start from the last cell and
follow the direction of the arrows. The elements corresponding to the diagonal arrows
form the longest common subsequence.
Thus, the longest common subsequence is CA.
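A bottom-up Python sketch of the table filling and trace-back described above (the variable names are mine):

# dp[i][j] = length of the LCS of X[:i] and Y[:j].
def lcs(X, Y):
    n, m = len(X), len(Y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # first row and column stay zero
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:             # matching characters: diagonal + 1
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                                # otherwise: max of top and left
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back from the bottom-right corner to recover one LCS.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs("BCDAACD", "ACDBAC"))   # "CDAC", the subsequence {C, D, A, C} from the S1/S2 example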

Longest Common Subsequence Applications:


The following are some applications of Longest common subsequence:
1. Version Control Systems (e.g., Git)
2. DNA sequence comparison in bioinformatics
3. Text comparison and plagiarism detection
4. Data differencing and file synchronization
5. Natural Language Processing tasks like spell checkers and similarity analysis

Backtracking Algorithm:
A backtracking algorithm is a problem-solving algorithm that uses a brute force
approach for finding the desired output.
The term backtracking suggests that if the current solution is not suitable, then backtrack
and try other solutions. Thus, recursion is used in this approach.
This approach is used to solve problems that have multiple solutions.

State Space Tree:


A state space tree is a tree representing all the possible states (solution or non-solution)
of the problem, from the root as the initial state to the leaves as terminal states.
Backtracking Algorithm:
Backtrack(x)
    if x is not a solution
        return false
    if x is a new solution
        add to list of solutions
    backtrack(expand x)

Example Backtracking Approach:


Problem: You want to find all the possible ways of arranging 2 boys and 1 girl on 3
benches. Constraint: Girl should not be on the middle bench.
Solution: There are a total of 3! = 6 possible arrangements. We recursively try all of
them and keep only those that satisfy the constraint.
Denoting the two boys as B1 and B2 and the girl as G, the six arrangements are
(B1, B2, G), (B1, G, B2), (B2, B1, G), (B2, G, B1), (G, B1, B2) and (G, B2, B1). The two
arrangements that place the girl on the middle bench are rejected, leaving four valid
solutions; the state space tree of the search enumerates exactly these possibilities (see
the sketch below).
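A minimal backtracking sketch in Python for this bench example (the names people, arrangement and solutions are mine; the search builds every arrangement and backtracks as soon as the constraint is violated):

# Backtracking over bench arrangements: reject any partial arrangement that
# places the girl ("G") on the middle bench (position index 1).
def arrange(people, arrangement, solutions):
    if not people:                       # all three benches filled: a solution
        solutions.append(arrangement[:])
        return
    for p in sorted(people):
        if p == "G" and len(arrangement) == 1:
            continue                     # constraint violated: backtrack
        arrangement.append(p)
        arrange(people - {p}, arrangement, solutions)
        arrangement.pop()                # undo the choice and try the next person

solutions = []
arrange({"B1", "B2", "G"}, [], solutions)
print(solutions)                         # the 4 valid arrangements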


Backtracking Algorithm Applications:
1. To find all Hamiltonian Paths present in a graph.
2. To solve the N Queen problem.
3. Maze solving problem.
4. The Knight's tour problem.
5. Solving puzzles (e.g., Sudoku, crossword puzzles)
6. Scheduling problems
7. Resource allocation problems
8. Network optimization problems

8-Queen Problem:
Problem Statement: Given an 8*8 chess board, you must place 8 queens on the board
so that no two queens attack each other. Print all possible matrices satisfying the
conditions with positions with queens marked with '1' and empty spaces with '0'. You
must solve the 8 queens problem using backtracking.
Note: A queen can move vertically, horizontally and diagonally in any number of steps.

Algorithm:
• Create a function backtrack that simply places the queens on corresponding rows
and columns and marks them visited.
• The working of backtrack is as follows:
o If the current row is equal to 8, then a solution has been found. Therefore, add
this to the answer.
o Traverse through the columns of the current row. At each column, try to place
the queen in (row, col):
▪ Calculate the diagonal and anti-diagonal to which the current square belongs.
If the column, diagonal and anti-diagonal are all unvisited, place the queen at
(row, col).
▪ Otherwise, skip this column, since the queen cannot be placed there.
o If the queen has been placed successfully, call the backtrack function for row + 1.
o Once all paths from this position have been explored, remove the queen and clear
the visited markers, so that the next distinct solution can be found (a code sketch
follows below).
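A compact backtracking sketch in Python of the board-printing procedure described above (the set names cols, diags and anti are my own):

# Place N queens row by row; cols/diags/anti record attacked columns and diagonals.
def solve_n_queens(n=8):
    solutions, queens = [], []                  # queens[row] = column of that row's queen
    cols, diags, anti = set(), set(), set()

    def backtrack(row):
        if row == n:                            # all rows filled: record one board
            board = [['0'] * n for _ in range(n)]
            for r, c in enumerate(queens):
                board[r][c] = '1'
            solutions.append([' '.join(line) for line in board])
            return
        for col in range(n):
            if col in cols or (row - col) in diags or (row + col) in anti:
                continue                        # square is attacked: skip this column
            queens.append(col); cols.add(col); diags.add(row - col); anti.add(row + col)
            backtrack(row + 1)
            queens.pop(); cols.discard(col); diags.discard(row - col); anti.discard(row + col)

    backtrack(0)
    return solutions

boards = solve_n_queens(8)
print(len(boards))            # 92 distinct solutions on the 8x8 board
print('\n'.join(boards[0]))   # one board with queens marked '1' and empty squares '0'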
Example:
One possible solution to the 8 queens problem using backtracking is shown below. In
the first row, the queen is at E8 square, so we have to make sure no queen is in column
E and row 8 and also along its diagonals. Similarly, for the second row, the queen is on
the B7 square, thus, we have to secure its horizontal, vertical, and diagonal squares. The
same pattern is followed for the rest of the queens.

Knapsack Problem:
We discussed the fractional knapsack problem using the greedy approach earlier, and
showed that the greedy approach gives an optimal solution for fractional knapsack.
This section covers the 0-1 Knapsack problem using the dynamic programming
approach, along with its analysis.
Unlike in the fractional knapsack, items must be taken whole; no fractional part of an
item can be used. Either an item is added to the knapsack or it is not. That is why this
method is known as the 0-1 Knapsack problem.
Hence, in the case of 0-1 Knapsack, the value of xi can be either 0 or 1, while the other
constraints remain the same: each item is either placed completely in the knapsack or
left out entirely.
The 0-1 Knapsack problem cannot be solved by the greedy approach, because the
greedy approach does not guarantee an optimal solution here; in some instances it may
happen to produce one, but it cannot be relied upon.
0-1 Knapsack Algorithm:
Problem Statement − A thief is robbing a store and can carry a maximal weight
of W into his knapsack. There are n items and weight of ith item is wi and the profit of
selecting this item is pi. What items should the thief take?
Let i be the highest-numbered item in an optimal solution S for capacity W. Then S’ = S
− {i} is an optimal solution for capacity W − wi, and the value of the solution S is vi plus
the value of that sub-problem.
We can express this fact in the following formula: define c[i, w] to be the value of the
optimal solution for items 1, 2, …, i and maximum weight w. Then
c[i, w] = 0, if i = 0 or w = 0
c[i, w] = c[i − 1, w], if wi > w
c[i, w] = max( vi + c[i − 1, w − wi], c[i − 1, w] ), otherwise
The algorithm takes the following inputs
• The maximum weight W
• The number of items n
• The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>
The set of items to take can be deduced from the table, starting at c[n, W] and tracing
backwards where the optimal values came from.
If c[i, w] = c[i − 1, w], then item i is not part of the solution, and we continue tracing
with c[i − 1, w]. Otherwise, item i is part of the solution, and we continue tracing with
c[i − 1, w − wi].

Example:
Let us consider that the capacity of the knapsack is W = 8 and the items are as shown
in the following table.
Item      A    B    C    D
Profit    2    4    7    10
Weight    1    3    5    7

Solution:
Using a greedy approach for the 0-1 knapsack (taking items in the given order as long
as they still fit completely), the knapsack would hold items A and B with total weight
1 + 3 = 4 and maximum profit 2 + 4 = 6. But that solution is not the optimal solution.
Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.
Step 1:
Construct a table with the knapsack capacities 0 to W as rows and the items, with their
respective weights and profits, as columns.
The values stored in the table are the maximum cumulative profits achievable from the
items considered so far without exceeding the capacity designated for that row.
We add zeroes to the 0th row and 0th column because with no items there is no profit to
collect, and if the maximum weight of the knapsack is 0, then no item can be added into
the knapsack.

The remaining values are filled with the maximum profit achievable with respect to the
items and capacity under consideration.
The formula used to fill in the profit values is −
c[i, w] = c[i − 1, w], if wi > w
c[i, w] = max( c[i − 1, w], pi + c[i − 1, w − wi] ), otherwise
By computing all the values using this formula, the complete table is obtained.
To find the items to be added to the knapsack, locate the maximum profit in the table
and trace back which items make up that profit; in this example they are the items with
weights {1, 7}, i.e., items A and D.
The optimal solution is therefore {A, D} (weights 1 and 7) with maximum profit
2 + 10 = 12.

Analysis:
This algorithm takes Θ(n·W) time, since table c has (n+1)·(W+1) entries and each entry
requires Θ(1) time to compute.
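A bottom-up Python sketch of the table filling and trace-back for this example (the list and variable names are mine):

# c[i][w] = best profit using the first i items with knapsack capacity w.
def knapsack_01(weights, profits, W):
    n = len(weights)
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:                       # item i does not fit
                c[i][w] = c[i - 1][w]
            else:                                        # take it or leave it
                c[i][w] = max(c[i - 1][w],
                              profits[i - 1] + c[i - 1][w - weights[i - 1]])
    # Trace back to find which items were taken.
    taken, w = [], W
    for i in range(n, 0, -1):
        if c[i][w] != c[i - 1][w]:
            taken.append(i - 1)                          # item i is in the solution
            w -= weights[i - 1]
    return c[n][W], sorted(taken)

# Items A, B, C, D from the example above (capacity W = 8).
best, items = knapsack_01([1, 3, 5, 7], [2, 4, 7, 10], 8)
print(best, ["ABCD"[i] for i in items])                  # 12 ['A', 'D']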

Real-World Applications of the 0/1 Knapsack Problem:


1. Budget Allocation: Optimize project selection under a fixed budget to maximize
returns.
2. Marketing Campaigns: Select marketing campaigns that yield the highest ROI
within a limited budget.
3. Cargo Loading: Maximize the value of cargo loaded into a truck with a fixed weight
capacity.
4. Investment Portfolio: Pick stocks or assets to maximize returns without exceeding
the investment limit.
5. Event Scheduling: Choose events to attend that maximize benefits without
exceeding time constraints.
