DP Combined Report: Dynamic Programming in Optimal Control
1. Introduction
Dynamic Programming (DP) is a method for solving optimization problems by breaking them into a sequence of simpler subproblems whose solutions are combined recursively.
2. Principle of Optimality
An optimal policy has the property that, regardless of the initial state and initial decision, the remaining decisions must constitute an optimal policy with respect to the state that results from the first decision.
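The principle can be checked directly on a small shortest-path problem: backward induction over a toy DAG (the node names and edge costs below are made up for illustration) yields an optimal path from A whose tail is itself the optimal path from its second node.

```python
# Backward induction (cost-to-go) on a toy DAG, illustrating the principle
# of optimality: the tail of an optimal path is itself optimal.
edges = {                    # node -> list of (next_node, edge_cost)
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
    "D": [],                 # terminal node
}
order = ["D", "C", "B", "A"]  # reverse topological order

cost_to_go, best_next = {"D": 0}, {}
for node in order[1:]:
    nxt, c = min(edges[node], key=lambda e: e[1] + cost_to_go[e[0]])
    best_next[node] = nxt
    cost_to_go[node] = c + cost_to_go[nxt]

def optimal_path(start):
    path = [start]
    while path[-1] in best_next:
        path.append(best_next[path[-1]])
    return path
```

Here optimal_path("A") is A-B-C-D with cost 3, and dropping its first node gives exactly optimal_path("B"), as the principle requires.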
3. Mathematical Formulation
For a discrete-time system x_{k+1} = f(x_k, u_k) with cost J = sum_{k=0}^{N-1} L(x_k, u_k), the Bellman equation is:
V_k(x) = min_u [ L(x, u) + V_{k+1}(f(x, u)) ],  with terminal condition V_N(x) = 0.
4. Example Problem
Minimize J = sum_{k=0}^{N-1} (x_k^2 + u_k^2) subject to x_{k+1} = x_k + u_k, with x_0 = 1. Backward recursion is used to compute the cost-to-go V_k(x) and the optimal control sequence.
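A minimal sketch of that backward recursion, assuming a horizon length N (chosen arbitrarily here) and using the standard quadratic ansatz V_k(x) = p_k x^2 for this linear-quadratic problem:

```python
# Backward recursion for J = sum(x_k^2 + u_k^2), x_{k+1} = x_k + u_k, x_0 = 1.
# With V_k(x) = p_k * x^2, minimizing over u gives
#   u_k = -p_{k+1} x / (1 + p_{k+1})   and   p_k = 1 + p_{k+1} / (1 + p_{k+1}).
N = 20                        # horizon length (an assumption for illustration)
p = [0.0] * (N + 1)           # p[N] = 0: no terminal cost
for k in range(N - 1, -1, -1):
    p[k] = 1.0 + p[k + 1] / (1.0 + p[k + 1])

# Forward simulation from x_0 = 1 using the resulting optimal feedback law.
x, J = 1.0, 0.0
for k in range(N):
    u = -p[k + 1] * x / (1.0 + p[k + 1])
    J += x * x + u * u
    x += u

# The accumulated cost J equals the predicted optimal cost V_0(1) = p[0].
```

As N grows, p[0] approaches the fixed point of p = 1 + p/(1 + p), i.e. the golden ratio (1 + sqrt(5))/2, which is the infinite-horizon cost for x_0 = 1.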
5. Simulation
Implement a discrete-time simulation in which, at each step, the applied control is the one that minimizes the cost-to-go (the stage cost plus the value of the resulting next state).
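One way to realize this, sketched below under assumed state and control grids (the grid ranges and resolutions are arbitrary choices, not from the report), is a tabular backward DP pass followed by a forward simulation that applies the greedy control at each step:

```python
# Grid-based DP for J = sum(x^2 + u^2), x_{k+1} = x + u (grids are assumptions).
N = 10
xs = [i * 0.1 for i in range(-30, 31)]   # state grid on [-3, 3]
us = [i * 0.1 for i in range(-10, 11)]   # control grid on [-1, 1]

def nearest(x):
    # Clamp to the grid range and return the index of the nearest grid point.
    x = max(xs[0], min(xs[-1], x))
    return min(range(len(xs)), key=lambda i: abs(xs[i] - x))

V = [0.0] * len(xs)                      # terminal cost V_N = 0
policy = []                              # policy[k][i] = best u at state xs[i]
for k in range(N - 1, -1, -1):
    Vk, Pk = [], []
    for x in xs:
        best = min(us, key=lambda u: x * x + u * u + V[nearest(x + u)])
        Pk.append(best)
        Vk.append(x * x + best * best + V[nearest(x + best)])
    V, policy = Vk, [Pk] + policy

# Forward simulation from x_0 = 1: each step applies the cost-to-go minimizer.
x, J = 1.0, 0.0
for k in range(N):
    u = policy[k][nearest(x)]
    J += x * x + u * u
    x += u
```

With this discretization the accumulated cost comes out close to the analytic optimum of about 1.62 for x_0 = 1, with the gap due to snapping next states to the nearest grid point.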
6. Conclusion
DP is powerful for systems with discrete dynamics and uncertainty, but it suffers from the curse of dimensionality: the cost of tabulating the value function grows exponentially with the dimension of the state.
7. Simulation Results
Below is a plot showing the simulation results for the discussed optimal control method.
[Figure: Simulation plot]