DP Combined Report

Dynamic Programming (DP) is a technique for solving optimization problems by dividing them into simpler subproblems. The document discusses Bellman's Principle of Optimality, the mathematical formulation of the Bellman equation, and provides an example problem with a simulation to visualize the results. While DP is effective for discrete systems, it faces challenges due to the curse of dimensionality.

Dynamic Programming in Optimal Control

1. Introduction

Dynamic Programming (DP) is a method to solve optimization problems by breaking them into

simpler subproblems and solving recursively.

2. Bellman's Principle of Optimality

An optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions constitute an optimal policy with respect to the state resulting from the first decision.

3. Mathematical Formulation

For a discrete-time system x_{k+1} = f(x_k, u_k), with cost J = sum_k L(x_k, u_k), the Bellman equation is:

V_k(x) = min_u [ L(x, u) + V_{k+1}(f(x, u)) ]
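A minimal tabular sketch of this backward recursion, assuming a small finite state set, control set, and horizon (the state space, control space, dynamics, and cost below are illustrative choices, not taken from the report):

```python
# Tabular backward recursion for the Bellman equation above.
X = [0, 1, 2]                      # example finite state space
U = [-1, 0, 1]                     # example finite control space
N = 5                              # horizon length

def f(x, u):                       # example dynamics: x + u clamped to X
    return min(max(x + u, 0), 2)

def L(x, u):                       # example stage cost
    return x ** 2 + u ** 2

V = {x: 0.0 for x in X}            # terminal value V_N = 0
for k in range(N - 1, -1, -1):     # sweep backward from k = N-1 to 0
    V = {x: min(L(x, u) + V[f(x, u)] for u in U) for x in X}

print(V)                           # → {0: 0.0, 1: 2.0, 2: 7.0}
```

Each backward sweep replaces V_{k+1} with V_k by taking the minimum over controls, exactly as in the equation above.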

4. Example Problem

Minimize J = sum (x_k^2 + u_k^2) subject to x_{k+1} = x_k + u_k, with initial state x_0 = 1. Backward recursion is used to compute the

value function and control inputs.
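The backward recursion for this example can be sketched on a discretized state/control grid (the grid bounds, resolution, horizon, and zero terminal cost are illustrative assumptions):

```python
import numpy as np

# Grid-based backward recursion for: J = sum(x_k^2 + u_k^2),
# x_{k+1} = x_k + u_k, x_0 = 1.
N = 10
xs = np.linspace(-2.0, 2.0, 201)       # discretized state space
us = np.linspace(-2.0, 2.0, 201)       # discretized control space

V = np.zeros(xs.size)                  # terminal value V_N = 0
policy = np.zeros((N, xs.size))        # minimizing control at each (k, x)
for k in range(N - 1, -1, -1):
    V_next = V.copy()                  # freeze V_{k+1} while building V_k
    for i, x in enumerate(xs):
        x_next = x + us                # candidate next states for all u
        cost = x**2 + us**2 + np.interp(x_next, xs, V_next)
        j = np.argmin(cost)            # Bellman minimization over u
        V[i] = cost[j]
        policy[k, i] = us[j]

i0 = np.argmin(np.abs(xs - 1.0))       # grid index of x_0 = 1
print(V[i0])                           # should be close to (1+sqrt(5))/2
```

For this particular problem the value function is quadratic, V_k(x) = p_k x^2, and p_k converges to the golden ratio, which gives a useful sanity check on the grid solution.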

5. Simulation

Implement a discrete-time simulation where at each step the control minimizes the cost-to-go

function. Visualize the state trajectory and value function evolution.
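One way to sketch such a simulation: for this quadratic example the Bellman minimization has a closed form (assuming V_k(x) = p_k x^2 and zero terminal cost, a standard completion-of-squares derivation), so the backward pass reduces to a scalar recursion and the forward pass applies the resulting feedback law:

```python
import numpy as np

N = 20
# Backward pass: with V_k(x) = p_k * x**2, minimizing
# x**2 + u**2 + p_{k+1} * (x + u)**2 over u gives
# u_k = -p_{k+1} / (1 + p_{k+1}) * x  and  p_k = 1 + p_{k+1} / (1 + p_{k+1}).
p = np.zeros(N + 1)                    # terminal cost V_N = 0
for k in range(N - 1, -1, -1):
    p[k] = 1.0 + p[k + 1] / (1.0 + p[k + 1])

# Forward pass from x_0 = 1: apply the minimizing control at each step.
x = 1.0
traj, ctrls = [x], []
for k in range(N):
    u = -p[k + 1] / (1.0 + p[k + 1]) * x
    x = x + u                          # dynamics x_{k+1} = x_k + u_k
    traj.append(x)
    ctrls.append(u)

print(p[0])                            # converges to (1+sqrt(5))/2 ≈ 1.618
```

The arrays traj and ctrls hold the state trajectory and control inputs, and p holds the value-function coefficients, ready to plot over k.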

6. Conclusion

DP is powerful for systems with discrete dynamics and uncertainty, but it suffers from the curse of

dimensionality: the cost of tabulating the value function grows exponentially with the dimension of the state.
7. Simulation Results

Below is a plot showing the simulation results for the discussed optimal control method.

[Figure: Simulation Plot]
