Optimal Control

Categories: Robotics. Tags: Control Theory, Optimal Control.

Optimal Control Framework

Given: a controlled dynamical system $x^{n+1} = f(x^n, u^n)$

A cost function: $V = \phi(x^N, \alpha) + \sum^{N-1}_{i=0} L(x^i, u^i, \alpha)$

Goal: find the sequence of commands $u^0, \dots, u^{N-1}$ that minimizes (or maximizes) the cost function
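The setup above can be sketched in a few lines of code: roll the dynamics forward under a command sequence and accumulate the cost. The particular scalar dynamics, stage cost, and terminal cost below are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of the optimal-control setup (f, L, phi are made-up examples).

def f(x, u):
    # example controlled dynamics: a damped scalar system driven by input u
    return 0.9 * x + u

def L(x, u):
    # example stage cost: penalize state deviation and control effort
    return x * x + 0.1 * u * u

def phi(xN):
    # example terminal cost
    return 10.0 * xN * xN

def rollout_cost(x0, controls):
    """Apply the command sequence u^0..u^{N-1} and accumulate
    V = phi(x^N) + sum_{i=0}^{N-1} L(x^i, u^i)."""
    x, V = x0, 0.0
    for u in controls:
        V += L(x, u)
        x = f(x, u)
    return V + phi(x)

print(rollout_cost(1.0, [0.0, 0.0, 0.0]))  # cost of applying no control
```

An optimal controller would search over `controls` to drive this returned value down; the sections below show how dynamic programming makes that search tractable.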

Bellman’s Principle of Optimality

Optimize it using dynamic programming: the optimal cost-to-go satisfies the backward recursion $V^i(x) = \min_u \left[ L(x, u, \alpha) + V^{i+1}(f(x, u)) \right]$, with $V^N(x) = \phi(x, \alpha)$.
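For a finite state and action set, Bellman's recursion can be run directly. The tiny problem below (states, actions, costs) is invented purely for illustration: starting from the terminal stage, each pass computes the optimal cost-to-go and the minimizing action at every state.

```python
# Backward dynamic programming on a made-up 3-state, 3-action problem.
# V^i(x) = min_u [ L(x, u) + V^{i+1}(f(x, u)) ]

STATES = [0, 1, 2]
ACTIONS = [-1, 0, 1]
N = 3  # horizon

def f(x, u):
    # deterministic transition, clipped to the state set
    return max(0, min(2, x + u))

def L(x, u):
    # stage cost: distance from the setpoint state 1, plus control effort
    return abs(x - 1) + 0.5 * abs(u)

def solve():
    V = {x: 0.0 for x in STATES}   # terminal cost phi taken as 0 here
    policy = []                    # policy[i][x] = best command at stage i
    for i in reversed(range(N)):
        Vn, pi = {}, {}
        for x in STATES:
            best_u, best_c = min(
                ((u, L(x, u) + V[f(x, u)]) for u in ACTIONS),
                key=lambda t: t[1])
            Vn[x], pi[x] = best_c, best_u
        V = Vn
        policy.insert(0, pi)
    return V, policy

V0, policy = solve()
print(V0, policy[0])
```

Note the key property of the principle of optimality: each stage's decision only needs the cost-to-go of the next stage, so the full command sequence is recovered from $N$ local minimizations instead of one search over all $|U|^N$ sequences.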

Special Assumption: Linear System Dynamics $x^{n+1} = Ax^n + Bu^n$

Quadratic cost function $L(x^i, u^i, \alpha) = {x^i}^T Q x^i + {u^i}^T R u^i$

Goal:
- Bring the system to a setpoint and keep it there
- Note: this can also be done with a nonlinear system via a local linearization

• The optimal solution is a linear control law: $u^i = -K^i x^i$

Rewrite the optimal cost at stage $i$ as a quadratic form: $V^i(x) = x^T P^i x$

Thus, the Bellman recursion reduces to the discrete-time Riccati equation

$P^i = Q + A^T P^{i+1} A - A^T P^{i+1} B \left(R + B^T P^{i+1} B\right)^{-1} B^T P^{i+1} A$

with gain $K^i = \left(R + B^T P^{i+1} B\right)^{-1} B^T P^{i+1} A$.

Finite horizon approximation
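Over a finite horizon, the standard LQR result is obtained by running the Riccati recursion backwards from the terminal stage. The sketch below does this for the scalar case $x^{n+1} = a x^n + b u^n$ with cost $\sum q x^2 + r u^2$; the numerical values of $a, b, q, r$ and the horizon are illustrative assumptions.

```python
# Finite-horizon scalar LQR: backward Riccati recursion, then closed-loop
# simulation with u^i = -K^i x^i. Parameter values are illustrative.

a, b, q, r = 1.0, 1.0, 1.0, 1.0
N = 20  # horizon length

def lqr_gains(a, b, q, r, N, pN=0.0):
    """Backward pass: with P' = P^{i+1},
    K^i = (r + b P' b)^{-1} b P' a
    P^i = q + a P' a - (a P' b)^2 / (r + b P' b)."""
    P = pN  # terminal cost weight (phi = pN * x^2)
    gains = []
    for _ in range(N):
        K = (b * P * a) / (r + b * P * b)
        P = q + a * P * a - (a * P * b) ** 2 / (r + b * P * b)
        gains.append(K)
    gains.reverse()  # gains[i] is K^i for stage i
    return gains

def simulate(x0, gains):
    # closed-loop rollout toward the setpoint x = 0
    x = x0
    for K in gains:
        x = a * x + b * (-K * x)
    return x

gains = lqr_gains(a, b, q, r, N)
xN = simulate(1.0, gains)
print(gains[0], xN)
```

As the horizon grows, the early-stage gains converge to a constant value, which is why a fixed steady-state gain is often used in practice as an approximation of the time-varying finite-horizon solution.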

To be continued…

Model Predictive Control

To be continued…

Fast MPC

To be continued…