Optimal Control

Optimal Control Framework

Given: A controlled dynamical system $x^{n+1} = f(x^n, u^n)$

A cost function: $V = \phi(x^N, \alpha) + \sum_{i=0}^{N-1} L(x^i, u^i, \alpha)$

Goal: Find the sequence of commands $u^0, \dots, u^{N-1}$ that minimizes (or maximizes) the cost function.
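The framework above translates directly into code. A minimal sketch in plain Python (the helper name `total_cost` is my own; the parameter vector $\alpha$ is folded into `L` and `phi`):

```python
def total_cost(f, L, phi, x0, us):
    """Evaluate V = phi(x^N) + sum_{i=0}^{N-1} L(x^i, u^i)
    by rolling x^{n+1} = f(x^n, u^n) forward along the command sequence us."""
    x, V = x0, 0.0
    for u in us:
        V += L(x, u)   # accumulate the stage cost
        x = f(x, u)    # step the dynamics
    return V + phi(x)  # add the terminal cost
```

Finding the optimal commands then amounts to minimizing `total_cost` over all sequences `us` of length $N$.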

Bellman’s Principle of Optimality

The cost function can be optimized stage by stage using dynamic programming. The optimal cost-to-go satisfies the recursion:

$$ V^*_i(x^i) = \min_{u^i \in U(x^i)} \left\{ L(x^i, u^i, \alpha) + V^*_{i+1}(x^{i+1}) \right\} $$
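For discrete state and control sets this recursion can be evaluated exactly by a backward sweep over the horizon. A minimal tabular sketch (the function name `backward_dp` and the grid example are my own; $\alpha$ is again folded into `L` and `phi`):

```python
def backward_dp(f, L, phi, states, controls, N):
    """Tabular dynamic programming:
    V*_N(x) = phi(x);  V*_i(x) = min_u { L(x, u) + V*_{i+1}(f(x, u)) }.
    Returns V*_0 and the optimal policy for each stage."""
    V = {x: phi(x) for x in states}   # terminal cost-to-go V*_N
    policy = []
    for _ in range(N):                # sweep backward from stage N-1 to 0
        Vi, pi = {}, {}
        for x in states:
            # keep only controls whose successor stays on the grid
            feasible = [u for u in controls if f(x, u) in V]
            u_star = min(feasible, key=lambda u: L(x, u) + V[f(x, u)])
            pi[x] = u_star
            Vi[x] = L(x, u_star) + V[f(x, u_star)]
        V = Vi
        policy.insert(0, pi)
    return V, policy
```

For example, a discrete integrator $x^{n+1} = x^n + u^n$ on an integer grid with cost $\sum (x^i)^2 + (u^i)^2$ yields a policy that steers every state toward the origin.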

Linear quadratic regulator

Special Assumption: Linear System Dynamics $$ x^{n+1} = Ax^n + Bu^n $$

Quadratic cost function $$ L(x^i, u^i, \alpha) = {x^i}^T Q x^i + {u^i}^T R u^i $$

Goal:

- Bring the system to a setpoint and keep it there
- Note: this can also be done with a nonlinear system by a local linearization

$$ \begin{aligned} V^*_i(x^i) &= \min_{u^i \in U(x^i)} \left\{ L(x^i, u^i, \alpha) + V^*_{i+1}(x^{i+1}) \right\} \\
&= \min_{u^i \in U(x^i)} \left\{ {x^i}^T Q x^i + {u^i}^T R u^i + V^*_{i+1}(x^{i+1}) \right\} \\
&= \min_{u^i \in U(x^i)} \left\{ {x^i}^T Q x^i + {u^i}^T R u^i + V^*_{i+1}(A x^i + B u^i) \right\}
\end{aligned} $$

The minimizing control can be expressed as a linear control law:

$$ {u^i}^* = -K^i x^i $$

Rewrite the optimal cost at stage i as a quadratic form:

$$ V^*_i = {x^i}^T P^i x^i $$

Thus,

$$ V^*_i(x^i) = \min_{u^i \in U(x^i)} \left\{ {x^i}^T Q x^i + {u^i}^T R u^i + (A x^i + B u^i)^T P^{i+1} (A x^i + B u^i) \right\} $$
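Setting the gradient of this quadratic with respect to $u^i$ to zero gives the gains $K^i$ and a backward (Riccati) recursion for $P^i$. A minimal sketch for a scalar state and control in plain Python (the names `lqr_finite_horizon` and `simulate` are my own; the stage cost here is $Q x^2 + R u^2$):

```python
def lqr_finite_horizon(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the scalar system x^{n+1} = A x^n + B u^n.
    Returns the gains K[0..N-1] and cost-to-go weights P[0..N]
    for the cost sum_i (Q x_i^2 + R u_i^2) + Qf x_N^2."""
    P = [0.0] * (N + 1)
    K = [0.0] * N
    P[N] = Qf                        # terminal cost: V*_N(x) = Qf x^2
    for i in range(N - 1, -1, -1):   # sweep backward from the horizon
        S = R + B * P[i + 1] * B     # scalar version of (R + B^T P B)
        K[i] = B * P[i + 1] * A / S
        P[i] = Q + A * P[i + 1] * A - (A * P[i + 1] * B) ** 2 / S
    return K, P

def simulate(A, B, K, x0):
    """Roll out the closed loop u^i = -K^i x^i from x0."""
    x, traj = x0, [x0]
    for Ki in K:
        x = A * x + B * (-Ki * x)
        traj.append(x)
    return traj
```

With $A = B = Q = R = 1$ the recursion converges to $P \to (1+\sqrt{5})/2$ (the golden ratio), and the closed loop drives the state to the origin.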

Finite horizon approximation

To be continued…

Model Predictive Control

To be continued…

Fast MPC

To be continued…