Optimal Control

    Optimal Control Framework

    Given: a controlled dynamical system: $x^{n+1} = f(x^n, u^n)$

    A cost function: $V = \phi(x^N, \alpha) + \sum^{N-1}_{i=0} L(x^i, u^i, \alpha)$

    Goal: find the sequence of commands $u^0, \dots, u^{N-1}$ that minimizes (or maximizes) the cost function.

    Bellman’s Principle of Optimality

    Optimize it using dynamic programming: the tail of an optimal trajectory is itself optimal, so the optimal cost-to-go satisfies the backward recursion $V_N(x) = \phi(x, \alpha)$ and $V_i(x) = \min_{u}\left[L(x, u, \alpha) + V_{i+1}(f(x, u))\right]$.
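    The backward recursion can be sketched on a tiny discrete problem. This is a minimal illustration, not from the text: the grid, dynamics, and costs below are hypothetical choices made so the recursion is easy to follow.

    ```python
    # Bellman backward recursion on a small, fully discrete problem.
    # Dynamics, costs, and horizon are illustrative assumptions.
    N = 5                                  # horizon length
    states = range(-3, 4)                  # x in {-3, ..., 3}
    actions = (-1, 0, 1)                   # admissible commands

    def f(x, u):                           # dynamics x^{n+1} = f(x^n, u^n)
        return max(-3, min(3, x + u))      # clamp to the state grid

    def L(x, u):                           # stage cost
        return x**2 + u**2

    def phi(x):                            # terminal cost
        return x**2

    # Backward pass: V_N = phi, then V_i(x) = min_u [L(x,u) + V_{i+1}(f(x,u))]
    V = {x: phi(x) for x in states}
    policy = []                            # optimal command table per stage
    for i in reversed(range(N)):
        Vi, Ki = {}, {}
        for x in states:
            u_best = min(actions, key=lambda u: L(x, u) + V[f(x, u)])
            Ki[x] = u_best
            Vi[x] = L(x, u_best) + V[f(x, u_best)]
        V, policy = Vi, [Ki] + policy

    # Roll the optimal policy forward from x = 3; the accumulated cost
    # matches the cost-to-go V_0(3) computed by the backward pass.
    x, total = 3, 0
    for i in range(N):
        u = policy[i][x]
        total += L(x, u)
        x = f(x, u)
    total += phi(x)
    print(total, V[3])
    ```

    The forward rollout reproducing the backward-pass value is exactly Bellman's principle at work: each tail of the optimal trajectory is optimal for its own subproblem.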

    Linear quadratic regulator

    Special assumption: linear system dynamics: $x^{n+1} = A x^n + B u^n$

    Quadratic cost function: $V = (x^N)^\top Q_f\, x^N + \sum^{N-1}_{i=0}\left[(x^i)^\top Q\, x^i + (u^i)^\top R\, u^i\right]$

    Goal:

    • Bring the system to a setpoint and keep it there
    • Note: this can also be done with a nonlinear system via a local linearization

    • The optimal command takes the form of a linear control law: $u^i = -K^i x^i$

    Rewrite the optimal cost-to-go at stage $i$ as a quadratic form: $V_i(x) = x^\top P^i x$

    Thus, substituting into the Bellman recursion yields the gain and the discrete Riccati recursion, run backward from $P^N = Q_f$: $K^i = (R + B^\top P^{i+1} B)^{-1} B^\top P^{i+1} A$ and $P^i = Q + A^\top P^{i+1}(A - B K^i)$.
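    The backward Riccati recursion and the resulting linear control law can be sketched as follows. This is a minimal sketch: the matrices $A$, $B$, $Q$, $R$, the horizon, and the initial state are illustrative assumptions (a double-integrator-like system), not values from the text.

    ```python
    import numpy as np

    # Illustrative LQR problem data (assumed, not from the text)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)                             # state cost weight
    R = np.array([[0.01]])                    # command cost weight
    N = 50                                    # horizon length

    # Backward pass: P^N = Q_f (here taken equal to Q), then
    # K^i = (R + B' P^{i+1} B)^{-1} B' P^{i+1} A
    # P^i = Q + A' P^{i+1} (A - B K^i)
    P = Q.copy()
    gains = []
    for i in reversed(range(N)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()

    # Forward simulation with the linear control law u^i = -K^i x^i
    x = np.array([[1.0], [0.0]])
    for i in range(N):
        u = -gains[i] @ x
        x = A @ x + B @ u
    print(np.linalg.norm(x))   # state is driven toward the origin
    ```

    Note that the quadratic structure is what makes this tractable: the minimization over $u$ at each stage is solved in closed form by one linear solve, with no state discretization.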

    Finite horizon approximation

    To be continued…

    Model Predictive Control

    To be continued…

    Fast MPC

    To be continued…