Differential dynamic programming

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]

Finite-horizon discrete-time problems

The dynamics

$$\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i) \qquad (1)$$

describe the evolution of the state $\mathbf{x}$ given the control $\mathbf{u}$ from time $i$ to time $i+1$. The total cost $J_0$ is the sum of running costs $\ell$ and final cost $\ell_f$, incurred when starting from state $\mathbf{x}$ and applying the control sequence $\mathbf{U} \equiv \{\mathbf{u}_0, \mathbf{u}_1, \dots, \mathbf{u}_{N-1}\}$ until the horizon $N$ is reached:

$$J_0(\mathbf{x}, \mathbf{U}) = \sum_{i=0}^{N-1} \ell(\mathbf{x}_i, \mathbf{u}_i) + \ell_f(\mathbf{x}_N),$$

where $\mathbf{x}_0 \equiv \mathbf{x}$, and the $\mathbf{x}_i$ for $i > 0$ are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence $\mathbf{U}^*(\mathbf{x}) \equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x}, \mathbf{U})$. Trajectory optimization means finding $\mathbf{U}^*(\mathbf{x})$ for a particular $\mathbf{x}_0$, rather than for all possible initial states.
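As a concrete illustration, the following minimal Python sketch evaluates $J_0$ by rolling out the dynamics; the callables `f`, `running_cost`, and `final_cost` are hypothetical placeholders standing in for $\mathbf{f}$, $\ell$, and $\ell_f$ of a specific problem.

```python
def total_cost(x0, U, f, running_cost, final_cost):
    """Evaluate J_0(x0, U): roll out x_{i+1} = f(x_i, u_i) (Eq. 1)
    and accumulate the running costs plus the final cost."""
    x, J = x0, 0.0
    for u in U:                      # i = 0, ..., N-1
        J += running_cost(x, u)      # l(x_i, u_i)
        x = f(x, u)                  # advance the state via Eq. 1
    return J + final_cost(x)         # add l_f(x_N)
```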

Dynamic programming

Let $\mathbf{U}_i \equiv \{\mathbf{u}_i, \mathbf{u}_{i+1}, \dots, \mathbf{u}_{N-1}\}$ be the partial control sequence and define the cost-to-go $J_i$ as the partial sum of costs from $i$ to $N$:

$$J_i(\mathbf{x}, \mathbf{U}_i) = \sum_{j=i}^{N-1} \ell(\mathbf{x}_j, \mathbf{u}_j) + \ell_f(\mathbf{x}_N).$$

The optimal cost-to-go or value function at time $i$ is the cost-to-go given the minimizing control sequence:

$$V(\mathbf{x}, i) \equiv \min_{\mathbf{U}_i} J_i(\mathbf{x}, \mathbf{U}_i).$$
Setting $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$, the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

$$V(\mathbf{x}, i) = \min_{\mathbf{u}} \bigl[\, \ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1) \,\bigr] \qquad (2)$$

This is the Bellman equation.
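When the state and control spaces are small and discrete, Eq. 2 can be applied literally. The sketch below, using a hypothetical tabular representation of the dynamics and costs, computes the value function by this backward recursion; DDP, by contrast, avoids enumerating states and instead propagates local quadratic models along a trajectory.

```python
import numpy as np

def bellman_backward(f_idx, cost, final_cost, N):
    """Tabular Bellman recursion (Eq. 2).
    f_idx[s, a]  : index of the successor state f(s, a)
    cost[s, a]   : running cost l(s, a)
    final_cost[s]: terminal cost l_f(s)
    Returns value tables V[i] and greedy controls pi[i]."""
    n_s, n_a = cost.shape
    V = np.empty((N + 1, n_s))
    pi = np.empty((N, n_s), dtype=int)
    V[N] = final_cost                     # V(x, N) = l_f(x)
    for i in range(N - 1, -1, -1):        # proceed backwards in time
        Q = cost + V[i + 1][f_idx]        # l(x, u) + V(f(x, u), i+1)
        pi[i] = Q.argmin(axis=1)          # minimizing control
        V[i] = Q.min(axis=1)              # optimal cost-to-go
    return V, pi
```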

Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

$$\ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1)$$

is the argument of the $\min[\cdot]$ operator in Eq. 2, let $Q$ be the variation of this quantity around the $i$-th $(\mathbf{x}, \mathbf{u})$ pair:

$$Q(\delta\mathbf{x}, \delta\mathbf{u}) \equiv \ell(\mathbf{x} + \delta\mathbf{x}, \mathbf{u} + \delta\mathbf{u}) + V\bigl(\mathbf{f}(\mathbf{x} + \delta\mathbf{x}, \mathbf{u} + \delta\mathbf{u}), i+1\bigr) - \ell(\mathbf{x}, \mathbf{u}) - V\bigl(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1\bigr)$$

and expand to second order:

$$\approx \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}^{\mathsf{T}} \begin{bmatrix} 0 & Q_{\mathbf{x}}^{\mathsf{T}} & Q_{\mathbf{u}}^{\mathsf{T}} \\ Q_{\mathbf{x}} & Q_{\mathbf{xx}} & Q_{\mathbf{xu}} \\ Q_{\mathbf{u}} & Q_{\mathbf{ux}} & Q_{\mathbf{uu}} \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix} \qquad (3)$$

The $Q$ notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index $i$ for readability and using primes to denote the next time-step, $V' \equiv V(i+1)$, the expansion coefficients are

$$
\begin{aligned}
Q_{\mathbf{x}} &= \ell_{\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{x}} \\
Q_{\mathbf{u}} &= \ell_{\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{x}} \\
Q_{\mathbf{xx}} &= \ell_{\mathbf{xx}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{xx}} \\
Q_{\mathbf{ux}} &= \ell_{\mathbf{ux}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{ux}} \\
Q_{\mathbf{uu}} &= \ell_{\mathbf{uu}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{u}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{uu}}.
\end{aligned}
$$

The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to $\delta\mathbf{u}$, we have

$$\delta\mathbf{u}^{*} = \operatorname{argmin}_{\delta\mathbf{u}} Q(\delta\mathbf{x}, \delta\mathbf{u}) = -Q_{\mathbf{uu}}^{-1} (Q_{\mathbf{u}} + Q_{\mathbf{ux}}\, \delta\mathbf{x}), \qquad (4)$$

giving an open-loop term $\mathbf{k} = -Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}}$ and a feedback gain term $\mathbf{K} = -Q_{\mathbf{uu}}^{-1} Q_{\mathbf{ux}}$. Plugging the result back into (3), we now have a quadratic model of the value at time $i$:

$$
\begin{aligned}
\Delta V(i) &= -\tfrac{1}{2} Q_{\mathbf{u}}^{\mathsf{T}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}} \\
V_{\mathbf{x}}(i) &= Q_{\mathbf{x}} - Q_{\mathbf{xu}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}} \\
V_{\mathbf{xx}}(i) &= Q_{\mathbf{xx}} - Q_{\mathbf{xu}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{ux}}.
\end{aligned}
$$

Recursively computing the local quadratic models of $V(i)$ and the control modifications $\{\mathbf{k}(i), \mathbf{K}(i)\}$, from $i = N-1$ down to $i = 0$, constitutes the backward pass. As above, the value is initialized with $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$. Once the backward pass is completed, a forward pass computes a new trajectory:

$$
\begin{aligned}
\hat{\mathbf{x}}(0) &= \mathbf{x}(0) \\
\hat{\mathbf{u}}(i) &= \mathbf{u}(i) + \mathbf{k}(i) + \mathbf{K}(i)\bigl(\hat{\mathbf{x}}(i) - \mathbf{x}(i)\bigr) \\
\hat{\mathbf{x}}(i+1) &= \mathbf{f}\bigl(\hat{\mathbf{x}}(i), \hat{\mathbf{u}}(i)\bigr)
\end{aligned}
$$
The backward passes and forward passes are iterated until convergence.
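The following sketch shows one backward/forward iteration under stated simplifying assumptions: the caller supplies the derivatives of $\mathbf{f}$ and $\ell$ along the nominal trajectory, the second-order dynamics terms $V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{xx}}$, $V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{ux}}$, and $V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{uu}}$ are dropped (the Gauss-Newton-like variant often called iLQR), and the regularization discussed in the next section is omitted. The function names and interface are illustrative, not a reference implementation.

```python
import numpy as np

def backward_pass(fx, fu, lx, lu, lxx, lux, luu, Vx, Vxx):
    """Backward pass. Arguments are per-time-step lists of derivative
    arrays along the nominal trajectory; Vx, Vxx are the gradient and
    Hessian of l_f at x_N. Returns open-loop terms k and gains K."""
    N = len(fx)
    k, K = [None] * N, [None] * N
    for i in reversed(range(N)):
        # Expansion coefficients of Q (second-order dynamics terms dropped)
        Qx  = lx[i]  + fx[i].T @ Vx
        Qu  = lu[i]  + fu[i].T @ Vx
        Qxx = lxx[i] + fx[i].T @ Vxx @ fx[i]
        Qux = lux[i] + fu[i].T @ Vxx @ fx[i]
        Quu = luu[i] + fu[i].T @ Vxx @ fu[i]
        # Eq. 4: k = -Quu^{-1} Qu,  K = -Quu^{-1} Qux
        k[i] = -np.linalg.solve(Quu, Qu)
        K[i] = -np.linalg.solve(Quu, Qux)
        # Quadratic model of the value at time i
        Vx  = Qx  + K[i].T @ Qu    # = Qx  - Qxu Quu^{-1} Qu
        Vxx = Qxx + K[i].T @ Qux   # = Qxx - Qxu Quu^{-1} Qux
    return k, K

def forward_pass(f, x_nom, u_nom, k, K):
    """Forward pass: roll out the dynamics with the updated controls."""
    x_new, u_new = [x_nom[0]], []
    for i in range(len(u_nom)):
        u = u_nom[i] + k[i] + K[i] @ (x_new[i] - x_nom[i])
        u_new.append(u)
        x_new.append(f(x_new[i], u))
    return x_new, u_new
```

Iterating these two functions, with the new trajectory becoming the next nominal one, implements the loop described above.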

Regularization and line-search

Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[6][7] Regularization in the DDP context means ensuring that the $Q_{\mathbf{uu}}$ matrix in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification $\mathbf{k}$ by some $0 < \alpha \leq 1$.
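One common scheme, sketched below under the same assumptions as the previous block and with a hypothetical interface, adds a Levenberg-Marquardt-style diagonal shift $\mu I$ to $Q_{\mathbf{uu}}$ until a Cholesky factorization succeeds, and backtracks on $\alpha$ in the forward pass.

```python
import numpy as np

def regularized_gains(Quu, Qu, Qux, mu=1e-6):
    """Shift Quu by mu*I until it is positive definite (Cholesky
    succeeds), then compute k and K as in Eq. 4."""
    n = Quu.shape[0]
    while True:
        try:
            np.linalg.cholesky(Quu + mu * np.eye(n))  # PD test
            break
        except np.linalg.LinAlgError:
            mu *= 10.0                                # strengthen the shift
    Quu_reg = Quu + mu * np.eye(n)
    return -np.linalg.solve(Quu_reg, Qu), -np.linalg.solve(Quu_reg, Qux)

# Line-search: scale only the open-loop term in the forward pass,
#   u = u_nom + alpha * k + K @ (x - x_nom),
# trying alpha = 1, 1/2, 1/4, ... until the total cost decreases.
```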

References

  1. Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". International Journal of Control. 3: 85–95. doi:10.1080/00207176608921369.
  2. Jacobson, David H.; Mayne, David Q. (1970). Differential Dynamic Programming. New York: American Elsevier Pub. Co. ISBN 0-444-00070-4.
  3. de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control. 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179.
  4. Liao, L. Z.; Shoemaker, C. A. (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". Cornell University, Ithaca, NY.
  5. Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax differential dynamic programming: Application to a biped walking robot". Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). Vol. 2. pp. 1927–1932.
  6. Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control. 36 (6): 692. doi:10.1109/9.86943.
  7. Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers (PDF) (Thesis). Hebrew University.
