EE363 Winter 2008-09

Lecture 5: Linear Quadratic Stochastic Control

- linear-quadratic stochastic control problem
- solution via dynamic programming

Linear stochastic system

linear dynamical system, over a finite time horizon:

    x_{t+1} = A x_t + B u_t + w_t,   t = 0, ..., N-1

- w_t is the process noise or disturbance at time t
- w_t are IID with E w_t = 0, E w_t w_t^T = W
- x_0 is independent of w_t, with E x_0 = 0, E x_0 x_0^T = X

Control policies

state-feedback control: u_t = φ_t(x_t), t = 0, ..., N-1

φ_t : R^n → R^m is called the control policy at time t

roughly speaking: we choose the input after knowing the current state, but before knowing the disturbance

closed-loop system is

    x_{t+1} = A x_t + B φ_t(x_t) + w_t,   t = 0, ..., N-1

x_0, ..., x_N, u_0, ..., u_{N-1} are random

Stochastic control problem

objective:

    J = E( sum_{t=0}^{N-1} (x_t^T Q x_t + u_t^T R u_t) + x_N^T Q_f x_N )

with Q, Q_f ≥ 0, R > 0

- J depends (in a complex way) on the control policies φ_0, ..., φ_{N-1}
- linear-quadratic stochastic control problem: choose the control policies φ_0, ..., φ_{N-1} to minimize J
- ("linear" refers to the state dynamics; "quadratic" to the objective)
- an infinite-dimensional problem: the variables are the functions φ_0, ..., φ_{N-1}
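The setup above can be sketched numerically: a minimal Monte Carlo estimate of the objective J for one fixed state-feedback policy. All numerical values here (A, B, W, X, Q, Q_f, R, the horizon N, and the constant linear policy u_t = K x_t) are illustrative placeholders, and K is an arbitrary hypothetical choice, not the optimal policy from dynamic programming.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem data (illustrative placeholders, not from the notes)
n, m, N = 2, 1, 10
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
W = 0.01 * np.eye(n)   # process-noise covariance, E w_t w_t^T
X0 = np.eye(n)         # initial-state covariance, E x_0 x_0^T
Q = np.eye(n)          # state cost weight, Q >= 0
Qf = np.eye(n)         # terminal cost weight, Q_f >= 0
R = np.eye(m)          # input cost weight, R > 0

# Hypothetical fixed linear policy u_t = K x_t (same K for all t)
K = np.array([[-1.0, -1.0]])

def cost_one_run():
    """Simulate one closed-loop trajectory and return its quadratic cost."""
    x = rng.multivariate_normal(np.zeros(n), X0)   # x_0 ~ (0, X)
    J = 0.0
    for t in range(N):
        u = K @ x                                  # u_t = phi_t(x_t)
        J += x @ Q @ x + u @ R @ u                 # stage cost
        w = rng.multivariate_normal(np.zeros(n), W)
        x = A @ x + B @ u + w                      # x_{t+1} = A x_t + B u_t + w_t
    return J + x @ Qf @ x                          # terminal cost x_N^T Q_f x_N

# J is an expectation over x_0 and w_0, ..., w_{N-1}; estimate it by averaging
J_hat = np.mean([cost_one_run() for _ in range(2000)])
print(f"estimated J for this policy: {J_hat:.3f}")
```

Each choice of policies gives a different value of J; the dynamic-programming solution developed in the lecture finds the policies minimizing it exactly, without sampling.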
This note was uploaded on 02/04/2012 for the course ECE 222 taught by Professor Goengi during the Spring '11 term at Maryland.