16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 4

Last time: Left off with the characteristic function.

4. Prove $\phi_S(t) = \prod_{i=1}^{n} \phi_{X_i}(t)$ where $S = X_1 + X_2 + \dots + X_n$ ($X_i$ independent).

Let $S = X_1 + X_2 + \dots + X_n$, where the $X_i$ are independent.

$$
\phi_S(t) = E\left[e^{jtS}\right] = E\left[e^{jt(X_1 + X_2 + \dots + X_n)}\right]
= E\left[e^{jtX_1}\right] E\left[e^{jtX_2}\right] \cdots E\left[e^{jtX_n}\right]
= \prod_{i=1}^{n} \phi_{X_i}(t)
$$

This is the main reason why use of the characteristic function is convenient. This would also follow from the more devious reasoning that the density function for the sum of $n$ independent random variables is the $n$th-order convolution of the individual density functions, together with the knowledge that convolution in the direct variable domain becomes multiplication in the transform domain.

5. MacLaurin series expansion of $\phi(t)$

Because $f(x)$ is nonnegative and $\int_{-\infty}^{\infty} f(x)\,dx = 1$, it follows that $\int_{-\infty}^{\infty} \left| f(x) \right| dx = 1$ converges, so $f(x)$ is Fourier transformable. Thus the characteristic function $\phi(t)$ exists for all distributions, and the inverse relation $\phi(t) \leftrightarrow f(x)$ holds for all distributions. This implies that $\phi(t)$ is analytic for all real values of $t$. It can therefore be expanded in a power series, which converges for all finite values of $t$:

$$
\phi(t) = \phi(0) + \phi^{(1)}(0)\,t + \frac{1}{2!}\,\phi^{(2)}(0)\,t^2 + \dots + \frac{1}{n!}\,\phi^{(n)}(0)\,t^n + \dots
$$

$$
\phi(t) = \int_{-\infty}^{\infty} f(x)\,e^{jtx}\,dx, \qquad \phi(0) = 1
$$

$$
\frac{d^n \phi(t)}{dt^n} = \int_{-\infty}^{\infty} f(x)\,(jx)^n e^{jtx}\,dx
$$

$$
\phi^{(n)}(0) = j^n \int_{-\infty}^{\infty} x^n f(x)\,dx = j^n \overline{X^n}
$$

$$
\phi(t) = 1 + j\,\overline{X}\,t + \dots + \frac{j^n}{n!}\,\overline{X^n}\,t^n + \dots
$$
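As a numerical sanity check (not part of the original notes), the product property in item 4 can be verified with empirical characteristic functions estimated by Monte Carlo. The choice of an Exp(1) and an N(0, 4) random variable here is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent random variables (distributions chosen only for illustration)
x1 = rng.exponential(scale=1.0, size=n)   # Exp(1)
x2 = rng.normal(loc=0.0, scale=2.0, size=n)  # N(0, 4)
s = x1 + x2

def ecf(samples, t):
    """Empirical characteristic function: Monte Carlo estimate of E[e^{jts}]."""
    return np.mean(np.exp(1j * t * samples))

# phi_S(t) should match phi_X1(t) * phi_X2(t) up to Monte Carlo error
for t in (0.3, 1.0, 2.5):
    lhs = ecf(s, t)
    rhs = ecf(x1, t) * ecf(x2, t)
    print(f"t={t}: |phi_S - phi_X1*phi_X2| = {abs(lhs - rhs):.4f}")
```

With 200,000 samples the discrepancy at each $t$ is on the order of $1/\sqrt{n} \approx 0.002$, consistent with pure sampling noise.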
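The moment relation $\phi^{(n)}(0) = j^n \overline{X^n}$ from item 5 can also be checked numerically. As a sketch (my own illustration, not from the notes), take an Exp(1) random variable, whose characteristic function $1/(1 - jt)$ and moments $\overline{X^n} = n!$ are standard, and differentiate at $t = 0$ by finite differences:

```python
def phi(t):
    """Characteristic function of an Exp(1) random variable: 1/(1 - jt)."""
    return 1.0 / (1.0 - 1j * t)

h = 1e-3

# First derivative at 0 (central difference): should equal j^1 * E[X] = j * 1
d1 = (phi(h) - phi(-h)) / (2 * h)

# Second derivative at 0: should equal j^2 * E[X^2] = -1 * 2 = -2
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

print(d1)  # approximately 1j
print(d2)  # approximately -2
```

The finite-difference estimates agree with $j \overline{X} = j$ and $j^2 \overline{X^2} = -2$ to within the $O(h^2)$ truncation error of the central-difference stencils.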