EE 278                                          Wednesday, August 12, 2009
Statistical Signal Processing                   Handout #15

Homework #5 Solutions

1. (10 points) Absolute value random walk.

a. This is a straightforward calculation using results from the lecture notes. For $k \ge 0$ we have $P\{Y_n = k\} = P\{X_n = +k \text{ or } X_n = -k\}$. If $k > 0$ then $P\{Y_n = k\} = 2\,P\{X_n = k\}$, while $P\{Y_n = 0\} = P\{X_n = 0\}$. Thus

$$
P\{Y_n = k\} =
\begin{cases}
\dbinom{n}{(n+k)/2}\left(\dfrac12\right)^{n-1} & k > 0,\ n-k \text{ even},\ k \le n \\[6pt]
\dbinom{n}{n/2}\left(\dfrac12\right)^{n} & k = 0,\ n \text{ even} \\[6pt]
0 & \text{otherwise.}
\end{cases}
$$

b. If $Y_{20} = |X_{20}| = 0$, then there are only two sample paths with $\max_{1 \le i < 20} |X_i| = 10$; they are shown in Figure 1. Since the total number of sample paths with $X_{20} = 0$ is $\binom{20}{10}$ and all of them are equally likely,

$$
P\Big\{\max_{1 \le i < 20} Y_i = 10 \;\Big|\; Y_{20} = 0\Big\}
= \frac{2}{\binom{20}{10}} = \frac{2}{184756} = \frac{1}{92378}.
$$

[Figure 1: Sample paths for problem 1.]
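For a quick numerical check, the following sketch (assuming the setup from the lecture notes: $X_0 = 0$ and i.i.d. steps $Z_i = \pm 1$ with probability $\frac12$ each) estimates the PMF of $Y_n$ by simulation and counts the part (b) paths by exhaustive enumeration.

```python
# Numerical check for problem 1 (a sketch; assumes X_0 = 0 and i.i.d. steps
# Z_i = +/-1 with probability 1/2 each, as in the lecture notes).
import numpy as np
from math import comb
from itertools import combinations

rng = np.random.default_rng(0)
num_trials = 200_000
n = 20

# Part (a): simulate the walk and compare the empirical PMF of Y_n = |X_n|
# with the closed-form expression above.
steps = rng.choice([-1, 1], size=(num_trials, n))
X = np.cumsum(steps, axis=1)            # X_1, ..., X_n for each trial
Y = np.abs(X)
for k in (0, 2, 4, 10):
    if k == 0:
        formula = comb(n, n // 2) * 0.5 ** n
    else:
        formula = comb(n, (n + k) // 2) * 0.5 ** (n - 1)
    print(f"P{{Y_{n} = {k}}}: formula {formula:.5f}, simulated {np.mean(Y[:, -1] == k):.5f}")

# Part (b): exact count over the C(20,10) equally likely paths with X_20 = 0
# (184756 paths, so this loop takes a few seconds).
count = total = 0
for up_positions in combinations(range(n), n // 2):
    path_steps = np.full(n, -1)
    path_steps[list(up_positions)] = 1
    partial_sums = np.cumsum(path_steps)
    total += 1
    if np.max(np.abs(partial_sums[:n - 1])) == 10:
        count += 1
print(f"P{{max Y_i = 10 | Y_20 = 0}} = {count}/{total}")   # expect 2/184756
```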
2. (10 points) Random walk with random start.

a. We must show that for every sequence of indices $i_1 < i_2 < \cdots < i_n$, the increments $X_{i_1},\, X_{i_2} - X_{i_1},\, \ldots,\, X_{i_n} - X_{i_{n-1}}$ are independent. This is true by the definition of the $\{X_i\}$ random process: each $X_{i_j} - X_{i_{j-1}}$ is the sum of a different set of $Z_i$'s, and the $Z_i$'s are i.i.d. and independent of $X_0$, which appears only in the first increment.

b. Starting at an even number (0 or $\pm 2$) can be ruled out, since there is no way the process could then end up at $X_{11} = 2$. For the remaining possibilities, note that each starting point has prior probability $\frac15$, that reaching 2 from $-1$ in 11 steps requires 7 up-steps and 4 down-steps, and that reaching 2 from $+1$ requires 6 up-steps and 5 down-steps. Using Bayes' rule,

$$
P(X_0 = -1 \mid X_{11} = 2)
= \frac{P(X_{11} = 2 \mid X_0 = -1)\,P(X_0 = -1)}{P(X_{11} = 2)}
= \frac{\frac15\binom{11}{7}\left(\frac12\right)^7\left(\frac12\right)^4}
       {\frac15\binom{11}{7}\left(\frac12\right)^7\left(\frac12\right)^4
        + \frac15\binom{11}{6}\left(\frac12\right)^6\left(\frac12\right)^5}
= \frac{\binom{11}{7}}{\binom{11}{7}+\binom{11}{6}}
= \frac{1}{1+\frac{7!\,4!}{6!\,5!}}
= \frac{1}{1+\frac75}
= \frac{5}{12}\,.
$$

Similarly, $P(X_0 = +1 \mid X_{11} = 2) = \frac{7}{12}$. To summarize,

$$
P(X_0 = x \mid X_{11} = 2) =
\begin{cases}
\frac{5}{12} & x = -1 \\[4pt]
\frac{7}{12} & x = +1 \\[4pt]
0 & \text{otherwise.}
\end{cases}
$$
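The posterior in part (b) is easy to check by simulation. The sketch below assumes $X_0$ is uniform on $\{-2,-1,0,1,2\}$ (consistent with the $\frac15$ priors above) and independent of the i.i.d. $\pm 1$ steps.

```python
# Simulation check for problem 2(b) (a sketch; assumes X_0 uniform on
# {-2,...,2}, independent of i.i.d. +/-1 steps with probability 1/2 each).
import numpy as np

rng = np.random.default_rng(1)
num_trials = 2_000_000

x0 = rng.integers(-2, 3, size=num_trials)          # uniform on {-2, ..., 2}
steps = rng.choice([-1, 1], size=(num_trials, 11))
x11 = x0 + steps.sum(axis=1)

conditioned_starts = x0[x11 == 2]                  # keep only trials with X_11 = 2
for x in (-2, -1, 0, 1, 2):
    print(f"P(X_0 = {x:+d} | X_11 = 2) ~ {np.mean(conditioned_starts == x):.4f}")
# Expected: about 0.4167 (5/12) for x = -1, 0.5833 (7/12) for x = +1, 0 otherwise.
```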
3. (15 points) Markov processes.

a. We are given that $f(x_{n+1} \mid x_1, x_2, \ldots, x_n) = f(x_{n+1} \mid x_n)$. From the chain rule, in general,

$$
f(x_1, x_2, \ldots, x_n) = f(x_1)\,f(x_2 \mid x_1)\,f(x_3 \mid x_1, x_2) \cdots f(x_n \mid x_1, x_2, \ldots, x_{n-1}).
$$

Thus, by the definition of Markovity,

$$
f(x_1, x_2, \ldots, x_n) = f(x_1)\,f(x_2 \mid x_1)\,f(x_3 \mid x_2) \cdots f(x_n \mid x_{n-1}). \tag{1}
$$

We will need the following fact to prove the second equality: for $k \le i$,

$$
f(x_{i+1} \mid x_k, x_{k+1}, \ldots, x_i)
= \frac{\int\!\cdots\!\int f(x_1, \ldots, x_i, x_{i+1})\,dx_1 \cdots dx_{k-1}}{f(x_k, \ldots, x_i)}
= \frac{\int\!\cdots\!\int f(x_1, \ldots, x_i)\,f(x_{i+1} \mid x_i)\,dx_1 \cdots dx_{k-1}}{f(x_k, \ldots, x_i)}
= f(x_{i+1} \mid x_i), \tag{2}
$$

where the second equality uses the Markov property, and the last holds because $f(x_{i+1} \mid x_i)$ does not depend on $x_1, \ldots, x_{k-1}$, so the remaining integral equals $f(x_k, \ldots, x_i)$ and cancels the denominator. Now, applying the chain rule in reverse, we get

$$
f(x_1, x_2, \ldots, x_n) = f(x_n)\,f(x_{n-1} \mid x_n)\,f(x_{n-2} \mid x_{n-1}, x_n) \cdots f(x_1 \mid x_2, x_3, \ldots, x_n).
$$

Next,

$$
f(x_i \mid x_{i+1}, \ldots, x_n)
= \frac{f(x_i, x_{i+1}, \ldots, x_n)}{f(x_{i+1}, \ldots, x_n)}
= \frac{f(x_i)\,f(x_{i+1} \mid x_i)}{f(x_{i+1})}
= f(x_i \mid x_{i+1}), \tag{3}
$$

where the second equality follows from (1) and (2). Therefore

$$
f(x_1, x_2, \ldots, x_n)
= f(x_n)\,f(x_{n-1} \mid x_n)\,f(x_{n-2} \mid x_{n-1}, x_n) \cdots f(x_1 \mid x_2, x_3, \ldots, x_n)
= f(x_n)\,f(x_{n-1} \mid x_n)\,f(x_{n-2} \mid x_{n-1}) \cdots f(x_1 \mid x_2),
$$

where the second line follows from (3).

b. First consider

$$
f(x_{k+1}, \ldots, x_n \mid x_1, \ldots, x_k)
= f(x_{k+1} \mid x_1, \ldots, x_k)\,f(x_{k+2} \mid x_1, \ldots, x_{k+1}) \cdots f(x_n \mid x_1, \ldots, x_{n-1})
= f(x_{k+1} \mid x_k)\,f(x_{k+2} \mid x_k, x_{k+1}) \cdots f(x_n \mid x_k, \ldots, x_{n-1})
= f(x_{k+1}, \ldots, x_n \mid x_k),
$$

where the second equality follows from (2). Integrating both sides over $x_{k+1}, \ldots, x_{n-1}$ (i.e., using the law of total probability), we get $f(x_n \mid x_1, \ldots, x_k) = f(x_n \mid x_k)$.
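The identities in part (a) have a direct discrete analogue, with PMFs in place of the densities $f$. The sketch below checks the reversed factorization numerically on a small three-state Markov chain; the initial PMF and transition matrix are arbitrary choices made only for illustration.

```python
# Numerical check of part (a) on a 3-state Markov chain (a sketch; p1 and P
# are arbitrary illustrative choices, and PMFs play the role of the densities f).
import itertools
import numpy as np

p1 = np.array([0.5, 0.3, 0.2])                # PMF of X_1
P = np.array([[0.7, 0.2, 0.1],                # P[i, j] = f(x_{k+1} = j | x_k = i)
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
n = 4

# Marginals f(x_k) for k = 1, ..., n.
marg = [p1]
for _ in range(n - 1):
    marg.append(marg[-1] @ P)

def reverse_cond(k, xk, xk1):
    """f(x_k | x_{k+1}) = f(x_k) f(x_{k+1} | x_k) / f(x_{k+1}) (Bayes' rule)."""
    return marg[k - 1][xk] * P[xk, xk1] / marg[k][xk1]

max_err = 0.0
for x in itertools.product(range(3), repeat=n):       # x = (x_1, ..., x_n)
    forward = p1[x[0]] * np.prod([P[x[i], x[i + 1]] for i in range(n - 1)])
    backward = marg[n - 1][x[-1]] * np.prod(
        [reverse_cond(k + 1, x[k], x[k + 1]) for k in range(n - 1)])
    max_err = max(max_err, abs(forward - backward))
print("max |forward - backward| over all sample paths:", max_err)   # ~1e-16
```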
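Part (b) can be checked the same way. The sketch below builds the joint PMF of a short time-homogeneous chain from the forward factorization (1) and verifies that $f(x_n \mid x_1, \ldots, x_k)$ matches the $(n-k)$-step transition probability $f(x_n \mid x_k)$; the chain parameters are again arbitrary illustrative choices, and time homogeneity is assumed only for convenience.

```python
# Numerical check of part (b): f(x_n | x_1, ..., x_k) = f(x_n | x_k) on a small
# 3-state, time-homogeneous chain (a sketch; p1 and P are arbitrary choices).
import itertools
import numpy as np

p1 = np.array([0.5, 0.3, 0.2])
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
n, k = 5, 2

# Joint PMF f(x_1, ..., x_n) from the forward factorization (1).
joint = np.zeros((3,) * n)
for x in itertools.product(range(3), repeat=n):
    joint[x] = p1[x[0]] * np.prod([P[x[i], x[i + 1]] for i in range(n - 1)])

# f(x_1, ..., x_k, x_n): sum out the intermediate variables x_{k+1}, ..., x_{n-1}.
reduced = joint.sum(axis=tuple(range(k, n - 1)))
past = reduced.sum(axis=-1)                        # f(x_1, ..., x_k)
n_step = np.linalg.matrix_power(P, n - k)          # f(x_n | x_k) for a homogeneous chain

max_err = 0.0
for past_vals in itertools.product(range(3), repeat=k):
    for xn in range(3):
        lhs = reduced[past_vals + (xn,)] / past[past_vals]   # f(x_n | x_1, ..., x_k)
        rhs = n_step[past_vals[-1], xn]                      # f(x_n | x_k)
        max_err = max(max_err, abs(lhs - rhs))
print("max |f(x_n | x_1..x_k) - f(x_n | x_k)|:", max_err)    # ~1e-16
```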