EE 278                                    Wednesday, August 12, 2009
Statistical Signal Processing             Handout #15

Homework #5 Solutions

1. (10 points) Absolute value random walk.

a. This is a straightforward calculation using results from the lecture notes. If $k \ge 0$ then $P\{Y_n = k\} = P\{X_n = +k \text{ or } X_n = -k\}$. If $k > 0$ then $P\{Y_n = k\} = 2\,P\{X_n = k\}$, while $P\{Y_n = 0\} = P\{X_n = 0\}$. Thus
$$
P\{Y_n = k\} =
\begin{cases}
\binom{n}{(n+k)/2}\left(\tfrac{1}{2}\right)^{n-1} & k > 0,\ n-k \text{ even},\ n-k \ge 0 \\[4pt]
\binom{n}{n/2}\left(\tfrac{1}{2}\right)^{n} & k = 0,\ n \text{ even},\ n \ge 0 \\[4pt]
0 & \text{otherwise.}
\end{cases}
$$

b. If $Y_{20} = |X_{20}| = 0$ then there are only two sample paths with $\max_{1 \le i < 20} |X_i| = 10$: the path that rises to $+10$ at $n = 10$ and returns to $0$, and its mirror image through $-10$. These two paths are shown in Figure 1. Since the total number of sample paths is $\binom{20}{10}$ and all paths are equally likely,
$$
P\Big\{ \max_{1 \le i < 20} Y_i = 10 \;\Big|\; Y_{20} = 0 \Big\}
= \frac{2}{\binom{20}{10}} = \frac{2}{184756} = \frac{1}{92378}.
$$

[Figure 1: Sample paths for problem 1 — the two paths reaching $\pm 10$ at $n = 10$ and returning to $0$ at $n = 20$.]

2. (10 points) Random walk with random start.

a. We must show that for every sequence of indices $i_1 < i_2 < \cdots < i_n$, the increments $X_{i_1},\, X_{i_2} - X_{i_1},\, \ldots,\, X_{i_n} - X_{i_{n-1}}$ are independent. This is true by the definition of the $\{X_i\}$ random process: each $X_{i_j} - X_{i_{j-1}}$ is the sum of a different set of $Z_i$'s, and the $Z_i$'s are i.i.d. and independent of $X_0$, which appears only in the first increment.

b. Starting at an even number ($0$ or $\pm 2$) can be ruled out, since the parity of $X_n$ changes at every step, so there is no way the process could then end up at $X_{11} = 2$ after 11 steps. Using Bayes' rule for the remaining possibilities, we get
$$
P(X_0 = -1 \mid X_{11} = 2)
= \frac{P(X_{11} = 2 \mid X_0 = -1)\,P(X_0 = -1)}{P(X_{11} = 2)}
= \frac{\frac{1}{5}\binom{11}{7}\left(\tfrac{1}{2}\right)^{7}\left(\tfrac{1}{2}\right)^{4}}
       {\frac{1}{5}\binom{11}{7}\left(\tfrac{1}{2}\right)^{7}\left(\tfrac{1}{2}\right)^{4}
      + \frac{1}{5}\binom{11}{6}\left(\tfrac{1}{2}\right)^{6}\left(\tfrac{1}{2}\right)^{5}}
= \frac{\binom{11}{7}}{\binom{11}{7} + \binom{11}{6}}
= \frac{1}{1 + \binom{11}{6}\big/\binom{11}{7}}
= \frac{1}{1 + \tfrac{7}{5}}
= \frac{5}{12}.
$$
Similarly, $P(X_0 = 1 \mid X_{11} = 2) = \frac{7}{12}$. To summarize,
$$
P(X_0 = x \mid X_{11} = 2) =
\begin{cases}
\frac{5}{12} & x = -1 \\[2pt]
\frac{7}{12} & x = +1 \\[2pt]
0 & \text{otherwise.}
\end{cases}
$$

3. (15 points) Markov processes.

a.
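As a quick sanity check (not part of the original solution), the pmf of $Y_n = |X_n|$ in problem 1a can be verified by brute-force enumeration of all $2^n$ step sequences for a small $n$; the variable names below are my own.

```python
from itertools import product
from math import comb

# Brute-force check of the pmf of Y_n = |X_n| for a simple symmetric
# random walk X_n = Z_1 + ... + Z_n with Z_i = +/-1 equally likely.
n = 8
counts = {}
for steps in product([-1, 1], repeat=n):
    y = abs(sum(steps))
    counts[y] = counts.get(y, 0) + 1

for k in range(n + 1):
    if (n - k) % 2 != 0:
        continue  # pmf is 0 unless n - k is even
    empirical = counts.get(k, 0) / 2**n
    if k == 0:
        formula = comb(n, n // 2) * (1 / 2) ** n
    else:
        formula = comb(n, (n + k) // 2) * (1 / 2) ** (n - 1)
    assert abs(empirical - formula) < 1e-12
```

The assertion passing for every valid $k$ confirms the case split: for $k > 0$ the paths ending at $+k$ and $-k$ each number $\binom{n}{(n+k)/2}$, which is where the factor of 2 (and hence the exponent $n-1$) comes from.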
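The posterior in problem 2b can likewise be checked exactly with a few lines of code (a sketch, not part of the handout; the helper `paths_to` is my own naming), using the fact that reaching 2 from a start $x_0$ in 11 steps requires $(11 + 2 - x_0)/2$ up-steps:

```python
from math import comb

# Exact posterior P(X_0 = x | X_11 = 2) with X_0 uniform on {-2,...,2}
# and 11 i.i.d. +/-1 steps.
def paths_to(x0, target=2, n=11):
    """Number of n-step walks from x0 to target."""
    up2 = n + target - x0          # twice the required number of up-steps
    if up2 % 2 != 0:
        return 0                   # parity mismatch: unreachable
    up = up2 // 2
    if not 0 <= up <= n:
        return 0
    return comb(n, up)

starts = [-2, -1, 0, 1, 2]
likelihood = {x: paths_to(x) * (1 / 2) ** 11 for x in starts}
total = sum(likelihood[x] / 5 for x in starts)   # prior 1/5 on each start
posterior = {x: (likelihood[x] / 5) / total for x in starts}
```

Running this gives `posterior[-1]` equal to $5/12$ and `posterior[1]` equal to $7/12$, with the even starts contributing zero, matching the Bayes-rule calculation above.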
We are given that $f(x_{n+1} \mid x_1, x_2, \ldots, x_n) = f(x_{n+1} \mid x_n)$. From the chain rule, in general,
$$
f(x_1, x_2, \ldots, x_n) = f(x_1)\, f(x_2 \mid x_1)\, f(x_3 \mid x_1, x_2) \cdots f(x_n \mid x_1, x_2, \ldots, x_{n-1}).
$$
Thus, by the definition of Markovity,
$$
f(x_1, x_2, \ldots, x_n) = f(x_1)\, f(x_2 \mid x_1)\, f(x_3 \mid x_2) \cdots f(x_n \mid x_{n-1}). \tag{1}
$$
We will need the following to prove the second equality:
$$
f(x_{i+1} \mid x_k, x_{k+1}, \ldots, x_i)
= \frac{\int \cdots \int f(x_1, \ldots, x_i, x_{i+1})\, dx_1 \cdots dx_{k-1}}{f(x_k, \ldots, x_i)}
= \frac{\int \cdots \int f(x_1, \ldots, x_i)\, f(x_{i+1} \mid x_i)\, dx_1 \cdots dx_{k-1}}{f(x_k, \ldots, x_i)}
= f(x_{i+1} \mid x_i), \tag{2}
$$
where the last step holds because $f(x_{i+1} \mid x_i)$ does not depend on $x_1, \ldots, x_{k-1}$ and can be pulled out of the integral, leaving $f(x_k, \ldots, x_i)$ in the numerator. Now, applying the chain rule in reverse we get
$$
f(x_1, x_2, \ldots, x_n) = f(x_n)\, f(x_{n-1} \mid x_n)\, f(x_{n-2} \mid x_{n-1}, x_n) \cdots f(x_1 \mid x_2, x_3, \ldots, x_n) \ldots
$$
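Equality (2) — that conditioning on the whole block $(x_k, \ldots, x_i)$ gives the same law as conditioning on $x_i$ alone — can be checked numerically on a small discrete Markov chain (a sketch with an arbitrary 2-state transition matrix of my choosing, not from the handout):

```python
import itertools

# Two-state Markov chain: joint pmf of (X_1, ..., X_4) built from an
# initial distribution and a transition matrix, then a direct check
# that P(X_4 | X_2, X_3) = P(X_4 | X_3).
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi0 = [0.5, 0.5]
n = 4

joint = {}
for path in itertools.product([0, 1], repeat=n):
    p = pi0[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    joint[path] = p

# P(X_4 = 1 | X_2 = 0, X_3 = 1) versus the one-step transition P[1][1]
num = sum(p for path, p in joint.items()
          if path[1] == 0 and path[2] == 1 and path[3] == 1)
den = sum(p for path, p in joint.items()
          if path[1] == 0 and path[2] == 1)
cond_full = num / den
assert abs(cond_full - P[1][1]) < 1e-12
```

The conditional computed from the full joint pmf agrees with the one-step transition probability, which is exactly what (2) asserts in the discrete case (sums in place of the integrals).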
Course: Signal Processing, Spring '11, Eggers
