# Probability and Random Variables: Solutions


Problem 11.2.1 Solution

(a) Note that

$$Y_i = \sum_{n=-\infty}^{\infty} h_n X_{i-n} = \frac{1}{3}X_{i+1} + \frac{1}{3}X_i + \frac{1}{3}X_{i-1} \tag{1}$$

By matching coefficients, we see that

$$h_n = \begin{cases} 1/3 & n = -1, 0, 1 \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

(b) By Theorem 11.5, the output autocorrelation is

$$R_Y[n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j R_X[n+i-j] \tag{3}$$

$$= \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} R_X[n+i-j] \tag{4}$$

$$= \frac{1}{9}\bigl(R_X[n+2] + 2R_X[n+1] + 3R_X[n] + 2R_X[n-1] + R_X[n-2]\bigr) \tag{5}$$

Substituting in $R_X[n]$ yields

$$R_Y[n] = \begin{cases} 1/3 & n = 0 \\ 2/9 & |n| = 1 \\ 1/9 & |n| = 2 \\ 0 & \text{otherwise} \end{cases} \tag{6}$$

Problem 11.2.2 Solution

Applying Theorem 11.4 with sampling period $T_s = 1/4000$ s yields

$$R_X[k] = R_X(kT_s) = 10\,\frac{\sin(2000\pi k T_s) + \sin(1000\pi k T_s)}{2000\pi k T_s} \tag{1}$$

$$= 20\,\frac{\sin(0.5\pi k) + \sin(0.25\pi k)}{\pi k} \tag{2}$$

$$= 10\operatorname{sinc}(0.5k) + 5\operatorname{sinc}(0.25k) \tag{3}$$

Problem 11.2.3 Solution

(a) By Theorem 11.5, the expected value of the output is

$$\mu_W = \mu_Y \sum_{n=-\infty}^{\infty} h_n = 2\mu_Y = 2 \tag{1}$$
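As a sanity check, the double sum of Theorem 11.5 in Problem 11.2.1 can be evaluated numerically. This is a sketch under one assumption: the input autocorrelation $R_X[n] = \delta[n]$ is inferred from the stated answer, since the problem statement itself is not part of this excerpt.

```python
# Numerical check of Problem 11.2.1(b), assuming the input
# autocorrelation is the unit impulse R_X[n] = delta[n]
# (an assumption inferred from the stated answer).
from fractions import Fraction

def r_x(n):
    # Assumed input autocorrelation: 1 at n = 0, else 0.
    return Fraction(1) if n == 0 else Fraction(0)

# Filter coefficients from Eq. (2): h_n = 1/3 for n = -1, 0, 1.
h = {-1: Fraction(1, 3), 0: Fraction(1, 3), 1: Fraction(1, 3)}

def r_y(n):
    # Theorem 11.5: R_Y[n] = sum_i sum_j h_i h_j R_X[n + i - j]
    return sum(h[i] * h[j] * r_x(n + i - j) for i in h for j in h)

print([str(r_y(n)) for n in range(-3, 4)])
# -> ['0', '1/9', '2/9', '1/3', '2/9', '1/9', '0']
```

The output matches Eq. (6): $1/3$ at $n=0$, $2/9$ at $|n|=1$, $1/9$ at $|n|=2$, and zero elsewhere.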

(b) Theorem 11.5 also says that the output autocorrelation is

$$R_W[n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j R_Y[n+i-j] \tag{2}$$

$$= \sum_{i=0}^{1} \sum_{j=0}^{1} R_Y[n+i-j] \tag{3}$$

$$= R_Y[n-1] + 2R_Y[n] + R_Y[n+1] \tag{4}$$

For $n = -3$,

$$R_W[-3] = R_Y[-4] + 2R_Y[-3] + R_Y[-2] = R_Y[-2] = 0.5 \tag{5}$$

Following the same procedure, it's easy to show that $R_W[n]$ is nonzero only for $|n| \le 3$. Specifically,

$$R_W[n] = \begin{cases} 0.5 & |n| = 3 \\ 3 & |n| = 2 \\ 7.5 & |n| = 1 \\ 10 & n = 0 \\ 0 & \text{otherwise} \end{cases} \tag{6}$$

(c) The second moment of the output is $E[W_n^2] = R_W[0] = 10$. The variance of $W_n$ is

$$\operatorname{Var}[W_n] = E[W_n^2] - (E[W_n])^2 = 10 - 2^2 = 6 \tag{7}$$

Problem 11.2.4 Solution

(a) By Theorem 11.5, the mean output is

$$\mu_V = \mu_Y \sum_{n=-\infty}^{\infty} h_n = (-1 + 1)\mu_Y = 0 \tag{1}$$

(b) Theorem 11.5 also says that the output autocorrelation is

$$R_V[n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j R_Y[n+i-j] \tag{2}$$

$$= \sum_{i=0}^{1} \sum_{j=0}^{1} h_i h_j R_Y[n+i-j] \tag{3}$$

$$= -R_Y[n-1] + 2R_Y[n] - R_Y[n+1] \tag{4}$$

For $n = -3$,

$$R_V[-3] = -R_Y[-4] + 2R_Y[-3] - R_Y[-2] = -R_Y[-2] = -0.5 \tag{5}$$
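Both filtered autocorrelations above can be checked with a short script. Two assumptions are flagged in the comments: the input autocorrelation $R_Y[n]$ (its problem statement is not in this excerpt; the values below were chosen to reproduce every number quoted in the solutions), and the exact filter taps, which are implied by the sums in Eq. (3) of each problem.

```python
# Check of Problems 11.2.3(b,c) and 11.2.4(b,c).
# ASSUMPTION: R_Y[0]=3, R_Y[+-1]=2, R_Y[+-2]=0.5, zero elsewhere;
# the problem statement is not in this excerpt, but these values
# reproduce every quoted result.
R_Y = {0: 3.0, 1: 2.0, -1: 2.0, 2: 0.5, -2: 0.5}

def r_y(n):
    return R_Y.get(n, 0.0)

def autocorr_out(h, r_in, n):
    # Theorem 11.5: R_out[n] = sum_i sum_j h_i h_j r_in[n + i - j]
    return sum(h[i] * h[j] * r_in(n + i - j) for i in h for j in h)

h_w = {0: 1.0, 1: 1.0}   # taps implied by Eq. (3) of Problem 11.2.3
h_v = {0: 1.0, 1: -1.0}  # first-difference taps implied by Problem 11.2.4

R_W = {n: autocorr_out(h_w, r_y, n) for n in range(-4, 5)}
R_V = {n: autocorr_out(h_v, r_y, n) for n in range(-4, 5)}
print(R_W[0], R_W[1], R_W[2], R_W[3])  # -> 10.0 7.5 3.0 0.5
print(R_V[0], R_V[1], R_V[2], R_V[3])  # -> 2.0 0.5 -1.0 -0.5
print(R_W[0] - 2**2)                   # Var[W_n] -> 6.0
```

The printed values match Eq. (6) of each problem and the variance in Eq. (7).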
Following the same procedure, it's easy to show that $R_V[n]$ is nonzero only for $|n| \le 3$. Specifically,

$$R_V[n] = \begin{cases} -0.5 & |n| = 3 \\ -1 & |n| = 2 \\ 0.5 & |n| = 1 \\ 2 & n = 0 \\ 0 & \text{otherwise} \end{cases} \tag{6}$$

(c) Since $E[V_n] = 0$, the second moment of the output equals its variance:

$$\operatorname{Var}[V_n] = E[V_n^2] = R_V[0] = 2 \tag{7}$$

Problem 11.2.5 Solution

We start with Theorem 11.5:

$$R_Y[n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j R_X[n+i-j] \tag{1}$$

$$= R_X[n-1] + 2R_X[n] + R_X[n+1] \tag{2}$$

First we observe that for $n \le -2$ or $n \ge 2$,

$$R_Y[n] = R_X[n-1] + 2R_X[n] + R_X[n+1] = 0 \tag{3}$$

This suggests that $R_X[n] = 0$ for $|n| > 1$. In addition, we have the following facts:

$$R_Y[0] = R_X[-1] + 2R_X[0] + R_X[1] = 2 \tag{4}$$

$$R_Y[-1] = R_X[-2] + 2R_X[-1] + R_X[0] = 1 \tag{5}$$

$$R_Y[1] = R_X[0] + 2R_X[1] + R_X[2] = 1 \tag{6}$$

A simple solution to this set of equations is $R_X[0] = 1$ and $R_X[n] = 0$ for $n \neq 0$.

Problem 11.2.6 Solution

The mean of $Y_n = (X_n + Y_{n-1})/2$ can be found by realizing that $Y_n$ is an infinite sum of the $X_i$'s:

$$Y_n = \frac{1}{2}X_n + \frac{1}{4}X_{n-1} + \frac{1}{8}X_{n-2} + \cdots \tag{1}$$

Since the $X_i$'s each have zero mean, the mean of $Y_n$ is also 0. The variance of $Y_n$ can be expressed as

$$\operatorname{Var}[Y_n] = \left(\frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots\right)\operatorname{Var}[X] = \sum_{i=1}^{\infty} \left(\frac{1}{4}\right)^i \sigma^2 \tag{2}$$

The above infinite sum converges to $\frac{1}{1 - 1/4} - 1 = 1/3$, implying

$$\operatorname{Var}[Y_n] = (1/3)\operatorname{Var}[X] = \sigma^2/3 \tag{3}$$
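The geometric-series step in Problem 11.2.6 can be checked numerically. The sketch below truncates the expansion of $Y_n$ in Eq. (1) after 60 terms and sums the squared coefficients:

```python
# Check of Problem 11.2.6: Y_n = (X_n + Y_{n-1})/2 expands into
# Y_n = sum_k (1/2)^(k+1) X_{n-k}, so for i.i.d. zero-mean X_i with
# variance sigma^2, Var[Y_n] = sigma^2 * sum_k (1/4)^(k+1) = sigma^2/3.
coeffs = [0.5 ** (k + 1) for k in range(60)]  # 1/2, 1/4, 1/8, ...
var_factor = sum(c * c for c in coeffs)       # partial sum of (1/4)^(k+1)
print(round(var_factor, 12))                  # -> 0.333333333333
```

The truncation error after 60 terms is on the order of $4^{-60}$, far below the printed precision, so the factor agrees with the exact value $1/3$ from Eq. (3).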

The covariance of $Y_{i+1}$ and $Y_i$ can be found by the same method.
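The excerpt ends before that computation, but the same expansion sketches it: writing $Y_{n+1} = \sum_k (1/2)^{k+1} X_{n+1-k}$ and $Y_n = \sum_j (1/2)^{j+1} X_{n-j}$, only terms sharing the same $X_i$ survive the expectation. The resulting value $\sigma^2/6$ is derived here, not quoted from the text:

```python
# Sketch of the covariance step the text alludes to. Pairing the
# coefficients of the shared X_{n-j} terms gives
# Cov[Y_{n+1}, Y_n] = sigma^2 * sum_j (1/2)^(j+2) * (1/2)^(j+1)
#                   = sigma^2 / 6
# (equivalently, Cov = (1/2) Var[Y_n], since X_{n+1} is independent
# of Y_n and Y_{n+1} = (X_{n+1} + Y_n)/2).
K = 60
a = [0.5 ** (k + 1) for k in range(K)]  # coefficients of Y on past X's
cov_factor = sum(a[j + 1] * a[j] for j in range(K - 1))
print(round(cov_factor, 12))            # -> 0.166666666667
```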