hw12_soln

Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers
Roy D. Yates and David J. Goodman

Problem Solutions: Yates and Goodman, 8.1.1, 8.1.3, 8.2.2, 8.3.2, 8.3.3, and 9.1.2

Problem 8.1.1
Recall that $X_1, X_2, \dots, X_n$ are independent exponential random variables with mean value $\mu_X = 5$, so that for $x \ge 0$, $F_X(x) = 1 - e^{-x/5}$.

(a) Using Theorem 8.1, $\operatorname{Var}[M_9(X)] = \sigma_X^2 / n$. Since an exponential random variable with mean 5 has $\sigma_X^2 = \mu_X^2 = 25$, we obtain
$$\operatorname{Var}[M_9(X)] = \frac{25}{9}.$$

(b) $P[X_1 > 7] = 1 - F_X(7) = e^{-7/5} \approx 0.2466.$

(c) First we express $P[M_9(X) > 7]$ in terms of $X_1, \dots, X_9$:
$$P[M_9(X) > 7] = P[X_1 + \dots + X_9 > 63].$$
Now this probability can be approximated using the Central Limit Theorem (CLT). Since $\sigma_X = 5$,
$$P[M_9(X) > 7] \approx 1 - \Phi\!\left(\frac{63 - 9\mu_X}{\sqrt{9}\,\sigma_X}\right) = 1 - \Phi\!\left(\frac{6}{5}\right).$$
Consulting Table 4.1 yields $P[M_9(X) > 7] \approx 1 - \Phi(1.2) = 0.1151$.

Problem 8.1.3
$X_1, X_2, \dots, X_n$ are independent uniform random variables with mean value $\mu_X = 7$ and variance $\sigma_X^2 = 3$.

(a) Since $X_1$ is a uniform random variable, it must have a uniform PDF over an interval $[a, b]$. From Appendix A, $\mu_X = (a+b)/2$ and $\operatorname{Var}[X] = (b-a)^2/12$. Hence, given the mean and variance, we obtain the following equations for $a$ and $b$:
$$\frac{a+b}{2} = 7, \qquad \frac{(b-a)^2}{12} = 3.$$
Solving these equations yields $a = 4$ and $b = 10$, from which we can state the distribution of $X$:
$$f_X(x) = \begin{cases} 1/6 & 4 \le x \le 10, \\ 0 & \text{otherwise.} \end{cases}$$

(b) From Theorem 8.1, we know that
$$\operatorname{Var}[M_{16}(X)] = \frac{\operatorname{Var}[X]}{16} = \frac{3}{16}.$$

(c)
$$P[X_1 > 9] = \int_9^{\infty} f_{X_1}(x)\,dx = \int_9^{10} \frac{1}{6}\,dx = \frac{1}{6}.$$

(d) The variance of $M_{16}(X)$ is much less than $\operatorname{Var}[X_1]$. Hence, the PDF of $M_{16}(X)$ should be much more concentrated about $E[X]$ than the PDF of $X_1$. Thus we should expect $P[M_{16}(X) > 9]$ to be much less than $P[X_1 > 9]$. Writing
$$P[M_{16}(X) > 9] = 1 - P[M_{16}(X) \le 9] = 1 - P[X_1 + \dots + X_{16} \le 144],$$
a Central Limit Theorem approximation gives
$$P[M_{16}(X) > 9] \approx 1 - \Phi\!\left(\frac{144 - 16\mu_X}{\sqrt{16}\,\sigma_X}\right) = 1 - \Phi\!\left(\frac{32}{4\sqrt{3}}\right) = 1 - \Phi(4.62) \approx 2 \times 10^{-6}.$$
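The numerical answers for Problems 8.1.1 and 8.1.3 can be sanity-checked with a short script (an editorial sketch, not part of the published solutions; the variable names are arbitrary, and $\Phi$ is evaluated through the standard library's error function):

```python
from math import erf, exp, sqrt

def Phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Problem 8.1.1: X_i exponential with mean 5 (so sigma_X = 5).
p_b = exp(-7.0 / 5.0)                       # P[X1 > 7] = e^{-7/5}
p_c = 1.0 - Phi((63 - 9 * 5) / (3 * 5.0))   # CLT estimate of P[M9(X) > 7]
print(round(p_b, 4), round(p_c, 4))         # 0.2466 0.1151

# Problem 8.1.3: X_i uniform on [4, 10], so mean 7 and variance 3.
var_m16 = 3.0 / 16.0                        # Var[M16(X)] = Var[X]/16
p_x1 = 1.0 / 6.0                            # P[X1 > 9]
p_m16 = 1.0 - Phi((144 - 16 * 7) / (4 * sqrt(3.0)))  # CLT estimate
print(p_m16 < p_x1)                         # the sample mean is far more
                                            # concentrated than X1
```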
As we predicted, $P[M_{16}(X) > 9] \ll P[X_1 > 9]$.

Problem 8.2.2
We know from the Chebyshev inequality that
$$P\big[|X - E[X]| \ge c\big] \le \frac{\sigma_X^2}{c^2}.$$
Choosing $c = k\sigma_X$, we obtain
$$P\big[|X - E[X]| \ge k\sigma_X\big] \le \frac{1}{k^2}.$$

Problem 8.3.2
$X_1, X_2, \dots$ are iid random variables, each with mean 75 and standard deviation 15.

(a) We would like to find the value of $n$ such that
$$P[74 \le M_n(X) \le 76] \ge 0.99.$$
When we know only the mean and variance of the $X_i$, our only real tool is the Chebyshev inequality, which says that
$$P[74 \le M_n(X) \le 76] = P\big[|M_n(X) - E[X]| \le 1\big] \ge 1 - \frac{\operatorname{Var}[X]}{n \cdot 1^2} = 1 - \frac{225}{n}.$$
Requiring $1 - 225/n \ge 0.99$ yields $n \ge 22{,}500$.

(b) If each $X_i$ is Gaussian, the sample mean $M_n(X)$ will also be Gaussian, with mean and variance
$$E[M_n(X)] = 75, \qquad \operatorname{Var}[M_n(X)] = \frac{\operatorname{Var}[X]}{n} = \frac{225}{n}.$$
In this case,
$$P[74 \le M_n(X) \le 76] = \Phi\!\left(\frac{76 - 75}{15/\sqrt{n}}\right) - \Phi\!\left(\frac{74 - 75}{15/\sqrt{n}}\right) = 2\Phi\!\left(\frac{\sqrt{n}}{15}\right) - 1 \ge 0.99.$$
Thus we need $\Phi(\sqrt{n}/15) \ge 0.995$, i.e. $\sqrt{n}/15 \ge 2.6$, which yields $n \ge (2.6 \cdot 15)^2 = 1521$.

Even under the Gaussian assumption the required number of samples is large, so if the $X_i$ are not Gaussian, the sample mean may still be approximated by a Gaussian at this sample size. Hence, about 1500 samples is probably about right. However, in the absence of any information about the PDF of the $X_i$ beyond the mean and variance, we cannot make any guarantee stronger than that given by the Chebyshev inequality.

Problem 8.3.3
Both questions can be answered using the following inequality from Example 8.8:
$$P\big[|R_n - P[A]| \ge c\big] \le \frac{P[A]\,(1 - P[A])}{n c^2}.$$
The unusual part of this problem is that we are given the true value of $P[A]$. Since $P[A] = 0.01$, we can write
$$P\big[|R_n - P[A]| \ge c\big] \le \frac{0.0099}{n c^2}.$$

(a) In this part, we meet the requirement by choosing $c = 0.001$, yielding
$$P\big[|R_n - P[A]| \ge 0.001\big] \le \frac{9900}{n}.$$
Thus to have confidence level 0.01, we require $9900/n \le 0.01$, or $n \ge 990{,}000$.
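The sample-size arithmetic in Problems 8.3.2 and 8.3.3(a) can be reproduced mechanically (an editorial check, not part of the original solutions; the variable names are arbitrary):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Problem 8.3.2(a): Chebyshev bound 1 - 225/n >= 0.99  =>  n >= 225/0.01.
n_cheb = round(225 / 0.01)
print(n_cheb)     # 22500

# Problem 8.3.2(b): Gaussian case, 2*Phi(sqrt(n)/15) - 1 >= 0.99.
# Using the table value sqrt(n)/15 >= 2.6 gives n >= (2.6 * 15)^2.
n_gauss = round((2.6 * 15) ** 2)
print(n_gauss)    # 1521
# Confirm the Gaussian probability requirement actually holds at n = 1521:
assert 2 * Phi(sqrt(n_gauss) / 15) - 1 >= 0.99

# Problem 8.3.3(a): 9900/n <= 0.01  =>  n >= 990,000.
n_freq = round(9900 / 0.01)
print(n_freq)     # 990000
```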
(b) In this case, we meet the requirement by choosing the relative accuracy $c = 10^{-3} \cdot P[A] = 10^{-5}$. This implies
$$P\big[|R_n - P[A]| \ge c\big] \le \frac{0.0099}{n \cdot 10^{-10}} = \frac{9.9 \times 10^7}{n}.$$
The confidence level 0.01 is met if $9.9 \times 10^7 / n \le 0.01$, or $n \ge 9.9 \times 10^9$.

Problem 9.1.2
(a) We wish to develop a hypothesis test of the form
$$P\big[|K - E[K]| > c\big] = 0.05$$
to determine whether the coin we have been flipping is indeed a fair one. We would like to find the value of $c$, which determines the upper and lower limits on how far the observed number of heads may deviate from the expected number out of 100 flips while we still accept our hypothesis. Under the fair-coin hypothesis, the expected number of heads and the standard deviation of the process are
$$E[K] = 50, \qquad \sigma_K = \sqrt{100 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2}} = 5.$$
Now, in order to find $c$, we divide the above inequality through by $\sigma_K$ to arrive at
$$P\!\left[\left|\frac{K - E[K]}{\sigma_K}\right| > \frac{c}{\sigma_K}\right] = 0.05.$$
Taking the complement, we get
$$P\!\left[-\frac{c}{\sigma_K} \le \frac{K - E[K]}{\sigma_K} \le \frac{c}{\sigma_K}\right] = 0.95.$$
Using the Central Limit Theorem, we can write
$$\Phi\!\left(\frac{c}{\sigma_K}\right) - \Phi\!\left(-\frac{c}{\sigma_K}\right) = 2\Phi\!\left(\frac{c}{\sigma_K}\right) - 1 = 0.95.$$
This implies $\Phi(c/\sigma_K) = 0.975$, or $c/\sigma_K = 1.96$. That is, $c = 5 \cdot 1.96 = 9.8$ flips. So we see that if we observe more than $50 + 10 = 60$ or fewer than $50 - 10 = 40$ heads, then with significance level $\alpha = 0.05$ we should reject the hypothesis that the coin is fair.

(b) Now we wish to develop a test of the form
$$P[K > c] = 0.01.$$
Thus we need to find the value of $c$ that makes the above probability true. This value will tell us that if we observe more than $c$ heads, then with significance level $\alpha = 0.01$ we should reject the hypothesis that the coin is fair. To find this value of $c$, we look to evaluate the CDF
$$F_K(k) = \sum_{i=0}^{k} \binom{100}{i} \left(\frac{1}{2}\right)^{100}, \qquad k = 0, 1, \dots, 100.$$
Computation reveals that $c = 62$ flips. So if we observe 62 or more heads, then with a significance level of 0.01 we should reject the fair-coin hypothesis. Another way to obtain this result is to use a Central Limit Theorem approximation. First, we express our rejection region in terms of a zero-mean, unit-variance random variable:
$$P[K > c] = 1 - P[K \le c] = 1 - \Phi\!\left(\frac{c - E[K]}{\sigma_K}\right) = 0.01.$$
Since $E[K] = 50$ and $\sigma_K = 5$, the CLT approximation is
$$P[K > c] \approx 1 - \Phi\!\left(\frac{c - 50}{5}\right) = 0.01.$$
From Table 4.1, we have $(c - 50)/5 = 2.35$, or $c = 61.75$. Once again, we see that we reject the hypothesis if we observe 62 or more heads.
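Both thresholds in Problem 9.1.2 can be reproduced with a short script: the two-sided cutoff from part (a), and the exact binomial tail that sits behind the value $c = 62$ in part (b). This is an editorial check under the fair-coin model, not part of the original solution set; the helper names are arbitrary.

```python
from math import comb, erf, sqrt

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tail(k, n=100):
    """Exact P[K >= k] for K ~ binomial(n, 1/2)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Part (a): Phi(c/5) = 0.975 gives c/5 = 1.96, so c = 9.8 flips and
# the acceptance region is roughly 40 to 60 heads.
c_a = 1.96 * 5
print(c_a)                    # 9.8

# Part (b): the CLT threshold is c = 50 + 5 * 2.35 = 61.75, and the
# exact binomial tail shows the 1% level is crossed near 62 heads:
c_b = 50 + 5 * 2.35
print(c_b)                    # 61.75
print(tail(62), tail(63))     # ~0.0105 and ~0.0060, so the tail first
                              # drops below 0.01 between 62 and 63
```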

This note was uploaded on 02/11/2012 for the course EEE 352 taught by Professor Ferry during the Spring '08 term at ASU.
