ECE 3100 Homework #9 Solutions (Fall 2009)

1. Since the covariance remains unchanged when a constant is added to a random variable, we can assume that X and Y have zero mean. Then

cov(X + Y, X − Y) = E[(X + Y)(X − Y)] = E[X^2 − Y^2] = E[X^2] − E[Y^2] = Var(X) − Var(Y) = 0,

where the final step uses the problem's assumption that X and Y have equal variances.

2. Let Z = |X − Y|; we will first compute the CDF of Z. For z ≥ 0,

P[Z ≤ z] = P[|X − Y| ≤ z] = P[−z ≤ X − Y ≤ z] = P[Y − z ≤ X ≤ z + Y].

Since X and Y are independent, the last probability can be computed by conditioning on Y:

P[Y − z ≤ X ≤ z + Y] = ∫_0^∞ f_Y(y) P[y − z ≤ X ≤ z + y | Y = y] dy
                     = ∫_0^∞ λe^{−λy} P[y − z ≤ X ≤ z + y] dy.

Notice that P[y − z ≤ X ≤ z + y] = P[X ≤ z + y] − P[X < y − z]; at this point two cases can be distinguished.

Case 1: If y ≤ z, we have P[X < y − z] = 0 (because X takes only positive values). Hence, using the formula for the CDF of an exponential RV, we obtain

P[y − z ≤ X ≤ z + y] = P[X ≤ z + y] = 1 − e^{−λ(z+y)}.

Case 2: If y > z, using again the formula for the CDF of an exponential RV, we obtain

P[y − z ≤ X ≤ z + y] = 1 − e^{−λ(z+y)} − (1 − e^{−λ(y−z)}) = e^{−λ(y−z)} − e^{−λ(z+y)}.

Going back to P[Z ≤ z], we have

P[Z ≤ z] = ∫_0^z λe^{−λy} (1 − e^{−λ(z+y)}) dy + ∫_z^∞ λe^{−λy} (e^{−λ(y−z)} − e^{−λ(z+y)}) dy
         = ∫_0^z λe^{−λy} dy − e^{−λz} ∫_0^z λe^{−2λy} dy + (e^{λz} − e^{−λz}) ∫_z^∞ λe^{−2λy} dy
         = 1 − e^{−λz} − e^{−λz} ∫_0^∞ λe^{−2λy} dy + e^{λz} ∫_z^∞ λe^{−2λy} dy
         = 1 − e^{−λz} − (1/2) e^{−λz} + (1/2) e^{λz} e^{−2λz}
         = 1 − e^{−λz}.

From the last equality we see that Z = |X − Y| is an exponential random variable with mean 1/λ, and hence with PDF f_Z(z) = λe^{−λz} for z > 0 and f_Z(z) = 0 otherwise.
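As a quick sanity check (not part of the original solution), the distribution of |X − Y| can be simulated; the rate λ = 2 below is an arbitrary choice, and any λ > 0 works the same way:

```python
import math
import random

random.seed(0)
lam = 2.0        # arbitrary rate for the check
n = 200_000

# Draw independent X, Y ~ Exp(lam) and form Z = |X - Y|.
z = [abs(random.expovariate(lam) - random.expovariate(lam)) for _ in range(n)]

# If Z ~ Exp(lam), its mean is 1/lam and its CDF at z0 is 1 - exp(-lam*z0).
z0 = 0.5
mean_z = sum(z) / n
ecdf_z0 = sum(v <= z0 for v in z) / n
print(mean_z, 1 / lam)                      # empirical vs. theoretical mean
print(ecdf_z0, 1 - math.exp(-lam * z0))     # empirical vs. theoretical CDF
```

Both printed pairs should agree to within Monte Carlo error, matching the derived CDF 1 − e^{−λz}.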
3. (a) Let Y = g(X) = sin(X). Then for X ∈ [0, 2π], Y takes values in the interval [−1, 1], and is not uniformly distributed over [0, 1]. The Penn student is therefore incorrect.

(b) Let g(x) = x/(2π). Then Y = g(X) = X/(2π), and Y ∈ [0, 1]. Thus f_Y(y) = 0 for y ∉ [0, 1]. For y ∈ [0, 1], we have

F_Y(y) = P(g(X) ≤ y) = P(X/(2π) ≤ y) = P(X ≤ 2πy) = 2πy · 1/(2π) = y,

and f_Y(y) = (d/dy) F_Y(y) = 1. Thus Y is uniform over [0, 1].

4. Denote by N_i, i = 1, ..., 4, the number of defective transistors in the i-th core; then each N_i is a Poisson RV with mean λ, and moreover the 4 RVs are independent. We are interested in determining the PMF of N = N_1 + N_2 + N_3 + N_4. Consider first the RV M_1 = N_1 + N_2. We have

P[M_1 = n] = P[N_1 + N_2 = n] = Σ_{k=0}^n P[N_1 = k, N_2 = n − k].

Since N_1 and N_2 are independent, we deduce

P[M_1 = n] = Σ_{k=0}^n P[N_1 = k] P[N_2 = n − k]
           = Σ_{k=0}^n (e^{−λ} λ^k / k!)(e^{−λ} λ^{n−k} / (n − k)!)
           = e^{−2λ} (λ^n / n!) Σ_{k=0}^n n! / (k!(n − k)!)
           = e^{−2λ} (2λ)^n / n!,

where we have used the binomial theorem (a + b)^n = Σ_{k=0}^n (n! / (k!(n − k)!)) a^k b^{n−k}, here applied with a = b = 1. It follows that M_1 is a Poisson RV with mean 2λ; similarly M_2 = N_3 + N_4 is a Poisson RV with mean 2λ. Notice that M_1 depends only on N_1 and N_2, while M_2 is a function of N_3 and N_4; since the RVs N_i are independent of each other, M_1 and M_2 are independent RVs. We conclude therefore that N = M_1 + M_2 is a Poisson RV with mean 4λ.
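The convolution argument above can be checked numerically (this check is an addition, not part of the original solution): convolving the Poisson(λ) PMF with itself, and the result with itself again, should reproduce the Poisson(4λ) PMF up to floating-point error. The value λ = 0.7 is an arbitrary choice.

```python
from math import exp, factorial

lam = 0.7  # arbitrary per-core defect rate for the check

def pois(mean, n):
    """Poisson PMF with the given mean, evaluated at n."""
    return exp(-mean) * mean ** n / factorial(n)

def conv(p, q, n):
    """PMF at n of the sum of two independent nonnegative integer RVs."""
    return sum(p(k) * q(n - k) for k in range(n + 1))

# M1 = N1 + N2 should be Poisson(2*lam); N = M1 + M2 should be Poisson(4*lam).
m1 = lambda n: conv(lambda k: pois(lam, k), lambda k: pois(lam, k), n)
n_pmf = lambda n: conv(m1, m1, n)

for n in range(8):
    print(n, n_pmf(n), pois(4 * lam, n))  # the two columns should match
```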
5. (a) Let T_i, i = 1, ..., n, be independent exponential RVs with respective parameters μ_i (so E[T_i] = 1/μ_i). Consider T = min(T_1, ..., T_n); we are interested in determining the PDF of T. Using the independence of the random variables T_1, ..., T_n, the complementary CDF of T is given by

P[T > t] = P[min(T_1, ..., T_n) > t] = P[T_1 > t, ..., T_n > t]
         = P[T_1 > t] × ... × P[T_n > t]
         = e^{−μ_1 t} × ... × e^{−μ_n t}
         = e^{−(μ_1 + ... + μ_n) t}.

Hence T is an exponential RV with parameter μ = Σ_{i=1}^n μ_i.

(b) Let the random variable K indicate which of the n components caused the jPhone to fail, i.e.,

P[K = k] = P[T_k = T] = P[T_k ≤ min(T_1, ..., T_{k−1}, T_{k+1}, ..., T_n)].

From part (a) we know that Y_k = min(T_1, ..., T_{k−1}, T_{k+1}, ..., T_n) is an exponential RV with parameter ν_k = μ − μ_k. Notice now that Y_k is a function of (T_1, ..., T_{k−1}, T_{k+1}, ..., T_n), which are independent of T_k; hence T_k and Y_k are independent. Consequently, the following sequence of equalities follows:

P[K = k] = P[T_k ≤ Y_k]
         = ∫_0^∞ f_{T_k}(t) P[Y_k ≥ t | T_k = t] dt
         = ∫_0^∞ μ_k e^{−μ_k t} P[Y_k ≥ t] dt
         = ∫_0^∞ μ_k e^{−μ_k t} e^{−ν_k t} dt
         = ∫_0^∞ μ_k e^{−(μ_k + ν_k) t} dt
         = μ_k / (μ_k + ν_k)
         = μ_k / Σ_{i=1}^n μ_i.
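A short Monte Carlo sketch covering both parts (again an addition, with arbitrary hypothetical rates μ_i): the empirical mean of min T_i should approach 1/μ, and the frequency with which component k fails first should approach μ_k/μ.

```python
import random

random.seed(1)
mus = [0.5, 1.0, 2.5]   # hypothetical component failure rates mu_i
mu = sum(mus)
trials = 200_000

counts = [0] * len(mus)
t_sum = 0.0
for _ in range(trials):
    ts = [random.expovariate(m) for m in mus]      # expovariate takes the rate
    k = min(range(len(ts)), key=lambda i: ts[i])   # index of first failure
    counts[k] += 1
    t_sum += min(ts)

# Part (a): E[min T_i] -> 1/mu.  Part (b): P[K = k] -> mu_k/mu.
print(t_sum / trials, 1 / mu)
print([c / trials for c in counts], [m / mu for m in mus])
```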
6. R_1 and R_2 have mean 1 kΩ and standard deviation 100 Ω, so Var(R_1) = Var(R_2) = (100 Ω)^2 = 0.01 (kΩ)^2. We thus have R_1 ~ N(1, 0.01) and R_2 ~ N(1, 0.01), with R_1 and R_2 independent. We wish to compute

P[|R_2/(R_1 + R_2) − 1/2| > 0.05/2]
  = P[R_2/(R_1 + R_2) < 0.95/2] + P[R_2/(R_1 + R_2) > 1.05/2]
  = P(2R_2 < 0.95R_1 + 0.95R_2) + P(2R_2 > 1.05R_1 + 1.05R_2)
  = P(0.95R_1 − 1.05R_2 > 0) + P(0.95R_2 − 1.05R_1 > 0).

Let Z_1 = 0.95R_1 − 1.05R_2 and Z_2 = 0.95R_2 − 1.05R_1. Using the fact that X ~ N(μ_1, σ_1^2) and Y ~ N(μ_2, σ_2^2), with X and Y independent, implies aX + bY ~ N(aμ_1 + bμ_2, a^2σ_1^2 + b^2σ_2^2), we have Z_1 ~ N(−0.1, 0.02005) and Z_2 ~ N(−0.1, 0.02005). Since Z_1 and Z_2 have the same distribution, the required probability becomes P(Z_1 > 0) + P(Z_2 > 0) = 2P(Z_1 > 0). We can compute P(Z_1 > 0) either by numerically integrating the Gaussian PDF or from normal distribution tables; in both cases, we obtain P(Z_1 > 0) ≈ 0.240, and thus

P[|R_2/(R_1 + R_2) − 1/2| > 0.05/2] = 2P(Z_1 > 0) ≈ 0.480.
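The final number can be reproduced without tables (this check is an addition to the solution) by writing the Gaussian tail probability in terms of the complementary error function: for Z ~ N(m, s^2), P(Z > 0) = (1/2) erfc((0 − m)/(s√2)).

```python
from math import erfc, sqrt

var_r = 0.01                              # Var(R1) = Var(R2) in (kOhm)^2
mean_z = 0.95 * 1.0 - 1.05 * 1.0          # E[Z1] = -0.1
var_z = (0.95 ** 2 + 1.05 ** 2) * var_r   # Var(Z1) = 0.02005

# Gaussian tail: P(Z1 > 0) = 1 - Phi((0 - mean_z)/sd) = 0.5*erfc((0 - mean_z)/(sd*sqrt(2)))
p = 0.5 * erfc((0.0 - mean_z) / sqrt(2.0 * var_z))
print(var_z, p, 2 * p)
```

This evaluates to P(Z_1 > 0) ≈ 0.240 and a total probability 2P(Z_1 > 0) ≈ 0.480, consistent with the derivation above.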
This note was uploaded on 01/16/2010 for the course ECE 3100 at Cornell University (Engineering School).
Term: '05. Instructor: HAAS.
