5.5 Solved Problems

Example 5.5.10. Let X be uniformly distributed in [0, 2π] and Y = sin(X). Calculate the p.d.f. f_Y of Y.

Since Y = g(X), we know that

f_Y(y) = Σ_n f_X(x_n) / |g′(x_n)|,

where the sum is over all the x_n such that g(x_n) = y. For each y ∈ (−1, 1), there are two values of x_n in [0, 2π] such that g(x_n) = sin(x_n) = y. For those values, we find that

|g′(x_n)| = |cos(x_n)| = √(1 − sin²(x_n)) = √(1 − y²)  and  f_X(x_n) = 1/(2π).

Hence,

f_Y(y) = 2 · (1/√(1 − y²)) · (1/(2π)) = 1/(π√(1 − y²)).

Example 5.5.11. Let {X, Y} be independent random variables with X exponentially distributed with mean 1 and Y uniformly distributed in [0, 1]. Calculate E(max{X, Y}).

Let Z = max{X, Y}. Then

P(Z ≤ z) = P(X ≤ z, Y ≤ z) = P(X ≤ z)P(Y ≤ z) = z(1 − e^{−z}) for z ∈ [0, 1], and = 1 − e^{−z} for z ≥ 1.

Hence,

f_Z(z) = 1 − e^{−z} + z e^{−z} for z ∈ [0, 1], and f_Z(z) = e^{−z} for z ≥ 1.

Accordingly,
E(Z) = ∫_0^∞ z f_Z(z) dz = ∫_0^1 z(1 − e^{−z} + z e^{−z}) dz + ∫_1^∞ z e^{−z} dz.

To do the calculation we note that

∫_0^1 z dz = [z²/2]_0^1 = 1/2,

∫_0^1 z e^{−z} dz = −∫_0^1 z d(e^{−z}) = −[z e^{−z}]_0^1 + ∫_0^1 e^{−z} dz = −e^{−1} − [e^{−z}]_0^1 = 1 − 2e^{−1},

∫_0^1 z² e^{−z} dz = −∫_0^1 z² d(e^{−z}) = −[z² e^{−z}]_0^1 + ∫_0^1 2z e^{−z} dz = −e^{−1} + 2(1 − 2e^{−1}) = 2 − 5e^{−1},

∫_1^∞ z e^{−z} dz = ∫_0^∞ z e^{−z} dz − ∫_0^1 z e^{−z} dz = 1 − (1 − 2e^{−1}) = 2e^{−1}.

Collecting the pieces, we find that

E(Z) = 1/2 − (1 − 2e^{−1}) + (2 − 5e^{−1}) + 2e^{−1} = 3/2 − e^{−1} ≈ 1.13.

Example 5.5.12. Let {X_n, n ≥ 1} be i.i.d. with E(X_n) = µ and var(X_n) = σ². Use Chebyshev's inequality to get a bound on

α := P(|(X_1 + · · · + X_n)/n − µ| ≥ ε).

Chebyshev's inequality (4.8.1) states that

α ≤ (1/ε²) var((X_1 + · · · + X_n)/n) = (1/ε²) · (n var(X_1)/n²) = σ²/(nε²).

This calculation shows that the sample mean gets closer and closer to the mean: the variance of the error decreases like 1/n.

Example 5.5.13. Let X =D P(λ). You pick X white balls. You color the balls independently, each red with probability p and blue with probability 1 − p. Let Y be the number of red balls and Z the number of blue balls. Show that Y and Z are independent and that Y =D P(λp) and Z =D P(λ(1 − p)).

We find

P(Y = m, Z = n) = P(X = m + n) C(m + n, m) p^m (1 − p)^n = (λ^{m+n}/(m + n)!) e^{−λ} × ((m + n)!/(m! n!)) p^m (1 − p)^n
= [((λp)^m/m!) e^{−λp}] × [((λ(1 − p))^n/n!) e^{−λ(1−p)}],

which proves the result.

6.7 Solved Problems

Example 6.7.1. Let (X, Y) be a point picked uniformly in the quarter circle {(x, y) | x ≥ 0, y ≥ 0, x² + y² ≤ 1}. Find E[X | Y].

Given Y = y, X is uniformly distributed in [0, √(1 − y²)]. Hence

E[X | Y] = (1/2)√(1 − Y²).

Example 6.7.2. A customer entering a store is served by clerk i with probability p_i, i = 1, 2, . . . , n. The time taken by clerk i to service a customer is an exponentially distributed random variable with parameter α_i. a. Find the pdf of T, the time taken to service a customer. b. Find E[T]. c. Find var[T].

Designate by X the clerk who serves the customer.

a. f_T(t) = Σ_{i=1}^n p_i f_{T|X}[t | i] = Σ_{i=1}^n p_i α_i e^{−α_i t}.

b. E[T] = E(E[T | X]) = E(1/α_X) = Σ_{i=1}^n p_i/α_i.

c. We first find E[T²] = E(E[T² | X]) = E(2/α_X²) = Σ_{i=1}^n 2p_i/α_i². Hence,

var(T) = E(T²) − (E(T))² = Σ_{i=1}^n 2p_i/α_i² − (Σ_{i=1}^n p_i/α_i)².

Example 6.7.3. The random variables X_i are i.i.d. and such that E[X_i] = µ and var(X_i) = σ². Let N be a random variable independent of all the X_i's taking on nonnegative integer values. Let S = X_1 + X_2 + . . . + X_N. a. Find E(S). b. Find var(S).

a. E(S) = E(E[S | N]) = E(Nµ) = µE(N).

b. First we calculate E(S²). We find

E(S²) = E(E[S² | N]) = E(E[(X_1 + X_2 + . . . + X_N)² | N]) = E(E[X_1² + · · · + X_N² + Σ_{i≠j} X_i X_j | N])
= E(N E(X_1²) + N(N − 1)E(X_1 X_2)) = E(N(µ² + σ²) + N(N − 1)µ²) = E(N)σ² + E(N²)µ².

Then,

var(S) = E(S²) − (E(S))² = E(N)σ² + E(N²)µ² − µ²(E(N))² = E(N)σ² + var(N)µ².

Example 6.7.4. Let X, Y be independent and uniform in [0, 1]. Calculate E[X² | X + Y].

Given X + Y = z, the point (X, Y) is uniformly distributed on the line {(x, y) | x ≥ 0, y ≥ 0, x + y = z}. Draw a picture to see that if z > 1, then X is uniform on [z − 1, 1] and if z < 1, then X is uniform on [0, z]. Thus, if z > 1 one has

E[X² | X + Y = z] = ∫_{z−1}^1 x² (1/(2 − z)) dx = (1/(2 − z)) [x³/3]_{z−1}^1 = (1 − (z − 1)³)/(3(2 − z)).

Similarly, if z < 1, then

E[X² | X + Y = z] = ∫_0^z x² (1/z) dx = (1/z)[x³/3]_0^z = z²/3.

Example 6.7.5. Let (X, Y) be the coordinates of a point chosen uniformly in [0, 1]². Calculate E[X | XY].

This is an example where we use the straightforward approach, based on the definition. The problem is interesting because it illustrates that approach in a tractable but nontrivial example. Let Z = XY.
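Before working through this example, the two closed forms just obtained in Example 6.7.4 can be sanity-checked by simulation. The sketch below (all function and variable names are ours) approximates the conditioning event {X + Y = z} by a thin window |X + Y − z| < eps and averages X² over the samples that land in it:

```python
import random

def cond_mean_x2(z, eps=0.005, trials=1_000_000, seed=0):
    """Monte Carlo estimate of E[X^2 | X + Y = z] for X, Y i.i.d. U[0,1],
    approximating the conditioning event by {|X + Y - z| < eps}."""
    rng = random.Random(seed)
    total, hits = 0.0, 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if abs(x + y - z) < eps:
            total += x * x
            hits += 1
    return total / hits

def exact(z):
    """Closed forms derived in Example 6.7.4."""
    if z > 1:
        return (1 - (z - 1) ** 3) / (3 * (2 - z))
    return z ** 2 / 3

# One target on each side of z = 1.
for z in (0.6, 1.5):
    est, ref = cond_mean_x2(z), exact(z)
    assert abs(est - ref) < 0.02, (z, est, ref)
```

The windowing introduces a bias of order eps, which is well below the Monte Carlo noise here; tightening eps while increasing trials would sharpen the check.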
E[X | Z = z] = ∫_0^1 x f_{X|Z}[x | z] dx.

Now,

f_{X|Z}[x | z] = f_{X,Z}(x, z)/f_Z(z).

Also,

f_{X,Z}(x, z) dx dz = P(X ∈ (x, x + dx), Z ∈ (z, z + dz)) = P(X ∈ (x, x + dx)) P[Z ∈ (z, z + dz) | X = x]
= dx P(xY ∈ (z, z + dz)) = dx P(Y ∈ (z/x, z/x + dz/x)) = dx (dz/x) 1{z ≤ x}.

Hence,

f_{X,Z}(x, z) = 1/x if x ∈ [0, 1] and z ∈ [0, x], and f_{X,Z}(x, z) = 0 otherwise.

Consequently,

f_Z(z) = ∫_0^1 f_{X,Z}(x, z) dx = ∫_z^1 (1/x) dx = −ln(z), 0 ≤ z ≤ 1.

Finally,

f_{X|Z}[x | z] = −1/(x ln(z)), for x ∈ [0, 1] and z ∈ [0, x],

and

E[X | Z = z] = ∫_z^1 x (−1/(x ln(z))) dx = (z − 1)/ln(z),

so that

E[X | XY] = (XY − 1)/ln(XY).

Examples of values: E[X | XY = 1] = 1, E[X | XY = 0.1] = 0.39, E[X | XY ≈ 0] ≈ 0.

Example 6.7.6. Let X, Y be independent and exponentially distributed with mean 1. Find E[cos(X + Y) | X].

We have
E[cos(X + Y) | X = x] = ∫_0^∞ cos(x + y) e^{−y} dy = Re{∫_0^∞ e^{i(x+y)−y} dy} = Re{e^{ix}/(1 − i)} = (cos(x) − sin(x))/2.

Example 6.7.7. Let X_1, X_2, . . . , X_n be i.i.d. U[0, 1] and Y = max{X_1, . . . , X_n}. Calculate E[X_1 | Y].

Intuition suggests, and it is not too hard to justify, that if Y = y, then X_1 = y with probability 1/n, and with probability (n − 1)/n the random variable X_1 is uniformly distributed in [0, y]. Hence,

E[X_1 | Y] = (1/n)Y + ((n − 1)/n)(Y/2) = ((n + 1)/(2n)) Y.

Example 6.7.8. Let X, Y, Z be independent and uniform in [0, 1]. Calculate E[(X + 2Y + Z)² | X].

One has

E[(X + 2Y + Z)² | X] = E[X² + 4Y² + Z² + 4XY + 4YZ + 2XZ | X].

Now,

E[X² + 4Y² + Z² + 4XY + 4YZ + 2XZ | X] = X² + 4E(Y²) + E(Z²) + 4XE(Y) + 4E(Y)E(Z) + 2XE(Z)
= X² + 4/3 + 1/3 + 2X + 1 + X = X² + 3X + 8/3.

Example 6.7.9. Let X, Y, Z be three random variables defined on the same probability space. Prove formally that

E((X − E[X | Y])²) ≥ E((X − E[X | Y, Z])²).

Let X_1 = E[X | Y] and X_2 = E[X | Y, Z]. Note that

E((X − X_2)(X_2 − X_1)) = E(E[(X − X_2)(X_2 − X_1) | Y, Z])

and

E[(X − X_2)(X_2 − X_1) | Y, Z] = (X_2 − X_1)E[X − X_2 | Y, Z] = (X_2 − X_1)(X_2 − X_2) = 0.

Hence,

E((X − X_1)²) = E((X − X_2 + X_2 − X_1)²) = E((X − X_2)²) + E((X_2 − X_1)²) ≥ E((X − X_2)²).

Example 6.7.10. Pick the point (X, Y) uniformly in the triangle {(x, y) | 0 ≤ x ≤ 1 and 0 ≤ y ≤ x}. a. Calculate E[X | Y]. b. Calculate E[Y | X]. c. Calculate E[(X − Y)² | X].

a. Given {Y = y}, X is U[y, 1], so that E[X | Y = y] = (1 + y)/2. Hence,

E[X | Y] = (1 + Y)/2.

b. Given {X = x}, Y is U[0, x], so that E[Y | X = x] = x/2. Hence,

E[Y | X] = X/2.

c. Since given {X = x}, Y is U[0, x], we find

E[(X − Y)² | X = x] = (1/x)∫_0^x (x − y)² dy = (1/x)∫_0^x y² dy = x²/3.

Hence,

E[(X − Y)² | X] = X²/3.

Example 6.7.11. Assume that the two random variables X and Y are such that E[X | Y] = Y and E[Y | X] = X. Show that P(X = Y) = 1.

We show that E((X − Y)²) = 0. This will prove that X − Y = 0 with probability one. Note that

E((X − Y)²) = E(X²) − E(XY) + E(Y²) − E(XY).

Now,

E(XY) = E(E[XY | X]) = E(X E[Y | X]) = E(X²).

Similarly, one finds that E(XY) = E(Y²). Putting together the pieces, we get E((X − Y)²) = 0.

Example 6.7.12. Let X, Y be independent random variables uniformly distributed in [0, 1]. Calculate E[X | X < Y].

Drawing a unit square, we see that given {X < Y}, the pair (X, Y) is uniformly distributed in the triangle above the diagonal y = x of that square. Accordingly, the p.d.f. f(x) of X is given by f(x) = 2(1 − x). Hence,

E[X | X < Y] = ∫_0^1 x · 2(1 − x) dx = 1/3.

7.4 Summary

We defined the Gaussian random variables N(0, 1), N(µ, σ²), and N(µ, Σ) both in terms of their density and their characteristic function. Jointly Gaussian random variables that are uncorrelated are independent.

If X, Y are jointly Gaussian, then E[X | Y] = E(X) + cov(X, Y)var(Y)^{−1}(Y − E(Y)). In the vector case,
E[X | Y] = E(X) + Σ_{X,Y} Σ_Y^{−1}(Y − E(Y)),

when Σ_Y is invertible. We also discussed the noninvertible case.

7.5 Solved Problems

Example 7.5.1. The noise voltage X in an electric circuit can be modelled as a Gaussian random variable with mean zero and variance equal to 10^{−8}. a. What is the probability that it exceeds 10^{−4}? What is the probability that it exceeds 2 × 10^{−4}? What is the probability that its value is between −2 × 10^{−4} and 10^{−4}? b. Given that the noise value is positive, what is the probability that it exceeds 10^{−4}? c. What is the expected value of |X|?

Let Z = 10^4 X; then Z =D N(0, 1) and we can reformulate the questions in terms of Z.

a. Using (7.1) we find P(Z > 1) = 0.159 and P(Z > 2) = 0.023. Indeed, P(Z > d) = P(|Z| > d)/2, by symmetry of the density. Moreover,

P(−2 < Z < 1) = P(Z < 1) − P(Z ≤ −2) = 1 − P(Z > 1) − P(Z > 2) = 1 − 0.159 − 0.023 = 0.818.

b. We have

P[Z > 1 | Z > 0] = P(Z > 1)/P(Z > 0) = 2P(Z > 1) = 0.318.

c. Since Z = 10^4 X, one has E(|Z|) = 10^4 E(|X|). Now,

E(|Z|) = ∫_{−∞}^∞ |z| f_Z(z) dz = 2∫_0^∞ z f_Z(z) dz = 2∫_0^∞ (1/√(2π)) z exp{−z²/2} dz = −√(2/π) ∫_0^∞ d[exp{−z²/2}] = √(2/π).

Hence, E(|X|) = 10^{−4}√(2/π).

Example 7.5.2. Let U = {U_n, n ≥ 1} be a sequence of independent standard Gaussian random variables. A lowpass filter takes the sequence U and produces the output sequence X_n = (U_n + U_{n+1})/2. A highpass filter produces the output sequence Y_n = (U_n − U_{n−1})/2. a. Find the joint pdf of X_n and X_{n−1} and find the joint pdf of X_n and X_{n+m} for m > 1. b. Find the joint pdf of Y_n and Y_{n−1} and find the joint pdf of Y_n and Y_{n+m} for m > 1. c. Find the joint pdf of X_n and Y_m.

We start with some preliminary observations. First, since the U_i are independent, they are jointly Gaussian. Second, X_n and Y_n are linear combinations of the U_i and thus are also jointly Gaussian. Third, the jpdf of jointly Gaussian random variables Z is

f_Z(z) = (1/√((2π)^n det(C))) exp[−(1/2)(z − m)^T C^{−1}(z − m)]

where n is the dimension of Z, m is the vector of expectations of Z, and C is the covariance matrix E[(Z − m)(Z − m)^T]. Finally, we need some basic facts from algebra. If C = [a b; c d], then det(C) = ad − bc and C^{−1} = (1/det(C))[d −b; −c a]. We are now ready to answer the questions.

a. Express the pair in the form X = AU:
[X_n; X_{n−1}] = [1/2 1/2 0; 0 1/2 1/2] [U_{n+1}; U_n; U_{n−1}].

Then det(C) = 1/4 − 1/16 = 3/16 and

C^{−1} = (16/3)[1/2 −1/4; −1/4 1/2],

so that

f_{X_n,Y_n}(x_n, y_n) = (2/(π√3)) exp[−(4/3)(x_n² − x_n y_n + y_n²)].

ii. Consider m = n + 1:

[X_n; Y_{n+1}] = [1/2 1/2; −1/2 1/2] [U_n; U_{n+1}].

Then E[[X_n Y_{n+1}]^T] = A E[U] = 0 and

C = A E[UU^T] A^T = [1/2 0; 0 1/2].

Then det(C) = 1/4, the off-diagonal entries of C are 0, and C^{−1} = [2 0; 0 2], so that

f_{X_n,Y_{n+1}}(x_n, y_{n+1}) = (1/π) exp[−(x_n² + y_{n+1}²)].

iii. For all other m:

[X_n; Y_m] = [1/2 1/2 0 0; 0 0 −1/2 1/2] [U_n; U_{n+1}; U_{m−1}; U_m].

Then E[[X_n Y_m]^T] = A E[U] = 0 and C = A E[UU^T] A^T = [1/2 0; 0 1/2]. Then det(C) = 1/4 and C^{−1} = [2 0; 0 2], so that

f_{X_n,Y_m}(x_n, y_m) = (1/π) exp[−(x_n² + y_m²)].

Example 7.5.3. Let X, Y, Z, V be i.i.d. N(0, 1). Calculate E[X + 2Y | 3X + Z, 4Y + 2V].

We have E[X + 2Y | 3X + Z, 4Y + 2V] = a Σ^{−1} [3X + Z; 4Y + 2V] where

a = [E((X + 2Y)(3X + Z)), E((X + 2Y)(4Y + 2V))] = [3, 8]

and

Σ = [var(3X + Z), E((3X + Z)(4Y + 2V)); E((3X + Z)(4Y + 2V)), var(4Y + 2V)] = [10 0; 0 20].

Hence,

E[X + 2Y | 3X + Z, 4Y + 2V] = [3, 8][10^{−1} 0; 0 20^{−1}][3X + Z; 4Y + 2V] = (3/10)(3X + Z) + (4/10)(4Y + 2V).

Example 7.5.4. Assume that {X, Y_n, n ≥ 1} are mutually independent random variables with X = N(0, 1) and Y_n = N(0, σ²). Let X̂_n = E[X | X + Y_1, . . . , X + Y_n]. Find the smallest value of n such that

P(|X − X̂_n| > 0.1) ≤ 5%.

We know that X̂_n = a_n(nX + Y_1 + · · · + Y_n). The value of a_n is such that E((X − X̂_n)(X + Y_j)) = 0, i.e.,

E((X − a_n(nX + Y_j))(X + Y_j)) = 0,

which implies that a_n = 1/(n + σ²). Then

var(X − X̂_n) = var((1 − na_n)X − a_n(Y_1 + · · · + Y_n)) = (1 − na_n)² + n(a_n)²σ² = σ²/(n + σ²).

Thus we know that X − X̂_n = N(0, σ²/(n + σ²)). Accordingly,

P(|X − X̂_n| > 0.1) = P(|N(0, σ²/(n + σ²))| > 0.1) = P(|N(0, 1)| > 0.1/α_n),

where α_n =
√(σ²/(n + σ²)) is the standard deviation of X − X̂_n. For this probability to be at most 5% we need 0.1/α_n ≥ 2, i.e., α_n ≤ 0.05, i.e.,

σ²/(n + σ²) ≤ (0.05)² = 1/400,

so that n ≥ 399σ². The result is intuitively pleasing: If the observations are more noisy (σ² large), we need more of them to estimate X.

Example 7.5.5. Assume that X, Y are i.i.d. N(0, 1). Calculate E[(X + Y)⁴ | X − Y].

Note that X + Y and X − Y are independent because they are jointly Gaussian and uncorrelated. Hence,

E[(X + Y)⁴ | X − Y] = E((X + Y)⁴) = E(X⁴ + 4X³Y + 6X²Y² + 4XY³ + Y⁴) = 3 + 6 + 3 = 12.

Example 7.5.6. Let X, Y be independent N(0, 1) random variables. Show that W := X² + Y² =D Exd(1/2). That is, the sum of the squares of two i.i.d. zero-mean Gaussian random variables is exponentially distributed!

We calculate the characteristic function of W. We find

E(e^{iuW}) = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{iu(x²+y²)} (1/(2π)) e^{−(x²+y²)/2} dx dy
= ∫_0^{2π} ∫_0^∞ e^{iur²} (1/(2π)) e^{−r²/2} r dr dθ
= ∫_0^∞ e^{iur²} e^{−r²/2} r dr
= (1/(2iu − 1)) ∫_0^∞ d[e^{iur² − r²/2}] = 1/(1 − 2iu).

On the other hand, if W =D Exd(λ), then

E(e^{iuW}) = ∫_0^∞ e^{iux} λe^{−λx} dx = λ/(λ − iu) = 1/(1 − λ^{−1}iu).

Comparing these expressions shows that X² + Y² =D Exd(1/2) as claimed.

Example 7.5.7. Let {X_n, n ≥ 0} be Gaussian N(0, 1) random variables. Assume that Y_{n+1} = aY_n + X_n for n ≥ 0, where Y_0 is a Gaussian random variable with mean zero and variance σ², independent of the X_n's, and |a| < 1. a. Calculate var(Y_n) for n ≥ 0. Show that var(Y_n) → γ² as n → ∞ for some value γ². b. Find the values of σ² so that the variance of Y_n does not depend on n ≥ 1.

a. We see that

var(Y_{n+1}) = var(aY_n + X_n) = a²var(Y_n) + var(X_n) = a²var(Y_n) + 1.

Thus, with α_n := var(Y_n), one has α_{n+1} = a²α_n + 1 and α_0 = σ². Solving these equations we find

var(Y_n) = α_n = a^{2n}σ² + (1 − a^{2n})/(1 − a²), for n ≥ 0.

Since |a| < 1, it follows that var(Y_n) → γ² := 1/(1 − a²) as n → ∞.

b. The obvious answer is σ² = γ².

Example 7.5.8. Let the X_n's be as in Example 7.5.7. a. Calculate E[X_1 + X_2 + X_3 | X_1 + X_2, X_2 + X_3, X_3 + X_4]. b. Calculate E[X_1 + X_2 + X_3 | X_1 + X_2 + X_3 + X_4 + X_5].

a. We know that the solution is of the form Y = a(X_1 + X_2) + b(X_2 + X_3) + c(X_3 + X_4) where the coefficients a, b, c must be such that the estimation error is orthogonal to the conditioning variables. That is,

E(((X_1 + X_2 + X_3) − Y)(X_1 + X_2)) = E(((X_1 + X_2 + X_3) − Y)(X_2 + X_3)) = E(((X_1 + X_2 + X_3) − Y)(X_3 + X_4)) = 0.

These equalities read

2 − a − (a + b) = 2 − (a + b) − (b + c) = 1 − (b + c) − c = 0,

and solving these equalities gives a = 3/4, b = 1/2, and c = 1/4.

b. Here we use symmetry. For k = 1, . . . , 5, let Y_k = E[X_k | X_1 + X_2 + X_3 + X_4 + X_5]. Note that Y_1 = Y_2 = · · · = Y_5, by symmetry. Moreover,

Y_1 + Y_2 + Y_3 + Y_4 + Y_5 = E[X_1 + X_2 + X_3 + X_4 + X_5 | X_1 + X_2 + X_3 + X_4 + X_5] = X_1 + X_2 + X_3 + X_4 + X_5.

It follows that Y_k = (X_1 + X_2 + X_3 + X_4 + X_5)/5 for k = 1, . . . , 5. Hence,

E[X_1 + X_2 + X_3 | X_1 + X_2 + X_3 + X_4 + X_5] = Y_1 + Y_2 + Y_3 = (3/5)(X_1 + X_2 + X_3 + X_4 + X_5).

Example 7.5.9. Let the X_n's be as in Example 7.5.7.
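Stepping back to Example 7.5.8a for a moment: the coefficients found there are exactly the solution of the normal equations G·coef = rhs, where G is the Gram matrix of the conditioning variables and rhs collects their correlations with X_1 + X_2 + X_3. A minimal check (the names G, rhs, coef are ours):

```python
# Verify Example 7.5.8a: E[X1+X2+X3 | X1+X2, X2+X3, X3+X4]
# = a(X1+X2) + b(X2+X3) + c(X3+X4) with (a, b, c) = (3/4, 1/2, 1/4).
# G[i][j] = E(Vi * Vj) for the conditioning variables V1 = X1+X2,
# V2 = X2+X3, V3 = X3+X4 (Xi i.i.d. N(0,1)), and
# rhs[i] = E((X1+X2+X3) * Vi).
G = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
rhs = [2, 2, 1]
coef = [3/4, 1/2, 1/4]

# Each normal equation must hold exactly.
for row, r in zip(G, rhs):
    lhs = sum(g * c for g, c in zip(row, coef))
    assert abs(lhs - r) < 1e-12
```

The same Gram-matrix bookkeeping reappears in Examples 7.5.10 and 7.5.15, only with different conditioning variables.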
Find the jpdf of (X_1 + 2X_2 + 3X_3, 2X_1 + 3X_2 + X_3, 3X_1 + X_2 + 2X_3).

These random variables are jointly Gaussian, zero mean, and with covariance matrix Σ given by

Σ = [14 11 11; 11 14 11; 11 11 14].

Indeed, Σ is the matrix of covariances. For instance, its entry (2, 3) is given by

E((2X_1 + 3X_2 + X_3)(3X_1 + X_2 + 2X_3)) = 2 × 3 + 3 × 1 + 1 × 2 = 11.

We conclude that the jpdf is

f_X(x) = (1/((2π)^{3/2}|Σ|^{1/2})) exp{−(1/2) x^T Σ^{−1} x}.

We let you calculate |Σ| and Σ^{−1}.

Example 7.5.10. Let X_1, X_2, X_3 be independent N(0, 1) random variables. Calculate E[X_1 + 3X_2 | Y] where

Y = [1 2 3; 3 2 1][X_1; X_2; X_3].

By now, this should be familiar. The solution is Ŷ := a(X_1 + 2X_2 + 3X_3) + b(3X_1 + 2X_2 + X_3) where a and b are such that

0 = E((X_1 + 3X_2 − Ŷ)(X_1 + 2X_2 + 3X_3)) = 7 − (a + 3b) − (4a + 4b) − (9a + 3b) = 7 − 14a − 10b

and

0 = E((X_1 + 3X_2 − Ŷ)(3X_1 + 2X_2 + X_3)) = 9 − (3a + 9b) − (4a + 4b) − (3a + b) = 9 − 10a − 14b.

Solving these equations gives a = 1/12 and b = 7/12.

Example 7.5.11. Find the jpdf of (2X_1 + X_2, X_1 + 3X_2) where X_1 and X_2 are independent N(0, 1) random variables.

These random variables are jointly Gaussian, zero-mean, with covariance Σ given by

Σ = [5 5; 5 10].

Hence,

f_X(x) = (1/(2π|Σ|^{1/2})) exp{−(1/2)x^T Σ^{−1} x} = (1/(10π)) exp{−(1/2)x^T Σ^{−1} x},

where

Σ^{−1} = (1/25)[10 −5; −5 5].

Example 7.5.12. The random variable X is N(µ, 1). Find an approximate value of µ so that P(−0.5 ≤ X ≤ −0.1) ≈ P(1 ≤ X ≤ 2).

We write X = µ + Y where Y is N(0, 1). We must find µ so that

g(µ) := P(−0.5 − µ ≤ Y ≤ −0.1 − µ) − P(1 − µ ≤ Y ≤ 2 − µ) ≈ 0.

We do a little search using a table of the N(0, 1) distribution or using a calculator. We find that µ ≈ 0.065.

Example 7.5.13. Let X be a N(0, 1) random variable. Calculate the mean and the variance of cos(X) and sin(X).

a. Mean values. We know that

E(e^{iuX}) = e^{−u²/2} and e^{iθ} = cos(θ) + i sin(θ).

Therefore,

E(cos(uX) + i sin(uX)) = e^{−u²/2},

so that E(cos(uX)) = e^{−u²/2} and E(sin(uX)) = 0. In particular, E(cos(X)) = e^{−1/2} and E(sin(X)) = 0.

b. Variances. We first calculate E(cos²(X)). We find

E(cos²(X)) = E((1/2)(1 + cos(2X))) = 1/2 + (1/2)E(cos(2X)).

Using the previous derivation, we find that E(cos(2X)) = e^{−2²/2} = e^{−2}, so that E(cos²(X)) = (1/2) + (1/2)e^{−2}. We conclude that

var(cos(X)) = E(cos²(X)) − (E(cos(X)))² = 1/2 + (1/2)e^{−2} − (e^{−1/2})² = 1/2 + (1/2)e^{−2} − e^{−1}.

Similarly, we find

E(sin²(X)) = E(1 − cos²(X)) = 1/2 − (1/2)e^{−2} = var(sin(X)).

Example 7.5.14. Let X be a N(0, 1) random variable. Define

Y = X if |X| ≤ 1, and Y = −X if |X| > 1.

Find the pdf of Y.

By symmetry, Y is N(0, 1).

Example 7.5.15. Let {X, Y, Z} be independent N(0, 1) random variables. a. Calculate E[3X + 5Y | 2X − Y, X + Z]. b. How does the expression change if X, Y, Z are i.i.d. N(1, 1)?

a. Let V_1 = 2X − Y, V_2 = X + Z and V = [V_1, V_2]^T. Then E[3X + 5Y | V] = a Σ_V^{−1} V where

a = E((3X + 5Y)V^T) = [1, 3] and Σ_V = [5 2; 2 2].

Hence,

E[3X + 5Y | V] = [1, 3](1/6)[2 −2; −2 5] V = (1/6)[−4, 13] V = −(2/3)(2X − Y) + (13/6)(X + Z).

b. Now,

E[3X + 5Y | V] = E(3X + 5Y) + a Σ_V^{−1}(V − E(V)) = 8 + (1/6)[−4, 13](V − [1, 2]^T) = 26/6 − (2/3)(2X − Y) + (13/6)(X + Z).

Example 7.5.16. Let (X, Y) be jointly Gaussian. Show that X − E[X | Y] is Gaussian and calculate its mean and variance.

We know that

E[X | Y] = E(X) + (cov(X, Y)/var(Y))(Y − E(Y)).

Consequently,

X − E[X | Y] = X − E(X) − (cov(X, Y)/var(Y))(Y − E(Y))

and is certainly Gaussian. This difference is zero-mean. Its variance is

var(X) + [cov(X, Y)/var(Y)]² var(Y) − 2(cov(X, Y)/var(Y)) cov(X, Y) = var(X) − [cov(X, Y)]²/var(Y).

and P : F → [0, 1] is a σ-additive set function such that P(Ω) = 1. The idea is to specify the likelihood of various outcomes (elements of Ω). If one can specify the probability of individual outcomes (e.g., when Ω is countable), then one can choose F = 2^Ω, so that all sets of outcomes are events. However, this is generally not possible as the example of the uniform distribution on [0, 1] shows. (See Appendix C.)
2.6.1 Stars and Bars Method

In many problems, we use a method for counting the number of ordered groupings of identical objects. This method is called the stars and bars method. Suppose we are given identical objects we call stars. Any ordered grouping of these stars can be obtained by separating them by bars. For example, ||∗∗∗|∗ separates four stars into four groups of sizes 0, 0, 3, and 1. Suppose we wish to separate N stars into M ordered groups. We need M − 1 bars to form M groups. The number of orderings is the number of ways of placing the N identical stars and M − 1 identical bars into N + M − 1 spaces, i.e., C(N + M − 1, M − 1). Creating compound objects of stars and bars is useful when there are bounds on the sizes of the groups.

2.7 Solved Problems

Example 2.7.1. Describe the probability space {Ω, F, P} that corresponds to the random experiment "picking five cards without replacement from a perfectly shuffled 52-card deck."

1. One can choose Ω to be all the permutations of A := {1, 2, . . . , 52}. The interpretation of ω ∈ Ω is then the shuffled deck. Each permutation is equally likely, so that p_ω = 1/(52!) for ω ∈ Ω. When we pick the five cards, these cards are (ω_1, ω_2, . . . , ω_5), the top 5 cards of the deck.

2. One can also choose Ω to be all the subsets of A with five elements. In this case, each subset is equally likely and, since there are N := C(52, 5) such subsets, one defines p_ω = 1/N for ω ∈ Ω.

3. One can choose Ω = {ω = (ω_1, ω_2, ω_3, ω_4, ω_5) | ω_n ∈ A and ω_m ≠ ω_n, ∀m ≠ n, m, n ∈ {1, 2, . . . , 5}}. In this case, the outcome specifies the order in which we pick the cards. Since there are M := 52!/(47!) such ordered lists of five cards without replacement, we define p_ω = 1/M for ω ∈ Ω.

As this example shows, there are multiple ways of describing a random experiment. What matters is that Ω is large enough to specify completely the outcome of the experiment.

Example 2.7.2. Pick three balls without replacement from an urn with fifteen balls that are identical except that ten are red and five are blue. Specify the probability space.

One possibility is to specify the color of the three balls in the order they are picked. Then Ω = {R, B}³, F = 2^Ω,

P({RRR}) = (10 · 9 · 8)/(15 · 14 · 13), . . . , P({BBB}) = (5 · 4 · 3)/(15 · 14 · 13).

Example 2.7.3. You flip a fair coin until you get three consecutive 'heads'. Specify the probability space.

One possible choice is Ω = {H, T}*, the set of finite sequences of H and T. That is,

{H, T}* = ∪_{n=1}^∞ {H, T}^n.

This set Ω is countable, so we can choose F = 2^Ω. Here, P({ω}) = 2^{−n} where n := length of ω. This is another example of a probability space that is bigger than necessary, but easier to specify than the smallest probability space we need.

Example 2.7.4. Let Ω = {0, 1, 2, . . .}. Let F be the collection of subsets of Ω that are either finite or whose complement is finite. Is F a σ-field?

No, F is not closed under countable set operations. For instance, {2n} ∈ F for each n ≥ 0 because {2n} is finite. However, A := ∪_{n=0}^∞ {2n} is not in F because both A and A^c are infinite.

Example 2.7.5. In a class with 24 students, what is the probability that no two students have the same birthday?

Let N = 365 and n = 24. The probability is

α := (N/N) × ((N − 1)/N) × ((N − 2)/N) × · · · × ((N − n + 1)/N).

To estimate this quantity we proceed as follows. Note that
ln(α) = Σ_{k=1}^n ln((N − n + k)/N) ≈ ∫_0^n ln((N − n + x)/N) dx = N ∫_a^1 ln(y) dy = N[y ln(y) − y]_a^1 = −(N − n + 1) ln((N − n + 1)/N) − (n − 1).

(In this derivation we defined a = (N − n + 1)/N.) With n = 24 and N = 365 we find that α ≈ 0.48.

Example 2.7.6. Let A, B, C be three events. Assume that P(A) = 0.6, P(B) = 0.6, P(C) = 0.7, P(A ∩ B) = 0.3, P(A ∩ C) = 0.4, P(B ∩ C) = 0.4, and P(A ∪ B ∪ C) = 1. Find P(A ∩ B ∩ C).

We know that (draw a picture)

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).

Substituting the known values, we find

1 = 0.6 + 0.6 + 0.7 − 0.3 − 0.4 − 0.4 + P(A ∩ B ∩ C),

so that P(A ∩ B ∩ C) = 0.2.

Example 2.7.7. Let Ω = {1, 2, 3, 4} and let F = 2^Ω be the collection of all the subsets of Ω. Give an example of a collection A of subsets of Ω and probability measures P_1 and P_2 such that (i) P_1(A) = P_2(A), ∀A ∈ A; (ii) the σ-field generated by A is F (this means that F is the smallest σ-field of Ω that contains A); (iii) P_1 and P_2 are not the same.

Let A = {{1, 2}, {2, 4}}. Assign probabilities

P_1({1}) = 1/8, P_1({2}) = 1/8, P_1({3}) = 3/8, P_1({4}) = 3/8;

and P_2({1}) =
1/12, P_2({2}) = 2/12, P_2({3}) = 5/12, P_2({4}) = 4/12.

Note that P_1 and P_2 are not the same, thus satisfying (iii). Moreover,

P_1({1, 2}) = P_1({1}) + P_1({2}) = 1/8 + 1/8 = 1/4 and P_2({1, 2}) = P_2({1}) + P_2({2}) = 1/12 + 2/12 = 1/4.

Hence P_1({1, 2}) = P_2({1, 2}). Similarly,

P_1({2, 4}) = P_1({2}) + P_1({4}) = 1/8 + 3/8 = 1/2 and P_2({2, 4}) = P_2({2}) + P_2({4}) = 2/12 + 4/12 = 1/2.

Hence P_1({2, 4}) = P_2({2, 4}). Thus P_1(A) = P_2(A) ∀A ∈ A, thus satisfying (i).

To check (ii), we only need to check that ∀k ∈ Ω, {k} can be formed by set operations on sets in A ∪ {∅, Ω}. Then any other set in F can be formed by set operations on the {k}:

{1} = {1, 2} ∩ {2, 4}^C
{2} = {1, 2} ∩ {2, 4}
{3} = {1, 2}^C ∩ {2, 4}^C
{4} = {1, 2}^C ∩ {2, 4}.

Example 2.7.8. Choose a number randomly between 1 and 999999 inclusive, all choices being equally likely. What is the probability that the digits sum up to 23? For example, the number 7646 is between 1 and 999999 and its digits sum up to 23 (7+6+4+6=23).

Numbers between 1 and 999999 inclusive have 6 digits for which each digit has a value in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. We are interested in finding the numbers with x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 23, where x_i represents the ith digit. First consider all nonnegative x_i where each digit can range from 0 to 23; the number of ways to distribute 23 amongst the x_i's is
C(28, 5). But we need to restrict the digits to x_i < 10. So we need to subtract the number of ways to distribute 23 amongst the x_i's when x_k ≥ 10 for some k. Specifically, when x_k ≥ 10 we can express it as x_k = 10 + y_k. For all other j ≠ k write y_j = x_j. The number of ways to arrange 23 amongst the x_i when some x_k ≥ 10 is the same as the number of ways to arrange the y_i so that Σ_{i=1}^6 y_i = 23 − 10 = 13, which is C(18, 5), as we can see by using the stars and bars method (see 2.6.1). There are 6 possible choices of the digit k with x_k ≥ 10, so there are a total of 6 C(18, 5) ways for some digit to be greater than or equal to 10.

However, the above counts some events multiple times. For instance, x_1 = x_2 = 10 is counted both when x_1 ≥ 10 and when x_2 ≥ 10. We need to account for these events that are counted multiple times. We can consider when two digits are greater than or equal to 10: x_j ≥ 10 and x_k ≥ 10 when j ≠ k. Let x_j = 10 + y_j and x_k = 10 + y_k and x_i = y_i ∀i ≠ j, k. Then the number of ways to distribute 23 amongst the x_i when two of them are greater than or equal to 10 is equivalent to the number of ways to distribute the y_i when Σ_{i=1}^6 y_i = 23 − 10 − 10 = 3. There are C(8, 5) ways to distribute these y_i, and there are C(6, 2) ways to choose the possible two digits that are greater than or equal to 10.

We are interested in when the sum of the x_i's is equal to 23, so we can have at most 2 of the x_i's greater than or equal to 10. So we are done. Thus there are

C(28, 5) − 6 C(18, 5) + C(6, 2) C(8, 5)

numbers between 1 and 999999 whose digits sum up to 23. The probability that a randomly chosen number has digits that sum up to 23 is

[C(28, 5) − 6 C(18, 5) + C(6, 2) C(8, 5)]/999999.
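The count can be double-checked by brute force; the sketch below (using `math.comb`, available in Python ≥ 3.8) enumerates all candidate numbers and compares the tally with the inclusion–exclusion expression above:

```python
from math import comb

# Count, by brute force, the numbers 1..999999 whose digits sum to 23.
brute = sum(1 for n in range(1, 1000000)
            if sum(int(d) for d in str(n)) == 23)

# Inclusion-exclusion count derived in Example 2.7.8.
formula = comb(28, 5) - 6 * comb(18, 5) + comb(6, 2) * comb(8, 5)

assert brute == formula == 47712
```

So the probability is 47712/999999 ≈ 0.0477.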
Example 2.7.9. Let A_1, A_2, . . . , A_n, n ≥ 2 be events. Prove that

P(∪_{i=1}^n A_i) = Σ_i P(A_i) − Σ_{i<j} P(A_i ∩ A_j) + Σ_{i<j<k} P(A_i ∩ A_j ∩ A_k) − · · · + (−1)^{n+1} P(A_1 ∩ A_2 ∩ . . . ∩ A_n).

We prove the result by induction on n. First consider the base case when n = 2:

P(A_1 ∪ A_2) = P(A_1) + P(A_2) − P(A_1 ∩ A_2).

Assume the result holds true for n; we prove the result for n + 1. We have

P(∪_{i=1}^{n+1} A_i) = P(∪_{i=1}^n A_i) + P(A_{n+1}) − P((∪_{i=1}^n A_i) ∩ A_{n+1}) = P(∪_{i=1}^n A_i) + P(A_{n+1}) − P(∪_{i=1}^n (A_i ∩ A_{n+1}))

= Σ_i P(A_i) − Σ_{i<j} P(A_i ∩ A_j) + Σ_{i<j<k} P(A_i ∩ A_j ∩ A_k) − . . . + (−1)^{n+1} P(A_1 ∩ A_2 ∩ . . . ∩ A_n)
+ P(A_{n+1}) − (Σ_i P(A_i ∩ A_{n+1}) − Σ_{i<j} P(A_i ∩ A_j ∩ A_{n+1}) + Σ_{i<j<k} P(A_i ∩ A_j ∩ A_k ∩ A_{n+1}) − . . . + (−1)^{n+1} P(A_1 ∩ A_2 ∩ . . . ∩ A_n ∩ A_{n+1}))

= Σ_i P(A_i) − Σ_{i<j} P(A_i ∩ A_j) + Σ_{i<j<k} P(A_i ∩ A_j ∩ A_k) − . . . + (−1)^{n+2} P(A_1 ∩ A_2 ∩ . . . ∩ A_{n+1}),

where in the last step the sums range over indices in {1, . . . , n + 1}.
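The identity just proved can also be checked by brute force on a small finite probability space; the toy uniform space and random events below are our own:

```python
import itertools, random

rng = random.Random(1)
omega = range(12)                      # uniform probability space on 12 points
events = [{w for w in omega if rng.random() < 0.5} for _ in range(4)]

def prob(s):
    """Probability of a set of outcomes under the uniform measure."""
    return len(s) / 12

# Left side: P(A1 ∪ A2 ∪ A3 ∪ A4) computed directly.
direct = prob(set().union(*events))

# Right side: the inclusion-exclusion alternating sum.
incl_excl = 0.0
for r in range(1, 5):
    sign = (-1) ** (r + 1)
    for combo in itertools.combinations(events, r):
        incl_excl += sign * prob(set.intersection(*combo))

assert abs(direct - incl_excl) < 1e-12
```

Any choice of events on any finite space would do; the alternating sum must agree with the direct computation exactly (up to floating-point rounding).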
Example 2.7.10. Let {A_n, n ≥ 1} be a collection of events in some probability space {Ω, F, P}. Assume that Σ_{n=1}^∞ P(A_n) < ∞. Show that the probability that infinitely many of those events occur is zero. This result is known as the Borel–Cantelli Lemma.

To prove this result we must write the event "infinitely many of the events A_n occur"

1.4 Functions of a random variable

Recall that a random variable X on a probability space (Ω, F, P) is a function mapping Ω to the real line R, satisfying the condition {ω : X(ω) ≤ a} ∈ F for all a ∈ R. Suppose g is a function mapping R to R that is not too bizarre. Specifically, suppose for any constant c that {x : g(x) ≤ c} is a Borel subset of R. Let Y(ω) = g(X(ω)). Then Y maps Ω to R and Y is a random variable. See Figure 1.6. We write Y = g(X).
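Since Y = g(X) is just a composition, its distribution can always be probed by simulation. The sketch below uses the setup of Example 1.4 (X ~ N(µ = 2, σ² = 3), Y = X²) and compares the closed-form CDF derived there with a Monte Carlo estimate of P{g(X) ≤ c}; the helper names `Phi` and `F_Y` are ours:

```python
import math, random

def Phi(s):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

def F_Y(c):
    """CDF of Y = X^2 for X ~ N(2, 3), as derived in Example 1.4."""
    if c < 0:
        return 0.0
    r = math.sqrt(c)
    return Phi((r - 2) / math.sqrt(3)) - Phi((-r - 2) / math.sqrt(3))

# Monte Carlo check: estimate P{X^2 <= c} by direct simulation.
rng = random.Random(0)
n = 200_000
for c in (1.0, 4.0, 9.0):
    hits = sum(rng.gauss(2.0, math.sqrt(3.0)) ** 2 <= c for _ in range(n))
    assert abs(hits / n - F_Y(c)) < 0.01
```

The agreement is within Monte Carlo noise; the analytic derivation follows in Example 1.4.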
Figure 1.6: A function of a random variable as a composition of mappings.

Often we'd like to compute the distribution of Y from knowledge of g and the distribution of X. In case X is a continuous random variable with known distribution, the following three-step procedure works well:

(1) Examine the ranges of possible values of X and Y. Sketch the function g.
(2) Find the CDF of Y, using FY(c) = P{Y ≤ c} = P{g(X) ≤ c}. The idea is to express the event {g(X) ≤ c} as {X ∈ A} for some set A depending on c.
(3) If FY has a piecewise continuous derivative, and if the pdf fY is desired, differentiate FY.

If instead X is a discrete random variable then step 1 should be followed. After that the pmf of Y can be found from the pmf of X using

pY(y) = P{g(X) = y} = Σ_{x: g(x)=y} pX(x).

Example 1.4. Suppose X is a N(µ = 2, σ² = 3) random variable (see Section 1.6 for the definition) and Y = X². Let us describe the density of Y. Note that Y = g(X) where g(x) = x². The support of the distribution of X is the whole real line, and the range of g over this support is R_+. Next we find the CDF, FY. Since P{Y ≥ 0} = 1, FY(c) = 0 for c < 0. For c ≥ 0,

FY(c) = P{X² ≤ c} = P{−√c ≤ X ≤ √c} = P{(−√c − 2)/√3 ≤ (X − 2)/√3 ≤ (√c − 2)/√3} = Φ((√c − 2)/√3) − Φ((−√c − 2)/√3).

Differentiate with respect to c, using the chain rule and the fact Φ′(s) = (1/√(2π)) exp(−s²/2), to obtain

fY(c) = (1/√(24πc)){exp(−[(√c − 2)/√6]²) + exp(−[(−√c − 2)/√6]²)} if c ≥ 0, and fY(c) = 0 if c < 0.   (1.7)

Example 1.5. Suppose a vehicle is traveling in a straight line at speed a, and that a random direction is selected, subtending an angle Θ from the direction of travel which is uniformly distributed over the interval [0, π]. See Figure 1.7. Then the effective speed of the vehicle in the
Figure 1.7: Direction of travel and a random direction.

random direction is B = a cos(Θ). Let us find the pdf of B. The range of a cos(θ) as θ ranges over [0, π] is the interval [−a, a]. Therefore, FB(c) = 0 for c ≤ −a and FB(c) = 1 for c ≥ a. Let now −a < c < a. Then, because cos is monotone nonincreasing on the interval [0, π],

FB(c) = P{a cos(Θ) ≤ c} = P{cos(Θ) ≤ c/a} = P{Θ ≥ cos⁻¹(c/a)} = 1 − cos⁻¹(c/a)/π

Therefore, because cos⁻¹(y) has derivative −(1 − y²)^(−1/2),

fB(c) = 1/(π√(a² − c²)) for |c| < a, and 0 for |c| > a.

A sketch of the density is given in Figure 1.8.

Figure 1.8: The pdf of the effective speed in a uniformly distributed direction.

Figure 1.9: A horizontal line, a fixed point at unit distance, and a line through the point with random direction.

Example 1.6 Suppose Y = tan(Θ), as illustrated in Figure 1.9, where Θ is uniformly distributed over the interval (−π/2, π/2). Let us find the pdf of Y. The function tan(θ) increases from −∞ to ∞ as θ ranges over the interval (−π/2, π/2). For any real c,

FY(c) = P{Y ≤ c} = P{tan(Θ) ≤ c} = P{Θ ≤ tan⁻¹(c)} = (tan⁻¹(c) + π/2)/π

Differentiating the CDF with respect to c yields that Y has the Cauchy pdf:

fY(c) = 1/(π(1 + c²)),  −∞ < c < ∞

Example 1.7 Given an angle θ expressed in radians, let (θ mod 2π) denote the equivalent angle in the interval [0, 2π]. Thus, (θ mod 2π) is equal to θ + 2πn, where the integer n is such that 0 ≤ θ + 2πn < 2π. Let Θ be uniformly distributed over [0, 2π], let h be a constant, and let

Θ̃ = (Θ + h mod 2π)

Let us find the distribution of Θ̃. Clearly Θ̃ takes values in the interval [0, 2π], so fix c with 0 ≤ c < 2π and seek to find P{Θ̃ ≤ c}. Let A denote the interval [h, h + 2π]. Thus, Θ + h is uniformly distributed over A. Let B = ∪ₙ [2πn, 2πn + c]. Thus Θ̃ ≤ c if and only if Θ + h ∈ B. Therefore,

P{Θ̃ ≤ c} =
∫_{A∩B} (1/(2π)) dθ

By sketching the set B, it is easy to see that A ∩ B is either a single interval of length c, or the union of two intervals with lengths adding to c. Therefore, P{Θ̃ ≤ c} = c/(2π), so that Θ̃ is itself uniformly distributed over [0, 2π].

Example 1.8 Let X be an exponentially distributed random variable with parameter λ. Let Y = ⌊X⌋, which is the integer part of X, and let R = X − ⌊X⌋, which is the remainder. We shall describe the distributions of Y and R.

Proposition 1.10.1 Under the above assumptions, Y is a continuous type random vector and for y in the range of g:

fY(y) = fX(x) / |(∂y/∂x)(x)| = fX(x) |(∂x/∂y)(y)|

Example 1.10 Let U, V have the joint pdf:

fUV(u, v) = u + v for 0 ≤ u, v ≤ 1, and 0 else

and let X = U² and Y = U(1 + V). Let's find the pdf fXY. The vector (U, V) in the u-v plane is transformed into the vector (X, Y) in the x-y plane under a mapping g that maps u, v to x = u² and y = u(1 + v). The image in the x-y plane of the square [0, 1]² in the u-v plane is the set A given by

A = {(x, y) : 0 ≤ x ≤ 1 and √x ≤ y ≤ 2√x}

See Figure 1.12. The mapping from the square is one to one, for if (x, y) ∈ A then (u, v) can be
Figure 1.12: Transformation from the u-v plane to the x-y plane.

recovered by u = √x and v = y/√x − 1. The Jacobian determinant is

∂(x, y)/∂(u, v) = det [[∂x/∂u, ∂x/∂v], [∂y/∂u, ∂y/∂v]] = det [[2u, 0], [1 + v, u]] = 2u²

Therefore, using the transformation formula and expressing u and v in terms of x and y yields

fXY(x, y) = (√x + (y/√x − 1))/(2x) if (x, y) ∈ A, and 0 else

Example 1.11 Let U and V be independent continuous type random variables. Let X = U + V and Y = V. Let us find the joint density of X, Y and the marginal density of X. The mapping g: (u, v) → (x, y) = (u + v, v) is invertible, with inverse given by u = x − y and v = y. The absolute value of the Jacobian determinant is given by
det [[∂x/∂u, ∂x/∂v], [∂y/∂u, ∂y/∂v]] = det [[1, 1], [0, 1]] = 1

Therefore

fXY(x, y) = fUV(u, v) = fU(x − y) fV(y)

The marginal density of X is given by

fX(x) = ∫_{−∞}^{∞} fXY(x, y) dy = ∫_{−∞}^{∞} fU(x − y) fV(y) dy

That is, fX = fU ∗ fV.

Example 1.12 Let X1 and X2 be independent N(0, σ²) random variables, and let X = (X1, X2)ᵀ denote the two-dimensional random vector with coordinates X1 and X2. Any point x ∈ R² can be represented in polar coordinates by the vector (r, θ)ᵀ such that r = ‖x‖ = (x1² + x2²)^(1/2) and θ = tan⁻¹(x2/x1), with values r ≥ 0 and 0 ≤ θ < 2π. The inverse of this mapping is given by

x1 = r cos(θ)
x2 = r sin(θ)

We endeavor to find the pdf of the random vector (R, Θ)ᵀ, the polar coordinates of X. The pdf of X is given by

fX(x) = fX1(x1) fX2(x2) = (1/(2πσ²)) e^{−r²/(2σ²)}

The range of the mapping is the set r > 0 and 0 < θ ≤ 2π. On the range,

|∂x/∂(r, θ)| = det [[∂x1/∂r, ∂x1/∂θ], [∂x2/∂r, ∂x2/∂θ]] = det [[cos(θ), −r sin(θ)], [sin(θ), r cos(θ)]] = r

Therefore for (r, θ)ᵀ in the range of the mapping,

fR,Θ(r, θ) = fX(x) |∂x/∂(r, θ)| = (r/(2πσ²)) e^{−r²/(2σ²)}

Of course fR,Θ(r, θ) = 0 off the range of the mapping. The joint density factors into a function of r and a function of θ, so R and Θ are independent. Moreover, R has the Rayleigh density with parameter σ², and Θ is uniformly distributed on [0, 2π].
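Example 1.12's conclusion can be sanity-checked by simulation; the sketch below (σ and the sample size are arbitrary choices) confirms E[R²] = 2σ² for the Rayleigh radius, the uniform mean π for the angle, and near-zero correlation between R and Θ:

```python
import math
import random

# Monte Carlo sketch of Example 1.12 (sigma and sample size are arbitrary):
# for X1, X2 ~ N(0, sigma^2) i.i.d., R = sqrt(X1^2 + X2^2) should be
# Rayleigh (E[R^2] = 2*sigma^2) and Theta uniform on [0, 2*pi), independent of R.
random.seed(0)
sigma, n = 2.0, 200_000
rs, thetas = [], []
for _ in range(n):
    x1, x2 = random.gauss(0, sigma), random.gauss(0, sigma)
    rs.append(math.hypot(x1, x2))
    thetas.append(math.atan2(x2, x1) % (2 * math.pi))

mean_r2 = sum(r * r for r in rs) / n          # should be near 2*sigma^2
mean_th = sum(thetas) / n                     # should be near pi
mr = sum(rs) / n
cov = sum((r - mr) * (t - mean_th) for r, t in zip(rs, thetas)) / n
print(round(mean_r2, 2), round(mean_th, 2), round(cov, 3))
```

Independence of R and Θ is only spot-checked here through their covariance; the factorization of the joint density above is the actual proof.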
ELEG–636 Homework #1, Spring 2003 (problems 1–7; the equations in this span did not survive extraction):

1.–2. Relate the density of a function of a random variable to the density of the input, and express the density of the indicated RV in terms of the given density.
3. The given RVs are independent with exponential densities; find the densities of the listed derived RVs (the joint density factors by independence).
4. The given RVs are independent; show that if they are Gaussian, the indicated combination is also Gaussian, by finding its mean and variance.
5. Use the moment generating function to show that a linear transformation of a Gaussian random vector is also Gaussian: the moment generating function of the transform has the same Gaussian form, so the transform is Gaussian.
6. Four IID exponential random variables: determine and plot the pdfs of the indicated sums (the characteristic function of the i.i.d. sum, inverted by Fourier transform, yields the pdf, an expression that holds for any positive integer number of terms), and compare with the Gaussian density.
7. Given the mean and covariance of a Gaussian random vector, plot the 1σ, 2σ, and 3σ concentration ellipses representing the contours of the density function in the plane (compute the 1σ ellipse from the covariance via the major/minor axis radii, then rotate and translate each point using the given transformation).
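Problem 6's pdf/Gaussian comparison can be sketched numerically: the sum of four IID Exp(λ) variables has the Erlang density λ⁴x³e^{−λx}/3!, which the simulation below reproduces (λ = 1 and the evaluation point are assumed values, since the original parameters did not survive):

```python
import math
import random

# Sketch for problem 6 (lam = 1.0 and x0 = 3.0 are assumed values): the sum of
# k IID Exp(lam) variables has the Erlang(k, lam) pdf lam^k x^(k-1) e^(-lam*x)/(k-1)!.
random.seed(1)
lam, k, n = 1.0, 4, 200_000

def erlang_pdf(x, k, lam):
    return lam ** k * x ** (k - 1) * math.exp(-lam * x) / math.factorial(k - 1)

sums = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

# Empirical density near x0 via a narrow histogram bin of width h.
x0, h = 3.0, 0.2
emp = sum(1 for s in sums if x0 - h / 2 <= s < x0 + h / 2) / (n * h)
print(round(erlang_pdf(x0, k, lam), 3), round(emp, 3))
```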
When the weighting function is chosen differently the figure will be different, but the orientation of the ellipses is the same.

[Figure: 1σ–3σ concentration ellipses in the x1-x2 plane.]

ELEG–636 Test #1, March 25, 1999  NAME:

1. (35 pts) Let y = min{|x1|, x2} where x1 and x2 are i.i.d. inputs with cdf and pdf Fx and fx, respectively. For simplicity, assume fx is symmetric about 0, i.e., fx(x) = fx(−x). Determine the cdf and pdf of y in terms of the distribution of the inputs. Plot the pdf of y for fx uniform on [−1, 1].

Note that

F|x|(x) = Fx(x) − Fx(−x) for x ≥ 0, and 0 otherwise.

Also,

F_min{x1,x2}(x) = 1 − P{x1 > x}P{x2 > x} = 1 − (1 − Fx1(x))(1 − Fx2(x))

Thus,

Fy(y) = 1 − (1 − F|x1|(y))(1 − Fx(y))
      = 1 − (1 − Fx(y) + Fx(−y))(1 − Fx(y)) = 2Fx(y) − Fx(−y) − Fx²(y) + Fx(y)Fx(−y) for y ≥ 0, and Fy(y) = Fx(y) otherwise.

If fx is symmetric about 0, then fx(x) = fx(−x) and Fx(−x) = 1 − Fx(x), giving

Fy(y) = 4Fx(y) − 2Fx²(y) − 1 for y ≥ 0, and Fy(y) = Fx(y) otherwise.

Taking the derivative,

fy(y) = 4fx(y) − 4fx(y)Fx(y) = 4fx(y)(1 − Fx(y)) for y ≥ 0, and fy(y) = fx(y) otherwise.

For fx uniform on [−1, 1], Fx(y) = (y + 1)/2, so fy(y) = 1/2 for −1 ≤ y < 0 and fy(y) = 1 − y for 0 ≤ y ≤ 1.

2. (35 pts) Consider the observed samples
y_i = θ + x_i for i = 1, 2, …, N. We wish to estimate the location parameter θ using a maximum likelihood estimator operating on the observations y_1, y_2, …, y_N. Consider two cases:

(10 pts) The x_i terms are i.i.d. with distribution x_i ~ N(0, σ²), for i = 1, 2, …, N.
(10 pts) The x_i terms are independent with distribution x_i ~ N(0, σ_i²), for i = 1, 2, …, N.
(15 pts) Are the estimates unbiased? What is the variance of the estimates? Are they consistent?

f(y | θ) = Π_{i=1}^{N} (1/√(2πσ²)) exp(−(y_i − θ)²/(2σ²)) = (2πσ²)^{−N/2} exp(−Σ_{i=1}^{N} (y_i − θ)²/(2σ²))

Thus

θ_ML = arg max_θ [ −Σ_{i=1}^{N} (y_i − θ)²/(2σ²) ]

and taking the derivative,

Σ_{i=1}^{N} (y_i − θ_ML)/σ² = 0  ⇒  θ_ML = (1/N) Σ_{i=1}^{N} y_i

For the case of changing variances,

Σ_{i=1}^{N} (y_i − θ_ML)/σ_i² = 0  ⇒  θ_ML = (Σ_{i=1}^{N} y_i/σ_i²)/(Σ_{i=1}^{N} 1/σ_i²) = Σ_{i=1}^{N} w_i y_i / Σ_{i=1}^{N} w_i

which is a normalized filter, where w_i = 1/σ_i² for i = 1, 2, …, N. For each estimate E{θ_ML} = θ, and they are thus unbiased. For the weighted estimate,

var(θ_ML) = E{(θ_ML − θ)²} = E{(Σ_i w_i x_i / Σ_i w_i)²} = Σ_i w_i² σ_i² / (Σ_i w_i)² = Σ_i w_i / (Σ_i w_i)² = 1/Σ_i w_i

(and var(θ_ML) = σ²/N in the i.i.d. case). Since w_i > 0, the variance at N + 1 samples is smaller than at N samples; provided Σ_i w_i → ∞ the variance tends to zero, and this, combined with the fact that the estimator is unbiased, means the estimate is consistent.

ELEG–636 Test #1, March 2000  NAME:

1. (30 pts) The random variables x and y are independent and uniformly distributed in the interval (0, 1). Determine the conditional distribution F_r(r | r ≤ 1), where r = √(x² + y²).

Answer: Examine the joint density f_xy(x, y) in the x-y plane. Since x and y are independent,

f_xy(x, y) = f_x(x) f_y(y) = 1 for 0 ≤ x, y ≤ 1

This defines a uniform density over the region 0 ≤ x, y ≤ 1 in the first quadrant of the x-y plane. Note that r = √(x² + y²) defines an arc in the first quadrant, and for 0 ≤ r ≤ 1 the probability mass under the uniform density up to radius r is simply a quarter disk:

F_r(r) = P{√(x² + y²) ≤ r} = ∫∫_{√(x²+y²) ≤ r} f_xy(x, y) dx dy = πr²/4 for 0 ≤ r ≤ 1

Therefore

F_r(r | r ≤ 1) = F_r(r)/F_r(1) = (πr²/4)/(π/4) = r² for 0 ≤ r ≤ 1

Thus, f_r(r | r ≤ 1) = 2r for 0 ≤ r ≤ 1 and 0 elsewhere.
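A quick Monte Carlo check of this conditional-radius result (the sample size is an arbitrary choice): conditioning on r ≤ 1 should give an empirical CDF close to r².

```python
import math
import random

# Monte Carlo sketch: x, y independent Uniform(0,1), r = sqrt(x^2 + y^2).
# Conditioned on r <= 1, the CDF should be F(r | r <= 1) = r^2 (pdf 2r on [0,1]).
random.seed(2)
n = 200_000
radii = [math.hypot(random.random(), random.random()) for _ in range(n)]
conditioned = [r for r in radii if r <= 1.0]

def emp_cdf(data, r0):
    # Fraction of the accepted (r <= 1) samples at or below r0.
    return sum(1 for v in data if v <= r0) / len(data)

checks = {r0: emp_cdf(conditioned, r0) for r0 in (0.3, 0.6, 0.9)}
print({r0: round(c, 3) for r0, c in checks.items()})
```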
ELEG–636 Test #1, April 2001  NAME:

1. (35 pts) Probability questions for a random variable x with a given density:

(10 pts) Derive a simplified expression for the conditional distribution given x ≥ 0 and determine the corresponding density.
(15 pts) Suppose now that y = sin(x + θ), where the parameters are constants. Determine f_y(y).
(10 pts) Suppose further that x is uniformly distributed. Determine f_y(y) for this special case.

Answer: For the sine transformation, y = sin(x + θ) has infinitely many solutions x_n, n = 0, ±1, ±2, …, and since sin²(x_n + θ) + cos²(x_n + θ) = 1, each solution satisfies |g′(x_n)| = |cos(x_n + θ)| = √(1 − y²). Thus f_y(y) = Σ_n f_x(x_n)/√(1 − y²) for |y| < 1; when x is uniformly distributed over a full period, the sum gives f_y(y) = 2 · (1/(2π)) · 1/√(1 − y²) = 1/(π√(1 − y²)) for |y| < 1 (cf. Example 5.5.10), while if there is only a single solution in the support, the sum reduces to a single term. [The remaining piecewise expressions in this answer did not survive extraction.]
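The sine-transformation density above can be checked by simulation (this assumes the special case of unit amplitude, θ absorbed into a full-period uniform phase, matching Example 5.5.10): Y = sin(Θ) with Θ uniform on [0, 2π] has pdf 1/(π√(1 − y²)), hence CDF F(y) = 1/2 + arcsin(y)/π.

```python
import math
import random

# Simulation sketch of the arcsine law: Y = sin(Theta), Theta ~ Uniform[0, 2*pi].
# The analytic CDF implied by f_y(y) = 1/(pi*sqrt(1 - y^2)) is 1/2 + asin(y)/pi.
random.seed(3)
n = 200_000
ys = [math.sin(random.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]

def analytic_cdf(y):
    return 0.5 + math.asin(y) / math.pi

emps = {y0: sum(1 for v in ys if v <= y0) / n for y0 in (-0.5, 0.0, 0.5)}
print({y0: (round(e, 3), round(analytic_cdf(y0), 3)) for y0, e in emps.items()})
```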
ELEG–636 Test #1, April 14, 2003
NAME:

1. (30 pts) Probability questions:

(15 pts) Let x be a random variable with the density f_x(x) given below. Let y = g(x) be the shown function. Determine f_y(y) and F_y(y).

(15 pts) Let x and y be independent, zero mean, unit variance Gaussian random variables. Define w = x² + y² and z = x². Determine f_{w,z}(w, z). Are w and z independent?

Answer: [The first part depends on the pictured density f_x and mapping g; the piecewise expressions for F_x(x), F_y(y), and f_y(y) did not survive extraction.]
The Jacobian of the transformation is

J(x, y) = det [[∂w/∂x, ∂w/∂y], [∂z/∂x, ∂z/∂y]] = det [[2x, 2y], [2x, 0]] = 4|xy|

The reverse transformation is easily seen to be x = ±√z and y = ±√(w − z). Thus,

f_{w,z}(w, z) = Σ f_{x,y}(x, y)/(4|xy|)   (1)

where the sum runs over the four sign combinations x = ±√z, y = ±√(w − z). Since x and y are independent,

f_{x,y}(x, y) = (1/(2π)) e^{−(x² + y²)/2}

Thus

f_{w,z}(w, z) = (1/(2π)) · (1/√(z(w − z))) · e^{−w/2} · u(w) u(z) u(w − z)

where the last three terms indicate w, z ≥ 0 and w ≥ z.

ELEG–636 Midterm, April 7, 2009  NAME:

1. [30 pts] Probability:
(a) [15 pts] Prove the Bienaymé inequality, which is a generalization of the Tchebycheff inequality,

Pr{|X − a| ≥ ε} ≤ E{|X − a|ⁿ}/εⁿ

for arbitrary a and distribution of X.
(b) [15 pts] Consider the uniform distribution over [−1, 1].
i. [10 pts] Determine the moment generating function for this distribution.
ii. [5 pts] Use the moment generating function to generate a simple expression for the kth moment, m_k.

Answer: (a)
E{|x − a|ⁿ} = ∫_{−∞}^{∞} |x − a|ⁿ f_x(x) dx ≥ ∫_{|x−a|≥ε} |x − a|ⁿ f_x(x) dx ≥ εⁿ ∫_{|x−a|≥ε} f_x(x) dx = εⁿ Pr{|x − a| ≥ ε}

⇒ Pr{|X − a| ≥ ε} ≤ E{|X − a|ⁿ}/εⁿ

(b)

Φ(s) = (1/2) ∫_{−1}^{1} e^{sx} dx = (1/(2s))(e^s − e^{−s})

⇒ E{x^k} = d^kΦ(s)/ds^k |_{s=0}

E{x} = dΦ(s)/ds |_{s=0} = lim_{s→0} [ (1/(2s))(e^s + e^{−s}) − (1/(2s²))(e^s − e^{−s}) ] = 0

Repeat the differentiation and limit (l'Hôpital's rule) process. The analytical solution is simpler:

E{x^k} = (1/2) ∫_{−1}^{1} x^k dx = (1 − (−1)^{k+1})/(2(k + 1)) = 0 for k = 1, 3, 5, … and 1/(k + 1) for k = 0, 2, 4, …

ELEG–636 Midterm, April 7, 2009  NAME:

3. [35 pts] Let Z = X + N, where X and N are independent with distributions N ~ N(0, σ_N²) and f_X(x) = (1/2)δ(x − 2) + (1/2)δ(x + 2).
(a) [15 pts] Determine the MAP, MS, MAE, and ML estimates for X in terms of Z.
(b) [10 pts] Determine the bias of each estimate, i.e., determine whether or not each estimate is biased.
(c) [10 pts] Determine the variances of the estimates.

Answer:
(a) Since X and N are independent, f_Z(z) = f_X(z) ∗ f_N(z) = (1/2)N(−2, σ_N²) + (1/2)N(2, σ_N²). Also

f_{Z|X}(z|x) = N(x, σ_N²)

x̂_ML = arg max_x f_{Z|X}(z|x) = z

f_{X|Z}(x|z) = f_{Z|X}(z|x) f_X(x)/f_Z(z) = N(x, σ_N²)(δ(x − 2) + δ(x + 2))/(2 f_Z(z))

x̂_MAP = arg max_x f_{X|Z}(x|z) = 2 for z > 0, and −2 for z < 0

x̂_MS = ∫_{−∞}^{∞} x f_{X|Z}(x|z) dx = (1/f_Z(z)) ∫_{−∞}^{∞} x f_{Z|X}(z|x) f_X(x) dx
     = [2 N(2, σ_N²)|_{x=z} − 2 N(−2, σ_N²)|_{x=z}]/(2 f_Z(z))
     = 2 · [N(2, σ_N²)|_{x=z} − N(−2, σ_N²)|_{x=z}] / [N(2, σ_N²)|_{x=z} + N(−2, σ_N²)|_{x=z}]

For the MAE estimate,

1/2 = ∫_{−∞}^{x̂_MAE} f_{X|Z}(x|z) dx = (1/f_Z(z)) ∫_{−∞}^{x̂_MAE} f_{Z|X}(z|x) f_X(x) dx

⇒ ∫_{−∞}^{x̂_MAE} N(x, σ_N²)(δ(x − 2) + δ(x + 2)) dx = (1/2)[N(2, σ_N²)|_{x=z} + N(−2, σ_N²)|_{x=z}]

Note the LHS is not continuous ⇒ x̂_MAE is not well defined.

(b) Note f_Z(z) is symmetric about 0 ⇒ E{x̂_ML} = E{z} = 0 ⇒ x̂_ML is unbiased (E{x} = 0). Similarly, E{x̂_MAP} = 2 Pr{z > 0} − 2 Pr{z < 0} = 0 ⇒ x̂_MAP is unbiased. Also, x̂_MS is an odd function (about 0) of z ⇒ E{x̂_MS} = 0 ⇒ x̂_MS is unbiased.
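Part (a)'s MS estimate also collapses to a closed form: dividing numerator and denominator by the common Gaussian factor gives x̂_MS(z) = 2 tanh(2z/σ_N²). A quick numerical check of that algebra (σ_N = 1.5 is an arbitrary choice, not a value from the problem):

```python
import math

# Verify 2*(N(2,v) - N(-2,v))/(N(2,v) + N(-2,v)) evaluated at z equals
# 2*tanh(2*z/v), where N(m, v) denotes the Gaussian density with mean m,
# variance v. sigma_n is an arbitrary illustrative choice.
def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

sigma_n = 1.5
var = sigma_n ** 2
for z in (-3.0, -0.4, 0.0, 1.2, 5.0):
    num = gauss_pdf(z, 2, var) - gauss_pdf(z, -2, var)
    den = gauss_pdf(z, 2, var) + gauss_pdf(z, -2, var)
    assert abs(2 * num / den - 2 * math.tanh(2 * z / var)) < 1e-12
print("MS estimate matches 2*tanh(2z/sigma_N^2)")
```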
(c) σ²_ML = σ²_Z = σ²_X + σ²_N = 4 + σ²_N. Also, σ²_MAP = 4 (since x̂_MAP = ±2). Determining σ²_MS is not trivial, and will not be considered.

ELEG–636 Homework #1, Spring 2009

1. A token is placed at the origin on a piece of graph paper. A coin biased to heads is given, P(H) = 2/3. If the result of a toss is heads, the token is moved one unit to the right, and if it is a tail the token is moved one unit to the left. Repeating this 1200 times, what is the probability that the token is on a unit N, where 350 ≤ N ≤ 450? Simulate the system and plot the histogram using 10,000 realizations.

Solution: Let x = # of heads. Then 350 ≤ x − (1200 − x) ≤ 450 ⇒ 775 ≤ x ≤ 825 and
Pr(775 ≤ x ≤ 825) = Σ_{i=775}^{825} C(1200, i) (2/3)^i (1/3)^{1200−i}

which can be approximated using the DeMoivre–Laplace approximation

Σ_{i=i1}^{i2} C(n, i) p^i (1 − p)^{n−i} ≈ Φ((i2 − np)/√(np(1 − p))) − Φ((i1 − np)/√(np(1 − p)))

where Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt.

2. Random variable X is characterized by cdf F_X(x) = (1 − e^{−x})U(x) and event C is defined by C = {0.5 < X ≤ 1}. Determine and plot F_X(x|C) and f_X(x|C).

Solution: Evaluating Pr(X ≤ x, 0.5 < X ≤ 1) for the three allowable cases:

x < 0.5:      Pr(X ≤ x, 0.5 < X ≤ 1) = 0
0.5 ≤ x ≤ 1:  Pr(X ≤ x, 0.5 < X ≤ 1) = F_X(x) − F_X(0.5) = e^{−0.5} − e^{−x}
x > 1:        Pr(X ≤ x, 0.5 < X ≤ 1) = F_X(1) − F_X(0.5) = e^{−0.5} − e^{−1} = 0.2386

Also, Pr(C) = F_X(1) − F_X(0.5) = e^{−0.5} − e^{−1} = 0.2386. Thus

F_X(x|C) = Pr(X ≤ x, 0.5 < X ≤ 1)/Pr(0.5 < X ≤ 1) = { 0 for x < 0.5; (e^{−0.5} − e^{−x})/0.2386 for 0.5 ≤ x ≤ 1; 1 for x > 1 }

3. Prove that the characteristic function for the univariate Gaussian distribution, N(η, σ²), is

φ(ω) = exp(jωη − ω²σ²/2)

Next determine the moment generating function and determine the first four moments.

Solution:

φ(ω) = ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x − η)²/(2σ²)) e^{jωx} dx
     = ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x² − 2ηx + η² − 2jωxσ²)/(2σ²)) dx
     = exp((−η² + (η + jωσ²)²)/(2σ²)) ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x − (η + jωσ²))²/(2σ²)) dx
     = exp((−η² + (η + jωσ²)²)/(2σ²))

which reduces to φ(ω) = exp(jωη − ω²σ²/2). The moment generating function is simply

Φ(s) = exp(sη + s²σ²/2)

and m_k =
d^kΦ(s)/ds^k |_{s=0}, which yields

m1 = η,  m2 = σ² + η²,  m3 = 3ησ² + η³,  m4 = 3σ⁴ + 6σ²η² + η⁴

4. Let Y = X². Determine f_Y(y) for: (a) f_X(x) = 0.5 exp{−|x|}  (b) f_X(x) = exp{−x}U(x).

Solution: Y = X² ⇒ X = ±√y and dY/dX = 2X. Thus

f_Y(y) = f_X(x)/|2x| |_{x=√y} + f_X(x)/|2x| |_{x=−√y}

Substituting and simplifying:

(a) f_X(x) = 0.5 exp{−|x|}  ⇒  f_Y(y) = (1/(2√y)) e^{−√y} U(y)
(b) f_X(x) = exp{−x}U(x)   ⇒  f_Y(y) = (1/(2√y)) e^{−√y} U(y)

5. Given the joint pdf

f_XY(x, y) = 8xy for 0 < y < 1, 0 < x < y; 0 otherwise

Determine (a) f_X(x), (b) f_Y(y), (c) f_Y(y|x), and (d) E[Y|x].

Solution:

(a) f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy = ∫_x^1 8xy dy = 4x − 4x³ for 0 < x < 1, 0 otherwise
(b) f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx = ∫_0^y 8xy dx = 4y³ for 0 < y < 1, 0 otherwise
(c) f_Y(y|x) = f_XY(x, y)/f_X(x) = 2y/(1 − x²) for x < y < 1, 0 otherwise
(d) E[Y|x] = ∫_{−∞}^{∞} y f_Y(y|x) dy = ∫_x^1 2y²/(1 − x²) dy = (2/3)(1 − x³)/(1 − x²) = (2/3)(1 + x + x²)/(1 + x)

6. Let W and Z be RVs defined by W = X² + Y² and Z = X², where X and Y are independent; X, Y ~ N(0, 1).
(a) Determine the joint pdf f_WZ(w, z). (b) Are W and Z independent?

Solution: Given the system of equations,

J(w, z / x, y) = det [[2x, 2y], [2x, 0]] = 4|xy|

Note we must have w, z ≥ 0 and w ≥ z. Thus the inverse system (roots) is x = ±√z, y = ±√(w − z), and

f_WZ(w, z) = Σ f_XY(x, y)/(4|xy|) |_{x = ±√z, y = ±√(w−z)}   (∗)

Note also that, since X, Y ~ N(0, 1),

f_XY(x, y) = (1/(2π)) e^{−(x²+y²)/2}   (∗∗)

Substituting (∗∗) into (∗) [which has four terms] and simplifying yields

f_WZ(w, z) = e^{−w/2}/(2π√(z(w − z))) · U(w − z) U(z)   (∗∗∗)

Note W and Z are not independent. Counter-example proof: Suppose W and Z were independent. Then f_W(w) f_Z(z) > 0 for all w, z > 0. But this violates (∗∗∗), as f_WZ(w, z) > 0 only for w ≥ z.

ELEG–636 Homework #2, Spring 2009

1. Let

R = [[2, −2], [−2, 5]]

Express R as R = QΩQ^H, where Ω is diagonal.

Solution:

det [[2 − λ, −2], [−2, 5 − λ]] = λ² − 7λ + 6 = 0
⇒ λ1 = 6, λ2 = 1

Then solving R q_i = λ_i q_i gives q1 = (1/√5)[1, −2]^T and q2 = (1/√5)[2, 1]^T. Thus R = QΩQ^H, where Q = [q1, q2] and

Ω = [[6, 0], [0, 1]]

2. The two-dimensional covariance matrix can be expressed as:

C = [[σ1², ρσ1σ2], [ρ∗σ1σ2, σ2²]]

(a) Find the simplest expression for the eigenvalues of C.
(b) Specialize the results to the case σ1² = σ2² = σ².
(c) What are the eigenvectors in the special case (b) when ρ is real?

Solution:

(a) det [[σ1² − λ, ρσ1σ2], [ρ∗σ1σ2, σ2² − λ]] = λ² − (σ1² + σ2²)λ + (1 − |ρ|²)σ1²σ2² = 0

⇒ λ = [ (σ1² + σ2²) ± √(σ1⁴ + σ2⁴ − 2σ1²σ2² + 4|ρ|²σ1²σ2²) ] / 2

(b) For σ1² = σ2² = σ²,

λ = [ 2σ² ± √(4|ρ|²σ⁴) ] / 2 = (1 ± |ρ|)σ²

(c) For real ρ the corresponding eigenvectors are (1/√2)[1, 1]^T and (1/√2)[1, −1]^T.

3. Let x[n] = A e^{jω0 n}, where the complex amplitude A is a RV with random magnitude and phase, A = |A|e^{jφ}. Show that a sufficient condition for the random process to be stationary is that the magnitude and phase are independent and that the phase is uniformly distributed over [−π, π].

Solution: First note E{x[n]} = E{A}e^{jω0 n} and

E{A} = E{|A|}E{e^{jφ}} = 0

by independence and the uniform distribution of φ. Thus it has a fixed mean. Next note

E{x[n]x∗[n − k]} = E{|A|²}e^{jω0 k}

which is strictly a function of k ⇒ WSS.

4. Let X_i be i.i.d. RVs uniformly distributed on [0, 1] and define
Y = Σ_{i=1}^{20} X_i

Utilize Tchebycheff's inequality to determine a bound for Pr{8 < Y < 12}.

Solution: Note η_x = 1/2 and σ_x² = 1/12. Thus η_y = 10 and σ_y² = 20/12 = 5/3. Utilizing Tchebycheff's inequality,

Pr{|Y − η_y| ≥ 2} ≤ σ_y²/2² = 5/12  ⇒  Pr{8 < Y < 12} ≥ 1 − 5/12 = 7/12

5. Let X ~ N(0, 2σ²) and Y ~ N(1, σ²) be independent RVs. Also, define Z = XY. Find the Bayes estimate of X from observation Z: (a) using the squared error criterion; (b) using the absolute error criterion.

6. Let X and Y be independent RVs characterized by f_X(x) = a e^{−ax}U(x) and f_Y(y) = a e^{−ay}U(y). Also, define Z = XY. Find the Bayes estimate of X from observation Z using the uniform cost function.

Solution:

F_{Z|X}(z|x) = Pr(XY ≤ z | x) = Pr(Y ≤ z/x) = F_Y(z/x)  ⇒  f_{Z|X}(z|x) = (1/x) f_Y(z/x)

x̂ = arg max_x f_{Z|X}(z|x) f_X(x) = arg max_x a² x^{−1} e^{−a(z x^{−1} + x)} U(x) U(z)

⇒ 0 = −a² x^{−2} e^{−a(zx^{−1}+x)} + a² x^{−1} e^{−a(zx^{−1}+x)} (−a(1 − z x^{−2}))

0 = −x^{−1} − a(1 − z x^{−2})  ⇒  a x² + x − z = 0  ⇒  x̂ = (−1 + √(1 + 4az))/(2a)

(taking the positive root, since x > 0).

7. Random processes x[n] and y[n] are defined by

x[n] = v1[n] + 3 v2[n − 1]
y[n] = v2[n + 1] + 3 v2[n − 1]

where v1[n] and v2[n] are independent white noise processes, each with variance 0.5.

ELEG–636 Homework #1, Spring 2008

1. Let f_x(t) be symmetric about 0. Prove that µ is the expected value of a sample distributed according to f_{x−µ}(t).

Solution: Since f_x(t) is symmetric about 0, f_x(t) is even.
∫_{−∞}^{+∞} t f_{x−µ}(t) dt = ∫_{−∞}^{+∞} t f_x(t − µ) dt

Let u = t − µ:

∫_{−∞}^{+∞} (u + µ) f_x(u) du = ∫_{−∞}^{+∞} u f_x(u) du + µ ∫_{−∞}^{+∞} f_x(u) du = 0 + µ = µ

where the first integral vanishes because its integrand is odd.

2. The complementary cumulative distribution function is defined as Q_x(x) = 1 − F_x(x), or more explicitly in the zero mean, unit variance Gaussian distribution case as

Q_x(x) = ∫_x^∞ (1/√(2π)) exp(−t²/2) dt

Show that

Q_x(x) ≈ (1/(√(2π) x)) exp(−x²/2)

Hint: use integration by parts on Q_x(x) = ∫_x^∞ (1/(√(2π) t)) · t exp(−t²/2) dt. Also explain why the approximation improves as x increases.

Solution: Recall integration by parts: ∫_a^b f(t) g′(t) dt = [f(t) g(t)]_a^b − ∫_a^b f′(t) g(t) dt. Let g′(t) = t exp(−t²/2) and f(t) = 1/(√(2π) t).
∞ b af (t)g (t)dt. Qx (x) =
x √ 1 1 t exp − t2 dt 2 2πt 1 ELEG–636 Homework #1, Spring 2008 1 1 exp − t2 2 2πt
∞ ∞ = −√ −
x x √ 1 1 exp − t2 dt 2 2 2πt →0 as x→∞ ≈ √ 1 1 exp − x2 2 2πx ∞ Since x √21 2 exp − 1 t2 dt goes to zero as x goes to inﬁnity, the ap2 πt proximation improves x as increase. 3. The probability density function for a two dimensional random vector is deﬁned by fx (x) = Ax2 x2 x1 , x2 ≥ 0 and x1 + x2 ≤ 1 1 0 otherwise (a) Determine Fx (x) and the value of A. (b) Determine the marginal density fx2 (x). (c) Are fx1 (x) and fx2 (x) independent? Show why or why not. Solution. (a)
F_{x1,x2}(∞, ∞) = ∫_0^1 ∫_0^{1−x1} A x1² x2 dx2 dx1 = ∫_0^1 A x1² (1 − x1)²/2 dx1 = (A/2) ∫_0^1 (x1⁴ − 2x1³ + x1²) dx1 = A/60 = 1   (1)

Therefore, A = 60. Defining F_{x1,x2}(u, v) = Pr(x1 ≤ u, x2 ≤ v), we have:

• x1 < 0 or x2 < 0: F(x1, x2) = 0.
• x1, x2 ≥ 0 and x1 + x2 ≤ 1: F(x1, x2) = ∫_0^{x1} ∫_0^{x2} 60 u² v dv du = 10 x1³ x2².
• 0 ≤ x1, x2 ≤ 1 and x1 + x2 ≥ 1: F(x1, x2) = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1.
• 0 ≤ x1 ≤ 1 and x2 ≥ 1: F(x1, x2) = 1 − ∫_{x1}^{1} ∫_0^{1−u} 60 u² v dv du = 10x1³ − 15x1⁴ + 6x1⁵.
• 0 ≤ x2 ≤ 1 and x1 ≥ 1: F(x1, x2) = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵.
• x1, x2 ≥ 1: F(x1, x2) = 1.

So

F(x1, x2) = { 0, x1 < 0 or x2 < 0;
             10x1³x2², x1, x2 ≥ 0, x1 + x2 ≤ 1;
             10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1, 0 ≤ x1, x2 ≤ 1, x1 + x2 ≥ 1;
             10x1³ − 15x1⁴ + 6x1⁵, 0 ≤ x1 ≤ 1, x2 ≥ 1;
             10x2² − 20x2³ + 15x2⁴ − 4x2⁵, 0 ≤ x2 ≤ 1, x1 ≥ 1;
             1, x1, x2 ≥ 1 }

(b)

f_{x2}(x2) = ∫_0^{1−x2} 60 x1² x2 dx1 = 20 x2 (1 − x2)³

(c) Since

f_{x1}(x1) = ∫_0^{1−x1} 60 x1² x2 dx2 = 30 x1² (1 − x1)²,

f_{x1,x2}(x1, x2) ≠ f_{x1}(x1) f_{x2}(x2). Therefore, f_{x1}(x1) and f_{x2}(x2) are NOT independent.

4. Consider the two independent marginal distributions

f_{x1}(x) = 1 for 0 ≤ x1 ≤ 1, 0 otherwise
f_{x2}(x) = 2x for 0 ≤ x2 ≤ 1, 0 otherwise

Let A be the event x1 ≤ x2. (a) Find and sketch f_x(x). (b) Determine Pr{A}. (c) Determine f_{x|A}(x|A). Are the components independent, i.e., are f_{x1|A}(x|A) and f_{x2|A}(x|A) independent?

Solution:

(a) Since the two marginal distributions are independent, f_X(X) = f_{x1}(x1) f_{x2}(x2) =
2x2 for 0 ≤ x1, x2 ≤ 1, and 0 otherwise.

(b)

Pr(A) = ∫_0^1 ∫_0^{x2} 2x2 dx1 dx2 = ∫_0^1 2x2² dx2 = [2x2³/3]_0^1 = 2/3   (2)

(c)

f_{X|A}(X|A) = f_X(X)/Pr(A) = 3x2 for 0 ≤ x1 < x2 ≤ 1, 0 otherwise

f_{x1|A}(x1|A) = ∫_{x1}^1 3x2 dx2 = (3/2)(1 − x1²),  0 ≤ x1 ≤ 1

f_{x2|A}(x2|A) = ∫_0^{x2} 3x2 dx1 = 3x2²,  0 ≤ x2 ≤ 1

f_{X|A}(X|A) ≠ f_{x1|A}(x1|A) f_{x2|A}(x2|A). Therefore, f_{x1|A}(x1|A) and f_{x2|A}(x2|A) are NOT independent.

5. The entropy H for a random vector is defined as −E{ln f_x(x)}. Show that for the complex Gaussian case H = N(1 + ln π) + ln |C_x|. Determine the corresponding expression when the vector is real.

Solution: The complex Gaussian p.d.f. is

f_x(x) = (1/(π^N |C_x|)) exp[−(x − m_x)^H C_x^{−1} (x − m_x)]

Then,

H = −E{ln f_x(x)} = E[(x − m_x)^H C_x^{−1} (x − m_x)] + N ln π + ln |C_x|

Note

E[(x − m_x)^H C_x^{−1} (x − m_x)] = E[trace((x − m_x)^H C_x^{−1} (x − m_x))] = trace(C_x^{−1} E[(x − m_x)(x − m_x)^H]) = trace(C_x^{−1} C_x) = trace(I) = N

Therefore

H = N + N ln π + ln |C_x| = N(1 + ln π) + ln |C_x|

Similarly, when the vector is real,

H = (N/2)(1 + ln(2π)) + (1/2) ln |C_x|

6. Let

x = 3u − 4v
y = 2u + v

where u and v are unit mean, unit variance, uncorrelated Gaussian random variables.

(a) Determine the means and variances of x and y.
(b) Determine the joint density of x and y.
(c) Determine the conditional density of y given x.

Solution:

(a) E(x) = E(3u − 4v) = 3E(u) − 4E(v) = 3 − 4 = −1, and E(y) = E(2u + v) = 2E(u) + E(v) = 2 + 1 = 3.

σ_x² = E(x²) − E²(x) = E[(3u − 4v)²] − 1 = 25
σ_y² = E(y²) − E²(y) = E[(2u + v)²] − 9 = 5

(b) Note [x; y] = A [u; v] with

A = [[3, −4], [2, 1]],  A^{−1} = (1/11) [[1, 4], [−2, 3]],  |det A| = 11

Thus

f_{x,y}(x, y) = f_{u,v}(A^{−1}[x, y]^T)/|det A| = (1/11) f_{u,v}((x + 4y)/11, (−2x + 3y)/11)
            = (1/(22π)) exp(−(1/2)[((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)²])

(c) Note x is Gaussian:

f_x(x) = (1/√(2π · 25)) exp(−(x + 1)²/(2 · 25))

Thus

f_{y|x}(y|x) = f_{x,y}(x, y)/f_x(x) = (5√(2π)/(22π)) exp(−(1/2)[((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)² − (x + 1)²/25])

7. Consider the orthogonal transformation of the correlated zero mean random variables x1 and x2:

[y1; y2] = [[cos θ, sin θ], [−sin θ, cos θ]] [x1; x2]

Note E{x1²} = σ1², E{x2²} = σ2², and E{x1 x2} = ρσ1σ2. Determine the angle θ such that y1 and y2 are uncorrelated.

Solution:

y1 = x1 cos θ + x2 sin θ
y2 = −x1 sin θ + x2 cos θ

E(y1 y2) = E[(x1 cos θ + x2 sin θ)(−x1 sin θ + x2 cos θ)]
        = sin θ cos θ E[x2²] + (cos²θ − sin²θ) E[x1 x2] − sin θ cos θ E[x1²]
        = sin 2θ · (σ2² − σ1²)/2 + cos 2θ · ρσ1σ2

If y1 and y2 are uncorrelated, E(y1 y2) = 0, so tan 2θ = 2ρσ1σ2/(σ1² − σ2²), and for −π/2 ≤ θ < π/2,

θ = (1/2) arctan(2ρσ1σ2/(σ1² − σ2²))

8. The covariance matrix and mean vector for a real Gaussian density are

C_x = [[1, 0.5], [0.5, 1]]  and  m_x = [1, 0]^T

(a) Determine the eigenvalues and eigenvectors.
(b) Generate a mesh plot of the distribution using MATLAB.
(c) Change the off-diagonal values to −0.5 and repeat (a) and (b).

Solution:

(a) Solve det(C_x − λI) = 0:

(1 − λ)² − 0.25 = (λ − 0.5)(λ − 1.5) = 0

Hence, the eigenvalues are 0.5 and 1.5. For λ = 0.5, the corresponding eigenvector is [1, −1]^T; for λ = 1.5, the corresponding eigenvector is [1, 1]^T.

(c) The eigenvalues are again 0.5 and 1.5. For λ = 0.5, the corresponding eigenvector is [1, 1]^T; for λ = 1.5, the corresponding eigenvector is [1, −1]^T.

9. Let {x_k(n)}_{k=1}^{K} be i.i.d. zero mean, unit variance uniformly distributed random variables and set
y_K(n) = Σ_{k=1}^{K} x_k(n)

(a) Determine and plot the pdf of y_K(n) for K = 2, 3, 4.
(b) Compare the pdfs to the Gaussian density.
(c) Perform the comparison experimentally using MATLAB. That is, generate K sequences of n = 1, 2, …, N uniformly distributed samples. Add the sequences and plot the resulting distribution (histogram). Fit the results to a Gaussian distribution for various K and N.

Solution:

(a) The {x_k(n)}_{k=1}^{K} are i.i.d. zero mean, unit variance uniformly distributed random variables:

f_{x_k}(x_k) = 1/(2a) for x_k ∈ [−a, a], 0 otherwise

Since E[x_k²] = 1,

E[x_k²] = (1/(2a)) ∫_{−a}^{a} x² dx = [x³/(6a)]_{−a}^{a} = a²/3 = 1  ⇒  a = √3

That is,

f_{x_k}(x_k) = 1/(2√3) for x_k ∈ [−√3, √3], 0 otherwise

For K = 2, y_2(n) = x_1(n) + x_2(n):

f_{y2}(x) = f_{x1}(x) ∗ f_{x2}(x) = { x/12 + 1/(2√3) for −2√3 ≤ x < 0;  −x/12 + 1/(2√3) for 0 ≤ x ≤ 2√3;  0 otherwise }

For K = 3, y_3(n) = x_1(n) + x_2(n) + x_3(n) = y_2(n) + x_3(n):

f_{y3}(x) = f_{y2}(x) ∗ f_{x3}(x) = { (x + 3√3)²/(48√3) for −3√3 ≤ x < −√3;  (9 − x²)/(24√3) for −√3 ≤ x < √3;  (x − 3√3)²/(48√3) for √3 ≤ x ≤ 3√3;  0 otherwise }
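The K = 2 and K = 3 densities above can be cross-checked in code (rewriting the piecewise forms with a = √3): the triangle f_{y2} peaks at 1/(2a) and the middle parabola of f_{y3} peaks at 3/(8a), already close to the corresponding N(0, K) Gaussian values.

```python
import math

# Cross-check of the K = 2 and K = 3 sum-of-uniform densities, with a = sqrt(3)
# (unit-variance uniform on [-a, a]); the sums have variance K, so the Gaussian
# comparison curves are N(0, 2) and N(0, 3).
a = math.sqrt(3.0)

def f_y2(x):
    # Triangle density (2a - |x|)/(4a^2) on [-2a, 2a]; with a = sqrt(3) this is
    # the piecewise +-x/12 + 1/(2*sqrt(3)) form derived above.
    return max(0.0, 2 * a - abs(x)) / (4 * a * a)

def f_y3_middle(x):
    # Middle parabolic piece (3a^2 - x^2)/(8a^3), valid for |x| <= a.
    return (3 * a * a - x * x) / (8 * a ** 3)

def gauss(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

# Compare peak heights against the Gaussian limits.
print(round(f_y2(0.0), 4), round(gauss(0.0, 2), 4))
print(round(f_y3_middle(0.0), 4), round(gauss(0.0, 3), 4))
```

The pieces also match at the breakpoints: f_{y3}'s edge piece (3a − |x|)²/(16a³) equals the middle piece's value 1/(4a) at |x| = a, so the density is continuous.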
This note was uploaded on 11/07/2010 for the course ENGINEERIN 636 taught by Professor Gutta during the Spring '10 term at University of Arkansas – Fort Smith.