

$= p - p^2$. We know that $E(X) = 3\,Var(X) \Rightarrow p = 3(p - p^2) \Rightarrow p = \frac{2}{3}$. Therefore
\[
P(X = 0) = 1 - P(X = 1) = 1 - \frac{2}{3} \Rightarrow P(X = 0) = \frac{1}{3}.
\]
b. $Var(3X) = 9\,Var(X) = 9\left[\left(0^2 \cdot \frac{1}{3} + 1^2 \cdot \frac{2}{3}\right) - \left(\frac{2}{3}\right)^2\right] \Rightarrow Var(3X) = 2$.

Example 8
Let $X \sim b(n, p)$. Show that $Var(X) = np(1-p)$.

We start with $E[X(X-1)]$ and then use $\sigma^2 = EX^2 - \mu^2$:
\[
EX(X-1) = \sum_{x=0}^{n} x(x-1)\binom{n}{x} p^x (1-p)^{n-x}.
\]
Since the terms for $x = 0$ and $x = 1$ are zero, we can begin the sum at $x = 2$:
\[
\sum_{x=2}^{n} x(x-1)\,\frac{n!}{(n-x)!\,x!}\, p^x (1-p)^{n-x}.
\]
We write $x! = x(x-1)(x-2)!$ to cancel $x(x-1)$ from the numerator:
\[
\sum_{x=2}^{n} x(x-1)\,\frac{n!}{(n-x)!\,x(x-1)(x-2)!}\, p^x (1-p)^{n-x}
= \sum_{x=2}^{n} \frac{n!}{(n-x)!\,(x-2)!}\, p^x (1-p)^{n-x}.
\]
We factor $n(n-1)p^2$ outside the summation:
\[
n(n-1)p^2 \sum_{x=2}^{n} \frac{(n-2)!}{(n-x)!\,(x-2)!}\, p^{x-2} (1-p)^{n-x}.
\]
Now we let $y = x - 2$:
\[
EX(X-1) = n(n-1)p^2 \sum_{y=0}^{n-2} \frac{(n-2)!}{(n-2-y)!\,y!}\, p^y (1-p)^{n-2-y}
= n(n-1)p^2 \sum_{y=0}^{n-2} \binom{n-2}{y} p^y (1-p)^{n-2-y}. \quad (1)
\]
We can now see that the sum in (1) runs over the pmf of $Y \sim b(n-2, p)$, that is, a binomial random variable with $n-2$ trials and probability of success $p$. Therefore
\[
\sum_{y=0}^{n-2} \binom{n-2}{y} p^y (1-p)^{n-2-y} = 1,
\]
and expression (1) reduces to $EX(X-1) = n(n-1)p^2$. To find the variance we use $\sigma^2 = EX^2 - \mu^2$:
\[
EX(X-1) = n(n-1)p^2 \Rightarrow EX^2 - EX = n(n-1)p^2 \Rightarrow EX^2 = n(n-1)p^2 + EX.
\]
We know that $EX = \mu = np$, so $EX^2 = n(n-1)p^2 + np$. The variance is therefore
\[
\sigma^2 = EX^2 - \mu^2 = n(n-1)p^2 + np - (np)^2 = np\,[(n-1)p + 1 - np] = np\,(np - p + 1 - np)
\Rightarrow \sigma^2 = np(1-p).
\]

Example 9
Let $X$ be a geometric random variable with probability of success $p$. The probability mass function of $X$ is:
\[
P(X = x) = (1-p)^{x-1} p, \quad x = 1, 2, \cdots
\]
The pmf must sum to 1 over all $x$. This is easy to show, since the terms form an infinite geometric series with common ratio $1 - p$:
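As a quick numerical sanity check of Example 8 (not part of the original notes; the values of $n$ and $p$ below are arbitrary illustrative choices), we can compute $E[X(X-1)]$ and $Var(X)$ directly from the binomial pmf and compare them with $n(n-1)p^2$ and $np(1-p)$:

```python
# Numerical check of Example 8: for X ~ b(n, p),
# E[X(X-1)] = n(n-1)p^2 and Var(X) = np(1-p).
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ b(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def factorial_moment(n, p):
    """E[X(X-1)] computed directly from the pmf."""
    return sum(x * (x - 1) * binomial_pmf(x, n, p) for x in range(n + 1))

def variance(n, p):
    """Var(X) = E[X(X-1)] + EX - (EX)^2, with EX = np."""
    mu = n * p
    return factorial_moment(n, p) + mu - mu**2

n, p = 10, 0.3  # arbitrary example values, not from the notes
print(factorial_moment(n, p))  # matches n*(n-1)*p**2 = 8.1 up to rounding
print(variance(n, p))          # matches n*p*(1-p) = 2.1 up to rounding
```

This mirrors the derivation in the notes: the direct sum agrees with the closed forms for any valid $n$ and $p$.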
\[
\sum_{x=1}^{\infty} (1-p)^{x-1} p = p + (1-p)p + (1-p)^2 p + \cdots = p\,\frac{1}{1-(1-p)} = 1.
\]

Example 10
We want $P(X \geq 1) > 0.90$, where $X \sim b(n, 0.40)$. We can write this as:
\[
1 - P(X = 0) > 0.90 \Rightarrow 1 - \binom{n}{0}\, 0.40^0\, (1-0.40)^n > 0.90 \Rightarrow 1 - 0.60^n > 0.90
\]
\[
\Rightarrow 0.60^n < 0.10 \Rightarrow n > \frac{\log(0.10)}{\log(0.60)} \Rightarrow n > 4.51,
\]
so the smallest such integer is $n = 5$.

Example 11
We know that the probability mass function of the hypergeometric probability distribution is:
\[
P(X = x) = \frac{\binom{r}{x}\binom{N-r}{n-x}}{\binom{N}{n}}.
\]
Therefore the expected value of $X$ is:
\[
\mu = E(X) = \sum_{x=0}^{n} x\,\frac{\binom{r}{x}\binom{N-r}{n-x}}{\binom{N}{n}}
= \sum_{x=1}^{n} \frac{r!}{(r-x)!\,(x-1)!}\,\frac{\binom{N-r}{n-x}}{\binom{N}{n}}
= r \sum_{x=1}^{n} \binom{r-1}{x-1}\frac{\binom{N-r}{n-x}}{\binom{N}{n}}. \quad (1)
\]
Let $y = x - 1$:
\[
\mu = r \sum_{y=0}^{n-1} \binom{r-1}{y}\,\frac{\binom{N-r}{n-1-y}}{\binom{N}{n}} \cdots
\]
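The claims in Examples 9–11 can also be checked numerically. The sketch below (not part of the original notes; the parameter values are arbitrary illustrative choices) verifies that the geometric pmf sums to 1, that the smallest $n$ in Example 10 is indeed 5, and that the hypergeometric mean computed from the pmf matches $nr/N$, the standard result the truncated derivation in Example 11 leads to:

```python
# Numerical checks of Examples 9, 10, and 11.
from math import log, comb

# Example 9: partial sums of (1-p)^(x-1) * p approach 1.
p = 0.25  # arbitrary example value
partial = sum((1 - p)**(x - 1) * p for x in range(1, 200))
print(partial)  # approaches 1.0

# Example 10: 1 - 0.60^n > 0.90  <=>  n > log(0.10)/log(0.60)
threshold = log(0.10) / log(0.60)
print(threshold)  # about 4.51
n = 1
while 1 - 0.60**n <= 0.90:
    n += 1
print(n)  # smallest n satisfying the inequality: 5

# Example 11: hypergeometric mean from the pmf equals n*r/N
# (the well-known closed form; parameters below are arbitrary).
r, N, n_draw = 5, 20, 8
mean_h = sum(x * comb(r, x) * comb(N - r, n_draw - x) / comb(N, n_draw)
             for x in range(0, min(r, n_draw) + 1))
print(mean_h)  # equals n_draw * r / N = 2.0 up to rounding
```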

This note was uploaded on 10/04/2012 for the course STATISTICS 100a taught by Professor Cristou during the Spring '10 term at UCLA.
