5. Some discrete distributions.

Bernoulli. A r.v. $X$ such that $P(X = 1) = p$ and $P(X = 0) = 1 - p$ is said to be a Bernoulli r.v. with parameter $p$. Note $E X = p$ and $E X^2 = p$, so $\mathrm{Var}\, X = p - p^2 = p(1 - p)$.

Binomial. A r.v. $X$ has a binomial distribution with parameters $n$ and $p$ if
$$P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}.$$
The number of successes in $n$ trials is a binomial. After some cumbersome calculations one can derive $E X = np$. An easier way is to realize that if $X$ is binomial, then $X = Y_1 + \cdots + Y_n$, where the $Y_i$ are independent Bernoullis, so $E X = E Y_1 + \cdots + E Y_n = np$. We haven't defined what it means for r.v.'s to be independent, but here we mean that the events $(Y_k = 1)$ are independent.

The cumbersome way is as follows:
$$\begin{aligned}
E X &= \sum_{k=0}^{n} k \binom{n}{k} p^k (1 - p)^{n - k}
     = \sum_{k=1}^{n} k \binom{n}{k} p^k (1 - p)^{n - k} \\
    &= \sum_{k=1}^{n} k \, \frac{n!}{k!\,(n - k)!} \, p^k (1 - p)^{n - k} \\
    &= np \sum_{k=1}^{n} \frac{(n - 1)!}{(k - 1)!\,((n - 1) - (k - 1))!} \, p^{k - 1} (1 - p)^{(n - 1) - (k - 1)} \\
    &= np \sum_{k=0}^{n - 1} \frac{(n - 1)!}{k!\,((n - 1) - k)!} \, p^{k} (1 - p)^{(n - 1) - k} \\
    &= np \sum_{k=0}^{n - 1} \binom{n - 1}{k} p^{k} (1 - p)^{(n - 1) - k} = np.
\end{aligned}$$
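This identity is easy to sanity-check numerically. The short Python sketch below is not part of the original notes; the choice $n = 10$, $p = 0.3$ is arbitrary. It evaluates the defining sum $\sum_k k \binom{n}{k} p^k (1-p)^{n-k}$ directly and compares it with $np$.

```python
from math import comb

# Arbitrary illustrative parameters; any n >= 1 and 0 <= p <= 1 would do.
n, p = 10, 0.3

# E[X] computed straight from the definition: sum over k of k * P(X = k).
mean_from_pmf = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

print(mean_from_pmf)  # ~3.0, up to floating-point rounding
print(n * p)          # 3.0
```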
To get the variance of $X$, we have
$$E X^2 = \sum_{k=1}^{n} E Y_k^2 + \sum_{i \neq j} E Y_i Y_j.$$
Now
$$E Y_i Y_j = 1 \cdot P(Y_i Y_j = 1) + 0 \cdot P(Y_i Y_j = 0) = P(Y_i = 1,\, Y_j = 1) = P(Y_i = 1)\, P(Y_j = 1) = p^2,$$
using independence. The square of $Y_1 + \cdots + Y_n$ yields $n^2$ terms, of which $n$ are of the form $Y_k^2$ and $n^2 - n$ are of the form $Y_i Y_j$ with $i \neq j$. Hence
$$\mathrm{Var}\, X = E X^2 - (E X)^2 = np + (n^2 - n) p^2 - (np)^2 = np(1 - p).$$
Later we will see that the variance of the sum of independent r.v.'s is the sum of the variances, so we could quickly get $\mathrm{Var}\, X = np(1 - p)$. Alternatively, one can compute $E(X^2) - E X = E(X(X - 1))$ using binomial coefficients and derive the variance of $X$ from that.

Poisson. $X$ is Poisson with parameter $\lambda$ if
$$P(X = i) = e^{-\lambda} \frac{\lambda^i}{i!}.$$
Note $\sum_{i=0}^{\infty} \lambda^i / i! = e^{\lambda}$, so the probabilities add up to one. To compute expectations,
$$E X = \sum_{i=0}^{\infty} i\, e^{-\lambda} \frac{\lambda^i}{i!} = e^{-\lambda} \lambda \sum_{i=1}^{\infty} \frac{\lambda^{i-1}}{(i-1)!} = \lambda.$$
Similarly one can show that
$$E(X^2) - E X = E X(X - 1) = \sum_{i=0}^{\infty} i(i - 1)\, e^{-\lambda} \frac{\lambda^i}{i!} = \lambda^2 e^{-\lambda} \sum_{i=2}^{\infty} \frac{\lambda^{i-2}}{(i-2)!} = \lambda^2,$$
so that $\mathrm{Var}\, X = E X^2 - (E X)^2 = \lambda^2 + \lambda - \lambda^2 = \lambda$.
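As with the binomial mean, these formulas can be sanity-checked numerically. The sketch below is not from the notes; the parameter choices ($n = 10$, $p = 0.3$, $\lambda = 4$) are arbitrary, and the Poisson series is simply truncated where its tail is negligible.

```python
from math import comb, exp, factorial

# Binomial(n, p): check Var X = np(1 - p) directly from the pmf.
n, p = 10, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))
second_moment = sum(k**2 * q for k, q in enumerate(pmf))
print(second_moment - mean**2, n * p * (1 - p))  # both ~2.1

# Poisson(lam): check E X = lam and Var X = lam, truncating the series at i = 100
# (for lam = 4 the neglected tail is far below floating-point precision).
lam = 4.0
pmf = [exp(-lam) * lam**i / factorial(i) for i in range(100)]
mean = sum(i * q for i, q in enumerate(pmf))
second_moment = sum(i**2 * q for i, q in enumerate(pmf))
print(mean, second_moment - mean**2)             # both ~4.0
```

A numerical check of course proves nothing, but it is a quick way to catch algebra slips in derivations like the ones above.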