# Chapter 1: Special Distributions
## Outline

1. Special distributions
   - Bernoulli, binomial, geometric, and negative binomial
   - Sampling with and without replacement; hypergeometric
   - Finite sample variance correction
   - Poisson and an "informal" Poisson process
   - Stationary and independent increments
   - Exponential and gamma; strong Markov property
   - Normal, and the classical CLT; chi-square
   - Uniform, beta, uniform order statistics
   - Cauchy
   - Rademacher, and symmetrization
   - Multinomial, and its moments
2. Convolution and related formulas
   - Sums, products, and quotients
   - Student's t; Snedecor's F; and beta
3. The multivariate normal distribution
   - Properties of covariance matrices
   - Characteristic function
   - Marginals, independence, and linear combinations
   - Linear independence
   - The multivariate normal density
   - Conditional densities
   - Facts about chi-square distributions
4. General integration by parts formulas
   - Representations of random variables
   - Formulas for means, variances, and covariances via integration by parts

## Independent Bernoulli Trials

If $P(X = 1) = p = 1 - P(X = 0)$, then $X$ is said to be a Bernoulli($p$) random variable. We refer to the event $[X = 1]$ as success, and to $[X = 0]$ as failure.

Let $X_1, \ldots, X_n$ be i.i.d. Bernoulli($p$), and let $S_n = X_1 + \cdots + X_n$ denote the number of successes in $n$ independent Bernoulli($p$) trials. Now

$$P(X_i = x_i,\ i = 1, \ldots, n) = p^{\sum_1^n x_i} (1-p)^{\,n - \sum_1^n x_i}$$

if all $x_i$ equal 0 or 1; this formula gives the joint distribution of $X_1, \ldots, X_n$. From this we obtain

$$P(S_n = k) = \binom{n}{k} p^k (1-p)^{n-k} \quad \text{for } k = 0, \ldots, n, \tag{1}$$

since each of the $\binom{n}{k}$ different placings of $k$ 1's in an $n$-vector containing $k$ 1's and $n-k$ 0's has probability $p^k (1-p)^{n-k}$ by the previous sentence. We say that $S_n \sim$ Binomial($n, p$) when (1) holds. Note that Binomial($1, p$) is the same as Bernoulli($p$).

Let $X_1, X_2, \ldots$ be i.i.d. Bernoulli($p$). Let $Y_1 \equiv W_1 \equiv \min\{n : S_n = 1\}$. Since $[Y_1 = k] = [X_1 = 0, \ldots, X_{k-1} = 0, X_k = 1]$, we have

$$P(Y_1 = k) = (1-p)^{k-1} p \quad \text{for } k = 1, 2, \ldots. \tag{2}$$

We say that $Y_1 \sim$ Geometric($p$).

Now let $W_m \equiv \min\{n : S_n = m\}$. We call $W_m$ the *waiting time to the $m$-th success*. Let $Y_m \equiv W_m - W_{m-1}$ for $m \geq 1$, with $W_0 \equiv 0$; we call the $Y_m$'s the *interarrival times*. Note that $[W_m = k] = [S_{k-1} = m-1, X_k = 1]$. Hence

$$P(W_m = k) = \binom{k-1}{m-1} p^m (1-p)^{k-m} \quad \text{for } k = m, m+1, \ldots. \tag{3}$$

We say that $W_m \sim$ Negative Binomial($m, p$).

**Exercise 1.1** Show that $Y_1, Y_2, \ldots$ are i.i.d. Geometric($p$).

Since the number of successes in $n_1 + n_2$ trials is the number of successes in the first $n_1$ trials plus the number of successes in the next $n_2$ trials, it is clear that for independent $Z_i \sim$ Binomial($n_i, p$),

$$Z_1 + Z_2 \sim \text{Binomial}(n_1 + n_2, p). \tag{4}$$
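The pmfs (1)–(3) are easy to check numerically. The following is a minimal sketch (the function names are ours, not from the notes), verifying that Binomial($1, p$) is Bernoulli($p$) and that the negative binomial with $m = 1$ reduces to the geometric:

```python
import math

def binomial_pmf(k, n, p):
    # Equation (1): P(S_n = k) = C(n, k) p^k (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def geometric_pmf(k, p):
    # Equation (2): P(Y_1 = k) = (1 - p)^(k - 1) p
    return (1 - p)**(k - 1) * p

def neg_binomial_pmf(k, m, p):
    # Equation (3): P(W_m = k) = C(k - 1, m - 1) p^m (1 - p)^(k - m)
    return math.comb(k - 1, m - 1) * p**m * (1 - p)**(k - m)

p = 0.3
# Binomial(1, p) is Bernoulli(p): P(S_1 = 1) = p.
assert binomial_pmf(1, 1, p) == p
# The waiting time to the first success is Geometric(p): (3) with m = 1 reduces to (2).
assert all(math.isclose(neg_binomial_pmf(k, 1, p), geometric_pmf(k, p))
           for k in range(1, 50))
# The binomial pmf sums to 1 over its support k = 0, ..., n.
assert math.isclose(sum(binomial_pmf(k, 10, p) for k in range(11)), 1.0)
```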
Likewise, for independent $Z_i \sim$ Negative Binomial($m_i, p$),

$$Z_1 + Z_2 \sim \text{Negative Binomial}(m_1 + m_2, p). \tag{5}$$

## Urn Models

Suppose that an urn contains $N$ balls, of which $M$ bear the number 1 and $N - M$ bear the number 0. Thoroughly mix the balls in the urn and draw one ball at random. Let $X_1$ denote the number on the ball. Then $X_1 \sim$ Bernoulli($p$) with $p = M/N$. Now replace the ball in the urn.
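The convolution fact (4) can also be checked by simulation, building each binomial draw directly from Bernoulli trials as in the text. A sketch under our own parameter choices (seeded for reproducibility; nothing here is from the original notes):

```python
import random

rng = random.Random(0)  # fixed seed so the check is reproducible

def binomial_draw(n, p):
    # Number of successes in n independent Bernoulli(p) trials: a Binomial(n, p) draw.
    return sum(1 for _ in range(n) if rng.random() < p)

n1, n2, p, reps = 5, 7, 0.4, 50_000
# (4): the sum of independent Binomial(n1, p) and Binomial(n2, p) variables
# is Binomial(n1 + n2, p), which has mean (n1 + n2) p = 4.8.
sums = [binomial_draw(n1, p) + binomial_draw(n2, p) for _ in range(reps)]
mean = sum(sums) / reps
assert abs(mean - (n1 + n2) * p) < 0.1  # empirical mean close to 4.8
```

The analogous check for (5) follows by drawing each Negative Binomial($m_i, p$) waiting time as a sum of $m_i$ geometric interarrival times.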