the value r. Consequently, p(X = r) is the sum of the probabilities of the outcomes s such that X(s) = r. Hence,

E(X) = Σ_{r ∈ X(S)} r·p(X = r).

22 Expected Value
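As a small worked illustration (a fair die roll, a hypothetical example not from the slides), the expected value can be computed two equivalent ways: by summing p(s)·X(s) over outcomes s, or by summing r·p(X = r) over the values r that X takes.

```python
from fractions import Fraction

# Hypothetical example: X = value shown by one fair die roll.
outcomes = {s: Fraction(1, 6) for s in range(1, 7)}  # p(s) for each outcome s
X = lambda s: s  # the random variable

# E(X) computed directly over outcomes: sum of p(s) * X(s)
e_outcomes = sum(p * X(s) for s, p in outcomes.items())

# E(X) computed over values r: sum of r * p(X = r),
# where p(X = r) aggregates the outcomes s with X(s) = r
dist = {}
for s, p in outcomes.items():
    dist[X(s)] = dist.get(X(s), 0) + p
e_values = sum(r * p for r, p in dist.items())

print(e_outcomes, e_values)  # both equal 7/2
```

Using exact fractions rather than floats makes the equality of the two computations exact rather than approximate.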
Theorem 2: The expected number of successes when n mutually independent Bernoulli trials are performed, where p is the probability of success on each trial, is np.
Proof: Let X be the random variable equal to the number of successes in n trials. By Theorem 2 of Section 7.2, p(X = k) = C(n,k)p^k q^(n−k), where q = 1 − p. Hence,

E(X) = Σ_{k=1..n} k·p(X = k)                        by Theorem 1
     = Σ_{k=1..n} k·C(n,k)p^k q^(n−k)               by Theorem 2 in Section 7.2
     = Σ_{k=1..n} n·C(n−1,k−1)p^k q^(n−k)           by Exercise 21 in Section 6.4
     = np·Σ_{k=1..n} C(n−1,k−1)p^(k−1) q^(n−k)      factoring np from each term
     = np·Σ_{j=0..n−1} C(n−1,j)p^j q^(n−1−j)        shifting the index of summation with j = k − 1
     = np·(p + q)^(n−1)                             by the binomial theorem
     = np                                           because p + q = 1
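The closed form E(X) = np can be spot-checked numerically by evaluating the first sum in the derivation directly; the values n = 10 and p = 0.3 below are illustrative choices, not from the slides.

```python
from math import comb

# Illustrative parameters (not from the text)
n, p = 10, 0.3
q = 1 - p

# E(X) = sum over k of k * C(n, k) * p^k * q^(n - k)
expected = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

print(expected, n * p)  # both are 3.0 (up to floating-point rounding)
```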
We see that the expected number of successes in n mutually independent Bernoulli trials is np.

24 The Geometric Distribution
Definition 2: A random variable X has a geometric distribution with parameter p if p(X = k) = (1 − p)^(k−1) p for k = 1, 2, 3, …, where p is a real number with 0 ≤ p ≤ 1.
Theorem 4: If the random variable X has the geometric distribution with parameter p, then E(X) = 1/p.
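Theorem 4 can be checked numerically by truncating the infinite series E(X) = Σ k·(1 − p)^(k−1)·p; the value p = 0.25 below is an illustrative choice.

```python
# Illustrative parameter (not from the text)
p = 0.25

# Truncate the series E(X) = sum over k >= 1 of k * (1-p)^(k-1) * p;
# the tail beyond k = 1000 is negligibly small for this p.
approx = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 1000))

print(approx, 1 / p)  # approx is very close to 4.0
```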
Example: Suppose the probability that a coin comes up tails is p. What is the expected number of flips until this coin comes up tails?
The sample space is {T, HT, HHT, HHHT, HHHHT, …}.
Let X be the random variable equal to the number of flips in an element of the sample space; X(T) = 1, X(HT) = 2, X(HHT) = 3, etc. By Theorem 4, E(X) = 1/p. (See the text for full details.)

25 Independent Random Variables
Definition 3: The random variables X and Y on a sample space S are independent if p(X = r1 and Y = r2) = p(X = r1)·p(Y = r2).
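The standard independence condition for random variables, p(X = r1 and Y = r2) = p(X = r1)·p(Y = r2) for all values r1 and r2, can be verified exhaustively on a small sample space. The two-coin example below is hypothetical, not from the slides.

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: two fair coin flips; X indicates heads on the
# first flip, Y indicates heads on the second.
sample_space = list(product("HT", repeat=2))
prob = Fraction(1, 4)  # each of the 4 outcomes is equally likely

def p(event):
    # Probability of the set of outcomes satisfying `event`
    return sum(prob for s in sample_space if event(s))

X = lambda s: 1 if s[0] == "H" else 0
Y = lambda s: 1 if s[1] == "H" else 0

# Check p(X = r1 and Y = r2) == p(X = r1) * p(Y = r2) for every pair of values
independent = all(
    p(lambda s: X(s) == r1 and Y(s) == r2)
    == p(lambda s: X(s) == r1) * p(lambda s: Y(s) == r2)
    for r1 in (0, 1)
    for r2 in (0, 1)
)
print(independent)  # True
```

Exact fractions avoid floating-point comparisons when testing the product condition for equality.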
 Spring '08
 Staff
