Chapter 7  Rules for Means and Variances; Prediction

7.1  Rules for Means and Variances

The material in this section is very technical and algebraic. And dry. But it is useful for understanding many of the methods we will learn later in this course.

We have random variables $X_1, X_2, \ldots, X_n$. Throughout this section, we will assume that these random variables are independent. Sometimes they will also be identically distributed, but we don't need identical distributions for our main result. (There is a similar result without independence too, but we won't need it.)

Let $\mu_i$ denote the mean of $X_i$ and let $\sigma_i^2$ denote the variance of $X_i$. Let $b_1, b_2, \ldots, b_n$ denote $n$ numbers. Define

$$W = b_1 X_1 + b_2 X_2 + \cdots + b_n X_n.$$

$W$ is a linear combination of the $X_i$'s. The main result is:

• The mean of $W$ is $\mu_W = \sum_{i=1}^n b_i \mu_i$.
• The variance of $W$ is $\sigma_W^2 = \sum_{i=1}^n b_i^2 \sigma_i^2$.

Special Cases

1. The i.i.d. case. If the sequence is i.i.d., then we can write $\mu = \mu_i$ and $\sigma^2 = \sigma_i^2$. In this case the mean of $W$ is $\mu_W = \left(\sum_{i=1}^n b_i\right)\mu$ and the variance of $W$ is $\sigma_W^2 = \left(\sum_{i=1}^n b_i^2\right)\sigma^2$.

2. Two independent random variables. If $n = 2$, then we usually call them $X$ and $Y$ instead of $X_1$ and $X_2$. We get $W = b_1 X + b_2 Y$, which has mean $\mu_W = b_1 \mu_X + b_2 \mu_Y$ and variance $\sigma_W^2 = b_1^2 \sigma_X^2 + b_2^2 \sigma_Y^2$.

3. Two i.i.d. random variables. Combining the notation of the previous two items, $W = b_1 X + b_2 Y$ has mean $\mu_W = (b_1 + b_2)\mu$ and variance $\sigma_W^2 = (b_1^2 + b_2^2)\sigma^2$. Especially important is the case $W = X + Y$, which has mean $\mu_W = 2\mu$ and variance $\sigma_W^2 = 2\sigma^2$. Another important case is $W = X - Y$, which has mean $\mu_W = 0$ and variance $\sigma_W^2 = 2\sigma^2$.

(A short simulation check of these rules is sketched at the end of this excerpt.)

7.2  Predicting for Bernoulli Trials

"Predictions are tough, especially about the future." —Yogi Berra

We plan to observe $m$ Bernoulli trials (BT) and want to predict the total number of successes that we will get. Let $Y$ denote the random variable and $y$ the observed value of the total number of successes in the future $m$ trials. As with estimation, we will learn about point and interval predictions.

7.2.1  When $p$ is Known

We begin with point prediction of $Y$. We adopt the criterion that we want the probability of being correct to be as large as possible. Below is the result.

Calculate the mean of $Y$, which is $mp$. If $mp$ is an integer, then it is the most probable value of $Y$ and our prediction is $\hat{y} = mp$.

Here are some examples.

• Suppose that $m = 20$ and $p = 0.50$. Then $mp = 20(0.5) = 10$ is an integer, so 10 is our point prediction of $Y$. With the help of our website calculator (details not given), we find that $P(Y = 10) = 0.1762$.
• Suppose that $m = 200$ and $p = 0.50$. Then $mp = 200(0.5) = 100$ is an integer, so 100 is our point prediction of $Y$. With the help of our website calculator, we find that $P(Y = 100) = 0.0563$.
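To make the rules of Section 7.1 concrete, here is a minimal simulation sketch in Python. It is not part of the course materials; the normal distributions, means, standard deviations, and coefficients are illustrative assumptions. It draws three independent (not identically distributed) random variables, forms the linear combination $W$, and compares the simulated mean and variance of $W$ with $\sum b_i \mu_i$ and $\sum b_i^2 \sigma_i^2$.

```python
# Simulation check of the rules for the mean and variance of a linear
# combination W = b1*X1 + b2*X2 + b3*X3 of independent random variables.
# The specific distributions and coefficients are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 1_000_000

means = np.array([1.0, 2.0, -0.5])   # mu_i
sds   = np.array([1.0, 0.5, 2.0])    # sigma_i
b     = np.array([2.0, -1.0, 3.0])   # coefficients b_i

# Each row is one independent draw of (X1, X2, X3).
X = rng.normal(loc=means, scale=sds, size=(n_sims, 3))
W = X @ b                            # one value of W per simulated draw

print("theory:     mean =", b @ means, "  variance =", np.sum(b**2 * sds**2))
print("simulation: mean =", W.mean(), "  variance =", W.var())
```

With these illustrative values the rules give $\mu_W = -1.5$ and $\sigma_W^2 = 40.25$, and the simulated values should land very close to them.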
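The probabilities in the two examples of Section 7.2.1 are quoted from the course's website calculator, which is not shown here. As a stand-in (my assumption, not the course's tool), the short Python sketch below evaluates the exact binomial probability mass function and reproduces $P(Y = 10) \approx 0.1762$ and $P(Y = 100) \approx 0.0563$.

```python
# Exact binomial pmf used to reproduce the two point-prediction examples.
from math import comb

def binom_pmf(y: int, m: int, p: float) -> float:
    """P(Y = y) for Y ~ Binomial(m, p)."""
    return comb(m, y) * p**y * (1 - p)**(m - y)

for m, p in [(20, 0.50), (200, 0.50)]:
    y_hat = int(m * p)   # point prediction when m*p is an integer
    print(f"m={m}, p={p}: prediction y_hat={y_hat}, "
          f"P(Y={y_hat}) = {binom_pmf(y_hat, m, p):.4f}")
# Prints approximately 0.1762 and 0.0563, matching the text.
```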