The expected value (mean) of a r.v. is a weighted average of all possible outcomes, where the weights are the probabilities of the outcomes.

Discrete case: Suppose X can take n possible values, denoted x1, x2, …, xn (keep in mind the notational convention). Let pi = Prob(X = xi), with ∑_{i=1}^{n} pi = 1 and pi ≥ 0 for every i. Then

    μ ≡ E(X) = ∑_{i=1}^{n} pi xi = p1 x1 + p2 x2 + ... + pn xn,

where E(·) is the expectation operator and μ is used to denote a mean.

Example: Flip a coin; if heads you get 1 dollar and if tails you get nothing. If it is a fair coin, the expected value of the flip is 0.5(1) + 0.5(0) = 0.5 dollars, even though any single flip gives you either 1 dollar or nothing. (See the first sketch at the end of this section.)

Facts about expected values

1. For any constant a, E(a) = a. A constant does not have any variation! Example: E(3) = 3.

2. For any constants a and b, E(a + bX) = a + b E(X).
   Example: In the earlier GPA and study-time example, the estimated regression line is Y = 1 + 0.1X, where Y = GPA and X = study time per week measured in hours. If the population mean of study time is 15 hours per week, that is, E(X) = 15, then GPA is expected to be E(Y) = 1 + 0.1 E(X) = 1 + 0.1(15) = 2.5.

3. For any constants a1, a2, …, an and r.v.'s X1, X2, …, Xn, E(∑_{i=1}^{n} ai Xi) = ∑_{i=1}^{n} ai E(Xi). For example, E(X1 + X2 + … + Xn) = E(X1) + E(X2) + … + E(Xn). That is, the expected value of a sum is the sum of the expected values.
   Exercise: If Y = 1 + 2X + u and E(u) = 0, then E(Y) = ? (By facts 1-3, E(Y) = 1 + 2 E(X).)

4. Note that the above rules apply only to linear forms, so E[(aX)²] = a² E(X²), which is not equal to a² [E(X)]², because in general E(X²) ≠ [E(X)]².
   Example: Running a business: X = profit, measured in millions (m), with three possible outcomes: x1 = 1m, p1 = 0.3; x2 = -1.5m, p2 = 0.3; x3 = 0.1m, p3 = 0.4.
   The r.v. X² then takes the values 1 m² with p1 = 0.3; 2.25 m² with p2 = 0.3; 0.01 m² with p3 = 0.4.
   Then E(X²) = 0.3(1) + 0.3(2.25) + 0.4(0.01) = 0.979 m².
   Since E(X) = 0.3(1) + 0.3(-1.5) + 0.4(0.1) = -0.11m, we have [E(X)]² = (-0.11m)² = 0.0121 m².
   So E(X²) ≠ [E(X)]². (See the second sketch at the end of this section.)

Variance

A measure of the variability or dispersion of a r.v. around its mean, or how spread out the distribution is.

[Figure: distributions of r.v.'s with different variances]

    Var(X) = σX² = E[(X − μX)²] = E(X² − 2X μX + μX²) = E(X²) − μX²,

where μX = E(X).

Facts about variances

1. The positive square root of a variance is the standard deviation (sd), σX.
2. Var and sd are always nonnegative.
3. …
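The definition of E(X) and the linearity rules can be checked numerically. Below is a minimal Python sketch, not part of the original notes; the function name expected_value and the variable names are just illustrative. It computes a probability-weighted sum for the coin-flip example and applies E(a + bX) = a + b E(X) to the GPA example.

```python
# Illustrative sketch (not from the notes): expected value of a discrete r.v.
# as a probability-weighted sum, plus the linearity rule E(a + bX) = a + b*E(X).

def expected_value(outcomes, probs):
    """E(X) = p1*x1 + p2*x2 + ... + pn*xn for a discrete r.v. X."""
    # Sanity check: probabilities are nonnegative and sum to 1.
    assert abs(sum(probs) - 1.0) < 1e-12 and all(p >= 0 for p in probs)
    return sum(p * x for x, p in zip(outcomes, probs))

# Coin flip: heads pays 1 dollar, tails pays nothing; fair coin.
print(expected_value([1.0, 0.0], [0.5, 0.5]))   # 0.5

# Fact 2 (linearity), GPA example: Y = 1 + 0.1*X with E(X) = 15.
a, b, mean_study_time = 1.0, 0.1, 15.0
print(a + b * mean_study_time)                  # E(Y) = 2.5

# Fact 3 applied to Y = 1 + 2*X + u with E(u) = 0 gives E(Y) = 1 + 2*E(X).
```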
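The business-profit example and the variance identity Var(X) = E[(X − μX)²] = E(X²) − μX² can be checked the same way. This second sketch is self-contained and also illustrative: the helper name ev is made up, and the numbers are the ones from the example above (profits in millions).

```python
# Illustrative, self-contained sketch: the profit example and the variance
# identity Var(X) = E[(X - mu)^2] = E(X^2) - mu^2.

def ev(outcomes, probs):
    # E(X) as a probability-weighted sum of outcomes
    return sum(p * x for x, p in zip(outcomes, probs))

profits = [1.0, -1.5, 0.1]   # outcomes, in millions
probs = [0.3, 0.3, 0.4]

mu = ev(profits, probs)                                 # E(X)   ~ -0.11
ex2 = ev([x**2 for x in profits], probs)                # E(X^2) ~  0.979
print(ex2, mu**2)                                       # 0.979 vs 0.0121, so E(X^2) != [E(X)]^2

var_direct = ev([(x - mu)**2 for x in profits], probs)  # E[(X - mu)^2]
var_shortcut = ex2 - mu**2                              # E(X^2) - mu^2
print(var_direct, var_shortcut)                         # both ~ 0.9669
sd = var_shortcut ** 0.5                                # standard deviation ~ 0.983
```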