Probability: DISCRETE RANDOM VARIABLES


                              = E[3X + 2]

x          200 000   100 000   50 000   0
P(X=x)     0.002     0.01      0.1      0.888

Example: For X with the distribution below, E[3X + 2] can be found directly from the distribution of 3X + 2, or more simply as 3E[X] + 2.

x          0     1     2     3
P(X=x)     0.1   0.3   0.4   0.2

3x + 2     2     5     8     11
P(X=x)     0.1   0.3   0.4   0.2

Nulake p 156; Sigma p 130, Ex 7.04
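The figures in these tables can be checked numerically. Below is a minimal sketch (not part of the original notes) that verifies E[3X + 2] = 3E[X] + 2 for the distribution above, both directly from the transformed values 2, 5, 8, 11 and via the linearity rule.

```python
# Distribution of X from the table above
x = [0, 1, 2, 3]
p = [0.1, 0.3, 0.4, 0.2]

# E[X] = sum of x * P(X = x)
e_x = sum(xi * pi for xi, pi in zip(x, p))                        # 1.7

# E[3X + 2] computed directly from the values 2, 5, 8, 11
e_3x2_direct = sum((3 * xi + 2) * pi for xi, pi in zip(x, p))     # 7.1

# E[3X + 2] via linearity: E[aX + b] = aE[X] + b
e_3x2_linear = 3 * e_x + 2                                        # 7.1

print(e_x, e_3x2_direct, e_3x2_linear)
```

Both routes give 7.1, as the rule E[aX + b] = aE[X] + b predicts.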

4. VARIANCE OF A RANDOM VARIABLE

Var[X] = E[(X − μ)²] = E[X²] − μ²

Proof:
    Var[X] = E[(X − μ)²]
           = E[X² − 2Xμ + μ²]
           = E[X²] − E[2Xμ] + E[μ²]
           = E[X²] − 2μE[X] + μ²
           = E[X²] − 2μ·μ + μ²
           = E[X²] − μ²

Example:

x          1     2     3     4
P(X=x)     0.1   0.4   0.2   0.3

giving μ = E[X] = 2.7

(x − μ)²   2.89  0.49  0.09  1.69
P(X=x)     0.1   0.4   0.2   0.3

giving Var[X] = E[(X − μ)²] = 1.01

x²         1     4     9     16
P(X=x)     0.1   0.4   0.2   0.3

giving E[X²] = 8.3

Note that E[X²] − μ² = 8.3 − 2.7² = 1.01 = Var[X]

The standard deviation (σ) of a random variable X is defined as the square root of the variance:  σ = √Var[X]

Nulake p 152; Sigma p 135, Ex 7.05, 7.06
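As a quick check of the two variance formulas above, here is a minimal sketch (not part of the original notes) that computes Var[X] for the example distribution both from the definition E[(X − μ)²] and from the shortcut E[X²] − μ², and then takes the standard deviation.

```python
import math

# Distribution of X from the example above
x = [1, 2, 3, 4]
p = [0.1, 0.4, 0.2, 0.3]

mu = sum(xi * pi for xi, pi in zip(x, p))                     # E[X] = 2.7
var_def = sum((xi - mu) ** 2 * pi for xi, pi in zip(x, p))    # E[(X - mu)^2] = 1.01
e_x2 = sum(xi ** 2 * pi for xi, pi in zip(x, p))              # E[X^2] = 8.3
var_short = e_x2 - mu ** 2                                    # 8.3 - 2.7^2 = 1.01

sigma = math.sqrt(var_def)                                    # standard deviation ≈ 1.005
print(mu, var_def, var_short, sigma)
```

Both formulas give 1.01, and σ = √1.01 ≈ 1.005.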
5. VARIANCE OF A FUNCTION OF A RANDOM VARIABLE

Var[aX + b] = a²Var[X]

Proof:
    Var[aX + b] = E[(aX + b)²] − (E[aX + b])²
                = E[a²X² + 2abX + b²] − (aE[X] + b)²
                = (a²E[X²] + 2abE[X] + b²) − (a²E[X]² + 2abE[X] + b²)
                = a²E[X²] − a²E[X]²
                = a²(E[X²] − E[X]²)
                = a²Var[X]

Example: For X with the distribution below, E[X] = 2 and E[X²] = 5, so Var[X] = 1.

x          0     1     2     3
P(X=x)     0.1   0.2   0.3   0.4

Then for Y = 3X − 1:
    E[Y] = E[3X − 1]              Var[Y] = Var[3X − 1]
         = 3E[X] − 1                     = 9Var[X]
         = 3 × 2 − 1                     = 9 × 1
         = 5                             = 9

Nulake p 160; Sigma p 138, Ex 7.07

6. SUMS AND DIFFERENCES OF RANDOM VARIABLES

(a) E[X + Y] = E[X] + E[Y],   E[X − Y] = E[X] − E[Y]
(b) If X and Y are independent, then Var[X ± Y] = Var[X] + Var[Y].
(c) This means that for independent random variables X and Y:
    E[aX + bY + c] = aE[X] + bE[Y] + c
    Var[aX + bY + c] = a²Var[X] + b²Var[Y]

Nulake p 160; Sigma p 143, Ex 7.09, 7.10
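The results of Sections 5 and 6 can also be verified numerically. The sketch below (not part of the original notes) re-derives Var[3X − 1] = 9Var[X] for the example distribution and checks rule (b) by enumerating the joint distribution of two independent copies of X.

```python
# Distribution of X from the Section 5 example (E[X] = 2, E[X^2] = 5)
x = [0, 1, 2, 3]
p = [0.1, 0.2, 0.3, 0.4]

e_x = sum(xi * pi for xi, pi in zip(x, p))                       # 2.0
e_x2 = sum(xi ** 2 * pi for xi, pi in zip(x, p))                 # 5.0
var_x = e_x2 - e_x ** 2                                          # 1.0

# Y = 3X - 1, computed directly from the transformed values -1, 2, 5, 8
y = [3 * xi - 1 for xi in x]
e_y = sum(yi * pi for yi, pi in zip(y, p))                       # 3*2 - 1 = 5
var_y = sum(yi ** 2 * pi for yi, pi in zip(y, p)) - e_y ** 2     # 9 * Var[X] = 9

# Sum of two independent copies of X: enumerate the joint distribution
pairs = [(xi + xj, pi * pj) for xi, pi in zip(x, p) for xj, pj in zip(x, p)]
e_sum = sum(s * q for s, q in pairs)                             # E[X] + E[X] = 4
var_sum = sum(s ** 2 * q for s, q in pairs) - e_sum ** 2         # Var[X] + Var[X] = 2

print(var_x, e_y, var_y, e_sum, var_sum)
```

The enumeration confirms Var[X + X'] = 2 = Var[X] + Var[X'] for independent copies, in line with rule (b).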

Example: Empty jam containers have a mean weight of 250 g with a standard deviation of 20 g.
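The rest of this example is not included in the preview. Purely as an illustration of how the Section 6 rules would apply, the sketch below assumes the container is filled with an independent jam weight; the jam figures (mean 500 g, standard deviation 30 g) are hypothetical and not taken from the original example.

```python
import math

container_mean, container_sd = 250, 20    # given in the notes
jam_mean, jam_sd = 500, 30                # hypothetical values, for illustration only

# For independent weights X (container) and Y (jam):
# E[X + Y] = E[X] + E[Y],  Var[X + Y] = Var[X] + Var[Y]
total_mean = container_mean + jam_mean                    # 750 g
total_sd = math.sqrt(container_sd ** 2 + jam_sd ** 2)     # sqrt(400 + 900) ≈ 36.06 g

print(total_mean, total_sd)
```

Note that the standard deviations do not add directly; only the variances do.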