Chapter 2 & 3 Summary

IMPORTANT IDEAS FROM CHAPTER 2

Expected Values

1. Let X be a random variable and g a function from the real line to the real line.
   X discrete: E(g(X)) = \sum_x g(x) p(x), where the sum is over the support of X (provided the sum converges absolutely).
   X continuous: E(g(X)) = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx (provided the integral converges absolutely).

2. Properties of expectation:
   a) If P(X = c) = 1, then E(X) = c.
   b) If g(x) \le h(x) for all x, then E(g(X)) \le E(h(X)).
   c) E(a g(X) + b h(X)) = a E(g(X)) + b E(h(X)).

3. The kth moment of a random variable X is E(X^k). The variance is defined as the second moment of X - \mu, where \mu is the mean of X; that is, Var(X) = E[(X - \mu)^2].
   FACT: If the kth moment of X is finite, then the moments of order k-1, k-2, ..., 1 are all finite.

4. Let X be a r.v. with c.d.f. F_X. The moment generating function (m.g.f.) of X is defined by M_X(t) = E(e^{tX}), provided that this expectation exists for all t in a neighborhood of zero.
   a) If X has m.g.f. M_X(t), then E(X^n) = M_X^{(n)}(0), the nth derivative of the m.g.f. evaluated at t = 0.
   b) If X and Y are two r.v.'s with m.g.f.'s M_X(t) and M_Y(t), respectively, and if M_X(t) = M_Y(t) for all t in a neighborhood of zero, then X and Y have the same distribution.
   c) (For later use.) Suppose X_i, i = 1, 2, ..., are r.v.'s with m.g.f.'s M_{X_i}(t) and c.d.f.'s F_{X_i}. Suppose \lim_{i \to \infty} M_{X_i}(t) = M(t) for all t in a neighborhood of 0. If M(t) is the m.g.f. of the r.v. X, then \lim_{i \to \infty} F_{X_i}(x) = F_X(x) at all x where F_X is continuous.

ST 561: Important topics from Chapter III

Discrete multivariate random variables

Let X_1, X_2, ..., X_k be discrete. The joint probability mass function of X_1, ..., X_k is given by
    f_X(x_1, ..., x_k) = P(X_1 = x_1, ..., X_k = x_k).
The marginal probability function of X_i is obtained by summing out all variables except x_i in the joint p.m.f.:
    f_i(x_i) = \sum f(x_1, ..., x_k),
where the sum is over x_1, ..., x_{i-1}, x_{i+1}, ..., x_k. The conditional p.m.f. of, say, X_2, ..., X_k given X_1 is
    f(x_2, ..., x_k | x_1) = f(x_1, x_2, ..., x_k) / f_1(x_1).
If h: R^k \to R^1, the expected value of h(X_1, ..., X_k) is given by
    E(h(X_1, X_2, ..., X_k)) = \sum h(x_1, ..., x_k) f_X(x_1, ..., x_k),
where the sum is over all values of x_1, ..., x_k. The conditional expectation of, say, h(X_2), given X_1 = x_1, is given by
    E(h(X_2) | X_1 = x_1) = \sum_{x_2} h(x_2) f(x_2 | x_1).
The random variable E[h(X_2) | X_1] is the random variable taking the value E(h(X_2) | X_1 = x_1) whenever X_1 = x_1. Useful facts here (see the numerical sketch below) are that
    E(E(h(X_2) | X_1)) = E(h(X_2)),
    Var(h(X_2)) = E(Var(h(X_2) | X_1)) + Var(E(h(X_2) | X_1)).
X_1, ..., X_k are mutually independent iff f_X(x_1, ..., x_k) = \prod_{i=1}^{k} f_i(x_i). When this is satisfied,
    E[h_1(X_1) \cdots h_k(X_k)] = \prod_{i=1}^{k} E[h_i(X_i)].
In particular, the moment generating function of a sum of independent random variables is the product of the individual m.g.f.'s.

One of the most important examples of a discrete multivariate random variable is the multinomial(n, p_1, ..., p_k) distribution, in which
    P(X_1 = x_1, ..., X_k = x_k) = \frac{n!}{x_1! \cdots x_k!} p_1^{x_1} \cdots p_k^{x_k},
where p_1 + ... + p_k = 1 and x_1 + ... + x_k = n. The marginal distribution of X_i here is binomial(n, p_i).
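The iterated-expectation and variance-decomposition identities above are easy to sanity-check with a small amount of code. The following Python sketch is not part of the original notes: the joint p.m.f. table for (X_1, X_2) is made up purely for illustration, and h is taken to be the identity, h(x_2) = x_2.

```python
# Minimal numerical check of  E(E(h(X2)|X1)) = E(h(X2))  and
# Var(h(X2)) = E(Var(h(X2)|X1)) + Var(E(h(X2)|X1)),
# using a small made-up joint p.m.f. and h(x2) = x2.
import numpy as np

x2_vals = np.array([0, 1, 2])                 # support of X2
f = np.array([[0.10, 0.20, 0.10],             # joint p.m.f. f(x1, x2):
              [0.15, 0.25, 0.20]])            # rows are x1 = 0, 1
assert np.isclose(f.sum(), 1.0)               # a valid p.m.f. sums to 1

f1 = f.sum(axis=1)                            # marginal p.m.f. of X1
f2 = f.sum(axis=0)                            # marginal p.m.f. of X2
f2_given_1 = f / f1[:, None]                  # f(x2 | x1) = f(x1, x2) / f1(x1)

cond_mean = f2_given_1 @ x2_vals              # E(X2 | X1 = x1) for each x1
cond_var = f2_given_1 @ x2_vals**2 - cond_mean**2   # Var(X2 | X1 = x1)

# E(E(X2 | X1)) should equal E(X2).
print(np.sum(cond_mean * f1), np.sum(x2_vals * f2))

# Var(X2) should equal E(Var(X2 | X1)) + Var(E(X2 | X1)).
var_x2 = np.sum(x2_vals**2 * f2) - np.sum(x2_vals * f2) ** 2
decomposition = (np.sum(cond_var * f1)
                 + np.sum(cond_mean**2 * f1) - np.sum(cond_mean * f1) ** 2)
print(var_x2, decomposition)
```

Both printed pairs agree (up to floating point), matching the two identities.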
Continuous random variables

If X_1, ..., X_k are continuous random variables, their joint distribution is described by the joint probability density function f(x_1, x_2, ..., x_k), which is non-negative and integrates to 1 over R^k. For a set A in R^k,
    P((X_1, ..., X_k) \in A) = \int_A f(x_1, ..., x_k) \, dx_1 \cdots dx_k,
where the integral is k-dimensional.

For continuous random variables, the formal definitions of the marginal and conditional p.d.f.'s, expectation, conditional expectation, etc. are exactly the same as in the discrete case, with integrals replacing sums. The definition of independence is exactly the same.

When k = 2, an important example is the bivariate normal distribution, with joint p.d.f.
    f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1-\mu_1}{\sigma_1}\right)\left(\frac{x_2-\mu_2}{\sigma_2}\right) + \left(\frac{x_2-\mu_2}{\sigma_2}\right)^2 \right] \right\},
where -\infty < \mu_1, \mu_2 < \infty, 0 < \sigma_1, 0 < \sigma_2, and -1 < \rho < 1. In this model, both the marginal and conditional distributions are normal, and X_1, X_2 are independent iff \rho = 0.

Covariance and correlation: The covariance of random variables X_1 and X_2 is given by
    Cov(X_1, X_2) = E\{(X_1 - \mu_1)(X_2 - \mu_2)\},
where \mu_1, \mu_2 are the means of X_1, X_2 respectively. Then
    Var(X_1 + X_2) = Var(X_1) + Var(X_2) + 2 Cov(X_1, X_2),
so the variances are additive if X_1, X_2 are independent. The correlation of X_1, X_2 is given by
    Corr(X_1, X_2) = Cov(X_1, X_2) / (\sigma_1 \sigma_2),
where \sigma_1, \sigma_2 are the standard deviations of X_1, X_2 respectively. The correlation \rho must satisfy -1 \le \rho \le 1, and if X_1, X_2 are independent, their correlation is zero. In the multinomial(n, p_1, ..., p_k) model, the correlation of X_i and X_j is given by
    -\left\{ \frac{p_i p_j}{(1 - p_i)(1 - p_j)} \right\}^{1/2}.
In the bivariate normal model, the parameter \rho is the correlation of X_1, X_2.

Inequalities:

Markov's inequality: For X \ge 0 and t > 0, P(X > t) \le E(X)/t.

A special case of this (apply Markov's inequality to (X - \mu)^2) is Chebyshev's inequality: P(|X - \mu| > t) \le Var(X)/t^2, where \mu is the expected value of X.

The Cauchy-Schwarz inequality states that if X_1, X_2 have finite variances, then E(|X_1| |X_2|) \le \sqrt{E(X_1^2) E(X_2^2)}.

Jensen's inequality states that if g is a convex function on R^1, then E(g(X)) \ge g(E(X)).
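Several of the facts in this section can be illustrated by simulation. The sketch below is not part of the original notes; the parameter values, seed, and sample size are arbitrary choices. It checks that the bivariate normal parameter \rho is the correlation of X_1 and X_2, that Var(X_1 + X_2) = Var(X_1) + Var(X_2) + 2 Cov(X_1, X_2), and that Chebyshev's bound holds for X_1.

```python
# Simulation sketch: correlation of a bivariate normal, variance of a sum,
# and Chebyshev's inequality.  Parameter choices below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 1.0, -2.0                          # means (arbitrary)
sigma1, sigma2, rho = 2.0, 0.5, 0.7           # std. deviations and correlation
cov = np.array([[sigma1**2,            rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2           ]])

x = rng.multivariate_normal([mu1, mu2], cov, size=200_000)
x1, x2 = x[:, 0], x[:, 1]

# 1) In the bivariate normal model, rho is Corr(X1, X2).
print(np.corrcoef(x1, x2)[0, 1], rho)

# 2) Var(X1 + X2) = Var(X1) + Var(X2) + 2 Cov(X1, X2).
print(np.var(x1 + x2),
      np.var(x1) + np.var(x2) + 2 * np.cov(x1, x2, ddof=0)[0, 1])

# 3) Chebyshev: P(|X1 - mu1| > t) <= Var(X1)/t^2 for any t > 0.
t = 3.0
print(np.mean(np.abs(x1 - mu1) > t), "<=", sigma1**2 / t**2)
```

With these parameters the empirical correlation and variance match their theoretical values closely, and the empirical tail probability in step 3 sits well below the Chebyshev bound of Var(X_1)/t^2 = 4/9.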