

# IEOR 165 Lecture 6, June 4, 2009


Lecture Notes IEOR 165 | George Shanthikumar | Tu W Th 2-4:30 pm | 3113 Etcheverry

Goes over a MOM & MLE example. Based on the graphical interpretation of the likelihood, we decide that the MLE of $a$ is the largest observation (for a Uniform$(0,a)$ sample, the likelihood $a^{-n}$ on $a \ge x_{\max}$ is maximized at $a = x_{\max}$):

$$\hat{a}_{MLE} = x_{\max}.$$

We then compute its expectation:

$$E[\hat{a}_{MLE}] = E[X_{\max}] = \frac{n}{n+1}\,a.$$

Since $E[\hat{a}_{MLE}] = \frac{n}{n+1}\,a \ne a$, this is a biased estimator. So we multiply by a correction factor:

$$\hat{a}_{MLE}^{c} = \frac{n+1}{n}\,\hat{a}_{MLE} = \frac{n+1}{n}\,x_{\max}$$

is an unbiased estimator of $a$. To decide which estimator is better, we compare them and see which variance is smaller.

## Estimation & Hypothesis Testing of a Bernoulli Population

Motivating example: effectiveness of an ad campaign (find a statistically significant improvement). Let $X$ be a $\{0,1\}$ random variable (a Bernoulli r.v.) with probability of success $p = P\{X = 1\}$. Suppose we have a sample $x_1, \ldots, x_n$ and we want to estimate $p$.

Estimate $p$ by the method of moments (use the average): since $E[X] = p$,

$$\hat{p}_{MOM} = \bar{x} = \frac{1}{n}\sum_{k=1}^{n} x_k.$$

The likelihood function is

$$L(x; p) = p^{\sum_{k=1}^{n} x_k}\,(1-p)^{\,n - \sum_{k=1}^{n} x_k}.$$

Understanding that $f(x) = 1-p$ when $x = 0$ and $f(x) = p$ when $x = 1$, we can write

$$f(x) = p \cdot I\{x=1\} + (1-p) \cdot I\{x=0\} = xp + (1-x)(1-p).$$

So if we have the sample vector $(1, 0, 0, 1, 1, 0, 0, 0, 1)$, the likelihood is the product $p\,(1-p)(1-p)\,p\,p\,(1-p)(1-p)(1-p)\,p$.

Taking the log of the likelihood function:

$$\log L(x; p) = \left(\sum_{k=1}^{n} x_k\right) \log p + \left(n - \sum_{k=1}^{n} x_k\right) \log(1-p), \qquad 0 \le p \le 1.$$

Setting the derivative to zero,

$$\frac{d}{dp} \log L(x; p) = \frac{1}{p}\sum_{k=1}^{n} x_k - \frac{1}{1-p}\left(n - \sum_{k=1}^{n} x_k\right) = 0,$$

which gives $\hat{p}_{MLE} = \frac{1}{n}\sum_{k=1}^{n} x_k = \bar{x}$, the same as the MOM estimate.
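The bias of the Uniform$(0,a)$ MLE and its correction can be checked numerically. The sketch below is my own simulation (not from the notes), assuming $X_1,\ldots,X_n$ i.i.d. Uniform$(0,a)$ with $a = 2$ and $n = 5$:

```python
import random

# Simulation sketch (my own check, not from the notes): for X_1..X_n
# i.i.d. Uniform(0, a), the MLE a_hat = max(X_i) has mean n/(n+1) * a,
# so the corrected estimator (n+1)/n * max(X_i) is unbiased.
random.seed(0)
a, n, trials = 2.0, 5, 200_000

mle_sum = 0.0
for _ in range(trials):
    sample = [random.uniform(0.0, a) for _ in range(n)]
    mle_sum += max(sample)

mle_mean = mle_sum / trials            # Monte Carlo estimate of E[a_hat_MLE]
corrected = (n + 1) / n * mle_mean     # mean of the bias-corrected estimator
print(mle_mean)   # close to n/(n+1) * a = 5/6 * 2 ≈ 1.667
print(corrected)  # close to a = 2.0
```

With 200,000 trials the Monte Carlo error is small, so the simulated mean of the raw MLE sits visibly below $a$ while the corrected estimator centers on $a$.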
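The Bernoulli calculation above can also be verified directly. A minimal sketch (helper names like `estimate_p` are my own, not from the notes), using the notes' example vector $(1,0,0,1,1,0,0,0,1)$, whose likelihood is $p^4(1-p)^5$:

```python
import math

def estimate_p(xs):
    """MOM/MLE estimate of p: the fraction of successes in the sample."""
    return sum(xs) / len(xs)

def log_likelihood(p, xs):
    """log L(x; p) = (sum x_k) log p + (n - sum x_k) log(1 - p)."""
    k = sum(xs)
    n = len(xs)
    return k * math.log(p) + (n - k) * math.log(1 - p)

# The notes' example vector: likelihood p^4 * (1-p)^5.
sample = [1, 0, 0, 1, 1, 0, 0, 0, 1]
p_hat = estimate_p(sample)
print(p_hat)  # 4/9 ≈ 0.4444

# A grid search confirms the log-likelihood peaks at the sample average.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda p: log_likelihood(p, sample))
print(best)   # the grid point nearest 4/9
```

The grid maximizer landing on the grid point nearest $\bar{x}$ illustrates that setting the score $\frac{d}{dp}\log L$ to zero really does pick out the sample average.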

