Estimation Definitions/Notes

Definition 1 A point estimate of a parameter $\theta$ is a single number that can be regarded as the most plausible value of $\theta$. A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data. The selected statistic is called the point estimator of $\theta$.

Definition 2 A point estimator $\hat{\theta}$ is said to be an unbiased estimator of $\theta$ if $E(\hat{\theta}) = \theta$ for every possible value of $\theta$. If $\hat{\theta}$ is not unbiased, the difference $E(\hat{\theta}) - \theta$ is called the bias of $\hat{\theta}$.

Proposition 1 When $X$ is a binomial random variable with parameters $n$ and $p$, the sample proportion $\hat{p} = X/n$ is an unbiased estimator of $p$.

Principle of Unbiased Estimation When choosing among several different estimators of $\theta$, select one that is unbiased.

Proposition 2 Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^2$. Then the estimator $\hat{\sigma}^2 = S^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})^2$ is an unbiased estimator of $\sigma^2$.

Proposition 3 If $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with mean $\mu$, then $\bar{X}$ is an unbiased estimator of $\mu$. If in addition the distribution is continuous and symmetric, then the sample median $\tilde{X}$ and any trimmed mean are also unbiased estimators of $\mu$.

Principle of Minimum Variance Unbiased Estimation Among all estimators of $\theta$ that are unbiased, choose the one that has minimum variance. The resulting $\hat{\theta}$ is called the minimum variance unbiased estimator (MVUE) of $\theta$.

Theorem 1 Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal distribution with parameters $\mu$ and $\sigma$. Then the estimator $\hat{\mu} = \bar{X}$ is the MVUE for $\mu$.

Definition 3 The standard error of an estimator $\hat{\theta}$ is its standard deviation $\sigma_{\hat{\theta}} = \sqrt{V(\hat{\theta})}$. If the standard error itself involves unknown parameters whose values can be estimated, substituting these estimates into $\sigma_{\hat{\theta}}$ yields the estimated standard error (estimated standard deviation) of the estimator. The estimated standard error can be denoted either by $\hat{\sigma}_{\hat{\theta}}$ or by $s_{\hat{\theta}}$.

Definition 4 Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with pmf or pdf $f(x)$. For $k = 1, 2, 3, \ldots$, the $k$th population moment, or $k$th moment of the distribution $f(x)$, is $E(X^k)$. The $k$th sample moment is $\frac{1}{n}\sum_{i=1}^{n} X_i^k$.
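Propositions 1 and 2 can be checked empirically with a small Monte Carlo simulation: averaging an unbiased estimator over many independent samples should land very close to the true parameter. The sketch below (not part of the notes; the parameter values, sample sizes, and replication count are arbitrary illustrative choices) also shows that dividing by $n$ instead of $n-1$ gives a downward-biased variance estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 200_000  # number of simulated samples

# Proposition 1: p_hat = X/n is unbiased for p (illustrative n, p).
n, p = 20, 0.3
X = rng.binomial(n, p, size=reps)
p_hat_mean = (X / n).mean()  # should be very close to p = 0.3

# Proposition 2: S^2 with divisor n-1 is unbiased for sigma^2,
# while the divisor-n version is biased downward by the factor (m-1)/m.
mu, sigma = 5.0, 2.0  # illustrative normal parameters, sigma^2 = 4
m = 10                # size of each simulated sample
samples = rng.normal(mu, sigma, size=(reps, m))
s2_unbiased = samples.var(axis=1, ddof=1).mean()  # close to 4.0
s2_biased = samples.var(axis=1, ddof=0).mean()    # close to 4.0 * 9/10 = 3.6

print(p_hat_mean, s2_unbiased, s2_biased)
```

The `ddof=1` argument makes numpy use the $n-1$ divisor from Proposition 2; `ddof=0` uses the biased $n$ divisor.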
Definition 5 Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with pmf or pdf $f(x; \theta_1, \ldots, \theta_m)$, where $\theta_1, \ldots, \theta_m$ are parameters whose values are unknown. Then the moment estimators $\hat{\theta}_1, \ldots, \hat{\theta}_m$ are obtained by equating the first $m$ sample moments to the corresponding first $m$ population moments and solving for $\theta_1, \ldots, \theta_m$.

Definition 6 Let $X_1, X_2, \ldots, X_n$ have joint pmf or pdf $f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m)$, where the parameters $\theta_1, \ldots, \theta_m$ have unknown values. When $x_1, \ldots, x_n$ are the observed sample values and $f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m)$ is regarded as a function of $\theta_1, \ldots, \theta_m$, it is called the likelihood function. The maximum likelihood estimates (mle's) $\hat{\theta}_1, \ldots, \hat{\theta}_m$ are those values of the $\theta_i$'s that maximize the likelihood function, so that $f(x_1, \ldots, x_n; \hat{\theta}_1, \ldots, \hat{\theta}_m) \geq f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_m)$ for all $\theta_1, \ldots, \theta_m$. When the $X_i$'s are substituted in place of the $x_i$'s, the maximum likelihood estimators result.

The Invariance Principle Let $\hat{\theta}_1, \ldots, \hat{\theta}_m$ be the mle's of the parameters $\theta_1, \ldots, \theta_m$. Then the mle of any function $h(\theta_1, \ldots, \theta_m)$ of these parameters is the function $h(\hat{\theta}_1, \ldots, \hat{\theta}_m)$ of the mle's.

Proposition 4 Under very general conditions on the joint distribution of the sample, when the sample size $n$ is large, the maximum likelihood estimator of any parameter $\theta$ is approximately unbiased [$E(\hat{\theta}) \approx \theta$] and has variance that is nearly as small as can be achieved by any estimator. Stated another way, the mle $\hat{\theta}$ is approximately the MVUE of $\theta$.

Definition 7 The plausibility of other values $\theta_0$ of $\theta$ can be compared to $\hat{\theta}$ in terms of the relative likelihood $R(\theta_0) = L(\theta_0)/L(\hat{\theta})$.

Definition 8 The ratio $\Lambda(x_1, x_2, \ldots, x_n) = L(\theta_0)/L(\hat{\theta})$, where $\Lambda \in (0, 1]$, is a likelihood ratio statistic. Its logarithm, $\ln \Lambda = \ell(\theta_0) - \ell(\hat{\theta})$, is called the log-likelihood ratio statistic. In general, for large $n$, $-2\left[\ell(\theta_0) - \ell(\hat{\theta})\right] \,\dot{\sim}\, \chi^2_1$.
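Definitions 5-8 can be made concrete with a single-parameter model. The sketch below (my own illustration, not from the notes) uses an exponential($\lambda$) sample, for which the first population moment is $E(X) = 1/\lambda$, so the moment estimator is $\tilde{\lambda} = 1/\bar{x}$; for this particular model the mle $\hat{\lambda}$ happens to coincide with it. The true rate and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 0.5  # illustrative true rate
x = rng.exponential(scale=1 / lam_true, size=50)

# Definition 5 / 6: equate first sample moment x-bar to E(X) = 1/lambda
# and solve; for the exponential model this is also the mle.
lam_hat = 1.0 / x.mean()

def log_lik(lam):
    # l(lambda) = n ln(lambda) - lambda * sum(x_i) for an exponential sample
    return len(x) * np.log(lam) - lam * x.sum()

# Definition 7: relative likelihood R(theta_0) = L(theta_0)/L(theta_hat),
# computed on the log scale for numerical stability; always in (0, 1].
def rel_lik(lam0):
    return float(np.exp(log_lik(lam0) - log_lik(lam_hat)))

# Invariance Principle: the mle of the mean h(lambda) = 1/lambda
# is simply 1/lam_hat, i.e. the sample mean.
mean_mle = 1.0 / lam_hat

# Definition 8: -2 ln(Lambda) is approximately chi-square(1) for large n,
# and is always nonnegative since lam_hat maximizes the likelihood.
lrt = -2 * (log_lik(lam_true) - log_lik(lam_hat))
```

Values of $\lambda_0$ with relative likelihood near 1 are nearly as plausible as $\hat{\lambda}$ itself; values with $R(\lambda_0)$ near 0 are implausible given the data.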
This note was uploaded on 01/08/2012 for the course EXST 4050 taught by Professor Staff during the Fall '10 term at LSU.