Chapter 4

§ 4 Estimation

§ 4.1 Estimator

4.1.1 x: observed dataset; θ: unknown parameter. Point estimation of θ: selection of a "reliable" value to estimate θ based on x.

4.1.2 Definition. Estimator of θ: a statistic T = T(X) for approximating θ. Estimate of θ: T(x), the realisation of T(X) based on the observed data X = x. T(X) is random (varies from sample to sample); T(x) is a constant fixed by the sample in hand.

4.1.3 Some criteria for qualifying "good" estimators: bias, variance, standard deviation, (root) mean squared error, ...

§ 4.2 Bias and mean squared error

4.2.1 In the frequentist paradigm, the quality of an estimator T(X) is assessed by examining its sampling distribution, which tells us how T(X) would vary from sample to sample.

4.2.2 Definition. T: estimator of θ. Bias of T = E[T] − θ. If T has zero bias, it is unbiased.

4.2.3 Bias of T measures its accuracy. Var(T) (or s.d.(T) = √Var(T)) measures its precision. A good estimator should be both accurate and precise.

4.2.4 Definition. The mean squared error (MSE) of an estimator T for θ is MSE(T) = E[(T − θ)²]. Note that MSE(T) = Var(T) + {bias(T)}². MSE provides a measure of the quality of an estimator by taking into account both accuracy (bias) and precision (variance).
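The decomposition MSE(T) = Var(T) + {bias(T)}² in 4.2.4 can be checked numerically. The following is a minimal simulation sketch; the estimator T = 0.9·X̄ (a deliberately biased, shrunken sample mean of normal data) and all numbers below are illustrative choices, not taken from the notes.

```python
import random

# Simulate the sampling distribution of a (deliberately biased) estimator
# T = 0.9 * sample mean for N(theta, 1) data, and verify numerically that
# MSE(T) = Var(T) + bias(T)^2.
random.seed(0)
theta = 2.0          # true parameter (illustrative choice)
n, reps = 30, 20000  # sample size, number of simulated samples

estimates = []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    estimates.append(0.9 * sum(sample) / n)  # shrunken mean: biased for theta

mean_T = sum(estimates) / reps
bias = mean_T - theta                                      # ~ 0.9*theta - theta = -0.2
var = sum((t - mean_T) ** 2 for t in estimates) / reps     # sampling variance of T
mse = sum((t - theta) ** 2 for t in estimates) / reps      # E[(T - theta)^2]

# The identity holds exactly for the empirical distribution (up to rounding):
assert abs(mse - (var + bias ** 2)) < 1e-8
```

Because the identity is algebraic, it holds exactly for the simulated draws themselves, not just in the limit; the simulation merely makes the accuracy/precision trade-off concrete.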
Small MSE ⇒ sampling distribution of T concentrated near θ. Figure § 4.2.1 illustrates the sampling distributions of 4 estimators with different bias and standard deviation properties. The smallest MSE is given by the one with small bias and small s.d.

[Figure § 4.2.1: Sampling distributions of 4 estimators of θ = 0 with different qualities — pdf curves for high bias/high s.d., low bias/high s.d., high bias/low s.d. and low bias/low s.d., with the true parameter value marked.]

4.2.5 To retain the same unit as the observations, we may consider the root mean squared error (RMSE), defined to be √MSE.

4.2.6 If an estimator T is unbiased, then MSE(T) = Var(T) and RMSE(T) = s.d.(T).

Example § 4.2.1 Consider X₁, ..., Xₙ iid from Bernoulli(p), and X̄ is used to estimate p. Then bias(X̄) = E[X̄] − p = p − p = 0, so X̄ is an unbiased estimator of p. Thus, MSE(X̄) = Var(X̄) = p(1 − p)/n and RMSE(X̄) = s.d.(X̄) = √(p(1 − p)/n).
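Example § 4.2.1 can be illustrated by simulation: the Monte Carlo estimate of E[(X̄ − p)²] should match p(1 − p)/n and shrink as n grows. A minimal sketch, with p and the sample sizes chosen purely for illustration:

```python
import random

# Monte Carlo check of Example 4.2.1: MSE(Xbar) = p(1-p)/n for
# Bernoulli(p) samples, decreasing in n.
random.seed(1)
p, reps = 0.3, 10000  # illustrative success probability and replication count

def simulated_mse(n):
    """Monte Carlo estimate of E[(Xbar - p)^2] for samples of size n."""
    total = 0.0
    for _ in range(reps):
        xbar = sum(random.random() < p for _ in range(n)) / n
        total += (xbar - p) ** 2
    return total / reps

results = {n: simulated_mse(n) for n in (10, 40, 160)}
for n, m in results.items():
    assert abs(m - p * (1 - p) / n) < 0.003  # close to the theoretical MSE
```

Quadrupling n cuts the theoretical MSE by a factor of 4, which the simulated values reproduce.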
Clearly, we need a bigger sample size n to achieve a smaller MSE for X̄.

4.2.7 MSE may depend on unknown parameters, and hence may not be computable. In this case we may want to "estimate" it. An estimated s.d. is known as a standard error (s.e.).

Example § 4.2.1 (cont'd): Note that s.d.(X̄) = √(p(1 − p)/n) and MSE(X̄) = p(1 − p)/n. Substituting X̄ for p in the above formulae, we estimate the s.d. and MSE of X̄ by, respectively, s.e.(X̄) = √(X̄(1 − X̄)/n) and the estimated MSE(X̄) = X̄(1 − X̄)/n.

4.2.8 Example § 4.2.2
Sample X = (X₁, ..., Xₙ): i.i.d. with mean μₓ and variance σₓ² > 0.
Sample Y = (Y₁, ..., Yₘ): i.i.d. with mean μₓ + δσₓ (δ ≠ 0) and variance ε²σₓ².
Assume that m < n and that X and Y are independent of each other.
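The plug-in construction in 4.2.7 can be sketched in code: the analyst never sees p, only the sample, so s.e.(X̄) = √(X̄(1 − X̄)/n) replaces the uncomputable s.d.(X̄). The true p below is known only to the simulation, and all numeric choices are illustrative.

```python
import math
import random

# Plug-in standard error from 4.2.7: s.d.(Xbar) = sqrt(p(1-p)/n) depends
# on the unknown p, so substitute Xbar for p to get a computable s.e.
random.seed(2)
p_true, n = 0.4, 400  # p_true is hidden from the "analyst" in practice
sample = [random.random() < p_true for _ in range(n)]

xbar = sum(sample) / n
se = math.sqrt(xbar * (1 - xbar) / n)            # computable from data alone
true_sd = math.sqrt(p_true * (1 - p_true) / n)   # unknown in practice

assert abs(se - true_sd) < 0.005  # plug-in s.e. tracks the true s.d.
```

Since p(1 − p) varies slowly near its maximum, the plug-in s.e. is typically close to the true s.d. even for moderate n.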