# Chapter 3 Unbiased Estimation


## 3.1 Criteria of estimation

Suppose that we have a random sample $X = \{X_1, \dots, X_n\}$ from a family $\{F_\theta : \theta \in \Theta\}$ of probability distributions ($\theta$ may be a vector). The purpose is to find an estimator $T = T(X_1, \dots, X_n)$ for $\theta$, or more generally for $g(\theta)$, that is "optimal" in a certain sense.

One criterion for choosing an "optimal" estimator is the **mean squared error (MSE)**, defined by
$$
\mathrm{MSE}[T] = E[T - g(\theta)]^2 = \mathrm{Var}(T) + \mathrm{bias}^2[T],
$$
where $\mathrm{bias}[T] = ET - g(\theta)$. An estimator $T$ is said to be optimal in the MSE sense if it has the smallest MSE for all $\theta \in \Theta$. However, there is a difficulty with this criterion, as the next example illustrates.

**Example 3.1** Let $X_1, \dots, X_n$ be iid with $EX_i = \mu$ and $\mathrm{Var}(X_i) = 1$. To estimate $\mu$, consider three estimators:
$$
T_1 = X_1, \qquad T_2 = 3, \qquad T_3 = \bar{X}.
$$
It is easy to find
$$
\mathrm{MSE}(T_1) = 1, \qquad \mathrm{MSE}(T_2) = (3 - \mu)^2, \qquad \mathrm{MSE}(T_3) = \frac{1}{n}.
$$
Plotting the three MSEs against $\mu$, we see that there is no *uniformly* best estimator under the MSE criterion.

The problem arises because we considered the class of all possible estimators (with second moments), which is too big. In the example above, $T_2 = 3$ is a good choice only if we know a priori that the true $\mu$ is 3 (or near 3). Since we do not assume any such prior knowledge, it seems reasonable to drop $T_2 = 3$ from consideration. That is, we should restrict our search to a smaller class of estimators. The question is: which class should we choose?

In this chapter, we restrict attention to the class of all unbiased estimators (with second moments). An estimator $T(X)$ is called **unbiased** for $g(\theta)$ if $ET(X) = g(\theta)$, in which case $\mathrm{MSE}(T) = \mathrm{Var}(T)$. An unbiased estimator $T(X)$ is called a **uniformly minimum variance unbiased estimator (UMVUE)** for $g(\theta)$ if it has the smallest variance among all unbiased estimators, for every $\theta$.
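Example 3.1 can be checked with a quick Monte Carlo sketch. The example fixes only the mean $\mu$ and unit variance, so the choice of a normal distribution, the sample size $n = 25$, and the replication count below are illustrative assumptions, not part of the example:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def mse(estimator, mu, n, reps=50_000):
    """Monte Carlo MSE of an estimator of mu from an iid N(mu, 1) sample."""
    err2 = 0.0
    for _ in range(reps):
        sample = [random.gauss(mu, 1.0) for _ in range(n)]
        err2 += (estimator(sample) - mu) ** 2
    return err2 / reps

mu, n = 0.0, 25
mse_T1 = mse(lambda s: s[0], mu, n)             # T1 = X_1:   theory says MSE = 1
mse_T2 = mse(lambda s: 3.0, mu, n)              # T2 = 3:     theory says MSE = (3 - mu)^2 = 9
mse_T3 = mse(lambda s: sum(s) / len(s), mu, n)  # T3 = X-bar: theory says MSE = 1/n = 0.04
print(mse_T1, mse_T2, mse_T3)
```

Re-running with `mu = 3.0` makes $T_2$ the apparent winner (its MSE drops to 0), which is precisely the non-uniformity the example is pointing out.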
Let $U\{g(\theta)\}$ denote the class of all unbiased estimators of $g(\theta)$. Some questions immediately arise.

1. Is $U\{g(\theta)\}$ empty? If the answer is no, there exists a statistic $\eta(X)$ such that $E\eta(X) = g(\theta)$, and $g(\theta)$ is said to be **U-estimable**. If the answer is yes, $g(\theta)$ is said to be **not U-estimable**.
2. If $U\{g(\theta)\}$ is non-empty, does it contain a UMVUE?
3. If a UMVUE does exist in a non-empty $U\{g(\theta)\}$, how can we find it? Is it unique?

We shall try to answer these questions in this chapter.

## 3.2 Is $U\{g(\theta)\}$ empty?

**Example 3.2** Let $X \sim \mathrm{Bin}(n, p)$. Then $U\{g(p)\}$ is nonempty (i.e., $g(p)$ is U-estimable) if and only if $g(p)$ is a polynomial in $p$ of degree $m$ (an integer), where $0 \le m \le n$. ...
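The "if" direction of Example 3.2 can be verified numerically for a concrete case. A minimal sketch, with the illustrative choices $n = 10$, $p = 0.3$, and target $g(p) = p^2$: since $E[X(X-1)] = n(n-1)p^2$ for $X \sim \mathrm{Bin}(n, p)$, the statistic $T(X) = X(X-1)/(n(n-1))$ is unbiased for $p^2$, and summing $T$ against the binomial pmf recovers $p^2$ exactly.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def expected_value(T, n, p):
    """Exact expectation E[T(X)] under X ~ Bin(n, p), summed over the pmf."""
    return sum(T(k) * binom_pmf(k, n, p) for k in range(n + 1))

n, p = 10, 0.3
T = lambda x: x * (x - 1) / (n * (n - 1))  # candidate unbiased estimator of p^2
est = expected_value(T, n, p)
print(est)  # matches p**2 = 0.09 up to float rounding
```

The same pattern (falling factorials of $X$ divided by falling factorials of $n$) handles each monomial $p^m$ with $m \le n$, which is why polynomials of degree at most $n$ are U-estimable.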

## This note was uploaded on 02/01/2012 for the course MATH 5010 taught by Professor D during the Spring '11 term at HKU.
