(a) Show that $\bar{X}^2$ is not an unbiased estimator for $\mu^2$. (b) For what value of $k$ is the estimator $\bar{X}^2 - kS^2$ unbiased for $\mu^2$?
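One way to approach this (a hint added here; the source leaves it as an exercise): using $E(\bar{X}) = \mu$, $V(\bar{X}) = \sigma^2/n$, and $E(S^2) = \sigma^2$,
\[
E(\bar{X}^2) = V(\bar{X}) + [E(\bar{X})]^2 = \frac{\sigma^2}{n} + \mu^2 \neq \mu^2,
\]
so $\bar{X}^2$ overestimates $\mu^2$ on average. Then
\[
E(\bar{X}^2 - kS^2) = \mu^2 + \frac{\sigma^2}{n} - k\sigma^2,
\]
which equals $\mu^2$ exactly when $k = 1/n$.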
6.2 Methods of Point Estimation

The Method of Moments

Let $X_1, \dots, X_n$ be a random sample from a pmf or pdf $f(x)$. For $k = 1, 2, \dots$, the $k$th population moment, or $k$th moment of the distribution, is $E(X^k)$. The $k$th sample moment is $\frac{1}{n}\sum_{i=1}^{n} X_i^k$.

Let $X_1, \dots, X_n$ be a random sample from a distribution with pmf or pdf $p(x; \theta_1, \dots, \theta_m)$, where $\theta_1, \dots, \theta_m$ are parameters whose values are unknown. Then the moment estimators $\hat{\theta}_1, \dots, \hat{\theta}_m$ are obtained by equating the first $m$ sample moments to the corresponding first $m$ population moments and solving for $\theta_1, \dots, \theta_m$.

The basic idea of this method is to equate certain sample characteristics, such as the mean, to the corresponding population expected values. Solving these equations for the unknown parameter values then yields the estimators.
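Written out as a system of equations (a restatement of the definition above, not additional material from the source), the moment estimators solve
\[
E(X) = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
E(X^2) = \frac{1}{n}\sum_{i=1}^{n} X_i^2, \qquad \dots, \qquad
E(X^m) = \frac{1}{n}\sum_{i=1}^{n} X_i^m,
\]
where each left-hand side is a function of $\theta_1, \dots, \theta_m$.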
Example. Suppose $X_1, \dots, X_n$ is a random sample from a Bernoulli distribution with parameter $p$; that is, each $X_i$ takes the value 1 with probability $p$ and the value 0 with probability $1 - p$. Find the moment estimator of $p$. Is the moment estimator unbiased?

Example. Suppose $X_1, \dots, X_n$ is a random sample from a normal distribution with parameters $\mu$ and $\sigma$. Find the moment estimators of $\mu$ and $\sigma^2$. Are they unbiased?
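Solution sketches (added here as hints; the source leaves both examples as exercises). For the Bernoulli sample, $E(X) = p$, so equating the first population and sample moments gives $\hat{p} = \bar{X}$; since $E(\bar{X}) = p$, this estimator is unbiased. For the normal sample, $E(X) = \mu$ and $E(X^2) = \sigma^2 + \mu^2$, so equating the first two moments gives
\[
\hat{\mu} = \bar{X}, \qquad
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar{X}^2
= \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2
= \frac{n-1}{n}\, S^2.
\]
Here $\hat{\mu}$ is unbiased, but $E(\hat{\sigma}^2) = \frac{n-1}{n}\sigma^2 \neq \sigma^2$, so the moment estimator of $\sigma^2$ is biased (it underestimates $\sigma^2$ on average).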
Maximum Likelihood Estimation

Let $X_1, \dots, X_n$ have joint pmf or pdf $f(x_1, \dots, x_n; \theta_1, \dots, \theta_m) = f(x_1, \dots, x_n; \theta)$, where the parameters $\theta = \{\theta_1, \dots, \theta_m\}$ have unknown values. When $x_1, \dots, x_n$ are the observed sample values and $L(\theta) = f(x_1, \dots, x_n; \theta)$ is regarded as a function of $\theta = \{\theta_1, \dots, \theta_m\}$, it is called the likelihood function. The maximum likelihood estimates (mle's) $\hat{\theta} = \{\hat{\theta}_1, \dots, \hat{\theta}_m\}$ are those values of the $\theta_i$'s that maximize the likelihood function. When the $X_i$'s are substituted in place of the $x_i$'s, the maximum likelihood estimators result.

The likelihood function tells us how likely the observed sample is as a function of the possible parameter values. Maximizing the likelihood gives the parameter values for which the observed sample is most likely to have been generated, that is, the parameter values that "agree most closely" with the observed data.
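As an illustration (a sketch added here, reusing the Bernoulli example from the previous page; it is not worked out in the source): for a Bernoulli($p$) sample with observed values $x_1, \dots, x_n$, the likelihood is
\[
L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{\sum x_i}(1-p)^{\,n - \sum x_i}.
\]
Maximizing the log-likelihood $\ln L(p) = \left(\sum x_i\right)\ln p + \left(n - \sum x_i\right)\ln(1-p)$ by setting
\[
\frac{d}{dp}\ln L(p) = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1-p} = 0
\]
gives $\hat{p} = \frac{1}{n}\sum x_i = \bar{x}$, so in this case the mle coincides with the moment estimator.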