Chapter 6. Point Estimation STAT 155

Example. Suppose $X_1, \dots, X_n$ is a random sample from a Bernoulli distribution with parameter $p$. That is, each $X_i$ takes the value 1 with probability $p$ and the value 0 with probability $1 - p$. Find the moment estimator of $p$. Is the moment estimator unbiased?

Example. Suppose $X_1, \dots, X_n$ is a random sample from a normal distribution with parameters $\mu$ and $\sigma$. Find the moment estimators of $\mu$ and $\sigma^2$. Are they unbiased?
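The Bernoulli case can be checked numerically. Equating the first population moment $E[X] = p$ to the first sample moment gives $\hat{p} = \bar{X}$. A minimal sketch (the value of $p$, the sample size, and the seed are illustrative, not from the text):

```python
import random

random.seed(0)

# Method of moments for Bernoulli(p): the first population moment is
# E[X] = p, so setting it equal to the first sample moment (the sample
# mean) gives p_hat = x_bar.
def moment_estimator_bernoulli(sample):
    return sum(sample) / len(sample)

p_true = 0.3  # illustrative value, not from the text
sample = [1 if random.random() < p_true else 0 for _ in range(10_000)]
p_hat = moment_estimator_bernoulli(sample)
print(p_hat)  # should land close to p_true = 0.3
```

Since $E[\hat{p}] = E[\bar{X}] = p$, the moment estimator is unbiased, which is why the simulated estimate clusters around the true value.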
Maximum Likelihood Estimation

Let $X_1, \dots, X_n$ have joint pmf or pdf
$$f(x_1, \dots, x_n; \theta_1, \dots, \theta_m) = f(x_1, \dots, x_n; \Theta),$$
where the parameters $\Theta = \{\theta_1, \dots, \theta_m\}$ have unknown values. When $x_1, \dots, x_n$ are the observed sample values and $L = f(x_1, \dots, x_n; \Theta)$ is regarded as a function of $\Theta = \{\theta_1, \dots, \theta_m\}$, it is called the likelihood function. The maximum likelihood estimates (mle's) $\hat{\Theta} = \{\hat{\theta}_1, \dots, \hat{\theta}_m\}$ are those values of the $\theta_i$'s that maximize the likelihood function. When the $X_i$'s are substituted in place of the $x_i$'s, the maximum likelihood estimators result.

The likelihood function tells us how likely the observed sample is as a function of the possible parameter values. Maximizing the likelihood gives the parameter values for which the observed sample is most likely to have been generated, that is, the parameter values that "agree most closely" with the observed data.
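The idea that the likelihood ranks parameter values by how well they agree with the data can be made concrete. A small sketch for a Bernoulli sample (the data are illustrative, not from the text): the likelihood is largest near the sample proportion $3/5 = 0.6$ and smaller at parameter values that disagree with the data.

```python
import math

# Likelihood of an observed Bernoulli sample as a function of p:
# L(p) = prod over i of p^{x_i} * (1 - p)^{1 - x_i}
xs = [1, 0, 1, 1, 0]  # illustrative observed sample, not from the text

def likelihood(p, xs):
    return math.prod(p**x * (1 - p)**(1 - x) for x in xs)

# Compare candidate parameter values; 0.6 matches the sample proportion.
for p in (0.2, 0.6, 0.9):
    print(p, likelihood(p, xs))
```

Evaluating at a few candidate values shows $L(0.6) > L(0.2)$ and $L(0.6) > L(0.9)$, which is exactly the "agrees most closely" criterion stated above.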
Notes on finding the mle

1. If $X_1, \dots, X_n$ is a random sample, the $X_i$'s are independent and identically distributed (iid). Because of independence, the likelihood function $L$ is a product of the individual pmf's or pdf's.

2. Finding $\Theta$ to maximize $\ln(L)$ is equivalent to maximizing $L$ itself. Taking the logarithm changes a product into a sum, which is easier to work with:
$$\ln(xy) = \ln(x) + \ln(y), \qquad \ln(x/y) = \ln(x) - \ln(y), \qquad \ln(x^y) = y \ln(x).$$

3. To find the values of the $\theta_i$'s that maximize $\ln(L)$, take the partial derivative of $\ln(L)$ with respect to each $\theta_i$, equate it to zero, and solve the resulting equations for the $\theta_i$'s. The solution is $\hat{\Theta} = \{\hat{\theta}_1, \dots, \hat{\theta}_m\}$, the mle.
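Step 3 can also be carried out numerically when the calculus is inconvenient. A minimal sketch (the data are illustrative, and a grid search stands in for solving $d\ln(L)/dp = 0$ exactly): for an iid Bernoulli sample, note 1 makes $\ln(L)$ a sum over observations, and its maximizer over the grid lands at the sample mean $5/8 = 0.625$.

```python
import math

# Log-likelihood of an iid Bernoulli sample (a sum, per notes 1 and 2):
# ln L(p) = sum over i of [ x_i ln(p) + (1 - x_i) ln(1 - p) ]
xs = [1, 1, 0, 1, 0, 0, 1, 1]  # illustrative data, not from the text

def log_likelihood(p, xs):
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

# Grid search over (0, 1) in place of solving the score equation exactly.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, xs))
print(p_hat)  # the sample mean 5/8 = 0.625
```

The grid maximizer agrees with the calculus answer $\hat{p} = \bar{x}$, illustrating that maximizing $\ln(L)$ and maximizing $L$ pick out the same parameter value.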
Example. Suppose $X_1, \dots, X_n$ is a random sample from a Bernoulli distribution with parameter $p$. That is, each $X_i$ takes the value 1 with probability $p$ and the value 0 with probability $1 - p$. Find the mle of $p$. Is the mle unbiased?

Example. Suppose $X_1, \dots, X_n$ is a random sample from a normal distribution with parameters $\mu$ and $\sigma$. Find the mle's of $\mu$ and $\sigma^2$. Are they unbiased?
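For the normal example, the standard mle's are $\hat{\mu} = \bar{X}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_i (X_i - \bar{X})^2$; the divisor $n$ (rather than $n - 1$) is what makes $\hat{\sigma}^2$ biased. A small sketch checking the algebraic relationship to the unbiased sample variance (the sample itself is illustrative, not from the text):

```python
import random
import statistics

random.seed(1)

# MLEs for a normal sample: mu_hat is the sample mean; sigma2_hat divides
# the sum of squared deviations by n (not n - 1), so it is biased.
def normal_mles(xs):
    n = len(xs)
    mu_hat = sum(xs) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, sigma2_hat

xs = [random.gauss(10.0, 2.0) for _ in range(5)]  # illustrative sample
mu_hat, sigma2_hat = normal_mles(xs)

# The mle of sigma^2 equals (n-1)/n times the unbiased sample variance,
# so E[sigma2_hat] = (n-1)/n * sigma^2 < sigma^2.
n = len(xs)
print(mu_hat, sigma2_hat, (n - 1) / n * statistics.variance(xs))
```

The printed values confirm $\hat{\sigma}^2 = \frac{n-1}{n} s^2$, where $s^2$ is the unbiased ($n - 1$ divisor) sample variance, so the bias shrinks as $n$ grows.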
Exercise 6.22

Let $X$ denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of $X$ is
$$f(x; \theta) = \begin{cases} (\theta + 1) x^\theta & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$
where $\theta > -1$. A random sample of ten students yields data $x_1 = .92$, $x_2 = .79$, $x_3 = .90$, $x_4 = .65$, $x_5 = .86$, $x_6 = .47$, $x_7 = .73$, $x_8 = .97$, $x_9 = .94$, $x_{10} = .77$.
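The preview cuts off before the question itself; assuming the task is to find the mle of $\theta$ (the standard one for this setup), the notes above give a closed form: $\ln L(\theta) = n \ln(\theta + 1) + \theta \sum_i \ln x_i$, so setting $d\ln L/d\theta = n/(\theta + 1) + \sum_i \ln x_i = 0$ yields $\hat{\theta} = -n / \sum_i \ln x_i - 1$. A sketch with the exercise's data:

```python
import math

# Mle for f(x; theta) = (theta + 1) * x^theta on [0, 1]:
# ln L(theta) = n*ln(theta + 1) + theta * sum(ln x_i),
# and solving d(ln L)/dtheta = 0 gives theta_hat = -n / sum(ln x_i) - 1.
xs = [.92, .79, .90, .65, .86, .47, .73, .97, .94, .77]
s = sum(math.log(x) for x in xs)
theta_hat = -len(xs) / s - 1
print(round(theta_hat, 2))  # approximately 3.12
```

Note $\sum_i \ln x_i < 0$ because every $x_i < 1$, so $\hat{\theta} > -1$ automatically, consistent with the parameter restriction in the pdf.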