Statistics 3858 : Construction of Two Common Types of Estimators

Estimators are statistics used to estimate parameters, or functions of parameters, of a statistical model. Here we consider finite-dimensional parameter models. Let Θ be the parameter space, and consider the typical setting of random data X_1, X_2, ..., X_n that are iid from a distribution belonging to the statistical model; that is, the X_i are iid with distribution f(·; θ), where θ ∈ Θ is unknown.

1 Method of Moments

Let μ_k = E(X^k). Notice this expectation depends on the parameter θ, and so we write μ_k(θ) = E_θ(X^k); the subscript θ denotes the dependence on the parameter θ.

Aside. To help clarify this notation and idea, recall that in the continuous r.v. case we define, when finite,

    E(X^k) = ∫_{-∞}^{∞} x^k f(x) dx .    (1)

In the case of a statistical model with parameter space Θ, the possible choices for the pdf f are f(·; θ). Thus in (1) we have a possibly different value of this integral for each possible choice of θ. That is,

    E_θ(X^k) = ∫_{-∞}^{∞} x^k f(x; θ) dx ,

which gives a mapping θ ↦ E_θ(X^k) ≡ μ_k(θ). In the case of a discrete r.v. we have the same property, with the integral replaced by a sum. End of Aside.

Each k therefore gives a mapping θ ↦ μ_k(θ), typically a many-to-one mapping. However, we can often find a mapping θ ↦ (μ_1(θ), μ_2(θ), ..., μ_K(θ)) that is one to one. Specifically this gives a function h : Θ → R^K which, when we restrict the range appropriately, is a one-to-one function; therefore, using the domain Θ and this suitable range, h has an inverse h^{-1}. Usually K = d, where d = dimension(Θ), but sometimes we need to modify this to obtain a one-to-one function. Specifically we have

    θ = h^{-1}(μ_1(θ), μ_2(θ), ..., μ_K(θ)) .

Examples (fill in details): Exponential, Bernoulli, Poisson, Normal, Gamma, N(0, σ²).

Bernoulli and Binomial. If X ∼ Bernoulli(θ), then E_θ(X) = θ.
Thus for a given value of μ_1 = E(X) we can determine θ = μ_1.

If X ∼ Binom(m, θ), then E_θ(X) = mθ. Thus for a given value of μ_1 = E(X) we can determine θ = μ_1 / m. Notice that if X_1, ..., X_m are iid Bernoulli(θ), then Y = X_1 + X_2 + ... + X_m ∼ Binom(m, θ). End of Example.

Exponential. If X ∼ Exponential(θ), then X has pdf

    f(x; θ) = θ e^{-θx} I(x ≥ 0) .

The parameter θ belongs to the parameter space Θ = (0, ∞). Here E_θ(X) = 1/θ and E_θ(X²) = 2/θ². Thus we have a mapping θ ↦ E_θ(X) = 1/θ. This produces a function h : Θ → R⁺ given by h(θ) = 1/θ, which has an inverse mapping h^{-1} : R⁺ → Θ given by h^{-1}(x) = 1/x. Thus for a given value μ = E(X) we can determine θ = h^{-1}(μ) = 1/μ.
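The Bernoulli and Binomial inversions above reduce to one-line computations once the sample mean is in hand. A minimal sketch in Python (the function names and the sample data are my own, for illustration only, not from the notes):

```python
def mom_bernoulli(xs):
    """Method-of-moments estimate for iid Bernoulli(theta) data:
    since E_theta(X) = theta, set theta_hat = sample mean."""
    return sum(xs) / len(xs)

def mom_binomial(xs, m):
    """Method-of-moments estimate for iid Binom(m, theta) data:
    since E_theta(X) = m * theta, set theta_hat = xbar / m."""
    xbar = sum(xs) / len(xs)
    return xbar / m

# Made-up observed samples, just to exercise the inversion formulas.
print(mom_bernoulli([1, 0, 1, 1, 0, 1, 0, 1]))  # xbar = 5/8 = 0.625
print(mom_binomial([3, 2, 4, 1], m=5))          # xbar = 2.5, so 2.5/5 = 0.5
```

Note the Binomial case simply rescales the sample mean by the known number of trials m, matching the identity Y = X_1 + ... + X_m above.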
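The Exponential inversion θ = h^{-1}(μ) = 1/μ can be sketched the same way; again the helper name and data are hypothetical, not from the notes:

```python
def mom_exponential(xs):
    """Method-of-moments estimate for iid Exponential(theta) data with
    pdf theta * exp(-theta * x): E_theta(X) = 1/theta, so
    theta_hat = 1 / xbar (requires a strictly positive sample mean)."""
    xbar = sum(xs) / len(xs)
    return 1.0 / xbar

print(mom_exponential([0.5, 1.5, 2.0, 4.0]))  # xbar = 2.0, so theta_hat = 0.5
```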
This note was uploaded on 01/17/2012 for the course AM 1234 taught by Professor Qqqq during the Spring '11 term at UWO.