ARE 210: The Cramér-Rao Lower Bound

Econometrics is the application of the tools of mathematical statistics to economic questions and evidence. There are many principles and methods that can be applied to estimation and inference in econometrics, but all involve optimization of some criterion subject to some constraints.

(1) BEST: minimize the variance of an estimator subject to the constraint that the bias is zero.

(2) BLUE: same as (1), plus a restriction to linear functions of the data (the "dependent" variable).

(3) MSE: minimize the sum of the squared bias and the variance of the estimator.

(4) Least Squares: minimize the sum of squared "errors" for the estimator.

(5) Least Absolute Deviations: minimize the mean absolute deviation of the estimator from the unknown parameter.

(6) Maximum Likelihood: maximize the joint density function for the data with respect to the parameter estimator.

(7) Method of Moments: match sample moments to population moments to obtain "robust," consistent parameter estimates.

In some cases, several of these procedures coincide. For example, in the classical linear regression model,

$$y_t = x_t' \beta + \varepsilon_t, \quad t = 1, \ldots, T, \qquad \varepsilon_t \sim \text{i.i.d. } N(0, \sigma^2),$$

the solutions for $\hat{\beta}$ obtained from (1), (2), (4), (6), and (7) are identical; a numerical check of this appears below. In other situations, the estimators differ and can differ substantially in a finite sample.
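The coincidence of (4) and (6) is easy to verify numerically. The sketch below is not part of the original note; the sample size, regressors, parameter values, and seed are illustrative assumptions. It simulates the classical linear regression model, computes the least-squares solution in closed form, and then maximizes the Gaussian log-likelihood by direct numerical optimization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, k = 200, 3                      # illustrative sample size and regressor count
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=T)   # eps ~ iid N(0, sigma^2)

# (4) Least squares: minimize the sum of squared errors (closed form).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# (6) Maximum likelihood: maximize the joint normal density numerically.
def neg_loglik(params):
    beta, log_sigma = params[:k], params[k]
    sigma2 = np.exp(2 * log_sigma)          # parameterize to keep sigma > 0
    resid = y - X @ beta
    return 0.5 * T * np.log(2 * np.pi * sigma2) + resid @ resid / (2 * sigma2)

beta_mle = minimize(neg_loglik, np.zeros(k + 1), method="BFGS").x[:k]

# Largest coordinate-wise gap between the two solutions: essentially zero.
print(np.max(np.abs(beta_ols - beta_mle)))
```

Analytically this is no surprise: concentrating $\sigma^2$ out of the normal log-likelihood leaves a monotone decreasing function of the sum of squared residuals, so the two criteria share the same minimizer for $\beta$.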
We treat the subject matter as the unified study of the statistical properties of "extremum estimators," exploiting both general and specific results from the mathematical theory of optimization and from mathematical statistics, as each case of interest warrants.

The Cramér-Rao Lower Bound Theorem

One of the more important results in classical statistics demonstrates that the principle of a BEST estimator has real content. This result is known as the Cramér-Rao Lower Bound Theorem, which shows that the variance of an unbiased estimator can never fall below a certain well-defined lower limit. Even for the simplest case of a single parameter $\theta$ and an iid vector of observations on a single variable, we need some machinery to develop the main ideas involved.

Let $x_1, x_2, \ldots, x_T$ be $T$ independent and identically distributed (iid) observations on a random variable (rv), $x$, with probability density function (pdf) $f(x, \theta)$, where $\theta$ is an unknown parameter to be estimated with the data $\{x_1, x_2, \ldots, x_T\}$.
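The preview ends before the note's own formal development, so the following display is only a sketch of the standard single-parameter machinery these conditions support: the likelihood of the iid sample, the log-likelihood, and the score.

$$L(\theta) = \prod_{t=1}^{T} f(x_t, \theta), \qquad \ln L(\theta) = \sum_{t=1}^{T} \ln f(x_t, \theta), \qquad s(\theta) = \frac{\partial \ln L(\theta)}{\partial \theta}.$$

Under regularity conditions of the kind stated below, the score has mean zero, and the Fisher information $I_T(\theta) = E[s(\theta)^2] = T \, E\big[\big(\partial \ln f(x, \theta)/\partial \theta\big)^2\big]$ delivers the familiar form of the bound, $\operatorname{var}(\hat{\theta}) \geq 1/I_T(\theta)$ for any unbiased estimator $\hat{\theta}$.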
We require the following regularity conditions.

P1. Define the support set for $x$ by $A = \{x : f(x, \theta) > 0\}$. Then $A$ does not depend upon $\theta$.

Examples: The normal density $f(x, \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(x - \theta)^2}$, $-\infty < x < \infty$, satisfies P1, but the uniform distribution, $f(x, \theta) = 1/\theta$ if $0 < x < \theta$, with $f(x, \theta) = 0$ otherwise, does not.
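To connect the leading example back to the bound, here is a minimal Monte Carlo sketch (illustrative, not from the original note; the parameter values and replication count are assumptions): for the normal with known $\sigma^2$, the information per observation is $1/\sigma^2$, and the sample mean, which is unbiased with variance $\sigma^2/T$, attains the bound $1/(T \cdot I(\theta))$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, T, reps = 2.0, 1.5, 50, 20_000   # illustrative settings

# Score of one observation: d/dtheta ln f(x, theta) = (x - theta) / sigma^2
samples = rng.normal(theta, sigma, size=(reps, T))
scores = (samples - theta) / sigma**2

# Fisher information per observation, estimated as E[score^2] -> 1 / sigma^2
info_hat = np.mean(scores**2)
crlb = 1.0 / (T * info_hat)          # bound for T iid observations

# Monte Carlo variance of the (unbiased) sample mean
var_mean = samples.mean(axis=1).var()

print(f"CRLB     ~ {crlb:.5f}")      # ~ sigma^2 / T = 0.045
print(f"var(mean) ~ {var_mean:.5f}") # the sample mean attains the bound
```

The uniform example fails P1 precisely because its support $(0, \theta)$ moves with $\theta$. There the usual derivation of the bound breaks down, and indeed the maximum likelihood estimator $\max_t x_t$ has variance of order $1/T^2$, smaller than the (inapplicable) bound would permit.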

