More on Estimation.

In the previous chapter we looked at the properties of estimators and the criteria we could use to choose between types of estimators. Here we examine more closely three very popular basic estimation techniques. Two of them focus on the estimation of parameters of a pre-specified probability density function (the Maximum Likelihood and Method of Moments techniques); the third focuses on the estimation of the shape of an unspecified probability density function (kernel estimation). In all cases we are confronted with a random sample X_i, i = 1, 2, …, n. In the first two cases we know the form of the p.d.f. f(x, θ) but not the value of the parameter θ (often there will be more than one parameter; the techniques are readily extended to deal with this situation). In the third case we do not know the form of f at all; we are simply trying to calculate the value of f(x) for a given x. The third case relates solely to continuous random variables, while the first two relate to both discrete and continuous random variables; in our discussion we refer only to the continuous case, though we will give examples of discrete random variable problems.

Maximum Likelihood Estimation.

The intuition behind this technique is to choose a value for the unknown θ that makes the chance of having obtained the sample we did obtain as large as possible. The rationale is that any sample we get is more likely to be a high-probability sample than a low-probability sample. Imagine we wish to estimate the average height of males and we randomly sample 4 males off the street. We would be surprised if all 4 were above 7 feet, and similarly surprised if they were all below 4 feet, because these are unlikely samples. We would be much less surprised if their heights were between 5 and 6 feet, because that would constitute a more likely sample.
Thus it makes sense to choose the value of θ which maximizes the probability of having got the sample that we got. Given f(x, θ) and independently drawn X_i's, the joint density of the sample, which is referred to as L, the likelihood, is given by:

L(θ) = f(x_1, θ) · f(x_2, θ) · … · f(x_n, θ) = ∏_{i=1}^{n} f(x_i, θ)
The estimation technique then simply amounts to deriving the formula for θ, in terms of the X_i's, which maximizes this function with respect to θ. For technical reasons (i.e. the algebra is usually easier!) we usually maximize the log of the likelihood. When there is more than one parameter, the first-order conditions are simply solved simultaneously (see the examples below).

Method of Moments Estimation.

The motivation here is quite different from, and somewhat more straightforward than, that for the maximum likelihood method; it relies on common sense. We have seen in an earlier chapter that, given f(x, θ), we can obtain a formula for the theoretical mean or expected value of X, and we can similarly obtain formulae for the theoretical variance and any other moments of X. For example, if X is a continuous random variable we have:

E(X) = ∫ x f(x, θ) dx   and   V(X) = ∫ (x − E(X))² f(x, θ) dx

where the integrals are taken over the support of X. The sample mean and sample variance are estimates of E(X) and V(X) respectively, so all that is required is to equate the sample moments to these theoretical expressions and solve the resulting equations for θ, with one moment equation per unknown parameter.
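To make the maximum-likelihood recipe concrete, here is a minimal numerical sketch (my own example, not from the notes). It assumes an exponential density f(x, θ) = θe^(−θx) and a made-up sample, writes down the log-likelihood, and confirms that a crude grid search agrees with the closed-form maximizer obtained from the first-order condition:

```python
import math

# A small illustrative sample (made-up values, not from the notes),
# assumed drawn from an exponential density f(x, theta) = theta * exp(-theta * x).
xs = [0.8, 1.3, 0.4, 2.1, 0.9]

def log_likelihood(theta, xs):
    # log L(theta) = sum_i ln f(x_i, theta) = n*ln(theta) - theta * sum(x_i)
    return len(xs) * math.log(theta) - theta * sum(xs)

# Setting d(log L)/d(theta) = n/theta - sum(x_i) = 0 gives the closed-form MLE:
theta_closed = len(xs) / sum(xs)

# Numerical check: a crude grid search over candidate theta values
# should pick the grid point nearest the closed-form maximizer.
candidates = [i / 1000 for i in range(1, 5001)]
theta_grid = max(candidates, key=lambda t: log_likelihood(t, xs))

print(theta_closed, theta_grid)  # both approximately 0.909
```

Note that maximizing the log of the likelihood, as the notes suggest, turns the product over observations into a sum, which is what makes the first-order condition easy to solve here.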
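The method-of-moments recipe can be sketched in the same way. This example (again my own, with made-up data) assumes a Gamma(k, s) density, for which E(X) = k·s and V(X) = k·s², equates the sample mean and variance to these expressions, and solves for the two parameters:

```python
# Method of moments sketch for a Gamma(k, s) density, where
# E(X) = k*s and V(X) = k*s**2 (k = shape, s = scale).
# The sample values are made up for illustration.
xs = [2.0, 3.5, 1.2, 4.1, 2.7, 3.0]
n = len(xs)

m1 = sum(xs) / n                            # sample mean, estimates E(X)
m2 = sum((x - m1) ** 2 for x in xs) / n     # sample variance, estimates V(X)

# Equate sample moments to theoretical moments and solve:
#   k*s = m1  and  k*s**2 = m2   =>   s = m2/m1,  k = m1**2/m2
s_hat = m2 / m1
k_hat = m1 ** 2 / m2

print(k_hat, s_hat)
```

With two unknown parameters we need two moment equations, just as the multi-parameter maximum-likelihood case needs its first-order conditions solved simultaneously.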

This note was uploaded on 03/23/2011 for the course ECONOMICS 209 taught by Professor Kambourov during the Spring '11 term at University of Toronto.

