Maximum Likelihood Estimation Eric Zivot November 16, 2009
The Likelihood Function

Let $X_1,\ldots,X_n$ be an iid sample with probability density function (pdf) $f(x_i;\theta)$, where $\theta$ is a $(k \times 1)$ vector of parameters that characterize $f(x_i;\theta)$.

Example: Let $X_i \sim N(\mu,\sigma^2)$. Then

$$f(x_i;\theta) = (2\pi\sigma^2)^{-1/2}\exp\left(-\frac{1}{2\sigma^2}(x_i-\mu)^2\right), \quad \theta = (\mu,\sigma^2)'$$
The joint density of the sample is, by independence, equal to the product of the marginal densities:

$$f(x_1,\ldots,x_n;\theta) = f(x_1;\theta)\cdots f(x_n;\theta) = \prod_{i=1}^{n} f(x_i;\theta).$$

The joint density is an $n$-dimensional function of the data $x_1,\ldots,x_n$ given the parameter vector $\theta$, and it satisfies

$$f(x_1,\ldots,x_n;\theta) \geq 0, \qquad \int \cdots \int f(x_1,\ldots,x_n;\theta)\,dx_1 \cdots dx_n = 1.$$
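As a numerical illustration (a minimal Python sketch, not part of the original slides), the integrate-to-one property can be checked for a single marginal density; by independence, the $n$-fold integral of the joint density is just the product of such marginal integrals:

```python
import math

def normal_pdf(x, mu=0.0, sigma2=1.0):
    """Marginal density f(x; theta) for N(mu, sigma2)."""
    return (2 * math.pi * sigma2) ** -0.5 * math.exp(-(x - mu) ** 2 / (2 * sigma2))

# Riemann-sum check that one marginal integrates to (approximately) 1
# over a range wide enough that the truncated tails are negligible.
h = 0.001
total = sum(normal_pdf(-8 + k * h) * h for k in range(int(16 / h)))
print(round(total, 6))  # 1.0
```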
The likelihood function is defined as the joint density treated as a function of the parameters $\theta$:

$$L(\theta \mid x_1,\ldots,x_n) = f(x_1,\ldots,x_n;\theta) = \prod_{i=1}^{n} f(x_i;\theta).$$

Notice that the likelihood function is a $k$-dimensional function of $\theta$ given the data $x_1,\ldots,x_n$. It is important to keep in mind that the likelihood function, being a function of $\theta$ and not the data, is not a proper pdf. It is always positive, but

$$\int \cdots \int L(\theta \mid x_1,\ldots,x_n)\,d\theta_1 \cdots d\theta_k \neq 1.$$

To simplify notation, let the vector $x = (x_1,\ldots,x_n)$ denote the observed sample. Then the joint pdf and likelihood function may be expressed as $f(x;\theta)$ and $L(\theta \mid x)$, respectively.
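To make the definition concrete, here is a minimal Python sketch (not part of the original slides; the sample values are hypothetical) that evaluates $L(\theta \mid x)$ for a normal model as the product of marginal densities:

```python
import math

def normal_pdf(x, mu, sigma2):
    """Marginal density f(x; theta) for N(mu, sigma2)."""
    return (2 * math.pi * sigma2) ** -0.5 * math.exp(-(x - mu) ** 2 / (2 * sigma2))

def likelihood(theta, xs):
    """L(theta | x): product of f(x_i; theta) over the sample; theta = (mu, sigma2)."""
    mu, sigma2 = theta
    L = 1.0
    for x in xs:
        L *= normal_pdf(x, mu, sigma2)
    return L

xs = [0.5, -0.2, 1.1, 0.3]  # hypothetical observed sample
# The data are held fixed; theta varies. The likelihood is larger near
# the sample mean (about 0.425) than at a distant parameter value.
print(likelihood((0.4, 1.0), xs) > likelihood((5.0, 1.0), xs))  # True
```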
Example 1: Bernoulli Sampling

Let $X_i \sim \text{Bernoulli}(\theta)$. That is,

$$X_i = 1 \text{ with probability } \theta, \qquad X_i = 0 \text{ with probability } 1 - \theta.$$

The pdf for $X_i$ is

$$f(x_i;\theta) = \theta^{x_i}(1-\theta)^{1-x_i}, \quad x_i = 0, 1.$$

Let $X_1,\ldots,X_n$ be an iid sample with $X_i \sim \text{Bernoulli}(\theta)$. The joint density / likelihood function is given by

$$f(x;\theta) = L(\theta \mid x) = \prod_{i=1}^{n} \theta^{x_i}(1-\theta)^{1-x_i} = \theta^{\sum_{i=1}^{n} x_i}\,(1-\theta)^{\,n - \sum_{i=1}^{n} x_i}.$$
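The closed form above depends on the data only through $\sum x_i$ and $n$. A small Python sketch (illustrative only; the sample is hypothetical) evaluates this likelihood and locates its maximum on a grid, which lands at the sample proportion:

```python
def bern_likelihood(theta, xs):
    """L(theta | x) = theta^(sum x_i) * (1 - theta)^(n - sum x_i)."""
    s, n = sum(xs), len(xs)
    return theta ** s * (1 - theta) ** (n - s)

xs = [1, 0, 1, 1, 0]  # hypothetical Bernoulli sample: sum = 3, n = 5
# Grid search over (0, 1): the likelihood peaks at the sample proportion 3/5.
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=lambda t: bern_likelihood(t, xs))
print(best)  # 0.6
```

The grid search anticipates the MLE result: maximizing $\theta^{3}(1-\theta)^{2}$ analytically also gives $\hat\theta = 3/5$.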
Example 2: Normal Sampling

Let $X_1,\ldots,X_n$ be an iid sample with $X_i \sim N(\mu,\sigma^2)$. The pdf for $X_i$ is

$$f(x_i;\theta) = (2\pi\sigma^2)^{-1/2}\exp\left(-\frac{1}{2\sigma^2}(x_i-\mu)^2\right), \quad \theta = (\mu,\sigma^2)',$$
$$-\infty < \mu < \infty, \quad \sigma^2 > 0, \quad -\infty < x_i < \infty.$$

The likelihood function is given by

$$L(\theta \mid x) = \prod_{i=1}^{n} (2\pi\sigma^2)^{-1/2}\exp\left(-\frac{1}{2\sigma^2}(x_i-\mu)^2\right) = (2\pi\sigma^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\right).$$
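A short Python sketch (illustrative, with hypothetical data) verifies numerically that the product form and the collapsed $(2\pi\sigma^2)^{-n/2}$ form of the normal likelihood agree:

```python
import math

def normal_lik_product(mu, sigma2, xs):
    """Likelihood as the product of the marginal N(mu, sigma2) densities."""
    L = 1.0
    for x in xs:
        L *= (2 * math.pi * sigma2) ** -0.5 * math.exp(-(x - mu) ** 2 / (2 * sigma2))
    return L

def normal_lik_compact(mu, sigma2, xs):
    """Likelihood as (2 pi sigma2)^(-n/2) exp(-(1/(2 sigma2)) sum (x_i - mu)^2)."""
    n = len(xs)
    ss = sum((x - mu) ** 2 for x in xs)
    return (2 * math.pi * sigma2) ** (-n / 2) * math.exp(-ss / (2 * sigma2))

xs = [1.2, 0.7, 1.9, 1.1]  # hypothetical sample
a = normal_lik_product(1.0, 0.5, xs)
b = normal_lik_compact(1.0, 0.5, xs)
print(abs(a - b) <= 1e-12 * b)  # True
```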
Example 3: Linear Regression Model with Normal Errors

Consider the linear regression

$$y_i = \underset{(1 \times k)}{x_i'}\,\underset{(k \times 1)}{\beta} + \varepsilon_i, \quad i = 1,\ldots,n, \qquad \varepsilon_i \mid x_i \sim \text{iid } N(0,\sigma^2).$$

The pdf of $\varepsilon_i \mid x_i$ is

$$f(\varepsilon_i \mid x_i;\sigma^2) = (2\pi\sigma^2)^{-1/2}\exp\left(-\frac{1}{2\sigma^2}\varepsilon_i^2\right).$$

The Jacobian of the transformation from $\varepsilon_i$ to $y_i$ is one, so the pdf of $y_i \mid x_i$ is normal with mean $x_i'\beta$ and variance $\sigma^2$:

$$f(y_i \mid x_i;\theta) = (2\pi\sigma^2)^{-1/2}\exp\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta)^2\right), \quad \theta = (\beta',\sigma^2)'.$$
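Summing the log of the conditional density over observations gives the regression log-likelihood. The following Python sketch (illustrative only; the data and parameter values are hypothetical) evaluates it and confirms that the data-generating coefficients beat an arbitrary alternative:

```python
import math

def reg_loglik(beta, sigma2, ys, xs):
    """Log-likelihood for y_i = x_i' beta + eps_i with eps_i | x_i ~ N(0, sigma2).
    xs: list of regressor tuples (each of length k); beta: list of k coefficients."""
    n = len(ys)
    ll = -0.5 * n * math.log(2 * math.pi * sigma2)
    for y, x in zip(ys, xs):
        resid = y - sum(b * xi for b, xi in zip(beta, x))
        ll -= resid ** 2 / (2 * sigma2)
    return ll

# Hypothetical noiseless data generated by y = 1 + 2*x (intercept in column 1),
# so beta = (1, 2) gives zero residuals and the largest possible log-likelihood.
xs = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
ys = [1.0, 3.0, 5.0]
print(reg_loglik([1.0, 2.0], 1.0, ys, xs) > reg_loglik([0.0, 0.0], 1.0, ys, xs))  # True
```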
Given an iid sample of $n$ observations, $y$ …