# Maximum Likelihood and Examples: Lecture VII


Charles B. Moss

September 2, 2010

I. Maximum Likelihood

A. An alternative objective approach to estimating the parameters of a distribution function is by maximum likelihood.

1. The argument behind maximum likelihood is to choose those parameters that maximize the likelihood, or relative probability, of drawing a particular sample.

2. The likelihood function (or the probability of a particular sample) can then be written as

$$L = \prod_{i=1}^{N} f\left(x_i \mid \mu, \sigma^2\right) = \left(2\pi\sigma^2\right)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2\right] \tag{1}$$

3. Maximizing this function with respect to the parameters $\mu$ and $\sigma^2$ implies

$$\max_{\mu,\sigma^2} L = \left(2\pi\sigma^2\right)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2\right] \tag{2}$$

4. Taking the first-order conditions with respect to $\mu$ first:
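The likelihood in equation (1) can be sketched numerically. A minimal Python illustration, where the sample, the seed, and the candidate parameter values are hypothetical choices rather than part of the lecture:

```python
import math
import random

def normal_likelihood(xs, mu, sigma2):
    """Likelihood of an i.i.d. normal sample, as in equation (1):
    (2*pi*sigma2)^(-N/2) * exp(-sum((x_i - mu)^2) / (2*sigma2))."""
    n = len(xs)
    ss = sum((x - mu) ** 2 for x in xs)
    return (2 * math.pi * sigma2) ** (-n / 2) * math.exp(-ss / (2 * sigma2))

random.seed(42)                                   # arbitrary seed for reproducibility
xs = [random.gauss(5.0, 2.0) for _ in range(50)]  # hypothetical sample: mu = 5, sigma = 2

# The sample mean and the (1/N) sample variance yield at least as large a
# likelihood as a perturbed mean, consistent with the maximization argument.
mu_hat = sum(xs) / len(xs)
var_hat = sum((x - mu_hat) ** 2 for x in xs) / len(xs)
print(normal_likelihood(xs, mu_hat, var_hat)
      >= normal_likelihood(xs, mu_hat + 1.0, var_hat))  # True
```

For fixed $\sigma^2$, the exponent is maximized exactly when $\sum (x_i - \mu)^2$ is minimized, which happens at the sample mean, so the comparison above always holds.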

AEB 6182 Agricultural Risk Analysis and Decision Making, Professor Charles B. Moss, Lecture VII, Fall 2010

$$\frac{\partial L}{\partial \mu} = \left(2\pi\sigma^2\right)^{-\frac{N}{2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2\right] \times \left(-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left[-2\left(x_i - \mu\right)\right]\right) = 0$$

$$\Rightarrow \frac{1}{\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right) = 0 \Rightarrow \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i \tag{3}$$

5. In order to solve for the first-order conditions with respect to the variance, we treat $\sigma^2$ as a single variable

$$\frac{\partial L}{\partial \sigma^2} = -\frac{N}{2}\left(2\pi\right)^{-\frac{N}{2}}\left(\sigma^2\right)^{-\frac{N}{2}-1} \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2\right] + \left(2\pi\sigma^2\right)^{-\frac{N}{2}} \frac{\left(\sigma^2\right)^{-2}}{2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2 \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2\right] = 0$$

$$\Rightarrow -\frac{N}{2}\left(\sigma^2\right)^{-1} + \frac{\left(\sigma^2\right)^{-2}}{2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2 = 0 \Rightarrow N\sigma^2 = \sum_{i=1}^{N}\left(x_i - \mu\right)^2 \Rightarrow \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{\mu}\right)^2 \tag{4}$$

6. The derivation of the maximum likelihood estimates can be simplified by maximizing the logarithm of the likelihood function.
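The closed-form solutions to the first-order conditions, equations (3) and (4), can be checked numerically: at $\hat{\mu}$ and $\hat{\sigma}^2$, small perturbations should not increase the (log-)likelihood. A minimal sketch, with a hypothetical sample and arbitrary perturbation sizes:

```python
import math
import random

random.seed(0)                                     # arbitrary seed for reproducibility
xs = [random.gauss(1.0, 3.0) for _ in range(200)]  # hypothetical sample
n = len(xs)

# Closed-form solutions from the first-order conditions:
# mu_hat = sample mean, sigma2_hat = (1/N) * sum of squared deviations
mu_hat = sum(xs) / n
sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n

def log_likelihood(mu, sigma2):
    """Log of the normal likelihood (monotone in L, so it has the same maximizer)."""
    ss = sum((x - mu) ** 2 for x in xs)
    return -n / 2 * math.log(2 * math.pi * sigma2) - ss / (2 * sigma2)

# Perturbing either estimate in either direction should not raise the value.
base = log_likelihood(mu_hat, sigma2_hat)
for eps in (0.1, -0.1):
    assert log_likelihood(mu_hat + eps, sigma2_hat) <= base
    assert log_likelihood(mu_hat, sigma2_hat + eps) <= base
print("the closed-form estimates locally maximize the log-likelihood")
```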
$$\ln\left(L\right) = -\frac{N}{2}\ln\left(2\pi\right) - \frac{N}{2}\ln\left(\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \mu\right)^2$$

$$\frac{\partial \ln\left(L\right)}{\partial \mu} = -\frac{1}{2\sigma^2}\sum_{i=1}^{N}\left[-2\left(x_i - \mu\right)\right] = 0 \Rightarrow \sum_{i=1}^{N}\left(x_i - \mu\right) = 0 \Rightarrow \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

$$\frac{\partial \ln\left(L\right)}{\partial \sigma^2} = -\frac{N}{2}\frac{1}{\sigma^2} + \frac{1}{2\left(\sigma^2\right)^{2}}\sum_{i=1}^{N}\left(x_i - \mu\right)^2 = 0 \Rightarrow -N\sigma^2 + \sum_{i=1}^{N}\left(x_i - \mu\right)^2 = 0 \Rightarrow \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{\mu}\right)^2 \tag{5}$$

B. The method of moments estimator and maximum likelihood estimator of the parameters of the normal distribution are the same.
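The equivalence in point B is easy to confirm numerically. A minimal sketch (the sample below is hypothetical) compares the maximum likelihood formulas against method-of-moments estimates computed with Python's `statistics` module, whose `pvariance` divides by $N$ rather than $N-1$:

```python
import random
import statistics

random.seed(1)                                      # arbitrary seed for reproducibility
xs = [random.gauss(-2.0, 1.5) for _ in range(100)]  # hypothetical sample
n = len(xs)

# Maximum likelihood estimates from equations (3)-(5): note the 1/N divisor.
mu_mle = sum(xs) / n
var_mle = sum((x - mu_mle) ** 2 for x in xs) / n

# Method-of-moments estimates: match the first two population moments
# to their sample counterparts (pvariance is the population, 1/N, variance).
mu_mom = statistics.fmean(xs)
var_mom = statistics.pvariance(xs, mu=mu_mom)

print(abs(mu_mle - mu_mom) < 1e-9, abs(var_mle - var_mom) < 1e-9)  # True True
```

Note that the unbiased sample variance (`statistics.variance`, which divides by $N-1$) would differ from the MLE by the factor $N/(N-1)$.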


## This note was uploaded on 07/15/2011 for the course AEB 6182 taught by Professor Weldon during the Fall '08 term at University of Florida.

