§ 3 Likelihood and Sufficiency

§ 3.1 Introduction

3.1.1 Sample data: $X = (X_1, \ldots, X_n)$.

Postulated parametric family of probability functions (i.e. either mass functions or pdf's): $\{\, p(\cdot \mid \theta) : \theta \in \Theta \,\}$.

e.g.:

• $X_1, \ldots, X_n$ are iid Poisson($\theta$):
$$p(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} P(X_i = x_i \mid \theta) = \prod_{i=1}^{n} \frac{e^{-\theta}\, \theta^{x_i}}{x_i!}, \qquad \theta \in (0, \infty).$$

• $X_1, \ldots, X_n$ are iid $N(\mu, \sigma^2)$:
$$p(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(x_i - \mu)^2}{2\sigma^2} \right\}, \qquad \theta = (\mu, \sigma) \in (-\infty, \infty) \times (0, \infty).$$

If we observe $X = x = (x_1, \ldots, x_n)$, what can we learn from $x$ about the true value of $\theta$? We wish to answer this question by means of statistical inference.

3.1.2 The raw sample data $x$ contain information relevant to our inference about $\theta$. We want to extract that information completely yet "economically", which requires a meaningful way to present the data.

Answer: the likelihood function, a mathematical device that presents all the information in $x$ relevant to $\theta$. It measures the plausibility of each $\theta \in \Theta$ being the true $\theta$ that gives rise to $x$.

3.1.3 Suppose the random vector $X$ has a probability function belonging to the parametric family $\{\, p(\cdot \mid \theta) : \theta \in \Theta \,\}$.

Definition. Given that $X$ is observed (realised) to be $x$, the likelihood function of $\theta$ is defined to be
$$\ell_x(\theta) = p(x \mid \theta),$$
i.e. the probability function of $X$, evaluated at $X = x$, but considered as a function of $\theta$.

The loglikelihood function is
$$S_x(\theta) = \ln \ell_x(\theta),$$
which gives equivalent information but is more convenient to work with than $\ell_x(\theta)$.

3.1.4 Common special case, $X$ iid: $X$ is a random sample, i.e. $X = (X_1, \ldots, X_n)$ are iid, with each $X_i$ having probability function $f(x \mid \theta)$. Then the (joint) probability function of $X$ is
$$p(x \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta),$$
and so
$$\ell_x(\theta) = p(x \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta),$$
where $x = (x_1, \ldots, x_n)$ is the realisation of $X$.

3.1.5 When examining a likelihood function $\ell_x(\theta)$, we are mainly interested in the relative likelihoods of different values of $\theta$; the actual magnitude of the likelihood function is unimportant. Without loss of information we may therefore ignore positive multiplicative factors in the likelihood function that do not depend on $\theta$. For example, if $(X_1, \ldots, X_n)$ are iid Poisson($\theta$), we may take the likelihood function to be
$$\ell_x(\theta) = e^{-n\theta}\, \theta^{\sum_i x_i},$$
omitting the factor $\bigl( \prod_i x_i! \bigr)^{-1}$.

3.1.6 It is usually not meaningful to compare $\ell_x(\theta)$ with $\ell_{x'}(\theta)$ across different samples $x$ and $x'$ at the same $\theta$. More meaningful is to compare $\ell_x(\theta)$ across different values of $\theta$ under the same observed sample $x$.
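As a concrete illustration of 3.1.3 and 3.1.4, the Poisson loglikelihood can be evaluated directly on a sample. The following is a minimal Python sketch, not part of the original notes: the function name and the data values are invented for illustration.

```python
from math import lgamma, log

def poisson_loglik(theta, xs):
    """Loglikelihood S_x(theta) for iid Poisson(theta) observations xs.

    ln l_x(theta) = -n*theta + (sum_i x_i)*ln(theta) - sum_i ln(x_i!),
    where ln(x!) is computed as lgamma(x + 1).
    """
    n = len(xs)
    return -n * theta + sum(xs) * log(theta) - sum(lgamma(x + 1) for x in xs)

xs = [2, 0, 3, 1, 4]           # hypothetical observed sample x
for theta in (1.0, 2.0, 3.0):  # candidate values of theta
    print(theta, poisson_loglik(theta, xs))
# Of these three candidates, theta = 2.0 (the sample mean) attains the
# largest loglikelihood, i.e. it is the most plausible given x.
```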
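Similarly, the claims in 3.1.5 and 3.1.6 can be checked numerically: omitting the factor $\bigl( \prod_i x_i! \bigr)^{-1}$ shifts the loglikelihood by an amount that does not depend on $\theta$, so comparisons across values of $\theta$ for the same sample $x$ are unaffected. A short sketch, reusing the hypothetical data above:

```python
from math import lgamma, log

xs = [2, 0, 3, 1, 4]  # same hypothetical sample as above

def full_loglik(theta):
    # ln of the full Poisson likelihood, including the constant term
    return -len(xs) * theta + sum(xs) * log(theta) - sum(lgamma(x + 1) for x in xs)

def reduced_loglik(theta):
    # ln of the reduced likelihood e^{-n*theta} * theta^{sum_i x_i} from 3.1.5
    return -len(xs) * theta + sum(xs) * log(theta)

# The two versions differ by a constant, so differences across theta agree:
print(full_loglik(3.0) - full_loglik(2.0))        # approx -0.9453
print(reduced_loglik(3.0) - reduced_loglik(2.0))  # approx -0.9453
```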