# Lecture 21

## 21.1 Monotone likelihood ratio

In the last lecture we gave the definition of the UMP test and mentioned that under certain conditions the UMP test exists. In this section we describe a property called monotone likelihood ratio, which will be used in the next section to find the UMP test for one-sided hypotheses.

Suppose that the parameter set $\Theta \subseteq \mathbb{R}$ is a subset of the real line and that the probability distributions $\mathbb{P}_\theta$ have p.d.f. or p.f. $f(x \mid \theta)$. Given a sample $X = (X_1, \ldots, X_n)$, the likelihood function (joint p.d.f.) is given by

$$
f(X \mid \theta) = \prod_{i=1}^{n} f(X_i \mid \theta).
$$

**Definition.** The family of distributions $\{\mathbb{P}_\theta,\ \theta \in \Theta\}$ has Monotone Likelihood Ratio (MLR) if we can represent the likelihood ratio as

$$
\frac{f(X \mid \theta_1)}{f(X \mid \theta_2)} = V(T(X), \theta_1, \theta_2)
$$

and, for $\theta_1 > \theta_2$, the function $V(T, \theta_1, \theta_2)$ is strictly increasing in $T$.

**Example.** Consider the family of Bernoulli distributions $\{B(p) : p \in [0, 1]\}$, in which case the p.f. is given by

$$
f(x \mid p) = p^x (1 - p)^{1 - x},
$$

and for $X = (X_1, \ldots, X_n)$ the likelihood function is

$$
f(X \mid p) = p^{\sum X_i} (1 - p)^{n - \sum X_i}.
$$

We can write the likelihood ratio as follows:

$$
\frac{f(X \mid p_1)}{f(X \mid p_2)}
= \frac{p_1^{\sum X_i} (1 - p_1)^{n - \sum X_i}}{p_2^{\sum X_i} (1 - p_2)^{n - \sum X_i}}
= \left( \frac{1 - p_1}{1 - p_2} \right)^{n} \left( \frac{p_1 (1 - p_2)}{p_2 (1 - p_1)} \right)^{\sum X_i}.
$$
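The Bernoulli derivation above can be checked numerically. The sketch below is my own illustration, not part of the original notes (the function names `likelihood_ratio` and `v_form` are made up): it compares the direct likelihood ratio with the factored form $V(T, p_1, p_2)$, and confirms that for $p_1 > p_2$ the ratio is strictly increasing in the sufficient statistic $T = \sum X_i$.

```python
def likelihood_ratio(t, n, p1, p2):
    """Direct ratio f(X|p1)/f(X|p2) for a Bernoulli(p) sample of size n,
    expressed through the sufficient statistic T = sum(X_i) = t."""
    return (p1 ** t * (1 - p1) ** (n - t)) / (p2 ** t * (1 - p2) ** (n - t))

def v_form(t, n, p1, p2):
    """Factored form V(T, p1, p2) from the derivation above."""
    return ((1 - p1) / (1 - p2)) ** n * (p1 * (1 - p2) / (p2 * (1 - p1))) ** t

n, p1, p2 = 10, 0.7, 0.4  # p1 > p2, so the MLR property should hold

# The two expressions agree for every value of T = 0, 1, ..., n.
for t in range(n + 1):
    assert abs(likelihood_ratio(t, n, p1, p2) - v_form(t, n, p1, p2)) < 1e-12

# The base p1*(1-p2) / (p2*(1-p1)) exceeds 1 when p1 > p2,
# so the ratio is strictly increasing in T.
ratios = [likelihood_ratio(t, n, p1, p2) for t in range(n + 1)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```

Strict monotonicity here comes from the base $p_1(1-p_2)/(p_2(1-p_1)) > 1$ whenever $p_1 > p_2$: the factored form is a constant times this base raised to the power $T$.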

*This note was uploaded on 10/11/2009 for the course STATISTICS 18.443, taught by Professor Dmitry Panchenko during the Spring '09 term at MIT.*
