Lecture 9: Maximum Likelihood
Fredrik Ronquist
September 28, 2005


1 Introduction

Now that we have explored a number of evolutionary models, ranging from simple to complex, let us examine how we can use them in statistical inference. The standard approach is to use maximum likelihood, which is covered in this lecture.

Assume that we have some data $D$ and a model $M$ of the process that generated the data. The model has some parameters $\theta$, the specific values of which we do not know but wish to estimate. If the model is properly constructed, we can calculate the probability of it generating the observed data given a specific set of parameter values, $P(D \mid \theta, M)$. Often the conditioning on the model is suppressed in the notation, in which case the probability is written simply as $P(D \mid \theta)$. This probability is referred to as the likelihood of the parameter values. In maximum likelihood inference, we estimate the unknowns in $\theta$ by finding the values with the maximum likelihood or, more precisely, the highest probability of generating the observed data.

2 One-parameter models

The maximization is typically straightforward when there is only one parameter in the model. Say, for instance, that we are interested in estimating the probability of obtaining heads when tossing a coin. A reasonable model is that the tosses are identical and independent, each with heads probability $p$. Hence, the probability of tails is $1 - p$, and the probability of a particular outcome follows a binomial distribution. For instance, the probability of the sequence HHTTHTHHTTT would be

$$L = P(D \mid p) = p \cdot p \cdot (1-p)(1-p) \cdot p \cdot (1-p) \cdot p \cdot p \cdot (1-p)(1-p)(1-p) = p^5 (1-p)^6$$

As you can see, it is sufficient to know the number of heads and tails to calculate this probability. If we graph the likelihood, we can simply find its maximum, which is at $p = 5/11 = 0.454545\ldots$ (Fig. 1); the estimate of $p$ is often denoted $\hat{p}$. We can also calculate $\hat{p}$ analytically by taking the derivative of $L$ with respect to $p$:

$$\frac{dL}{dp} = 5p^4(1-p)^6 - 6p^5(1-p)^5$$

and finding where this derivative is zero. That is easily done by factoring out $p^4$ and $(1-p)^5$ and concentrating on the remaining expression $5(1-p) - 6p$, which gives us the only relevant root, $p = 5/11$.

Figure 1: Likelihood curve for the coin-tossing example. Note that the likelihood does not integrate to 1; it is not a probability distribution. The maximum likelihood estimate of $p$ is found by locating the peak of the curve.

It is often easier to maximize the logarithm of the likelihood than the likelihood itself. In this case, we would get

$$\ln L = 5 \ln p + 6 \ln(1-p)$$

and the derivative

$$\frac{d(\ln L)}{dp} = \frac{5}{p} - \frac{6}{1-p}$$

Setting the derivative to zero, we get

$$\frac{5}{p} - \frac{6}{1-p} = 0$$
$$5(1-p) - 6p = 0$$
$$5 - 11p = 0$$
$$p = \frac{5}{11}$$

which, not surprisingly, is the same estimate of $\hat{p}$ as before.
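To make the maximization concrete, here is a minimal numerical sketch in Python with NumPy (not part of the original lecture; the function name, grid resolution, and heads/tails counts are illustrative choices). It evaluates the log-likelihood $\ln L = h \ln p + t \ln(1-p)$ on a grid over $(0, 1)$ and compares the grid maximum with the analytic estimate $\hat{p} = 5/11$:

    import numpy as np

    def log_likelihood(p, heads=5, tails=6):
        """Log-likelihood of a fixed coin-toss sequence: h*ln(p) + t*ln(1-p)."""
        return heads * np.log(p) + tails * np.log(1.0 - p)

    # Evaluate on a fine grid over the open interval (0, 1),
    # avoiding the endpoints where the log-likelihood is -infinity.
    grid = np.linspace(0.001, 0.999, 9999)
    p_hat_grid = grid[np.argmax(log_likelihood(grid))]

    # Analytic maximum from setting d(ln L)/dp = 5/p - 6/(1-p) to zero.
    p_hat_analytic = 5 / 11

    print(f"grid estimate:     {p_hat_grid:.4f}")
    print(f"analytic estimate: {p_hat_analytic:.4f}")  # 0.4545

Working on the log scale is not just algebraically convenient: for larger data sets the likelihood is a product of many probabilities and underflows floating-point arithmetic, whereas the log-likelihood remains a well-behaved sum.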