
# Lecture 10: Maximum likelihood II (Peter Beerli, October 3, 2005)


## 1 Conditional likelihoods revisited

Some of this is already covered in the algorithm in the last chapter, but more elaboration on the practical procedures, and on why we use them, should give an even better understanding. This section follows the Felsenstein book, pages 251-255.

We express the likelihood of the tree in Figure 1 as

$$\mathrm{Prob}(D^{(i)} \mid T) = \sum_z \sum_y \sum_w \sum_x \mathrm{Prob}(A, A, C, G, G, w, y, x, z \mid T) \qquad (1)$$

where $T = (t_1, t_2, t_3, t_4, t_5, t_6, t_7, t_8)$. Each summation runs over all 4 nucleotides. The above probability can be separated into

$$\begin{aligned}
\mathrm{Prob}(A, A, C, G, G, w, y, x, z \mid T) = {} & \mathrm{Prob}(z) \\
& \times \mathrm{Prob}(w \mid y, t_3)\,\mathrm{Prob}(A \mid w, t_1)\,\mathrm{Prob}(A \mid w, t_2) \\
& \times \mathrm{Prob}(y \mid z, t_5)\,\mathrm{Prob}(C \mid y, t_4) \\
& \times \mathrm{Prob}(x \mid z, t_6)\,\mathrm{Prob}(G \mid x, t_7)\,\mathrm{Prob}(G \mid x, t_8)
\end{aligned}$$

$\mathrm{Prob}(z)$ at the root is often assumed to be given by the stationary base frequencies. All parts are easy to calculate, and if we order the terms of the sum in formula 1 and move the summations as far to the right as possible, we get a summation pattern with the same structure as our tree ((C, (A, A)), (G, G)):

$$\mathrm{Prob}(D^{(i)} \mid T) = \sum_z \mathrm{Prob}(z) \left[ \sum_y \mathrm{Prob}(y \mid z, t_5)\,\mathrm{Prob}(C \mid y, t_4) \sum_w \mathrm{Prob}(w \mid y, t_3)\,\mathrm{Prob}(A \mid w, t_1)\,\mathrm{Prob}(A \mid w, t_2) \right] \left[ \sum_x \mathrm{Prob}(x \mid z, t_6)\,\mathrm{Prob}(G \mid x, t_7)\,\mathrm{Prob}(G \mid x, t_8) \right]$$
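The factored sum above can be evaluated numerically by working up the tree. Here is a minimal sketch, not the lecture's own code: it assumes the Jukes-Cantor model for the transition probabilities $\mathrm{Prob}(i \mid j, t)$ and made-up branch lengths, since the lecture leaves both unspecified.

```python
import numpy as np

def jc_prob(t):
    """Jukes-Cantor transition probabilities P(child | parent, t) for branch
    length t; rows index the parent state, columns the child state (A,C,G,T)."""
    e = np.exp(-4.0 * t / 3.0)
    P = np.full((4, 4), 0.25 * (1.0 - e))
    np.fill_diagonal(P, 0.25 + 0.75 * e)
    return P

def tip(base):
    """Conditional-likelihood vector for an observed nucleotide at a tip."""
    v = np.zeros(4)
    v["ACGT".index(base)] = 1.0
    return v

def combine(cl_left, t_left, cl_right, t_right):
    """Conditional likelihood of a parent node from its two children:
    L[s] = (sum_i P(i|s,t_left) L_left[i]) * (sum_j P(j|s,t_right) L_right[j])."""
    return (jc_prob(t_left) @ cl_left) * (jc_prob(t_right) @ cl_right)

# Hypothetical branch lengths for t1..t8 (the lecture keeps them symbolic).
t = {"t1": 0.1, "t2": 0.1, "t3": 0.2, "t4": 0.3,
     "t5": 0.1, "t6": 0.2, "t7": 0.1, "t8": 0.1}

# Work up the tree ((C,(A,A)),(G,G)) exactly as the factored sum prescribes.
w = combine(tip("A"), t["t1"], tip("A"), t["t2"])   # innermost sum over w
y = combine(tip("C"), t["t4"], w, t["t3"])          # sum over y
x = combine(tip("G"), t["t7"], tip("G"), t["t8"])   # sum over x
z = combine(y, t["t5"], x, t["t6"])                 # conditional likelihoods at the root

# Prob(z) at the root: stationary base frequencies, 1/4 each under Jukes-Cantor.
likelihood = float(np.dot(np.full(4, 0.25), z))
print(likelihood)
```

Because the tree has only four internal nodes, this result can be checked against the brute-force quadruple sum of formula 1; both orderings give the same number, but pruning touches far fewer terms.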

BSC5936-Fall 2005 Computational Evolutionary Biology

[Figure 1: A tree with branch lengths and data]

This method was introduced by Felsenstein in 1973 as "pruning", but it was known before that in pedigree analysis, and even long before that as a summation-reduction device: Horner's rule. Even though it is named after William George Horner, who described the algorithm in 1819, it was already known to Isaac Newton in 1669 and even to the Chinese mathematician Ch'in Chiu-Shao around the 1200s (Horner's rule, Wikipedia, 2005). Using the tree structure and calculating quantities (conditional likelihoods) on the nodes, we arrive at the solution outlined in the algorithm in the last lecture.

## 2 Scaling of likelihoods

With large trees the conditional likelihoods become very small, and on finite-precision machines we need to scale them so that they do not underflow. This would have dire consequences.
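One common scaling scheme, sketched here with toy numbers rather than the lecture's own algorithm, is to divide each node's conditional-likelihood vector by its largest entry and accumulate the removed factors on a log scale; the final log-likelihood is then recovered exactly even when the unscaled product would underflow.

```python
import numpy as np

def rescale(cond_likelihood, log_scaler):
    """Divide a conditional-likelihood vector by its largest entry and bank
    the removed factor on a log scale, so deep trees never underflow."""
    m = cond_likelihood.max()
    return cond_likelihood / m, log_scaler + np.log(m)

# Toy demonstration (hypothetical numbers): shrinking a conditional-likelihood
# vector 2000 times underflows to 0.0 in double precision without scaling,
# while the rescaled version keeps the log-likelihood recoverable.
cl_naive = np.array([1.0, 0.5, 0.25, 0.1])
cl_scaled = cl_naive.copy()
log_scaler = 0.0
for _ in range(2000):
    cl_naive = cl_naive * 0.4                         # stand-in for one pruning step
    cl_scaled, log_scaler = rescale(cl_scaled * 0.4, log_scaler)

print(cl_naive.max())                        # underflows to 0.0
print(np.log(cl_scaled.max()) + log_scaler)  # about 2000 * ln(0.4), still finite
```

The design point is that only one extra scalar per node (or per site) needs to be stored, and all downstream computations work with the scaled vectors unchanged.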
