ECE 8770 Topics in Digital Communications - Sp. 2007
Kevin Buckley - 2007

Lecture 5

2 Symbol Detection and Sequence Estimation

2.4 The Viterbi Algorithm

In Subsection 2.3 we introduced MLSE in general terms, and considered it as applied to memoryless ("noninteracting symbol") modulation schemes, DPSK, PRS and CPM. In Section 3 of the course we will again consider MLSE, for InterSymbol Interference (ISI) channels. Here we introduce the Viterbi algorithm as a computationally efficient approach to solving a certain class of ML and MAP sequence estimation problems. We first introduce it in general terms, and then apply it to DPSK, PRS and CPM examples.

2.4.1 Sequence Estimation for Hidden Markov Models (HMMs)

Markov Random Processes: Consider a continuous-time random process X(t). We know from an introductory discussion on random processes that the complete statistical characterization of X(t), i.e. the set of all joint PDFs for all possible combinations and numbers of samples of X(t), is in general not practical. Let t_1 and t_2 be two points in time, and denote X_1 = X(t_1) and X_2 = X(t_2) as the random-variable samples of X(t) at these times. The PDF of random variable X(t_2) given (a value of) X(t_1) is denoted p(x_2 / x_1). This is just the conditional PDF we have been employing. Now consider K samples of X(t), X_n; n = 1, 2, ..., K, taken at times t_n, n = 1, 2, ..., K, where t_{n+1} > t_n. If, for all integer K and all possible t_n, n = 1, 2, ..., K, we have that

    p(x_K / x_{K-1}, x_{K-2}, \cdots, x_1) = p(x_K / x_{K-1}) ,                                (1)

then X(t) is a Markov process. This indicates that, given X_{K-1}, X_K is statistically independent of X_n; n = K-2, K-3, ..., 1. As a result, for a Markov process,

    p(x_K, x_{K-1}, x_{K-2}, \cdots, x_1)
        = p(x_K / x_{K-1}, x_{K-2}, \cdots, x_1) \, p(x_{K-1} / x_{K-2}, x_{K-3}, \cdots, x_1) \cdots p(x_2 / x_1) \, p(x_1)
        = p(x_K / x_{K-1}) \, p(x_{K-1} / x_{K-2}) \cdots p(x_2 / x_1) \, p(x_1)
        = p(x_1) \prod_{n=2}^{K} p(x_n / x_{n-1}) .                                            (2)

Markov processes are much more easily characterized, statistically, than general random processes, and they occur commonly in nature and in engineering systems.
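To make the factorization in Eq. (2) concrete, here is a minimal numerical sketch for a discrete-state Markov chain, where the arithmetic is easy to check by hand. The two-state transition matrix P and initial distribution p0 are made-up numbers for illustration only, not values from the lecture.

```python
import numpy as np

# Hypothetical two-state Markov chain; these numbers are illustrative only.
# P[i, j] = p(x_n = j / x_{n-1} = i),  p0[i] = p(x_1 = i).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.5, 0.5])

def joint_prob(path):
    """Joint probability of a state path via the Markov factorization
    p(x_1) * prod_{n=2}^{K} p(x_n / x_{n-1}) of Eq. (2)."""
    prob = p0[path[0]]
    for prev, cur in zip(path, path[1:]):
        prob *= P[prev, cur]
    return prob

print(joint_prob([0, 0, 1, 1]))   # 0.5 * 0.9 * 0.1 * 0.8 = 0.036
```

Without the Markov property, characterizing a length-K path would require the full K-dimensional joint PDF; with it, the initial PDF and K - 1 pairwise conditionals suffice.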
Markov Sequences: Consider a discrete-time random sequence X_n. If, for all K and n,

    p(x_n / x_{n-1}, x_{n-2}, \cdots, x_{n-K}) = p(x_n / x_{n-1}) ,                            (3)

then X_n is a Markov sequence. For a Markov sequence,

    p(x_n, x_{n-1}, x_{n-2}, \cdots, x_{n-K})
        = p(x_n / x_{n-1}) \, p(x_{n-1} / x_{n-2}) \cdots p(x_{n-K+1} / x_{n-K}) \, p(x_{n-K})
        = p(x_{n-K}) \prod_{k=0}^{K-1} p(x_{n-k} / x_{n-k-1}) .                                (4)

Vector Markov Sequences: To this point we have discussed only scalar Markov processes X(t) and sequences X_n. The discussion generalizes to vector random processes and sequences. For example, let X_n denote an L-dimensional vector random sequence. X_n is a vector Markov sequence if, for all K and n,

    p(x_n / x_{n-1}, x_{n-2}, \cdots, x_{n-K}) = p(x_n / x_{n-1}) .                            (5)

Then, paralleling Eq. (4),

    p(x_n, x_{n-1}, x_{n-2}, \cdots, x_{n-K})
        = p(x_n / x_{n-1}) \, p(x_{n-1} / x_{n-2}) \cdots p(x_{n-K+1} / x_{n-K}) \, p(x_{n-K})
        = p(x_{n-K}) \prod_{k=0}^{K-1} p(x_{n-k} / x_{n-k-1}) .
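As a preview of where this factorization leads: the Viterbi algorithm exploits exactly this chain-of-conditionals structure to find the MAP state sequence of a hidden Markov model with cost linear, rather than exponential, in the sequence length. Below is a minimal, generic sketch for a discrete-state HMM; the parameter names (A, B, pi) and the toy numbers are assumptions for illustration, not the lecture's notation.

```python
import numpy as np

# Toy HMM parameters (illustrative assumptions, not from the lecture):
# A[i, j] = p(state j at time n / state i at time n-1)
# B[i, k] = p(observation k / state i)
# pi[i]   = p(initial state i)
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
pi = np.array([0.6, 0.4])

def viterbi(obs):
    """MAP state-sequence estimate for a discrete-state HMM,
    computed in log-probabilities to avoid numerical underflow."""
    K, S = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.empty((K, S))              # best log-prob of any path ending in each state
    psi = np.empty((K, S), dtype=int)     # back-pointers to the best predecessor state
    delta[0] = logpi + logB[:, obs[0]]
    for n in range(1, K):
        scores = delta[n - 1][:, None] + logA   # scores[i, j]: extend best path ending in i to j
        psi[n] = scores.argmax(axis=0)
        delta[n] = scores.max(axis=0) + logB[:, obs[n]]
    # Trace back from the best terminal state.
    path = [int(delta[-1].argmax())]
    for n in range(K - 1, 0, -1):
        path.append(int(psi[n][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 0]))   # -> [0, 0, 1, 0] for these toy parameters
```

The back-pointer array psi records, for each state at time n, the best predecessor at time n - 1, so the optimal path is recovered by a single backward trace after the forward pass.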