...stochastic steepest descent algorithm called the least mean square (LMS) algorithm.

4.3.4 LS equalizer
In the training period for the MMSE equalizer, the "data" sequence, i.e., the training sequence, is
known to the equalizer. Instead of minimizing the MSE, which is a statistical average, we can
minimize the sum of the square errors directly. This is called the least squares (LS) criterion. Suppose that the
known sequence lasts for $K$ symbols. Then the sum of the square errors is given by
$$\varepsilon(K) = \sum_{k=1}^{K} \left[ I_k - \mathbf{c}^T(K)\,\mathbf{r}_k \right]^2 \qquad (4.36)$$
Differentiating with respect to $\mathbf{c}(K)$ and setting the result to zero, we get
$$\mathbf{R}(K)\,\mathbf{c}(K) = \mathbf{p}(K) \qquad (4.37)$$
This time,
$$\mathbf{R}(K) = \sum_{k=1}^{K} \mathbf{r}_k \mathbf{r}_k^T \qquad (4.38)$$
$$\mathbf{p}(K) = \sum_{k=1}^{K} I_k \mathbf{r}_k \qquad (4.39)$$

Wong & Lok: Theory of Digital Communications 4. ISI & Equalization

Suppose that we are given one more training symbol. Apparently, we have to recalculate $\mathbf{R}(K+1)$ and $\mathbf{p}(K+1)$, and solve the matrix equation all over again. However, there is actually a more efficient
approach. Assuming $\mathbf{R}(K)$ is nonsingular,
$$\mathbf{c}(K) = \mathbf{R}^{-1}(K)\,\mathbf{p}(K)\ldots$$
This note was uploaded on 12/13/2012 for the course EEL 6535 taught by Professor Shea during the Spring '08 term at University of Florida.