# With decision feedback we can think of the equalizer


...stochastic steepest descent algorithm called the least mean square (LMS) algorithm.

## 4.3.4 LS equalizer

In the training period for the MMSE equalizer, the "data" sequence, i.e., the training sequence, is known to the equalizer. Instead of minimizing the MSE, which is a statistical average, we can minimize the sum of the squared errors. This is called the least squares (LS) criterion. Suppose that the known sequence lasts for $K$ symbols. Then the sum of the squared errors is given by

$$
\epsilon = \sum_{k=1}^{K} \left( I_k - \mathbf{c}^T(K)\,\mathbf{r}_k \right)^2 \tag{4.36}
$$

Differentiating with respect to $\mathbf{c}(K)$ and setting the result to zero, we get

$$
\mathbf{R}(K)\,\mathbf{c}(K) = \mathbf{p}(K) \tag{4.37}
$$

This time,

$$
\mathbf{R}(K) = \sum_{k=1}^{K} \mathbf{r}_k \mathbf{r}_k^T \tag{4.38}
$$

$$
\mathbf{p}(K) = \sum_{k=1}^{K} I_k\,\mathbf{r}_k \tag{4.39}
$$

where $\mathbf{r}_k$ denotes the vector of received samples in the equalizer delay line at time $k$.

Wong & Lok: Theory of Digital Communications, 4. ISI & Equalization

Suppose that we are given one more training symbol. Apparently, we have to recalculate $\mathbf{R}(K+1)$ and $\mathbf{p}(K+1)$, and solve the matrix equation all over again. Actually, however, there is a more efficient approach. Assuming $\mathbf{R}(K)$ is non-singular,

$$
\mathbf{c}(K) = \mathbf{R}^{-1}(K)\,\mathbf{p}(K)
$$

...
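As a concrete illustration of the LS solve and of the rank-one update idea, here is a minimal NumPy sketch. The channel taps, equalizer length, decision delay, and training length are invented for the example (they are not from the notes), and the Sherman-Morrison identity is used as one standard realization of the "more efficient approach" the excerpt alludes to, i.e., the core step of recursive least squares.

```python
# Sketch of the LS equalizer: build R(K) and p(K) from a known training
# sequence, solve the normal equations, then absorb one extra training
# symbol with a rank-one update of R^{-1} instead of re-solving.
import numpy as np

rng = np.random.default_rng(0)

N_TAPS = 5                                  # equalizer length (assumption)
K = 200                                     # training symbols (assumption)
channel = np.array([0.1, 1.0, 0.3, 0.1])    # toy ISI channel (assumption)
delay = 2                                   # decision delay (assumption)

# Known BPSK training sequence I_k and noisy channel output.
I = rng.choice([-1.0, 1.0], size=K + N_TAPS)
received = np.convolve(I, channel)[: len(I)] + 0.01 * rng.standard_normal(len(I))

def reg_vector(n):
    """Delay-line vector r_n = [y_n, y_{n-1}, ..., y_{n-N_TAPS+1}]^T."""
    return received[n - np.arange(N_TAPS)]

# Accumulate R(K) = sum r_k r_k^T and p(K) = sum I_k r_k
# (indices shifted so the delay line is always full).
R = np.zeros((N_TAPS, N_TAPS))
p = np.zeros(N_TAPS)
for n in range(N_TAPS - 1, K):
    r = reg_vector(n)
    R += np.outer(r, r)
    p += I[n - delay] * r

c_K = np.linalg.solve(R, p)   # normal equations R(K) c(K) = p(K)

# One more training symbol arrives. Rather than recomputing and
# re-inverting, update R^{-1} via Sherman-Morrison:
#   (R + r r^T)^{-1} = R^{-1} - (R^{-1} r)(R^{-1} r)^T / (1 + r^T R^{-1} r)
R_inv = np.linalg.inv(R)
r_new = reg_vector(K)
Rr = R_inv @ r_new
R_inv_next = R_inv - np.outer(Rr, Rr) / (1.0 + r_new @ Rr)
p_next = p + I[K - delay] * r_new
c_next = R_inv_next @ p_next

# Agrees with the brute-force recomputation:
c_direct = np.linalg.solve(R + np.outer(r_new, r_new), p_next)
print(np.allclose(c_next, c_direct))   # -> True
```

The update costs $O(N^2)$ per new symbol instead of the $O(N^3)$ of a fresh solve, which is what makes the recursive approach attractive during training.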

## This note was uploaded on 12/13/2012 for the course EEL 6535 taught by Professor Shea during the Spring '08 term at University of Florida.
