Wong & Lok: Theory of Digital Communications — 4. ISI & Equalization

...quadratic function of $\mathbf{c}$. The gradient of the MSE with respect to $\mathbf{c}$ gives the direction of largest increase of the MSE. In our notation (cf. (4.15)), the gradient is $2(\mathbf{R}\mathbf{c} - \mathbf{d})$. To decrease the MSE, we can update $\mathbf{c}$ in the direction opposite to the gradient. This is the steepest descent algorithm: at the $k$th step, the vector $\mathbf{c}^{(k)}$ is updated as
$$
\mathbf{c}^{(k+1)} = \mathbf{c}^{(k)} + \mu \left[ \mathbf{d} - \mathbf{R}\mathbf{c}^{(k)} \right], \tag{4.34}
$$
where $\mu$ is a small positive constant that controls the rate of convergence to the optimal solution. In many applications, we do not know $\mathbf{R}$ and $\mathbf{d}$ in advance. However, the transmitter can transmit a training sequence that is known a priori by the receiver. With a training sequence, the receiver can estimate $\mathbf{R}$ and $\mathbf{d}$. Alternatively, with a training sequence, we can replace $\mathbf{R}$ and $\mathbf{d}$ at each step of the steepest descent algorithm by the rough estimates $\mathbf{y}_k \mathbf{y}_k^T$ and $b_k \mathbf{y}_k$, respectively, where $\mathbf{y}_k$ is the vector of received samples and $b_k$ is the training symbol at the $k$th step. The algorithm becomes
$$
\mathbf{c}^{(k+1)} = \mathbf{c}^{(k)} + \mu \left[ b_k \mathbf{y}_k - \mathbf{y}_k \mathbf{y}_k^T \mathbf{c}^{(k)} \right]. \tag{4.35}
$$
This is a stochastic gradient algorithm.
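As an illustrative sketch, the stochastic-gradient update of (4.35) can be implemented in a few lines of pure Python. Note that (4.35) is just $\mathbf{c}^{(k+1)} = \mathbf{c}^{(k)} + \mu\, e_k \mathbf{y}_k$ with error $e_k = b_k - \mathbf{y}_k^T \mathbf{c}^{(k)}$. All concrete choices below (tap count, step size $\mu$, the toy two-tap channel, the noise level) are assumptions for the example, not values from the notes:

```python
import random

random.seed(0)

def lms_equalizer(received, training, num_taps=5, mu=0.01):
    """Adapt FIR equalizer taps c by the stochastic-gradient rule (4.35)."""
    c = [0.0] * num_taps
    for k in range(num_taps - 1, len(received)):
        # Regressor vector y_k = [y[k], y[k-1], ..., y[k-num_taps+1]]
        y = received[k - num_taps + 1 : k + 1][::-1]
        est = sum(ci * yi for ci, yi in zip(c, y))      # y_k^T c(k)
        e = training[k] - est                           # e_k = b_k - y_k^T c(k)
        c = [ci + mu * e * yi for ci, yi in zip(c, y)]  # c(k+1) = c(k) + mu*e_k*y_k
    return c

# Toy ISI channel (assumed for illustration): y[n] = b[n] + 0.4*b[n-1] + noise,
# with a known BPSK training sequence b[n] in {-1, +1}.
b = [random.choice([-1.0, 1.0]) for _ in range(2000)]
y = [b[n] + 0.4 * (b[n - 1] if n > 0 else 0.0)
     + 0.01 * random.gauss(0.0, 1.0) for n in range(len(b))]

c = lms_equalizer(y, b)
```

For this channel the taps should settle near the truncated inverse of $1 + 0.4z^{-1}$, i.e. approximately $[1, -0.4, 0.16, \ldots]$, illustrating that the rough one-sample estimates in (4.35) converge, on average, to the same solution as the exact steepest descent of (4.34).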
This note was uploaded on 12/13/2012 for the course EEL 6535 taught by Professor Shea during the Spring '08 term at University of Florida.