Complextoreal.com Intuitive Guide to Principles of Communications
Charan Langton

Analog Filters

Filters play an important role in communications. They keep the signal from splashing energy into adjacent channels and, conversely, protect the user band from unwanted signals and noise coming from adjacent channels. Filters are also used to shape the pulses that represent the baseband symbols.
Here we show a chain of filters that might be used in a multicarrier signal.
Figure 1 Uses for filters in a communications chain
(Transmitter: pulse-shaping raised-cosine filter to limit signal bandwidth, followed by a wideband filter to remove noise after upconversion; Channel; Receiver: front-end receive filter to remove noise before downconversion, DEMUX to separate the channel, and a receive/matched filter.)

Important filter characteristics
The first filter in Figure 1 is often called the pulse shaping filter because it converts discrete signals into analog symbol shapes that can be transmitted. Not all shapes can be transmitted efficiently, so pulse shaping is very important. After pulse shaping, the signal is modulated and upconverted to a carrier frequency. The signal is then amplified using a High Power Amplifier (HPA). At this point, a filter may be used to limit the spectral spreading caused by the HPA. At the receiver there would be a front-end filter tuned to the carrier frequency. It removes noise picked up by the signal through the channel. A demultiplexer may be used to separate out the user channel, and then a receive or matched filter is used for demodulation.
Whether these filters are realized as analog or digital depends on the application. At RF frequencies, analog filters are cheaper and lighter and are often used.
Frequency Response and Group Delay

The purpose of a filter is to block out the undesirable signals and at the same time keep the passband signal as undistorted as possible. We characterize filters by their frequency response, also shown as a Bode plot. The frequency response of an analog filter H(s) can be described as a ratio of two voltages, E_L and E_S, or of two polynomials in the s-domain, N(s) and D(s), where s = jω and ω is given in radians per
second.

Figure 2 An LC filter

   H(s) = E_L(s) / E_S(s) = N(s) / D(s)    (0.1)

Writing the transfer function for the filter in Figure 2:

   H(s) = 1 / (s^3 + 2s^2 + 2s + 1)    (0.2)

If we evaluate this expression for various ω (setting s = jω), we write the frequency
and the phase response as

   H(jω) = 1 / (1 − 2ω^2 + j(2ω − ω^3)),   φ(ω) = −tan^(−1)[ (2ω − ω^3) / (1 − 2ω^2) ]    (0.3)

The order of this filter is given by the highest power of ω, in this case 3. This is a
lowpass filter with fairly shallow rejection. The transfer function of Equation (0.2) has three poles and no zeros. References to poles and zeros come up often in filters. Their purpose is to provide a shortcut for determining whether a filter that has been synthesized is realizable or not. The roots of the numerator are called zeros and those of the denominator, poles. If these roots fail any of the following rules, then the filter cannot be built. These rules are: [8]
1. The poles and zeros must occur in pairs which are complex conjugates of
each other.
2. The poles and zeros on the real axis do not have to be in pairs.
3. Poles must be restricted to the left half of the complex-frequency plane.

Figure 3 – Frequency and phase response of a general filter.

Phase Response

A passive filter provides no gain to the signal. In fact, most often there is insertion loss and overall power loss in the band of interest. Conversely, an active filter is one that has an amplification function built into it. The filter of Figure 3 is a
passive filter as there is no gain in the passband. Its phase response seems to jump
up and down rapidly, but that’s for graphing convenience since the scale of
observation is limited to 360 degrees. We see that in the first segment, starting at f = −1, the phase goes from 50 degrees to 180 degrees, then through a full 360 degrees in the next segment, and then again from 180 to 40 degrees; adding these up and subtracting 2 × 360 degrees, we get 90 degrees, which is exactly the phase difference between a sine wave of frequency −1 Hz and one of +1 Hz. For this filter, which happens to be a Butterworth filter of order 9, the phase is linear in the passband from −1 Hz to +1 Hz, which is a characteristic of Butterworth filters. A great
deal of importance is attached to the phase response of the filter since any change
in linearity causes signal distortion.
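Both the realizability rules and the response of Equation (0.3) are easy to check numerically. A minimal sketch (NumPy assumed; the frequencies chosen are illustrative):

```python
import numpy as np

# Denominator of H(s) = 1 / (s^3 + 2s^2 + 2s + 1), the filter of Eq. (0.2)
den = [1, 2, 2, 1]

# Rule 3: every pole must lie in the left half of the s-plane
poles = np.roots(den)
print(np.all(poles.real < 0))   # True: the filter is realizable

# Frequency response: evaluate Eq. (0.3) at a few values of w
w = np.array([0.5, 1.0, 2.0])
H = 1.0 / (1 - 2 * w**2 + 1j * (2 * w - w**3))
magnitude_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
print(magnitude_db)   # ~[-0.07, -3.01, -18.13]: a lowpass with -3 dB at w = 1
```

The −3.01 dB point at ω = 1 confirms this is a unit-cutoff lowpass of shallow (order-3) rejection.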
Impulse Response

The impulse response of a filter is a very useful characteristic. Akin to striking a bell to hear the quality of its ring, taking an impulse response similarly pings the filter with an impulse. Of course, while the response to a single impulse won't immediately tell you how the filter will respond to a real signal, it does tell us something useful. Since a linear filter's response is itself linear, the response of the filter to a signal is just a linear addition of the impulse response for each incoming signal pulse, scaled by its magnitude. For the unit pulse (a delta function) we can say that, acting on system H, it produces a response h(n):
   δ(n) --H--> h(n)

or, in the time domain, a pulse produces several outputs:

   {1, 0, 0, ..., 0} --H--> {h0, h1, h2, h3, ...}

The time-invariant property tells us that

   δ(n − T) --H--> h(n − T)

For any delay equal to T, the response is delayed by the same amount. Writing a sum of impulses, we get

   δ(n) + δ(n − 1) + δ(n − 2) + ... --H--> h(n) + h(n − 1) + h(n − 2) + ...

and a weighted sum of impulses produces the same weighted sum of impulse responses:

   x(0)δ(n) + x(1)δ(n − 1) + x(2)δ(n − 2) + ... --H--> x(0)h(n) + x(1)h(n − 1) + x(2)h(n − 2) + ...

which in a more formal way can be written as the familiar discrete-time convolution of the input signal x(n) with the filter tap-weights h(n) to produce the output signal y(n):

   y(n) = Σ_m x(m) h(n − m)    (0.4)

Group Delay of a Filter

The phase response of a filter, as shown in Figure 3, is difficult to make sense of
because of the large excursions over the bandwidth. An alternate metric of interest is the group delay. Group delay is given in units of time: the phase difference is converted to a time delay over the frequencies in the occupied band. The group delay is defined as the (negative) slope of the phase curve vs. frequency. Group delay is used far more as a measure of filter phase response than pure phase in degrees. A plot of delay vs. signal frequency gives an indication of how the relative frequencies of the signal are being delayed by the filter. The objective is to get as flat a group delay as possible, which implies linear-phase behavior. The expression for group delay is the ratio of the change in phase to the change in frequency. Given the phase response of the filter, it is easy to calculate the group delay by this relationship:

   T_delay = −∂φ/∂ω ≈ −Δφ/Δω    (0.5)

From the phase response in Figure 3, we can incrementally compute the slope to
get the group delay response. Despite the fact that the phase response looks like
straight lines, the group delay plot is more illuminating in showing the important
behavior. Group delay is used to compare filters, where such comparison of phase
is a difficult task. The effect of group delay on the signal can be explained by this
metaphor. Imagine a train where the front is moving faster than the back. The train
must stretch and the result is distortion. Similarly group delay tells us how much
distortion the filter has introduced in the signal in time domain. If the group delay
is much longer than a symbol time, then we have a problem. However if the
maximum group delay difference is much smaller than the symbol time (at least
one order of magnitude less) the distortion would be small. Wideband signals are
much more prone to group delay distortion than narrowband. This is simply
because a narrowband signal experiences much less difference in delay between its maximum and minimum frequencies. Group delay thus serves as a measure of the phase linearity of the filter. Filters that offer a linear phase response are generally characterized as linear-phase filters.
Ultimately, we use group delay as a measure of the nonlinearity of the system and
its potential to distort the signal. There would always be delay through a system, so
the absolute value is not very important, just the range of delay over the desired
bandwidth.

Figure 4 – The phase response of Figure 3 plotted as group delay.

Figure 4 takes the phase of the filter in Figure 3 and plots it as group delay. This
depiction of group delay has appeal. A perfectly flat group delay tells us that all frequencies in the signal arrive with essentially no relative spread in time, just some fixed delay through the path. (The front of the train and the back of the train arrive at the station at the same time.) In the center the group delay is somewhat flat. We see
the group delay increasing rapidly at the edges of the passband. This says that the
distortion would be larger for the frequencies at the edge. The concept of group
delay can be applied to any device that has a frequency response. Nearly all
devices even those that are frequency independent exhibit some group delay. In
most cases, the system group delay as opposed to the filter group delay would be
measured and compared against some maximum acceptable design specification.
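As a numerical sketch of Equation (0.5) (NumPy assumed; the filter used is the 3rd-order lowpass of Equation (0.2)), the incremental slope of the unwrapped phase gives the delay at each frequency:

```python
import numpy as np

# Phase of H(jw) = 1 / (1 - 2w^2 + j(2w - w^3)) on a dense frequency grid
w = np.linspace(0.0, 2.0, 2001)
H = 1.0 / (1 - 2 * w**2 + 1j * (2 * w - w**3))
phase = np.unwrap(np.angle(H))

# Group delay per Eq. (0.5): negative incremental slope of phase vs. frequency
group_delay = -np.diff(phase) / np.diff(w)

print(group_delay[0])     # ~2.0 s near DC
print(group_delay.max())  # the delay peaks near the passband edge
```

The delay is not flat: it rises toward the band edge, which is exactly the distortion behavior the text describes.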
Definition of Bandwidth

Akin to water flowing through a pipe, symbol rate has a similar relationship to bandwidth: a larger bandwidth allows larger data rates. But "bandwidth" is a term fraught with confusion. It is clear what we mean by data rate, but what is bandwidth? There are several different ways of defining bandwidth, and they all lead to different answers. Whenever a bandwidth is specified, it is sensible to ask what the specification criterion is.

The first thing we need to know is that bandwidth contains only the positive
frequencies of a signal. All frequency specifications start measuring at 0 Hz. This is why lowpass bandwidth is one-half of bandpass bandwidth, although absolutely nothing about the signal changes as it goes from lowpass to bandpass. It has just shifted to a higher center frequency, and all components of the signal have moved into the positive frequency range.
Here are some ways in which bandwidth is specified.[Couch]
1. Absolute bandwidth – Specifications of bandwidths by regulatory bodies are absolute bandwidths. For example, if the bandwidth is given as 12.7 to 13.7 GHz, then this is an absolute bandwidth.
2. 3 dB bandwidth – If the edge frequency is f1, then it is assumed that the power at this point has attenuated to exactly one-half of the peak value (the magnitude to 1/√2 of its peak). However, not all filters have a well-defined 3 dB bandwidth; Chebychev filters, which have a ripple in the passband, are one such example.

Figure 5 – Definition of Bandwidth
   Absolute bandwidth: 28 Hz
   3 dB bandwidth: 23.50 Hz
   98% power bandwidth: 29.50 Hz
   99% power bandwidth: 42.75 Hz
   Null-to-null bandwidth: 38.5 Hz

3. 98% and 99% power bandwidth – This is the bandwidth that contains the respective fraction of the total power of the signal.
4. Null-to-null bandwidth – This bandwidth definition is used more commonly in antenna design and specifies the range (f2 − f1) whose two edges fall at predefined spectral nulls.
5. Occupied bandwidth – This is usually a regulatory specification in terms of a power and attenuation mask, the purpose of which is to control the energy transmitted out of band. Figure 6 shows an FCC-specified mask for satellite communications. The frequency is given as a percent of the total bandwidth.
6. Equivalent noise bandwidth – If all the power of the signal were confined to an ideal rectangular shape with the same peak level, the width of that rectangle would be the Equivalent Noise Bandwidth. An alternate way to specify this is the time-bandwidth product. For a root-raised cosine, the noise bandwidth is 0.5, and for a Butterworth filter it is approximately 0.53.
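Several of these definitions can be computed directly from a magnitude response. A small sketch (NumPy assumed), using the order-3 Butterworth power response |H|² = 1/(1 + ω⁶) as a stand-in spectrum:

```python
import numpy as np

# Power spectrum of an order-3 Butterworth response, cutoff wc = 1 rad/s
w = np.linspace(0.0, 20.0, 200001)
P = 1.0 / (1 + w**6)

# 3 dB bandwidth: where the power falls to one-half of its peak
f3db = w[np.argmin(np.abs(P - 0.5))]

# 99% power bandwidth: edge frequency containing 99% of the total power
cum = np.cumsum(P)
cum /= cum[-1]
f99 = w[np.searchsorted(cum, 0.99)]

print(f3db)   # 1.0 by construction
print(f99)    # noticeably wider than the 3 dB bandwidth
```

As the Figure 5 numbers also show, the different criteria give genuinely different answers for the same spectrum.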
Figure 6 FCC Emissions Mask
(Attenuation in dB vs. frequency offset, given as a percent of the total bandwidth from 0 to 400%; the out-of-band limit is Attenuation = 43 − 10·log(XMIT PWR).)

Analog filter types
There are four main analog filter types, and all are used extensively in designs, particularly at RF frequencies. Analog filters are usually compact and often inexpensive. An analog filter is based on an RLC circuit; its real components behave linearly only over a limited operating range, so the filter is designed to stay within that range. The filter types are: Butterworth, Chebychev, Elliptic, and Bessel. There are of course hybrids that combine the best parts of each, but they are usually application specific.
Butterworth

The most commonly used analog filter is the Butterworth filter. It has a set of transfer functions that result in the flattest possible behavior in the passband. Butterworth filters are all-pole filters whose poles fall on a circle of unit radius (limited to the left half-plane), and in the passband they have a nearly linear phase response, as in Figure 7. The transfer function of a Butterworth filter of order 3 was given in Equation (0.2). The frequency response of these filters is consistent as the order goes up [4]. The bandwidth is defined as the point where the attenuation equals 3 dB. The general equation of a Butterworth filter is given by

   A_dB = 10 log[1 + (ω/ωc)^(2n)]    (0.6)

where ωc is the 3 dB cutoff frequency and n is the order of the filter. A general analog filter is defined by

   A_dB = 10 log(1 + Ω^(2n))    (0.7)

where Ω is given by Table 1; for a lowpass filter, Ω = ωx/ωc is the ratio of the frequency of interest to the 3 dB cutoff frequency.

Table 1
   Filter Type    Ω
   Lowpass        ωx / ωc
   Highpass       ωc / ωx
   Bandpass       BWx / BW(3 dB)
   Bandreject     BW(3 dB) / BWx

Example: Calculate and plot the frequency response of a Butterworth filter of order 5, 7 and 9. Using Equation (0.6), we get the graph in Figure 7.
Figure 7 Butterworth Filter Frequency Response for orders 5, 7 and 9.
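Equation (0.6) is simple to evaluate directly. This sketch (NumPy assumed) reproduces the behavior plotted in Figure 7: every order passes through the same 3 dB point, and a higher order buys a steeper rolloff:

```python
import numpy as np

def butterworth_attenuation_db(w, wc, n):
    """A_dB = 10*log10(1 + (w/wc)^(2n)), per Eq. (0.6)."""
    return 10 * np.log10(1 + (w / wc) ** (2 * n))

for n in (5, 7, 9):
    a = butterworth_attenuation_db(np.array([0.5, 1.0, 2.0, 4.0]), 1.0, n)
    print(n, np.round(a, 2))
# At w = wc the attenuation is ~3.01 dB for every order;
# beyond cutoff it falls at roughly 6n dB per octave.
```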
Butterworth filters are used widely in the front end of low-power receivers. There is very little distortion in the passband, and they are considered the most benign of all analog filters. However, the out-of-band attenuation is not as good as that offered by other filters; this is often a matter of tradeoff between passband behavior and the effect on, and from, adjacent channels.

Chebychev Filters

There are two types of Chebychev filters. Both types have a ripple either in the
passband or the stopband. The Type I filter, the one used most often, has a ripple in the passband, which is of course not a desired behavior, but it rolls off much faster than the others. The ripple can be made very small by design. Type II does not have a ripple in the passband but does not roll off as rapidly as Type I. Both are used, depending on which band is of concern. The Chebychev filters (there are numerous ways to spell this name; for a story about the correct spelling see the book [6] by Paul Davies) offer a steeper stopband response, but at the expense of introducing the ripple.

Figure 8 Chebychev Filter Types

Although the magnitude response of the Chebychev is very attractive, its group
delay behavior is not. In Figure 9 we present the group delay of this filter for several filter orders; beyond order 7, the group delay becomes too large for practical use.
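The tradeoff can be quantified with the closed-form attenuation of each filter. A sketch (NumPy; a 1 dB passband ripple is assumed for the Chebychev Type I) comparing stopband rejection at three times the cutoff:

```python
import numpy as np

def butter_atten_db(w, n):
    # Butterworth: |H|^2 = 1 / (1 + w^(2n)), w normalized to the cutoff
    return 10 * np.log10(1 + w ** (2 * n))

def cheby1_atten_db(w, n, ripple_db):
    # Chebychev Type I: |H|^2 = 1 / (1 + eps^2 * Tn(w)^2), valid here for w > 1
    eps2 = 10 ** (ripple_db / 10.0) - 1.0
    Tn = np.cosh(n * np.arccosh(w))   # Chebyshev polynomial in the stopband
    return 10 * np.log10(1 + eps2 * Tn ** 2)

w = 3.0
print(butter_atten_db(w, 5))        # ~47.7 dB
print(cheby1_atten_db(w, 5, 1.0))   # ~64.7 dB: same order, much steeper rejection
```

At the same order, the Chebychev buys roughly 17 dB of extra rejection here — paid for by the passband ripple.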
Elliptic

Both Butterworth and Chebychev are all-pole filters, and as such their rejection does not roll off rapidly and can only reach zero at the far end of the stopband.
filter has the fastest rolloff of all analog filters. As Chebychev has ripple in either
the passband or stopband, the elliptic has ripple in both. It is of course related to
both Chebychev and Butterworth filters. Setting the ripple to zero in stopband
causes Elliptic filter to become a Chebychev and suppression of ripple in both
bands causes it to degenerate to a Butterworth filter. This filter has equiripple
behavior in both the passband and the stopband. The ripple can be changed in both
bands independently by design. A form of elliptic filter that limits the ripple in the
stopband is called quasielliptic filter. The improved performance from doing that
comes with sidelobes but they are usually quite far down. For applications where
large and fast rejection is required, Elliptic filters stand out. They are used in
satellite communications for both as input multiplexer and output multiplexers
after HPA amplification. In Figure 9, we see the comparison of the Elliptic and see
how much better is its stopband response. (a) Bessell (c) Chebychev Type I Filters – Analog, digital and adaptive filtering (b) Elliptic (d) Chebychev Type II Page 13 Figure 9 The frequency response of (a) Bessell, (b) Elliptic, (c) Chebychev Type I and (d) Chebychev Type II analog filters. Figure 10 Inband response comparison Digital Filters Digital filters can provide the same frequency characteristics as analog filters and
can do even better in phase response. They are a design created from shiftregister
networks and can be built with memory and Arithmetic units. They offer many
advantages over analog filters: [5]
1. Most digital structures offer unconditional stability.
2. The digital shift registers are easy to control and coefficient values can be
stored and easily changed for applications such as adaptive filtering.
3. They can be implemented as ASICs.
4. Digital filters can operate over wide frequencies and dynamic range.
5. We can design perfectly linear phase response.
6. Digital filters don’t suffer from age related degradations.
Finite Impulse Response (FIR) Filters
Digital filters can be classified in two forms, FIR and IIR, depending on the length of their impulse response: finite or infinite. Both of these forms are created with delay/memory/shift registers. The FIR has an impulse response that extends over a finite number of terms only, so that we can write its impulse response as {h0, h1, ..., hM}, where M is the order of the filter. The length of the impulse response is equal to M+1. In Figure 11 is shown one such FIR filter. The signal is tapped off after each delay element and multiplied by a constant, called the filter coefficient, filter weight, or tap-weight. All scaled components of the signal are added together to produce the output.
Figure 11 A Moving Average FIR filter (x(n) passes through three delay elements; each of the four taps is weighted by 1/4 and summed to give y(n))

In Figure 11 we see a very simple FIR filter. All tap-weights are equal to 0.25, and with four taps this is a filter of order M = 3. We determine the output by

   y(n) = (1/4)x(n) + (1/4)x(n − 1) + (1/4)x(n − 2) + (1/4)x(n − 3)

If the first inputs were 1, 2, 6, 4, 3, 2, our first few outputs would be

   (1 × 0.25 + 0 × 0.25 + 0 × 0.25 + 0 × 0.25) = 0.25
   (2 × 0.25 + 1 × 0.25 + 0 × 0.25 + 0 × 0.25) = 0.75
   (6 × 0.25 + 2 × 0.25 + 1 × 0.25 + 0 × 0.25) = 2.25
   (4 × 0.25 + 6 × 0.25 + 2 × 0.25 + 1 × 0.25) = 3.25
   (3 × 0.25 + 4 × 0.25 + 6 × 0.25 + 2 × 0.25) = 3.75
   (2 × 0.25 + 3 × 0.25 + 4 × 0.25 + 6 × 0.25) = 3.75
Let’s determine the impulse response of this filter using the same procedure as
above but with a signal equal to 1,0,0,0,0. What we get for first five times is 0.25,
0.25, 0.25, 0.25. This is the impulse response of this FIR filter and it is exactly
equal to the filter tapweights. (1 × 0.25 + 0 × 0.25 + 0 × 0.25 + 0 × 0.25 ) = 0.25
( 0 × 0.25 + 1 × 0.25 + 0 × 0.25 + 0 × 0.25 ) = 0.25
( 0 × 0.25 + 0 × 0.25 + 1 × 0.25 + 0 × 0.25 ) = 0.25
( 0 × 0.25 + 0 × 0.25 + 0 × 0.25 + 1 × 0.25 ) = 0.25
This fact is true for all FIR filters, no matter how many tapweight it has long as it
has this forwardfeeding structure. If we trace the path of the impulse through the
FIR filters, its impulse response will always be equal to its tapweights or
coefficients. The tapweights imply a frequency response. Changing the tapweights means changing the filter response. If we know what type of frequency
response we want, we can design it by finding coefficients that provide it. This is the process of FIR filter design: going from a given frequency response to determining the coefficients.
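The worked example above can be reproduced with a convolution, which also confirms that the impulse response equals the tap-weights (a sketch assuming NumPy):

```python
import numpy as np

taps = np.array([0.25, 0.25, 0.25, 0.25])     # the filter of Figure 11
x = np.array([1.0, 2.0, 6.0, 4.0, 3.0, 2.0])  # first few input samples

# y(n) = sum_k h(k) x(n-k): the convolution of Eq. (0.4)
y = np.convolve(x, taps)[:len(x)]
print(y)    # [0.25 0.75 2.25 3.25 3.75 3.75]

# Feeding in a unit impulse returns the tap-weights themselves
impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
h = np.convolve(impulse, taps)[:len(taps)]
print(h)    # [0.25 0.25 0.25 0.25]
```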
To generalize the expression for an arbitrary FIR filter of order M, we can write the equation of an FIR filter as

   y(n) = Σ_{k=0}^{M} h(k) x(n − k)    (0.8)

The filter has an easy-to-understand structure. This type of design is called a feed-forward structure. It has the wonderful quality of always being stable, as it has no denominator and can never be singular.
Figure 12 Generic FIR filter of M+1 taps

Here M+1 equals the number of taps. This is the basic structure of an FIR filter, also called a transversal or tap-delay-line filter. That's all there is to an FIR filter. The output of the filter, Equation (0.8), is a convolution of the coefficients and the incoming signal values. We show only a scalar version in this figure, but of course the coefficients and the signal can both be complex.
FIR filters find uses in many fields other than communications. One example is the technical analysis done on stock prices. The moving average, exponential moving average, MACD, and RSI are all examples of filters applied to a sequence of data, either to shape it or to extract useful information, such as trends, free of instantaneous noise.
Condition for Phase Linearity

One of the advantageous qualities of FIR filters is that they can exhibit linear phase. An FIR filter has the frequency response given by (0.8). We say it has linear phase if the phase is a linear function of frequency, set by the delay to the center tap plus a constant:

   θ(ω) = −hω + p,  where h = (M − 1)/2 (M here being the number of taps) and p = 0 or ±π/2    (0.9)

The group delay, which is the negative derivative of the phase, is then

   T_gd = −dθ(ω)/dω = (M − 1)/2    (0.10)

That says that the group delay is a constant, a function only of the number of taps. The other condition that is implied is that the impulse response of the filter must be symmetric (or antisymmetric), otherwise Equation (0.9) cannot hold:

   h(n) = ±h(M − 1 − n)    (0.11)

In describing adaptive filters, we will show that although we can design the taps to
be symmetrical, in order to equalize a channel we may need to come up with tap-weights that are not symmetrical, because of the nature of the distortion in the incoming signal. The property of being able to change the impulse response by changing the tap-weights is remarkably powerful and versatile.
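The symmetry condition (0.11) and the constant group delay of (0.10) can be verified numerically. A sketch with a hypothetical symmetric 5-tap filter (NumPy assumed):

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric: h(n) = h(M-1-n)
M = len(h)
print(np.allclose(h, h[::-1]))            # True: the symmetry condition holds

# Frequency response H(e^{jw}) = sum_n h(n) e^{-jwn}
w = np.linspace(0.01, np.pi - 0.01, 500)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(M))) for wk in w])

# Group delay = negative slope of the unwrapped phase: constant (M-1)/2
phase = np.unwrap(np.angle(H))
group_delay = -np.diff(phase) / np.diff(w)
print(group_delay[:4])   # each value equals (5-1)/2 = 2 samples
```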
Infinite Impulse Response (IIR) Filter
The difference between an FIR filter and an IIR filter is that the IIR filter can produce an infinitely long impulse response. We can write its output as

   y(n) = Σ_{k=0}^{∞} h(k) x(n − k)    (0.12)

Of course, the infinite number of terms is a problem. In a subclass of IIR filters, the requirement of an infinite number of taps is gotten around by selecting tap-weights that are related to each other, so that the summation can be evaluated as the sum of an infinite series. [3]

Compare the FIR structure, with only forward-going taps, to the structure of an IIR filter. The IIR structure adds an additional section (on the right in Figure 13) that takes past values of the output signal y(n) and feeds them back into computing the current value. This second section is called the feed-backward section. [See Lyons [1] for more on FIR and IIR filters.]
Figure 13 An IIR filter with both feed-forward and feedback parts.

This is one such IIR filter. It has two sections: the left one, with the b coefficients, is the nonrecursive (feed-forward) section, and the right one, with the a coefficients, is the recursive (feed-backward) section. The response of this filter is

   H(z) = N(z) · [1/D(z)] = N(z) / D(z)

The output y(n) is the sum of the terms from Figure 13:
⎦
⎣
⎦⎣
= ⎡b1x (n ) + b2x (n − 1) + b3x (n − 2) + b4 x (n − 3)⎤
⎣
⎦
+ ⎡a1 y(n − 1) + a2 y(n − 2)⎤
⎣
⎦
Where the terms with the ‘b’ coefficients are feeding forward, and those in the
second line with ‘a” coefficients are feeding backward, hence the names. The feedbackward terms require the previous values of the signal created by the feedforward section. Note that the values of the coefficients in the feedbackward
section expression are negative of the tapweights..
In the z-domain, the transfer function of this filter can be written as

   H(z) = (b1 + b2 z^(−1) + b3 z^(−2) + b4 z^(−3)) / (1 − a1 z^(−1) − a2 z^(−2))    (0.13)

This forward-and-backward process makes this type of filter more complex. It can
become unstable and has the problem of propagating errors. But this structure
allows us to model analog filters and specialized digital filters such as those used
in Decision Feedback Equalization.
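A direct-form sketch of this structure (NumPy; the coefficient values are hypothetical) shows the feedback producing an impulse response that decays but never exactly terminates:

```python
import numpy as np

def iir_filter(x, b, a):
    """y(n) = sum_k b[k] x(n-k) + sum_m a[m] y(n-1-m): feed-forward + feed-back."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[m] * y[n - 1 - m] for m in range(len(a)) if n - 1 - m >= 0)
        y[n] = acc
    return y

b = [1.0, 0.5]   # feed-forward tap-weights (hypothetical)
a = [0.3]        # feedback tap-weight; |a| < 1 keeps the filter stable
impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(iir_filter(impulse, b, a))   # [1.0, 0.8, 0.24, 0.072, 0.0216]
```

With the feedback coefficient larger than 1 the same loop would diverge — the instability risk the text mentions.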
Adaptive Filtering
What is Adaptive Filtering
The general name for adaptive filtering is equalization. In the broad sense, the word "equalization" refers to any signal processing or filtering technique that is designed to eliminate or reduce channel distortions. How well the channel effects can be equalized depends on how well we know the channel transfer function. Equalization has wide application in communications, from echo cancelling in telephone lines to multipath mitigation in cell phones, beamforming, and signal recognition.
The simplest type of equalizer is the graphic equalizer, in which the bandwidth of interest is divided into a certain number of bands, each with a sliding control. Each band has a bandpass filter, and the slider adjusts the power gain setting in that band. Equalization in this case is performed by adjusting the power in selected bands.
A more general type of equalizer, called a parametric equalizer, can adjust the signal in all three of its parameters: gain, frequency, and phase. We can make these adjustments in the frequency domain or in time. Most equalization relies on filtering, performed most commonly by FIR filters, either of a transversal or a lattice type with adjustable tap-weights. The equalization process is based on the knowledge that the impulse response of an FIR filter is the same as its tap-weights. Once we know or have estimated the channel impulse response, we apply its inverse to the filter, so that by changing the tap-weights the signal is distorted in the direction opposite to the channel response, hence equalizing it. The filter, together with the algorithm used to adjust the tap-weights, is called the equalizer. Equalization can be done with knowledge of the channel, or blind, without knowledge of the channel.
There are many variations on this basic theme; some equalizers operate continuously, some only part of the time. The equalizer filters can be described as
to whether they are linear devices that contain only feedforward elements, or
whether they are nonlinear devices that contain both feedforward and feedback
elements (Decision Feedback Equalizers). They can be grouped according to the
automatic nature of their operation, which may be either preset or adaptive. They
can also be grouped according to the filter’s resolution or update rate. The
equalization can take place on the symbol or on a sequence.
To understand equalization and the various ways it can be accomplished, we start first with the channel impulse response. Assume that the transmitted pulse has a root-raised cosine shape. We want the received signal response to also be root-raised cosine, so that the overall system transfer function H(f) is equal to the raised-cosine filter, designated H_RC(f). Thus we write it as the product, in the frequency domain, of the responses of the transmitter and the receiver:

   H_RC(f) = H_t(f) H_r(f)    (1.1)

In this way H_t(f) and H_r(f), the responses of the transmitter and receiver respectively, each have frequency transfer functions that are the square root of the raised cosine. Then the equalizer transfer function needed to compensate for channel distortion is simply the inverse of the channel transfer function H_c(f):

   H_e(f) = 1 / H_c(f) = [1 / |H_c(f)|] e^(−jθ_c(f))    (1.2)

Sometimes a system frequency transfer function manifesting ISI at the sampling
points is purposely chosen (e.g., a Gaussian filter transfer function). The motivation for such a transfer function is to improve bandwidth efficiency compared with using a raised-cosine filter. When such a design choice is made, the role of the equalizing filter is not only to compensate for the channel-induced ISI, but also to compensate for the ISI brought about by the transmitter and receiver filters [7].
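The inverse-filtering idea can be sketched in discrete time with a hypothetical two-tap channel (NumPy assumed): inverting 1 + 0.5z⁻¹ gives the geometric series 1 − 0.5z⁻¹ + 0.25z⁻² − ..., which a short FIR can approximate:

```python
import numpy as np

channel = np.array([1.0, 0.5])   # hypothetical channel: each symbol leaks into the next

# Truncated inverse of (1 + 0.5 z^-1): the series 1, -0.5, 0.25, -0.125, ...
equalizer = np.array([(-0.5) ** k for k in range(8)])

x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])    # transmitted symbols
y = np.convolve(x, channel)                  # received signal, with ISI
xhat = np.convolve(y, equalizer)[:len(x)]    # equalized output

print(xhat)   # recovers the symbols; truncation error appears only after lag 8
```

This truncated inverse is exactly the spirit of the zero-forcing approach discussed next.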
The basic principle is simple: apply a filter E that mitigates the expected ISI. We can do this in one of two ways: 1. design E(f) so that all ISI is eliminated, or 2. design it so that we minimize the mean squared error. The equalizer designed to eliminate all ISI is called a zero-forcing equalizer, and one designed to minimize the mean squared error (or the variance) is called a minimum squared error equalizer.

Figure 14 Basic idea of an equalizer

Linear Estimation Principles
The incoming signal to an equalizer is a random process. There are two possibilities when estimating a random variable: 1. we have no observed samples of the process, or 2. we have a limited set of samples. In the first case, an example helps intuitive understanding. Suppose you need to estimate the number of hours it will take to complete your homework. The best estimate under these conditions would be that it will take you about the same amount of time as the average of all your past times (past observations). So if it takes you on average 3 hours to do your homework, the best current estimate is the same. Although this is intuitive, we really made this decision by applying the mean-square error criterion, which says that the least-mean-squares estimate of a random variable x, given only its average and variance from past observations, is equal to its past average: x̂ = x̄. The resulting minimum error is E[e²] = σx². For this example, selecting the past average as the estimate minimizes the error; however, that error is still large, equal to the variance of the process. So if it was taking you on average 3 hours, with a standard deviation of 1 hour, then on average you could be wrong by about that amount. When we do have observations available, the problem becomes that of estimating the function that relates the input to the output with the least error. The process of equalization tries to achieve this objective.
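The homework example can be checked numerically: among all constant guesses, the past average minimizes the mean squared error, and the residual error equals the variance (a sketch assuming NumPy; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
past = rng.normal(3.0, 1.0, 10000)   # past homework times: mean 3 h, std dev 1 h

def mse(c):
    """Mean squared error of the constant guess c against the past data."""
    return np.mean((past - c) ** 2)

xbar = past.mean()
print(mse(xbar))         # ~1.0: equals the (sample) variance, the minimum
print(mse(xbar + 0.5))   # any other constant guess is worse, by exactly (0.5)^2
```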
In most books, two different sets of terminology are used to differentiate between
linear estimation and adaptive estimation. Here we will use only one set of
terminology for both types of processes. We define terminology we will use in this
section:
   x(i)    Transmitted unknown symbol
   σx²     Variance of the input signal
   n(i)    Additive noise
   σn²     Variance of the noise
   y(i)    Received distorted symbol
   ŷ(i)    Equalizer's estimate of x(i)
   e(i)    Error between the estimate and the actual symbol
   x̂(i)    Final decision about x(i)
   k(n)    Tap-weight of the nth tap
   L       Number of taps, 2N + 1
Assume all are column vectors unless shown with transpose symbol, T. Filters – Analog, digital and adaptive filtering Page 23 ˆ(
Figure 15 – Equalization based on a transversal filter
In Figure 15, we show the general process for equalization. A symbol x(i) is
transmitted passing through an unknown channel. We call the observed or the
received signal, y(i). Keep in mind that y(i) may contain parts of past symbols,
such as the simple transfer relationship shown in Figure 16 where the received
signal y(i) is the sum of two transmitted symbols and as such contains ISI. The
signal y(i) goes through the equalizer, which filters it according to its tap weights. The filtered version, also called the estimate x̂, is compared with the transmitted symbol by the algorithm. Based on the difference, the algorithm directs the filter to adjust the tap-weights, and this loop continues until the error is acceptably small. One of the big questions, which we will answer later on, is that in real systems we do not know the channel response as neatly as shown in Figure 16. The dotted line
in Figure 16 from the transmitter to the receiver is not there.

Figure 16 – A hypothetical channel with additive noise
In Figure 16, we see an example of an ISI channel. Each observed instance of y is given by

$$y(i) = x(i) + 0.5\,x(i-1) + n(i) \qquad (1.3)$$

The received symbol y(i) consists of contributions from more than one transmitted
symbol ( x(i ), x(i − 1) ), hence this channel introduces ISI as well as noise. The task of
the equalizer is to take this signal and filter it in such a way that the filtered symbol
is as close to the transmitted symbol as possible. The equalizer performs the linear
mapping as stated in Eq. (1.4). We define the transfer function of the equalizer, K, as a linear function of the tap-weights of the filter. The equalizer takes the inputs, multiplies them by the vector (a row vector) of tap-weights (only one tap is shown in Figure 17), and produces an output which is the estimated symbol.
$$\hat{x} = K^T y \qquad (1.4)$$

Figure 17 – The input and output of an equalizer

In an equalization process that is continuous, there are two separate phases: 1. the training phase, and 2. the tracking phase. The training sequence, as shown in
Figure 16 provides the missing transmitted data, at least for a short while. This is
done via a preset sequence that is appended to the start of the transmitted data. The
sequence used is often chosen to be noise-like and “rich” in spectral content, which is needed to estimate the channel frequency response. Alternately, the training can also consist of sending a single narrow pulse, approximating an ideal impulse, and thereby learning the impulse response of the channel. In practice, a pseudo-noise (PN) signal is preferred over a single pulse for the training sequence because the PN signal has larger average power and hence larger SNR for the same
peak transmitted power.
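As an illustration of a noise-like training signal, here is a minimal PN-sequence generator in Python. The 7-stage register and the polynomial x⁷ + x⁶ + 1 are my choices for this sketch; any primitive feedback polynomial gives a maximal-length, spectrally rich sequence:

```python
def lfsr_pn(length):
    """PN sequence from a 7-stage Fibonacci LFSR (illustrative sketch).

    The feedback polynomial x^7 + x^6 + 1 is primitive, so the output
    repeats with period 2**7 - 1 = 127 bits and looks noise-like.
    """
    reg = [1, 1, 1, 1, 1, 1, 1]        # any nonzero starting state
    out = []
    for _ in range(length):
        out.append(reg[-1])            # output taken from the last stage
        feedback = reg[6] ^ reg[5]     # XOR of stages 7 and 6
        reg = [feedback] + reg[:-1]    # shift the register by one
    return out

pn = lfsr_pn(254)
print(pn[:16])
```

A maximal-length sequence of period 127 is also balanced: each period contains 64 ones and 63 zeros, which is part of what makes its spectrum flat.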
Figure 18 – Feedforward filter structure used as an equalizer

The transversal filter, depicted in Figure 18, is the most popular form of an easily adjustable equalizing filter. It consists of a delay line with T-second taps (where T
is the symbol duration). In such an equalizer, the current and past values of the
received signal are linearly weighted with equalizer coefficients or tap weights { kn
} and are then summed to produce the output. Note that kn are scalar values but
change as a set for each sample. The main contribution is from a central tap, with
the other taps contributing echoes of the main signal at symbol intervals on either
side of the main signal. If it were possible for the filter to have an infinite number
of taps, then the tap weights could be chosen to force the system impulse response
to zero at all but one of the sampling times, thus making $H_e(f)$ correspond exactly to the inverse of the channel transfer function in Equation (1.2). Even though an infinite-length filter is not realizable, one can still specify practical filters that approximate the ideal case.
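The tapped-delay-line structure just described can be sketched in a few lines of Python. This is an illustrative implementation of mine, not code from the text; the tap convention (k[0] acting on the newest sample) matches the structure of Figure 18:

```python
import numpy as np

def transversal_filter(y, k):
    """Tapped-delay-line (transversal) filter sketch.

    k[m] weights the sample y[i - m]: k[0] acts on the newest sample
    and later taps on progressively older ones. Samples before the
    start of y are taken to be zero.
    """
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    for i in range(len(y)):
        for m, km in enumerate(k):
            if i - m >= 0:
                out[i] += km * y[i - m]
    return out

# A one-tap filter k = [1] passes the signal through unchanged
print(transversal_filter([1.0, 2.0, 3.0], [1.0]))
```

An equalizer is exactly this structure with the tap values chosen by one of the criteria discussed next.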
Theory behind equalization
Minimization Criteria

Referring to Figure 18, we have two zero-mean random variables, x and y. We assume that they are wide-sense stationary; this assumption allows us to use the ensemble covariance (of the training sequence) as the total channel response for the duration of the communication. It is obvious that the random variables x and y
are correlated in some way. This correlation of course is necessary otherwise no
estimation is possible. Signal y enters the equalizing transversal filter of tap-weight vector K, where K is a column vector of p taps. We are interested in estimating the signal x by equalizing the channel output y. That equalizer estimate is x̂, the equalized version of y. We can write the relationship between y and x̂ as a linear relationship:
$$\begin{bmatrix} \hat{x}(0) \\ \hat{x}(1) \\ \vdots \\ \hat{x}(p-1) \end{bmatrix} = \begin{bmatrix} k_0^T\, y \\ k_1^T\, y \\ \vdots \\ k_{p-1}^T\, y \end{bmatrix} \qquad (1.5)$$

Alternately we can write this in vector form as
$$\hat{x} = K y \qquad (1.6)$$
The input data y is a vector of p values, since each estimated x̂ is a sum of p values of y. Since each $k_i$ is a column vector by assumption, we transpose it in Equation (1.5). For example, the first value, $\hat{x}(0)$ for i = 0, will be calculated as

$$\hat{x}(0) = \begin{bmatrix} k_{p-1} & k_{p-2} & \cdots & k_0 \end{bmatrix} \begin{bmatrix} y(-(p-1)) \\ y(-(p-2)) \\ \vdots \\ y(0) \end{bmatrix} = k_{p-1}\, y(-(p-1)) + k_{p-2}\, y(-(p-2)) + \cdots + k_0\, y(0)$$

The error between the actual and the equalized signal can be written as
$$\text{Error} = \left| x - \hat{x} \right|^2 \qquad (1.7)$$

To design an optimum equalizer, we make this error as small as possible. For this we use a criterion called Minimum Mean Square Error (MMSE) [15]. This criterion is the one most often used in the design of equalizer filters. Minimizing
the MSE requires no more than second-order statistics such as the covariance, and leads to easy-to-implement designs. Sometimes the terminology Least Mean Squares Error (LMSE) is also used when, under certain conditions, the minimum cannot be guaranteed. We substitute Equation (1.6) into (1.7) and, since x is a random variable, take expectations of both sides to specify the mean error.
$$E|e(i)|^2 = E\left| x(i) - k_i^T y \right|^2 \qquad (1.8)$$

Expanding the right-hand side,

$$E|e(i)|^2 = E|x(i)|^2 + E\left[ (k_i^T y)^2 \right] - 2E\left[ x(i)\, k_i^T y \right] \qquad (1.9)$$

The expected value $E|x(i)|^2$ is the variance of the input signal. The second term
can be written as $k_i^T R_y k_i$ and the third term as $k_i^T R_{yx,i}$. Now let's call the error a cost function, J, and make the substitutions:

$$J(k_i) = \sigma_x^2(i) - R_{xy,i}\, k_i - k_i^T R_{yx,i} + k_i^T R_y k_i \qquad (1.10)$$

The objective is to minimize J, where J is a function of the unknown tap-weight vector k. Differentiating J with respect to $k_i$, the gradient of the cost function is

$$\nabla J(k_i) = -R_{xy,i} + k_i^T R_y \qquad (1.11)$$

Set equation (1.11) equal to zero to minimize the cost, and rewrite in matrix form to get a very simple equation for determining the optimum tap-weights of the equalizing filter:

$$K_O R_y = R_{xy} \qquad (1.12)$$

$K_O$ represents the optimum set of coefficients, those that minimize the error. This is called the MMSE solution. Solving for the optimum tap weights requires knowledge of
$R_y$ and $R_{xy}$. To find estimates, we use training sequences and develop these covariances over the length of the sequence. The taps are computed at each sample
of the training sequence, and once the error has been reduced to its minimum, the taps can either be set for some time or changed periodically depending on the channel variability. The resulting MSE is given by

$$\text{MMSE} = J(K_0) = \sigma_x^2 - R_{xy}\, K_0 \qquad (1.13)$$

The output signal is given by

$$\hat{x} = K_0\, y \qquad (1.14)$$

Example of an MMSE 3-tap equalizer
Figure 19 – A transversal feedforward adaptive filter
In this example, borrowed from [15] (Sayed, Adaptive Filtering, which, by the way, is the best book I have seen on this subject), we use the channel from Figure 19 with ISI and noise. For the equalizing filter, we will use a 3-tap feedforward structure. Both the signal and the noise are zero-mean with variance equal to 1. Given inputs to the equalizer of y(i), y(i−1), y(i−2), we compute the tap-weights that will equalize the incoming signal with minimum mean square error. The equalizer tap vector at time i = 0 is given by

$$k_0^T = \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix}$$

From Equation (1.12), we write in matrix form

$$K^T R_y = R_{xy} \qquad (1.15)$$

where $K^T$ is a 1×3 matrix consisting of just one row of three tap-weights. Next,
compute the matrix $R_y$. Note that the individual values in this autocorrelation matrix can be written as $r_y(k) = E[\,y(i)\, y^T(i-k)\,]$.

$$R_y = E\begin{bmatrix} y(0) \\ y(1) \\ y(2) \end{bmatrix} \begin{bmatrix} y(0) & y(1) & y(2) \end{bmatrix} = E\begin{bmatrix} y(0)^2 & y(0)y(1) & y(0)y(2) \\ y(1)y(0) & y(1)^2 & y(1)y(2) \\ y(2)y(0) & y(2)y(1) & y(2)^2 \end{bmatrix}$$

which is equal to

$$R_y = \begin{bmatrix} r_y(0) & r_y(1) & r_y(2) \\ r_y^T(1) & r_y(0) & r_y(1) \\ r_y^T(2) & r_y^T(1) & r_y(0) \end{bmatrix} \qquad (1.16)$$

We can compute each of these correlations by noting that the output signal is given by this equation:
$$y(i) = s(i) + 0.5\,s(i-1) + n(i) \qquad (1.17)$$

(Here s denotes the transmitted symbol, called x earlier.) Multiplying (1.17) by $y^T(i)$ from the right and taking expectations, we get

$$E\big[y(i)\,y^T(i)\big] = E\big[s(i)\,y^T(i)\big] + E\big[0.5\,s(i-1)\,y^T(i)\big] + E\big[n(i)\,y^T(i)\big] \qquad (1.18)$$

In each term, symbols at different times and the noise are uncorrelated, so their cross-correlations are zero:

$$E\big[s(i)\,y^T(i)\big] = E\Big[s(i)\,\big[s(i) + 0.5\,s(i-1) + n(i)\big]^T\Big] = 1 + 0 + 0 = 1 \qquad (1.19)$$

$$E\big[s(i-1)\,y^T(i)\big] = E\Big[s(i-1)\,\big[s(i) + 0.5\,s(i-1) + n(i)\big]^T\Big] = 0 + 0.5 \times 1 + 0 = 0.5 \qquad (1.20, 1.21)$$

Putting these together gives the first correlation value:

$$r_y(0) = E\big[y(i)\,y^T(i)\big] = 1 + 0.5 \times 0.5 + 1 = 2.25 \qquad (1.22)$$
Similarly compute $r_y(1)$ by multiplying Equation (1.17) with $y^T(i-1)$ and taking expectations:

$$r_y(1) = E\big[y(i)\,y^T(i-1)\big] = E\big[s(i)\,y^T(i-1)\big] + E\big[0.5\,s(i-1)\,y^T(i-1)\big] + E\big[n(i)\,y^T(i-1)\big] = 0 + 0.5 \times 1 + 0 = 0.5$$

Compute $r_y(2)$ similarly, this time multiplying by $y^T(i-2)$:

$$r_y(2) = E\big[y(i)\,y^T(i-2)\big] = E\big[s(i)\,y^T(i-2)\big] + E\big[0.5\,s(i-1)\,y^T(i-2)\big] + E\big[n(i)\,y^T(i-2)\big] = 0 + 0 + 0 = 0$$
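These hand-computed correlations can be sanity-checked by simulation. The following Python sketch (the sample size and seed are arbitrary choices of mine) estimates the lag correlations from a long realization of the channel in Equation (1.17); they should land near 2.25, 0.5 and 0:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

s = rng.choice([-1.0, 1.0], size=N)     # unit-variance symbol stream
n = rng.normal(0.0, 1.0, size=N)        # unit-variance noise
s_prev = np.concatenate(([0.0], s[:-1]))
y = s + 0.5 * s_prev + n                # channel of Equation (1.17)

ry0 = np.mean(y * y)                    # should approach 2.25
ry1 = np.mean(y[1:] * y[:-1])           # should approach 0.5
ry2 = np.mean(y[2:] * y[:-2])           # should approach 0
print(round(ry0, 2), round(ry1, 2), round(ry2, 2))
```

The agreement with the analytical values is what justifies replacing ensemble averages with training-sequence averages in practice.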
Form the matrix from the individually computed correlation values:

$$R_y = \begin{bmatrix} 2.25 & 0.5 & 0 \\ 0.5 & 2.25 & 0.5 \\ 0 & 0.5 & 2.25 \end{bmatrix} \qquad (1.23)$$

Now we compute $R_{xy}$, which is equal to
$$R_{xy} = E\big[x\,y^T\big] = \begin{bmatrix} E[s(i)\,y^T(i)] & E[s(i)\,y^T(i-1)] & E[s(i)\,y^T(i-2)] \end{bmatrix} \qquad (1.24)$$

Now multiply Equation (1.17) by $x^T(i)$ and take expectations:

$$E\big[y(i)\,x^T(i)\big] = E\big[x(i)\,x^T(i)\big] + E\big[0.5\,x(i-1)\,x^T(i)\big] + E\big[n(i)\,x^T(i)\big] = 1 + 0 + 0$$

The second and the third terms on the right are zero, because individual symbols are uncorrelated with each other and with the noise. The remaining two terms in Equation (1.24) can be obtained by multiplying Equation (1.17) by $x^T(i+1)$ and $x^T(i+2)$:
$$E\big[y(i)\,x^T(i+1)\big] = 0 \quad \text{and} \quad E\big[y(i)\,x^T(i+2)\big] = 0$$

We assume the channel is WSS, so the cross-correlations are not sensitive to a shift in the time index, and we can rewrite these two equivalences as

$$E\big[y(i)\,x^T(i+1)\big] = E\big[y(i-1)\,x^T(i)\big] = \big(E\big[x(i)\,y^T(i-1)\big]\big)^T$$
$$E\big[y(i)\,x^T(i+2)\big] = E\big[y(i-2)\,x^T(i)\big] = \big(E\big[x(i)\,y^T(i-2)\big]\big)^T$$

Now we have all the data to put together $R_{xy}$:

$$R_{xy} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \qquad (1.25)$$

Compute the optimum tap-weights from equation (1.15):
$$k_0^T = R_{xy} R_y^{-1} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 2.25 & 0.5 & 0 \\ 0.5 & 2.25 & 0.5 \\ 0 & 0.5 & 2.25 \end{bmatrix}^{-1} = \begin{bmatrix} 0.4688 & -0.1096 & 0.0244 \end{bmatrix}$$

There we have it, the values of each of the three tap-weights. The error in this
equalization is equal to
$$\text{MMSE} = \sigma_x^2 - R_{xy}\, k_0 = 1 - 0.4688 = 0.5312$$

We can write the cost function as a function of the 3 tap-weights, which would result in a three-dimensional error space.
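The same tap-weights and MMSE can be reproduced with a few lines of numpy, solving Equation (1.15) directly (a sketch of mine, with $\sigma_x^2 = 1$ as in the example):

```python
import numpy as np

Ry = np.array([[2.25, 0.50, 0.00],
               [0.50, 2.25, 0.50],
               [0.00, 0.50, 2.25]])
Rxy = np.array([1.0, 0.0, 0.0])

k0 = Rxy @ np.linalg.inv(Ry)      # k0^T = Rxy Ry^-1, Equation (1.15)
mmse = 1.0 - Rxy @ k0             # sigma_x^2 - Rxy k0

print(np.round(k0, 4))            # [ 0.4688 -0.1096  0.0244]
print(round(float(mmse), 4))      # 0.5312
```

Using `np.linalg.solve(Ry, Rxy)` instead of the explicit inverse gives the same taps and is the numerically preferred route for larger filters.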
Zero-Forcing Solution

Consider that a single pulse was transmitted over a system designed to have a raised-cosine transfer function $H_{RC}(f) = H_t(f)\,H_r(f)$. Also consider that the
channel induces ISI, so that the received demodulated pulse exhibits distortion, as
shown in Figure 20, such that the pulse sidelobes do not go through zero at sample
times adjacent to the mainlobe of the pulse. The distortion can be viewed as
positive or negative echoes occurring both before and after the mainlobe. To
achieve the desired raised-cosine transfer function, the equalizing filter should
have a frequency response He(f ), as shown in Equation (1.2), such that the actual
channel response when multiplied by He(f ) yields HRC(f ). In other words, we
would like the equalizing filter to generate a set of canceling echoes using the
process shown in Figure 20.

Figure 20 – Received pulse exhibiting distortion

Since we are interested in sampling the equalized waveform at only a few
predetermined sampling times, the design of an equalizing filter can be a
straightforward task.
We will use a zero-forcing equalizer to do that. The idea is to force the equalized response to zero at the sample times adjacent to the main pulse. This method, proposed by Lucky [5, 14], does not use MMSE as its criterion but instead minimizes the peak distortion. The solution also ignores noise and can suffer from noise amplification. For such an equalizer of finite length, the peak distortion is guaranteed to be minimized only if the eye pattern is initially open. However, for high-speed transmission and channels introducing much ISI, the eye is often closed before equalization [8]. Since the zero-forcing equalizer neglects the effect of noise, it is not always the best system solution. Most high-speed telephone-line modems use an MMSE criterion because it is superior to a zero-forcing criterion; it is more robust in the presence of noise and large ISI [8].

Example of a Zero-Forcing Three-Tap Equalizer

Consider that the tap weights of an equalizing transversal filter are to be
determined by transmitting a single impulse as a training signal. Let the equalizer circuit be made up of just three taps. Given a received distorted set of pulse samples {x(k)}, with voltage values 0.0, 0.2, 0.9, −0.3, 0.1, as shown in Figure 20, use a zero-forcing solution to find the weights {k₋₁, k₀, k₁} that reduce the ISI so that the equalized pulse samples x̂(k) have the values {x̂(−1) = 0, x̂(0) = 1, x̂(1) = 0}. Using these weights, calculate the ISI values of the equalized pulse at the sample times k = ±2, ±3. What is the largest magnitude sample contributing to ISI, and what is the sum of all the ISI magnitudes?
Solution

For the channel impulse response specified, the zero-forcing conditions yield

$$\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y(0) & y(-1) & y(-2) \\ y(1) & y(0) & y(-1) \\ y(2) & y(1) & y(0) \end{bmatrix} \begin{bmatrix} k_{-1} \\ k_0 \\ k_1 \end{bmatrix} \qquad (1.26)$$

$$= \begin{bmatrix} 0.9 & 0.2 & 0 \\ -0.3 & 0.9 & 0.2 \\ 0.1 & -0.3 & 0.9 \end{bmatrix} \begin{bmatrix} k_{-1} \\ k_0 \\ k_1 \end{bmatrix}$$

Solving these three simultaneous equations results in the following weights:

$$\begin{bmatrix} k_{-1} \\ k_0 \\ k_1 \end{bmatrix} = \begin{bmatrix} -0.2140 \\ 0.9631 \\ 0.3448 \end{bmatrix}$$

The values of the equalized pulse samples {x̂(k)} corresponding to sample times
k = −3, −2, −1, 0, 1, 2, 3 are computed by using the preceding weights in Equation (1.26), yielding

0.0000, −0.0428, 0.0000, 1.0000, 0.0000, −0.0071, 0.0345

The sample of greatest magnitude contributing to ISI equals 0.0428, and the sum of all the ISI magnitudes equals 0.0844. It should be clear that this three-tap equalizer
has forced the sample points on either side of the equalized pulse to be zero. If the
equalizer is made longer than three taps, more of the equalized sample points can
be forced to a zero value.
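The whole zero-forcing example can be verified numerically. This Python sketch solves the 3×3 system of Equation (1.26) and then convolves the received samples with the resulting taps to recover the ISI figures quoted above:

```python
import numpy as np

# Received pulse samples x(k) for k = -2..2 from the example
received = np.array([0.0, 0.2, 0.9, -0.3, 0.1])

# Zero-forcing conditions of Equation (1.26):
# equalized samples at k = -1, 0, 1 forced to 0, 1, 0
A = np.array([[ 0.9,  0.2, 0.0],
              [-0.3,  0.9, 0.2],
              [ 0.1, -0.3, 0.9]])
k = np.linalg.solve(A, np.array([0.0, 1.0, 0.0]))   # [k_-1, k_0, k_1]

equalized = np.convolve(received, k)   # 7 samples, main sample at index 3
isi = np.abs(np.delete(equalized, 3))  # everything except the main sample

print(np.round(k, 4))          # about [-0.214, 0.9631, 0.3448]
print(round(float(isi.max()), 4))   # 0.0428, the largest ISI sample
print(round(float(isi.sum()), 4))   # 0.0844, total ISI
```

Note that the three forced samples come out exactly zero and one, while the residual ISI shows up only at the outer sample times, just as the text states.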
Whereas the MMSE equalizer removes most of the ISI while limiting the amplification of noise, the ZFE enhances the noise.
MSE Equalization Types
Symbol-Spaced Equalizers

Equalizer filters are classified by the rate at which the input signal is sampled. A
transversal filter with taps spaced T seconds apart, where T is the symbol time, is
called a symbolspaced equalizer. The process of sampling the equalizer output at a
rate 1/T causes aliasing if the signal is not strictly bandlimited to 1/T hertz—that is,
the signal’s spectral components spaced 1/T hertz apart are folded over and superimposed. The aliased version of the signal may exhibit spectral nulls [8].
Figure 21 – Symbol-Spaced Equalizer
Fractionally Spaced Equalizers (FSE)

A fractionally spaced equalizer is very similar to a symbol-spaced linear equalizer. The major difference is that a fractionally spaced equalizer receives K input samples before it produces one output sample. The signal is oversampled at a rate K/T, where T is the symbol period. Often K is 2, and the equalizer is then referred to as a T/2 FSE. The output sample rate for an FSE is 1/T, while the input sample rate is K/T. The weight updating also occurs at this higher rate.
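The input/output rate relationship of an FSE can be sketched as follows. The function below is an illustration of mine, not a standard API; it shows only the indexing, with K input samples consumed per output symbol:

```python
import numpy as np

def fse_output(y_oversampled, k, K=2):
    """Fractionally spaced equalizer sketch: taps spaced T/K apart,
    one output per symbol.

    y_oversampled : received signal sampled at K samples per symbol
    k             : tap weights applied to consecutive input samples
    """
    y = np.asarray(y_oversampled, dtype=float)
    out = []
    for n in range(0, len(y) - len(k) + 1, K):  # advance K inputs per output
        out.append(np.dot(k, y[n:n + len(k)]))
    return np.array(out)

# With K = 2 and taps [1, 0], the equalizer just picks every other sample
y2 = np.array([0.9, 0.1, -0.8, 0.2, 1.1, -0.1])
print(fse_output(y2, k=np.array([1.0, 0.0]), K=2))
```

A real FSE would of course use more taps and adapt them, but the decimation-by-K structure is the defining feature.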
Figure 22 – A K-fractionally spaced equalizer

A filter update rate that is greater than the symbol rate helps to mitigate the
difficulty of finding spectral nulls when a smaller sampling rate is used. With a
fractionally spaced equalizer, the filter taps are spaced at

$$T' \le \frac{T}{1+r}$$

seconds apart, where r denotes the excess bandwidth. In other words, the received signal bandwidth is

$$W \le \frac{1+r}{T}$$

The goal is to choose T′ so that the equalizer transfer function $H_e(f)$ becomes
sufficiently broad to accommodate the whole signal spectrum. Note that the signal
at the output of the equalizer is still sampled at a rate 1/T; but since the tap weights are spaced T′ seconds apart (the equalizer input signal is sampled at a rate 1/T′), the equalization action operates on the received signal before its frequency components are aliased. Equalizer simulations over voice-grade telephone lines, with T′ = T/2, confirm that such fractionally spaced equalizers outperform symbol-spaced equalizers [14].

Decision Feedback Equalizer
Figure 23 – Dual-section DFE equalizer

The basic limitation of a linear equalizer, such as the transversal filter, is that it
performs poorly on channels having spectral nulls [11]. Such channels are often
encountered in mobile radio applications. A decision feedback equalizer (DFE) is a
nonlinear equalizer that uses previously detected decisions to eliminate the ISI on
pulses that are currently being demodulated. The ISI being removed was caused by
the tails of previous pulses; in effect, the distortion on a current pulse that was
caused by previous pulses is subtracted. Two types of DFE are possible: 1. the feedback filter used with a zero-forcing equalizer (ZF-DFE), so that the symbols coming into it are ISI-free, and 2. the DFE used with an MMSE equalizer (MMSE-DFE), in which case the symbols coming in have minimum ISI. As in a ZFE, the noise enhancement problem exists in a ZF-DFE. An MMSE-DFE can minimize noise enhancement compared to the ZF-DFE. Performance of the MMSE-DFE is usually better than the other types, but we can still get error propagation. Figure 23 shows a simplified block diagram of a DFE where the forward filter and
the feedback filter can each be a linear filter, such as a transversal filter. The figure
also illustrates how the filter tap weights are updated adaptively. The nonlinearity
of the DFE stems from the nonlinear decision device that drives the feedback filter.

Figure 24 – Decision Feedback Equalizer
The principle behind a DFE is that if the symbols previously detected are known
(past decisions are assumed to be correct), then the ISI contributed by these
symbols can be canceled out exactly at the output of the feedforward filter by
subtracting past symbol values with appropriate weighting. The forward and
feedback tap weights can be adjusted simultaneously to fulfill a criterion such as
minimizing the MSE. When only a forward filter is used, the output of the filter contains channel noise
contributed from every sample in the filter. The advantage of a DFE
implementation is that the feedback filter, which is additionally working to remove
ISI, operates on noiseless quantized levels, and thus its output is free of channel
noise.
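A toy DFE illustrates this point. The sketch below (fixed taps, a BPSK slicer, and no adaptation, all simplifications of mine) cancels the ISI of the channel y(i) = x(i) + 0.5x(i−1) exactly, because the feedback path operates on clean hard decisions rather than on noisy samples:

```python
def dfe(y, ff, fb):
    """Minimal decision-feedback equalizer sketch (fixed taps, BPSK slicer).

    ff : feedforward taps applied to current and past received samples
    fb : feedback taps applied to past hard decisions
    """
    decisions = []
    for i in range(len(y)):
        # feedforward part: weighted sum of received samples
        z = sum(ff[m] * y[i - m] for m in range(len(ff)) if i - m >= 0)
        # feedback part: subtract ISI rebuilt from past (noise-free) decisions
        z -= sum(fb[m] * decisions[i - 1 - m]
                 for m in range(len(fb)) if i - 1 - m >= 0)
        decisions.append(1.0 if z >= 0 else -1.0)   # hard decision
    return decisions

# Channel y(i) = x(i) + 0.5 x(i-1): one feedback tap of 0.5 cancels the ISI
x = [1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0]
y = [x[i] + (0.5 * x[i - 1] if i > 0 else 0.0) for i in range(len(x))]
print(dfe(y, ff=[1.0], fb=[0.5]) == x)
```

If a decision is wrong, the rebuilt ISI is wrong too, which is exactly the error-propagation risk mentioned above.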
Preset equalization

On channels whose frequency responses are known and at most mildly time varying,
the channel characteristics can be measured and the filter’s tap weights adjusted
accordingly. If the weights remain fixed during transmission of data, the
equalization is called preset equalization; one very simple method of preset
equalization consists of setting the tap weights according to some average
knowledge of the channel. This is used for data transmission over voice-grade telephone lines at less than 2400 bit/s. The significant aspect of any preset method is that it is done once at the start of transmission, or seldom (when transmission is broken and needs to be re-established).
Adaptive equalization

When the equalization is capable of tracking a slowly time-varying channel response, it is known as adaptive equalization. It can be implemented to perform tap-weight adjustments periodically or continually. Periodic adjustments are
accomplished by periodically transmitting a preamble or short training sequence of
digital data that is known in advance by the receiver. The receiver also uses the
preamble to detect start of transmission, to set the automatic gain control (AGC)
level, and to align internal clocks and local oscillator with the received signal.
Continual adjustments are made by replacing the known training sequence with a
sequence of data symbols estimated from the equalizer output and treated as
known data. When performed continually and automatically in this way, the
adaptive procedure (the most popular) is referred to as decision directed [11]. The
name “decision directed” is not to be confused with decision feedback (DFE).
Decision directed only addresses how filter tap weights are adjusted—that is, with
the help of a signal from the detector. DFE, however, refers to the fact that there
exists an additional filter that operates on the detector output and recursively feeds
back a signal to the detector input. Thus, with DFE there are two filters, a feedforward filter and a feedback filter that process the data and help mitigate the ISI.
A disadvantage of preset equalization is that it requires an initial training period
that must be invoked at the start of any new transmission. Also, a time varying
channel can degrade system performance due to ISI, since the tap weights are
fixed. Adaptive equalization, particularly decision-directed adaptive equalization, successfully cancels ISI when the initial probability of error is below one percent (rule of thumb). If the probability of error exceeds one percent, the decision-directed equalizer might not converge. A common solution to this problem is to
initialize the equalizer with an alternate process, such as a preamble to provide
good channelerror performance, and then switch to the decisiondirected mode.
To avoid the overhead represented by a preamble, many systems designed to
operate in a continuous broadcast mode use blind equalization algorithms to form initial channel estimates. These algorithms adjust filter coefficients in response to
sample statistics rather than in response to sample decisions [11].
Algorithms for Adaptive Equalization
Steepest Descent Algorithm

The Steepest Descent Algorithm is a class of algorithms that solve the problem of
finding the coefficients in an iterative manner. Eq. (1.27) gives the form of weight
updates we used before. In the Steepest Descent Method, we replace the covariance matrices by an approximation.

$$k_i = k_{i-1} + \mu \left[ R_{xy} - R_y k_{i-1} \right] \qquad (1.27)$$

We replace the term $[R_{xy} - R_y k_{i-1}]$ in Eq. (1.27) by an approximation $p$:

$$k_i = k_{i-1} + \mu\, p \qquad (1.28)$$

The problem of finding optimum tap-weights can now be solved iteratively by
starting with a guess for the first weight vector $k_{-1}$ and an initial step size $\mu$, then changing the weights based on the gradient at that point. The Steepest Descent Method is based on the principle that the error calculated at any particular spot on the error surface will decrease only if we move opposite to the gradient at that point, scaled by a step size. We will try to make that clear by example.
The cost function J is a scalar function of the filter coefficients. The partial
derivatives or the gradients of a scalar function in matrix form are defined as a
column vector in Equation (1.29)
$$\nabla J(k_1, k_2, \ldots, k_N) = \begin{bmatrix} \dfrac{\partial J}{\partial k_1} & \dfrac{\partial J}{\partial k_2} & \cdots & \dfrac{\partial J}{\partial k_N} \end{bmatrix}^T \qquad (1.29)$$

To show how this algorithm works [20], we pick a 2-tap cost function. We picked
two taps because we can plot the function and immediately have an idea where the minimum lies. The cost function picked is

$$J(k_1, k_2) = 1.0 - k_1 + 0.75\,k_2 + k_1 k_2 + 2(k_1^2 + k_2^2) \qquad (1.30)$$

Figure 25 – The Error Surface

We compute the gradient of the error surface by matrix differentiation rules from
Equation (1.29)
$$\nabla J = \begin{bmatrix} -1.0 + k_2 + 4k_1 & 0.75 + k_1 + 4k_2 \end{bmatrix} \qquad (1.31)$$

Let's assume starting values of $k_1 = 1$ and $k_2 = -1$, and calculate the gradient for these values of the coefficients using expression (1.31):

$$\nabla J = \begin{bmatrix} 2 \\ -2.25 \end{bmatrix}$$

These gradients are the slope of the surface defined by the two chosen tap-weights, 1 and −1. The gradient is uniquely defined once we know $R_x$ and $R_{xy}$, which are used to form the error function, Equation (1.30). We will normalize these values to simplify the math, and compute a normalized version of the gradient and the associated cost. We compute the cost at each iteration to see if it has decreased enough and, depending on a preset convergence threshold, decide whether we can stop the computations. The normalized increment is given by
$$\begin{bmatrix} \Delta k_1 \\ \Delta k_2 \end{bmatrix} = \frac{1}{\sqrt{\left(\dfrac{\partial J}{\partial k_1}\right)^2 + \left(\dfrac{\partial J}{\partial k_2}\right)^2}} \begin{bmatrix} \dfrac{\partial J}{\partial k_1} \\ \dfrac{\partial J}{\partial k_2} \end{bmatrix} \qquad (1.32)$$

$$\frac{1}{\sqrt{2^2 + 2.25^2}} \begin{bmatrix} 2 \\ -2.25 \end{bmatrix} = \begin{bmatrix} 0.665 \\ -0.747 \end{bmatrix}$$

This vector becomes the delta change in the tap-weights for the next trial. Here we
apply a step-size, or scaling factor, to provide better granularity. The estimate of the tap-weights for the next step is the starting tap-weight vector minus the scaled gradient:

$$\begin{bmatrix} k_1^1 \\ k_2^1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} - 0.1 \begin{bmatrix} 0.664 \\ -0.747 \end{bmatrix} = \begin{bmatrix} 0.933 \\ -0.925 \end{bmatrix}$$

With these new values, go to Equation (1.31) and recompute the gradients and the
next increment. The next step gives

$$\begin{bmatrix} k_1^2 \\ k_2^2 \end{bmatrix} = \begin{bmatrix} 0.933 \\ -0.925 \end{bmatrix} - 0.1 \begin{bmatrix} 0.667 \\ -0.744 \end{bmatrix} = \begin{bmatrix} 0.866 \\ -0.850 \end{bmatrix}$$

Notice that the gradient vector has not changed much. Continue until the error function stops decreasing (or begins to increase). The optimum values for this case, obtained by setting the gradient (1.31) to zero, are approximately $k_1 = 0.317$ and $k_2 = -0.267$.

Figure 26 – Convergence time vs. step size for the Steepest Descent Method

The algorithm is sensitive to step size, as can be seen in Figure 26 for the same
starting point but with different step sizes. The oscillations in cost values occur because the step size is too large. An alternate method, where the step size is large at first and then reduced as the solution starts to converge, works better. The error
surface of a transversal filter working on a WSS stochastic process is a bowl-shaped quadratic with order equal to the number of taps [17]. Of course, the bowl becomes hard to graph with anything more than two taps. The steepest descent
method step size must meet the following criterion [4]:

$$0 < \mu < \frac{1}{\lambda_{max}} \qquad (1.33)$$

where $\lambda_{max}$ is the largest eigenvalue of the matrix $R_y$. The method is deterministic
and theoretically can take an infinite number of steps to converge but in most cases
it reaches the optimum solution fairly quickly. Formally we can write the SDM as
a recursion
$$k_i = k_{i-1} + \mu \left( R_{xy} - R_y k_{i-1} \right) \qquad (1.34)$$

We can derive this by noting that we have selected each new vector by this general relationship:

$$k_i = k_{i-1} + \mu\, p \quad \text{for } i > 0 \qquad (1.35)$$

Now calculate the cost function for this new value of $k_i$ by substituting (1.35) into the value of the cost function at $(i-1)$:

$$J(k)_i = \sigma_x^2(i) - R_{xy}(k_{i-1} + \mu p) - (k_{i-1} + \mu p)^T R_{xy} + (k_{i-1} + \mu p)^T R_y (k_{i-1} + \mu p)$$
$$= J(k)_{i-1} + \mu \left( k_{i-1}^T R_y - R_{xy} \right) p + \mu\, p^T \left( R_y k_{i-1} - R_{xy} \right) + \mu^2\, p^T R_y\, p \qquad (1.36)$$

Note that the gradient at any step is equal to

$$\nabla J(k) = k^T R_y - R_{xy} \qquad (1.37)$$

Substituting (1.37), the gradient at step $i-1$, into (1.36), we get

$$J(k)_i = J(k)_{i-1} + 2\mu \operatorname{Re}\left[ \nabla J(k_{i-1})\, p \right] + \mu^2\, p^T R_y\, p \qquad (1.38)$$

The last term is always positive. If p is chosen opposite to the gradient, the middle term is negative, and for a sufficiently small step size it dominates, so the error function at the next step is less than at the previous step; this is the condition for the algorithm to converge. We can now set the value of the next step opposite to the gradient:

$$p = -\left[ \nabla J(k_{i-1}) \right]^T = R_{xy} - R_y k_{i-1} \qquad (1.39)$$

which is the negative of the gradient at the previous step, exactly what we did in the example in this section.
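The two-tap example above can be run to convergence in a few lines of Python. Note one deliberate simplification of mine: this sketch descends along the raw gradient of Equation (1.31) rather than the normalized gradient used in the worked example, which lets a fixed step size converge cleanly:

```python
import numpy as np

def grad(k):
    # Gradient of J(k1,k2) = 1 - k1 + 0.75 k2 + k1 k2 + 2(k1^2 + k2^2),
    # Equation (1.31)
    k1, k2 = k
    return np.array([-1.0 + k2 + 4.0 * k1, 0.75 + k1 + 4.0 * k2])

k = np.array([1.0, -1.0])      # same starting point as the text
mu = 0.1                       # fixed step size
for _ in range(200):           # move opposite the gradient each step
    k = k - mu * grad(k)

print(np.round(k, 4))          # settles near the minimum [0.3167, -0.2667]
```

With this quadratic surface the Hessian eigenvalues are 3 and 5, so any step size below 2/5 converges; a step of 0.5 or more would show the oscillations discussed around Figure 26.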
Least Mean Square (LMS)

Steepest Gradient Descent, although simple, requires knowledge of the covariance matrix. Another class of algorithms, called Stochastic Gradient algorithms, further simplifies the estimation process by making estimates of the covariance and cross-covariances. The advantage of using approximations is that we no longer need to know the covariance and cross-covariances of the actual signals, which are not available anyway once we are past the training phase. These algorithms are often used in the tracking mode if the equalizer is to continue to adapt to the channel after the training sequence is exhausted. These are true adaptive algorithms in that they learn and adjust to the channel. The most popular of all equalization algorithms, developed by Widrow [20, 21], is a variation of the stochastic-gradient algorithm called the least-mean-square (LMS) algorithm.
In the LMS algorithm, we start with the steepest descent method, Equation (1.34). We need two correlation matrices, $R_y$ and $R_{xy}$, but instead of the actual matrices we use their instantaneous values as estimates. We rewrite the steepest descent equation by making the following substitutions:

$$R_{xy} \cong y_i\, x^T(i) \quad \text{and} \quad R_y \cong y_i\, y_i^T \qquad (1.40)$$

$$k_i = k_{i-1} + \mu\, y_i \left( x^T(i) - y_i^T k_{i-1} \right) \qquad (1.41)$$

This simple relationship is a consequence of the orthogonality principle, which states
that the error formed by an optimal solution is orthogonal to the data. Since this is
a recursive algorithm, care must be exercised to assure algorithm stability. Stability
is assured if the parameter μ is smaller than the reciprocal of the energy of the data
in the filter. When stable, this algorithm converges in the mean to the optimal
solution but exhibits a variance proportional to the parameter. Thus, while we want
the convergence parameter μ to be large for fast convergence (but not so large as to
be unstable), we also want it to be small enough for low variance. The parameter is
usually set to a fixed small value [12] to obtain a low-variance steady-state
tap-weight solution. Schemes exist that permit μ to change from large values during
initial acquisition to small values for stable steady-state solutions [13].
The rule of thumb for selecting the step size in a slowly-varying channel, according to
[15], is

Δ = 1 / ( 5 N · SNRrcvd )    (1.42)

where N is the number of taps and SNRrcvd is the signal-to-noise ratio of the received
signal. For a signal with an SNR of 2 and 5 taps, the step size should be
Δ = 1/(5 · 5 · 2) = 0.02. The example shown in Figure 27 uses a step size of 0.02.
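The LMS update (1.41) together with the step-size rule (1.42) can be sketched as follows. The sine-wave signal, the 2-tap ISI channel, and the noise level are assumptions invented for the illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired signal x and received signal y: a sine wave passed through a
# hypothetical 2-tap ISI channel with additive noise.
n = np.arange(5000)
x = np.sin(2 * np.pi * n / 50)
y = x + 0.5 * np.concatenate(([0.0], x[:-1])) + 0.1 * rng.standard_normal(n.size)

N = 5                                  # number of equalizer taps
snr = 2.0                              # assumed received SNR (linear)
mu = 1.0 / (5 * N * snr)               # step-size rule of thumb (1.42) -> 0.02

k = np.zeros(N)                        # tap weights, start at zero
errs = []
for i in range(N - 1, n.size):
    yi = y[i - N + 1:i + 1][::-1]      # the N most recent samples, newest first
    e = x[i] - yi @ k                  # error: desired minus equalizer output
    k = k + mu * yi * e                # LMS tap update, Equation (1.41)
    errs.append(e * e)

print(mu)
```

The squared error falls from its initial value toward a noise-limited floor as the taps adapt, which is the behavior plotted in the figures that follow.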
Example of the LMS algorithm

Figure 27 shows an example of equalization performed by the LMS equalizer. A
sine wave experiences ISI and noise as shown. The LMS converges quickly and is
successful in removing a great deal of the noise. Figure 28 shows the values of the
tapweights oscillating which is normal part of this process as they are trying to
track the signal, and one would expect them to change along with the signal.

Figure 27 – LMS equalizer input and the equalized signal: noiseless output, noisy output, filtered output, and error.

Figure 28 – The convergence of the filter tap-weights.

Normalized LMS

One problem with LMS is that once the signal tracking has approached steady
state, we ought to be able to reduce the size of the step since ostensibly only minor
changes are needed. That cannot be done with LMS. The normalized LMS
overcomes this.
The normalized LMS (NLMS) algorithm is a modified form of the standard LMS
algorithm where the step size is made a function of the received power as well as a
function of time. It is no longer fixed as in LMS.

k(n+1) = k(n) + μ e(n) y(n) / ‖y(n)‖²    (1.43)

NLMS usually converges faster than LMS and maintains better tracking.
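A sketch of the normalized update, using the same kind of illustrative signal as before; the channel taps, noise level, and the choice μ = 0.5 are assumptions, and a small ε is added in the denominator (a common practical safeguard, not part of the equation in the text) to avoid division by zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same kind of setup as plain LMS, but the step is divided by the
# instantaneous input energy ||y(n)||^2, so the effective step size
# shrinks automatically when the input power is large.
n = np.arange(3000)
x = np.sin(2 * np.pi * n / 40)
y = x + 0.4 * np.concatenate(([0.0], x[:-1])) + 0.1 * rng.standard_normal(n.size)

N = 5
mu = 0.5                 # NLMS tolerates a larger mu because of the normalization
eps = 1e-6               # safeguard against division by zero
k = np.zeros(N)
errs = []
for i in range(N - 1, n.size):
    yi = y[i - N + 1:i + 1][::-1]
    e = x[i] - yi @ k
    k = k + (mu / (eps + yi @ yi)) * e * yi   # normalized update per (1.43)
    errs.append(e * e)

print(np.mean(errs[:300]), np.mean(errs[-300:]))
```

Because the step scales with the input power, the same code tracks signals of very different levels without retuning μ.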
6.6.5. Maximum Likelihood Sequence Estimation (MLSE)

Figure 29 – Sequence adaptation

Previous sections covered equalization performed symbol by symbol. There is another
technique that works on sequences of symbols rather than on one symbol at a time.
In 1967, Andrew Viterbi first presented his now famous algorithm for the decoding
of convolutional codes [1]–[3]. A few years later, what is now known as the
Viterbi decoding algorithm (VDA) was applied to the detection of data signals
distorted by intersymbol interference (ISI) [4]–[8]. For such applications, the
algorithm is often referred to as a Viterbi equalizer (VE). Note that many
equalizing techniques use filters to compensate for the nonideal properties of a
channel. That is, equalizing filters at the receiver attempt to modify the distorted
waveforms. However, the operation of a VE is quite different. It entails making
channel measurements to estimate hc(t) and then adjusting the receiver by
modifying its reference waveforms according to the channel environment. The goal
of such adjustments is to enable the receiver to make good data estimates from the
received message waveforms. With a VE, the distorted message waveforms are not
reshaped or directly modified (with the exception of the preconditioning step);
instead the mitigating technique is for the receiver to "adjust itself" in such a way
that it can better deal with the distorted waveforms.
The VDA has become very popular for processing ISI-distorted signals that stem
from a linear system with finite memory. Such a system is referred to as a
finite-state machine (FSM), which is the general name given to a system whose output
signals are affected by signals that occur earlier and later in time.
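The idea of comparing the received waveform against channel-adjusted reference waveforms can be illustrated with a brute-force sequence search over a short block (a full Viterbi implementation would exploit the FSM structure to avoid enumerating every sequence). The 2-tap channel estimate, BPSK alphabet, and noise level here are assumptions made for the sketch:

```python
import numpy as np
from itertools import product

# Brute-force maximum-likelihood sequence estimation over a short block:
# the receiver is assumed to have measured the channel estimate h, and it
# picks the candidate symbol sequence whose channel-distorted replica is
# closest (in Euclidean distance) to what was actually received.
rng = np.random.default_rng(2)
h = np.array([1.0, 0.6])                             # measured 2-tap ISI channel

tx = np.array([1, -1, -1, 1, 1, -1, 1, -1])          # transmitted BPSK symbols
rx = np.convolve(tx, h)[:tx.size] + 0.2 * rng.standard_normal(tx.size)

best, best_metric = None, np.inf
for cand in product([-1, 1], repeat=tx.size):        # every possible sequence
    replica = np.convolve(cand, h)[:tx.size]         # receiver's reference waveform
    metric = np.sum((rx - replica) ** 2)             # distance to the received signal
    if metric < best_metric:
        best, best_metric = np.array(cand), metric

print(best)
```

Note that the received waveform itself is never reshaped; only the receiver's references change with the channel estimate. A Viterbi equalizer reaches the same decision far more cheaply by tracking only the FSM states instead of all 2^L candidate sequences.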
Copyright Charan Langton 2009, All Rights reserved.
Your comments and corrections are welcome. Please post them on
www.complextoreal.com

REFERENCES

1. FCC website
2. ITU website.
3. ETSI website
4. Nyquist, H., “Certain Topics of Telegraph Transmission Theory,” Trans. Am.
Inst. Electr. Eng., vol. 47, Apr. 1928, pp. 617–644.
5. Glover, I. A., Grant P.M., Digital Communications, Prentice Hall, 1998
6. Wozencraft, J. M. and Jacobs, I. M., Principles of Communication Engineering,
John Wiley & Sons, Inc., New York, 1965.

7. TBD
8. Gentile, Ken, Digital Pulse Shaping Filter Basics, Analog Devices, AN922
Application Note
9. Rappaport T.S., Wireless Communications Principles and Practice, 2nd Edition,
Prentice Hall, 2001.
10. Krishnapura N., Pavan S., Mathiazhagan C., Ramamurthi B., "A Baseband
Pulse Shaping Filter for Gaussian Minimum Shift Keying," Proceedings of the
1998 IEEE International Symposium on Circuits and Systems, 1998.
11. Hanzo, L. and Stefanov, J., “The Pan-European Digital Cellular Mobile Radio
System—Known as GSM,” Mobile Radio Communications, edited by R. Steele,
Chapter 8, Pentech Press, London, 1992.
7. Lender, A., “Correlative Coding for Binary Data Transmission,” IEEE Spectrum,
Vol. 3, No. 2, pp. 104–115, February 1966.

8. Smith, David R., Digital Transmission Systems, 1st Edition, Van Nostrand
Reinhold Company, 1985.
9. Aulin, T., Rydbeck, N., Sundberg, C.-E., TBD

10. “Continuous Phase Modulation—Part II: Partial Response Signaling,” Dept. of
Telecommunication Theory, Univ. of Lund, Fack, Lund, Sweden; IEEE Transactions
on Communications, Vol. 29, Issue 3, March 1981.
10. Kabal, P., Pasupathy, S., “Partial-Response Signaling,” IEEE Transactions on
Communications, Vol. 23, Issue 9, September 1975, pp. 921–934.
11. Proakis, J. G., Digital Communications, McGraw-Hill Book Company, New
York, 1983.
12. Lyons, Richard, Understanding Digital Signal Processing, Prentice Hall.

13. Orfanidis, Sophocles J., Introduction to Signal Processing, Prentice Hall,
Englewood Cliffs, 1996.
14. Taub, Herbert, Schilling, Donald L., Principles of Communication Systems,
McGraw-Hill Publishing, Second Edition, 1986.

15. Couch, Leon, Digital Communication.

16. Henderson, K. W., and Kautz, W. H., “Transient Responses of Conventional
Filters,” IRE Transactions on Circuit Theory, December 1956.
17. Williams, Arthur B., Taylor, Fred J. Electronic Filter Design Handbook,
McGraw Hill, 3rd Edition, 1995
18. Davies, Paul, A Mathematical Yarn.

19. Sklar, B., Digital Communications: Fundamentals and Applications, 2nd ed.,
Upper Saddle River, NJ: Prentice Hall, 2001.
20. Harris, F., and Adams, B., “Digital Signal Processing to Equalize the Pulse
Response of Non Synchronous Systems Such as Encountered in Sonar and Radar,”
Proc. of the Twenty-Fourth Annual Asilomar Conference on Signals, Systems,
and Computers, Pacific Grove, California, November 5–7, 1990.
21. Benedetto, S., Biglieri, E., and Castellani, V., Digital Transmission Theory,
Prentice Hall, 1987.
22. Lucky, R.W., Automatic Equalization for Digital Communications, Bell
System Technical Journal, p. 547, April 1965.
23. Lucky, R. W., Salz, J., and Weldon, E. J., Jr., Principles of Data
Communications, McGraw Hill Book Co., New York, 1968.
24. Poularikas, Alexander D. and Ramadan, Zayed M., Adaptive Filtering
Primer with MATLAB, Taylor & Francis, 2006.
25. Sayed, Ali, Fundamentals of Adaptive Filtering, Wiley, 2003.

26. Haykin, S., Adaptive Filter Theory, Upper Saddle River, NJ: Prentice Hall,
2001.
27. Proakis, John G., Salehi, Masoud. Contemporary Communication Systems
with MATLAB, Brooks/Cole, 2000.
28. Qureshi, S. U. H., “Adaptive Equalization,” Proc. IEEE, vol. 73, no. 9,
September 1985, pp. 1340–1387.
29. Feuer, A., and Weinstein, E., “Convergence Analysis of LMS Filters with
Uncorrelated Gaussian Data,” IEEE Trans. on ASSP, Vol. 33, pp. 220–230, 1985.

30. Widrow, B., Thinking About Thinking, IEEE, April 2005.

31. Macchi, O., Adaptive Processing: Least Mean Square Approach With
Applications in Transmission, John Wiley & Sons, New York, 1995.
32. Liu, Kefu http://u.cs.biu.ac.il/~treismr/steepest.pdf, web paper on Steepest
descent Method.

33. TaHong, Tuan DoHong, LMS software module for LMS.