EEL 6502 Adaptive Signal Processing
Homework #4
Name: Park, Injun
UFID: (garbled in the preview)

Problem 1. Identifying the Unknown System Using the LMS Filter

Figure 1. Plot of the input and noisy system output

In this problem, I design the Least Mean Square (LMS) filter to identify the
unknown system. The LMS algorithm is computed as follows.

Normalized LMS Algorithm

Parameters:
    M = filter length
    mu = step-size parameter, 0 < mu < 1/tr(R)

Initialization:
    w_hat(0) = 0

Given:
    u(n) = M-by-1 input vector at time n
    d(n) = desired response at time n

To be computed:
    w_hat(n+1) = estimate of the weight vector at time n+1

Computation: for n = 0, 1, 2, ..., compute
    e(n) = d(n) - w_hat^H(n) u(n)
    w_hat(n+1) = w_hat(n) + 2 * mu * u(n) * e(n)

Data:

(1) Normalized MSE
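Before the results, the recursion above can be sketched in pure Python. The order-9 "unknown" system, signal lengths, noise level, and seed below are illustrative assumptions, not the actual data from the assignment; the sweep over M = 5, 15, 30 only mirrors the filter orders used in the figures.

```python
import random

def lms_identify(u, d, M, mu):
    """LMS recursion: e(n) = d(n) - w^T(n) u(n),
    w(n+1) = w(n) + 2*mu*u(n)*e(n).
    Returns the final weights and the squared-error sequence."""
    w = [0.0] * M
    err2 = []
    for n in range(M - 1, len(u)):
        un = u[n - M + 1:n + 1][::-1]   # [u(n), u(n-1), ..., u(n-M+1)]
        e = d[n] - sum(wi * ui for wi, ui in zip(w, un))
        w = [wi + 2 * mu * ui * e for wi, ui in zip(w, un)]
        err2.append(e * e)
    return w, err2

# Hypothetical order-9 unknown system driven by white Gaussian input,
# with additive measurement noise on the desired response.
random.seed(0)
w_true = [0.5, -0.4, 0.3, -0.2, 0.2, -0.3, 0.4, -0.25, 0.15]
u = [random.gauss(0, 1) for _ in range(8000)]
d = [sum(w_true[k] * u[n - k] for k in range(9) if n - k >= 0)
     + random.gauss(0, 0.1) for n in range(len(u))]

# Steady-state normalized MSE for several filter orders
p_d = sum(x * x for x in d) / len(d)
nmse = {}
for M in (5, 15, 30):
    _, err2 = lms_identify(u, d, M, mu=0.001)
    tail = err2[2000:]                  # discard the initial transient
    nmse[M] = (sum(tail) / len(tail)) / p_d
```

With a white input, the length-5 filter cannot model the later taps of an order-9 system, so its steady-state NMSE stays high, while M = 15 and M = 30 both reach roughly the noise floor.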
[learning-curve plot; legend: mu = 0.005, mu = 0.001, mu = 0.0005, Wiener]

Figure 3. Filter Order = 15, Noise Power = 0.1

[learning-curve plot; legend: mu = 0.001, mu = 0.0005, mu = 0.0001, Wiener]

Figure 4. Filter Order = 30, Noise Power = 0.1

1. The ensemble averaging of the learning curve was performed over 100
independent trials of the experiment.
2. As the step-size parameter μ is reduced, the rate of convergence of the LMS algorithm decreases.
3. A reduction in the step-size parameter μ also reduces the variation in the learning curve.
4. The minimum NMSE, Jmin, is equal to the minimum NMSE produced by the Wiener filter.
5. Generally, the misadjustment increases as the step size μ increases, i.e., M = (μ/2)*tr(R). In this case, however, μ is assigned a very small value compared with 1/tr(R), so the misadjustment is very small and the learning curve converges to the minimum NMSE, which is the Wiener solution.
6. When μ = 0.001, the normalized MSE values are as follows (window size of
Wiener = 500):
    M = 5: 0.1119, 0.1164 (the remaining table entries are illegible in the preview)

We can see a steep drop in the NMSE when the filter order changes from 5 to 15. Since the system order is 9, we can obtain a smaller NMSE when the filter order is around 9. When the filter order changes from 15 to 30, the NMSE increases slightly, since the extra filter weights introduce additional error. Therefore, it is important to choose the filter order properly to obtain better identification quality.

(2) WSNR
[WSNR curves; legend: mu = 0.01, mu = 0.005, mu = 0.001, Wiener]

Figure 5. Filter Order = 5, Noise Power = 0.1

[WSNR curves; legend: mu = 0.005, mu = 0.001, mu = 0.0005, Wiener]

Figure 6. Filter Order = 15, Noise Power = 0.1

[WSNR curves; legend: mu = 0.0005, ...]

Figure 7. Filter Order = 30, Noise Power = 0.1

1. As the number of iterations increases, the WSNR tends to increase.
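WSNR is not defined explicitly in the report. Assuming it denotes a weight-based SNR, 10*log10(||w||^2 / ||w - w_hat(n)||^2), between the true system weights and the current weight error, it can be tracked per iteration alongside the same recursion. Everything below (system, step size, noise level) is an illustrative assumption.

```python
import math
import random

def wsnr_db(w_true, w_hat):
    """Assumed definition: 10*log10(||w||^2 / ||w - w_hat||^2)."""
    num = sum(w * w for w in w_true)
    den = sum((a - b) ** 2 for a, b in zip(w_true, w_hat)) or 1e-12
    return 10 * math.log10(num / den)

random.seed(2)
w_true = [0.7, -0.5, 0.3]             # hypothetical 3-tap unknown system
M, mu = 3, 0.005
w = [0.0] * M
u = [random.gauss(0, 1) for _ in range(4000)]
d = [sum(w_true[k] * u[n - k] for k in range(M) if n - k >= 0)
     + random.gauss(0, 0.1) for n in range(len(u))]

curve = []                            # WSNR learning curve, one value per step
for n in range(M - 1, len(u)):
    un = u[n - M + 1:n + 1][::-1]
    e = d[n] - sum(wi * ui for wi, ui in zip(w, un))
    w = [wi + 2 * mu * ui * e for wi, ui in zip(w, un)]
    curve.append(wsnr_db(w_true, w))
```

Under this definition the curve rises as the weight estimate approaches the true system, matching the observation that WSNR grows with the number of iterations.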
When μ = 0.001, the WSNR values are as follows (window size of Wiener = 500):
    M = 5: 0.7739
    8.6081
    17.8490
    M = 30: 19.4260
    17.2541
    (the labels for the remaining entries are illegible in the preview)

When the filter order is similar to the system order, the WSNR becomes large. We can also see that the LMS outperforms the Wiener filter in terms of identification accuracy.

(3) Effect of Noise Power

1. Normalized MSE (μ = 0.005)

Figure 8. mu = 0.005, Filter Order = 15

The normalized mean square error increases as the noise power increases. It
turns out that the learning curve converges to the normalized noise power, which is the minimum mean square error.

2. WSNR (μ = 0.005)

Figure 9. mu = 0.005, Filter Order = 15

I expected the WSNR to keep the same value even when the noise power changes, since the noise is orthogonal to the inputs. However, the steady-state WSNR decreases as the noise power increases.

Problem 2. LMS Predictor of Nonstationary Speech

Figure 10. Plot of the speech signal "We were away a year ago"

Figure 11. Normalized error power, Filter Order = 6

Figure 12. Normalized error power, Filter Order = 15

1. Since the speech signal is nonstationary, the learning curve is different from
that of the stationary case. It does not decrease as the number of iterations increases, which means that the amount of data used to train the filter is not a dominant factor for performance in the nonstationary case.
2. As the step-size parameter μ is increased, the normalized error power increases.
3. The mean of the normalized error power is shown below; when the filter order increases, the error power is slightly reduced.
    0.1252, 0.1603, 0.3014, ... (row and column labels are illegible in the preview)
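The one-step LMS forward predictor of Problem 2 can be sketched in the same stdlib-Python style. The speech waveform is not available here, so a slowly drifting sinusoid stands in for the nonstationary input; filter order 6 matches Figure 11, while the step size and signal parameters are illustrative.

```python
import math
import random

def lms_predict(x, M, mu):
    """One-step LMS forward predictor: predict x(n) from the M previous
    samples. Returns the sequence of squared prediction errors."""
    w = [0.0] * M
    err2 = []
    for n in range(M, len(x)):
        past = x[n - M:n][::-1]          # [x(n-1), ..., x(n-M)]
        e = x[n] - sum(wi * xi for wi, xi in zip(w, past))
        w = [wi + 2 * mu * xi * e for wi, xi in zip(w, past)]
        err2.append(e * e)
    return err2

random.seed(3)
# Nonstationary stand-in for speech: a sinusoid whose frequency drifts,
# plus a little observation noise.
x = [math.sin(0.05 * n + 1e-5 * n * n) + 0.01 * random.gauss(0, 1)
     for n in range(5000)]

err2 = lms_predict(x, M=6, mu=0.01)
power = sum(v * v for v in x) / len(x)
nep = (sum(err2) / len(err2)) / power    # normalized error power
```

Because the predictor must keep tracking the drifting statistics, the normalized error power settles at a floor set by the tracking lag and noise rather than shrinking steadily with more data, which is the behavior reported for the speech case.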
Spring '08
PRINCIPE