result_homework4 - EEL 6502 Adaptive Signal Processing


EEL 6502 Adaptive Signal Processing
Homework #4
Name: Park, Injun
UFID: M88-4206

Problem 1. Identifying the unknown system using an LMS filter

Figure 1. Plot of the input and the noisy system output

In this problem, I design a least-mean-square (LMS) filter to identify the unknown system. The LMS algorithm is computed as follows.

Normalized LMS Algorithm
Parameters:
  M = filter length
  mu = step-size parameter
Initialization:
  w(0) = 0
Given:
  u(n) = M-by-1 input vector at time n
  d(n) = desired response at time n
To be computed:
  w(n+1) = estimate of the weight vector at time n+1
Computation: for n = 0, 1, 2, ...
  e(n) = d(n) - w^H(n) u(n)
  w(n+1) = w(n) + 2*mu*u(n)*e(n)

Data

(1) Normalized MSE

Figure 3. Filter Order = 15, Noise Power = 0.1 (legend: mu = 0.005, mu = 0.001, mu = 0.0005, Wiener)
Figure 4. Filter Order = 30, Noise Power = 0.1 (legend: mu = 0.001, mu = 0.0005, mu = 0.0001, Wiener)

1. The ensemble averaging of the learning curves was performed over 100 independent trials of the experiment.
2. As the step-size parameter mu is reduced, the rate of convergence of the LMS decreases.
3. A reduction in the step-size parameter mu also reduces the variation in the learning curve.
4. The minimum NMSE Jmin is equal to the minimum NMSE produced by the Wiener filter.
5. Generally, the misadjustment increases as the step size mu increases, i.e. M = (mu/2)*tr(R). In this case, however, mu is very small compared with 1/tr(R), so the effect of misadjustment is very small and the learning curve converges to the minimum NMSE, which is the Wiener solution.
6. When mu = 0.001, the values of the normalized MSE are as follows.
(Window size of Wiener = 500)
  M = 5: 0.1119, 0.1164

We can see a steep drop in the NMSE when the filter order changes from 5 to 15. Since the system order is 9, we can obtain a smaller NMSE when the filter order is around 9. When the filter order changes from 15 to 30, the NMSE increases slightly, since the additional filter weights introduce additional error. Therefore, it is important to choose the filter order properly to obtain a better quality of identification.

(2) WSNR

Figure 5. Filter Order = 5, Noise Power = 0.1 (legend: mu = 0.01, mu = 0.005, mu = 0.001, Wiener)
Figure 6. Filter Order = 15, Noise Power = 0.1 (legend: mu = 0.005, mu = 0.001, mu = 0.0005, Wiener)
Figure 7. Filter Order = 30, Noise Power = 0.1 (legend: mu = 0.0005)

1. As the number of iterations increases, the WSNR tends to increase.
2. When mu = 0.001, the values of the WSNR are as follows. (Window size of Wiener = 500)
  M = 5: 0.7739, -8.6081, 17.8490; M = 30: 19.4260, 17.2541
3. When the filter order is similar to the system order, the WSNR becomes large. We can also see that the LMS outperforms the Wiener filter in terms of identification accuracy.

(3) Effect of Noise Power

1. Normalized MSE (mu = 0.005)

Figure 8. mu = 0.005, Filter Order = 15

The normalized mean-square error increases as the noise power increases. The learning curve converges to the value of the normalized noise power, which is the minimum mean-square error.

2. WSNR (mu = 0.005)

Figure 9. mu = 0.005, Filter Order = 15

I expected the WSNR to keep the same value even when the noise power changes, since the noise is orthogonal to the inputs. However, in the steady state the WSNR decreases as the noise power increases.

Problem 2. LMS Predictor of Nonstationary Speech

Figure 10. Plot of the speech signal "We were away a year ago"

Normalized Error Power

Figure 11. Filter Order = 6
Figure 12. Filter Order = 15
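The prediction experiment behind Figures 11 and 12 can be sketched as follows. This is a minimal sketch, not the code used for the report: the actual speech waveform is not available here, so a synthetic AR(2) signal with made-up coefficients (1.5 and -0.8) stands in for it, and the weight update follows the same 2*mu convention as the LMS recursion in Problem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_predictor(x, M, mu):
    """One-step LMS forward predictor: estimate x[n] from the previous M samples."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]          # input vector [x[n-1], ..., x[n-M]]
        e[n] = x[n] - w @ u           # prediction error e(n)
        w = w + 2 * mu * u * e[n]     # LMS update, w(n+1) = w(n) + 2*mu*u(n)*e(n)
    return e

# Stand-in test signal: an AR(2) process (hypothetical; the report's actual
# "We were away a year ago" waveform is not reproduced here).
N = 4000
x = np.zeros(N)
v = 0.1 * rng.standard_normal(N)
for n in range(2, N):
    x[n] = 1.5 * x[n - 1] - 0.8 * x[n - 2] + v[n]

e = lms_predictor(x, M=6, mu=0.05)
# normalized error power over the last 1000 samples
nep = np.mean(e[-1000:] ** 2) / np.mean(x ** 2)
```

For a truly nonstationary signal such as speech, the error power would be averaged over short windows rather than over the whole record, since the statistics drift over time.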
1. Since the speech signal is nonstationary, the learning curve differs from that of the stationary case: it does not decrease as the number of iterations increases. This means that the amount of data used to train the filter is not a dominant factor in performance in the nonstationary case.
2. As the step-size parameter mu is increased, the normalized error power increases.
3. The mean of the normalized error power is shown below. When the filter order increases, the error power is slightly reduced.
  0.1252, 0.1603, 0.3014
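For reference, the Problem 1 identification experiment can be sketched in the same style. This is a hedged reconstruction, not the report's actual code: the white input, the 9th-order unknown system, the noise power of 0.1, and the settings M = 15 and mu = 0.001 come from the text, but the system coefficients themselves are invented (drawn at random).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 9th-order FIR "unknown system" (10 taps); the report states the
# system order is 9 but does not give its coefficients.
w_true = rng.standard_normal(10)

def lms_identify(u, d, M, mu):
    """LMS system identification with update w(n+1) = w(n) + 2*mu*u(n)*e(n)."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M, len(u)):
        un = u[n - M + 1:n + 1][::-1]   # input vector [u[n], ..., u[n-M+1]]
        e[n] = d[n] - w @ un            # e(n) = d(n) - w^T(n) u(n)
        w = w + 2 * mu * un * e[n]
    return w, e

N = 5000
u = rng.standard_normal(N)                       # white input
d = np.convolve(u, w_true)[:N]                   # unknown-system output
d = d + np.sqrt(0.1) * rng.standard_normal(N)    # measurement noise, power 0.1

w_hat, e = lms_identify(u, d, M=15, mu=0.001)
nmse = np.mean(e[-500:] ** 2) / np.mean(d ** 2)  # steady-state normalized MSE
```

Because the filter order (15) exceeds the system order (9), the extra taps of w_hat should converge toward zero, and the steady-state error floor is set by the measurement noise plus a small misadjustment, consistent with the observations above.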