Signals and Systems

Collection Editor: Richard Baraniuk

Authors: Thanos Antoulas, Richard Baraniuk, Steven J. Cox, Benjamin Fite, Roy Ha, Michael Haag, Matthew Hutchinson, Don Johnson, Ricardo Radaelli-Sanchez, Justin Romberg, Phil Schniter, Melissa Selik, JP Slavinsky

Online: <http://cnx.org/content/col10064/1.11/>

CONNEXIONS, Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Richard Baraniuk. It is licensed under the Creative Commons Attribution 1.0 license (http://creativecommons.org/licenses/by/1.0). Collection structure revised: July 23, 2007. PDF generated: October 30, 2009. For copyright and attribution information for the modules contained in this collection, see the Attributions section.

Table of Contents

1 Signals
  1.1 Signal Classifications and Properties
  1.2 Size of a Signal: Norms
  1.3 Signal Operations
  1.4 Useful Signals
  1.5 The Impulse Function
  1.6 The Complex Exponential
  1.7 Discrete-Time Signals
  Solutions

2 Systems
  2.1 System Classifications and Properties
  2.2 Properties of Systems
  Solutions

3 Time Domain Analysis of Continuous Time Systems
  3.1 CT Linear Systems and Differential Equations
  3.2 Continuous-Time Convolution
  3.3 Properties of Convolution
  3.4 BIBO Stability
  Solutions

4 Time Domain Analysis of Discrete Time Systems
  4.1 Discrete-Time Systems in the Time-Domain
  4.2 Discrete-Time Convolution
  4.3 Circular Convolution and the DFT
  4.4 Linear Constant-Coefficient Difference Equations
  4.5 Solving Linear Constant-Coefficient Difference Equations
  Solutions

5 Linear Algebra Overview
  5.1 Linear Algebra: The Basics
  5.2 Eigenvectors and Eigenvalues
  5.3 Matrix Diagonalization
  5.4 Eigen-stuff in a Nutshell
  5.5 Eigenfunctions of LTI Systems
  5.6 Fourier Transform Properties
  Solutions

6 Continuous Time Fourier Series
  6.1 Periodic Signals
  6.2 Fourier Series: Eigenfunction Approach
  6.3 Derivation of Fourier Coefficients Equation
  6.4 Fourier Series in a Nutshell
  6.5 Fourier Series Properties
  6.6 Symmetry Properties of the Fourier Series
  6.7 Circular Convolution Property of Fourier Series
  6.8 Fourier Series and LTI Systems
  6.9 Convergence of Fourier Series
  6.10 Dirichlet Conditions
  6.11 Gibbs's Phenomena
  6.12 Fourier Series Wrap-Up
  Solutions

7 Discrete Fourier Transform
  7.1 Fourier Analysis
  7.2 Fourier Analysis in Complex Spaces
  7.3 Matrix Equation for the DTFS
  7.4 Periodic Extension to DTFS
  7.5 Circular Shifts
  7.6 Circular Convolution and the DFT
  Solutions

8 Fast Fourier Transform (FFT)
  8.1 DFT: Fast Fourier Transform
  8.2 The Fast Fourier Transform (FFT)
  8.3 Deriving the Fast Fourier Transform
  Solutions

9 Convergence
  9.1 Convergence of Sequences
  9.2 Convergence of Vectors
  9.3 Uniform Convergence of Function Sequences
  Solutions

10 Discrete Time Fourier Transform (DTFT)
  10.1 Discrete Fourier Transformation
  10.2 Discrete Fourier Transform (DFT)
  10.3 Table of Common Fourier Transforms
  10.4 Discrete-Time Fourier Transform (DTFT)
  10.5 Discrete-Time Fourier Transform Properties
  10.6 Discrete-Time Fourier Transform Pair
  10.7 DTFT Examples
  Solutions

11 Continuous Time Fourier Transform (CTFT)
  11.1 Continuous-Time Fourier Transform (CTFT)
  11.2 Properties of the Continuous-Time Fourier Transform
  Solutions

12 Sampling Theorem
  12.1 Sampling
  12.2 Reconstruction
  12.3 More on Reconstruction
  12.4 Nyquist Theorem
  12.5 Aliasing
  12.6 Anti-Aliasing Filters
  12.7 Discrete Time Processing of Continuous Time Signals
  Solutions

13 Laplace Transform and System Design
  13.1 The Laplace Transform
  13.2 Properties of the Laplace Transform
  13.3 Table of Common Laplace Transforms
  13.4 Region of Convergence for the Laplace Transform
  13.5 The Inverse Laplace Transform
  13.6 Poles and Zeros
  Solutions

14 Z-Transform and Digital Filtering
  14.1 The Z Transform: Definition
  14.2 Table of Common z-Transforms
  14.3 Region of Convergence for the Z-transform
  14.4 Inverse Z-Transform
  14.5 Rational Functions
  14.6 Difference Equation
  14.7 Understanding Pole/Zero Plots on the Z-Plane
  14.8 Filter Design using the Pole/Zero Plot of a Z-Transform
  Solutions

15 Appendix: Hilbert Spaces and Orthogonal Expansions
  15.1 Vector Spaces
  15.2 Norms
  15.3 Inner Products
  15.4 Hilbert Spaces
  15.5 Cauchy-Schwarz Inequality
  15.6 Common Hilbert Spaces
  15.7 Types of Basis
  15.8 Orthonormal Basis Expansions
  15.9 Function Space
  15.10 Haar Wavelet Basis
  15.11 Orthonormal Bases in Real and Complex Spaces
  15.12 Plancharel and Parseval's Theorems
  15.13 Approximation and Projections in Hilbert Space
  Solutions

16 Homework Sets
  16.1 Homework 1
  16.2 Homework 1 Solutions

17 Viewing Embedded LabVIEW Content

Glossary
Index
Attributions
Chapter 1: Signals

1.1 Signal Classifications and Properties
(This content is available online at <http://cnx.org/content/m10057/2.18/>.)

1.1.1 Introduction

This module will lay out some of the fundamentals of signal classification. It is basically a list of definitions and properties that are fundamental to the discussion of signals and systems. It should be noted that some topics, such as the distinction between energy signals and power signals, have been given their own module for a more complete discussion ("Signal Energy vs. Signal Power", <http://cnx.org/content/m10055/latest/>), and will not be included here.

1.1.2 Classifications of Signals

Along with the classification of signals below, it is also important to understand the Classification of Systems (Section 2.1).

1.1.2.1 Continuous-Time vs. Discrete-Time

As the names suggest, this classification is determined by whether the time axis (x-axis) is discrete (countable) or continuous (Figure 1.1). A continuous-time signal will contain a value for all real numbers along the time axis. In contrast, a discrete-time signal (Section 1.7) is often created by using the sampling theorem ("The Sampling Theorem", <http://cnx.org/content/m0050/latest/>) to sample a continuous signal, so it will only have values at equally spaced intervals along the time axis.

Figure 1.1
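To make the sampling idea concrete, here is a minimal numerical sketch in Python (not part of the original text; the signal, sampling period, and grid are illustrative choices):

```python
import numpy as np

# "Sample" a continuous-time signal by evaluating it on an equally
# spaced grid t = n*Ts. The 5 Hz cosine and Ts are arbitrary examples.
f = lambda t: np.cos(2 * np.pi * 5 * t)   # continuous-time signal

Ts = 0.01                                 # sampling period, in seconds
n = np.arange(0, 100)                     # sample indices
x = f(n * Ts)                             # discrete-time signal x[n] = f(n*Ts)

print(x[:5])   # x is defined only at t = n*Ts, not for all real t
```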
1.1.2.2 Analog vs. Digital

The difference between analog and digital is similar to the difference between continuous-time and discrete-time. In this case, however, the difference is with respect to the value of the function (y-axis) (Figure 1.2). Analog corresponds to a continuous y-axis, while digital corresponds to a discrete y-axis. An easy example of a digital signal is a binary sequence, where the values of the function can only be one or zero.

Figure 1.2

1.1.2.3 Periodic vs. Aperiodic

Periodic signals (Section 6.1) repeat with some period T, while aperiodic, or nonperiodic, signals do not (Figure 1.3). We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:

    f(t) = f(T + t)    (1.1)

The fundamental period of our function, f(t), is the smallest value of T that still allows (1.1) to be true.

Figure 1.3: (a) A periodic signal with period T0 (b) An aperiodic signal

1.1.2.4 Causal vs. Anticausal vs. Noncausal

Causal signals are signals that are zero for all negative time, while anticausal signals are zero for all positive time. Noncausal signals are signals that have nonzero values in both positive and negative time (Figure 1.4).

Figure 1.4: (a) A causal signal (b) An anticausal signal (c) A noncausal signal

1.1.2.5 Even vs. Odd

An even signal is any signal f such that f(t) = f(−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 1.5).

Figure 1.5: (a) An even signal (b) An odd signal

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and an odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation:

    f(t) = (1/2)(f(t) + f(−t)) + (1/2)(f(t) − f(−t))    (1.2)

By multiplying and adding this expression out, it can be shown to be true. Also, it can be shown that f(t) + f(−t) fulfills the requirement of an even function, while f(t) − f(−t) fulfills the requirement of an odd function (Figure 1.6).

Example 1.1

Figure 1.6: (a) The signal we will decompose using odd-even decomposition (b) Even part: e(t) = (1/2)(f(t) + f(−t)) (c) Odd part: o(t) = (1/2)(f(t) − f(−t)) (d) Check: e(t) + o(t) = f(t)
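A quick numerical check of the decomposition in (1.2) can be sketched in Python (not from the original text; the test signal and grid are illustrative, and the grid is symmetric about t = 0 so that f(−t) is simply f reversed):

```python
import numpy as np

t = np.linspace(-5, 5, 1001)              # symmetric grid about t = 0
f = np.exp(-0.3 * t) * np.sin(2 * t)      # an arbitrary test signal

f_rev = f[::-1]                           # samples of f(-t)
e = 0.5 * (f + f_rev)                     # even part, (1/2)(f(t) + f(-t))
o = 0.5 * (f - f_rev)                     # odd part,  (1/2)(f(t) - f(-t))

assert np.allclose(e + o, f)              # check: e(t) + o(t) = f(t)
assert np.allclose(e, e[::-1])            # e(t) =  e(-t)  (even)
assert np.allclose(o, -o[::-1])           # o(t) = -o(-t)  (odd)
```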
1.1.2.6 Deterministic vs. Random

A deterministic signal is a signal in which each value is fixed and can be determined by a mathematical expression, rule, or table. Because of this, the future values of the signal can be calculated from past values with complete confidence. A random signal ("Introduction to Random Signals and Processes", <http://cnx.org/content/m10649/latest/>), on the other hand, has a lot of uncertainty about its behavior. The future values of a random signal cannot be accurately predicted and can usually only be guessed based on the averages of sets of signals ("Random Processes: Mean and Variance", <http://cnx.org/content/m10656/latest/>) (Figure 1.7).

Figure 1.7: (a) Deterministic Signal (b) Random Signal

1.1.2.7 Right-Handed vs. Left-Handed

A right-handed signal and a left-handed signal are signals whose value is zero between a given variable and positive or negative infinity. Mathematically speaking, a right-handed signal is defined as any signal where f(t) = 0 for t < t1, and a left-handed signal is defined as any signal where f(t) = 0 for t > t1, where t1 is finite. See Figure 1.8 for an example. Both figures "begin" at t1 and then extend to positive or negative infinity with mainly nonzero values.

Figure 1.8: (a) Right-handed signal (b) Left-handed signal

1.1.2.8 Finite vs. Infinite Length

As the name implies, signals can be characterized by whether they have a finite or infinite length set of values. Most finite-length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, f(t) is a finite-length signal if it is nonzero only over a finite interval t1 < t < t2, where t1 > −∞ and t2 < ∞. An example can be seen in Figure 1.9. Similarly, an infinite-length signal, f(t), is defined as nonzero over all real numbers, −∞ < t < ∞.

Figure 1.9: Finite-Length Signal. Note that it only has nonzero values on a set, finite interval.

1.2 Size of a Signal: Norms
(This content is available online at <http://cnx.org/content/m12363/1.3/>.)

"Size" indicates largeness or strength. We will use the mathematical concept of the norm to quantify this notion for both continuous-time and discrete-time signals. First we consider a way to quantify the size of a signal which may already be familiar.

1.2.1 Continuous-Time Energy

Our usual notion of the energy of a signal is the area under the curve |f(t)|² (Figure 1.10):

    E_f = ∫_{−∞}^{∞} |f(t)|² dt    (1.3)

Figure 1.10

Example 1.2
Calculate E_f for the signal in Figure 1.11.

Figure 1.11

1.2.2 Generalized Energy: Norms

The notion of "energy" can be generalized through the introduction of the Lp norm:

    ‖f‖_p = ( ∫ |f(t)|^p dt )^{1/p}    (1.4)

where 1 ≤ p < ∞.

Example 1.3
E_f = (‖f‖_2)²

Example 1.4
Calculate the Lp norm of the signal in Figure 1.12.

Figure 1.12

Exercise 1.1
What happens to ‖f‖_p = ( ∫ |f(t)|^p dt )^{1/p} as p → ∞?

The limiting case is the L∞ norm:

    ‖f‖_∞ = ess sup |f(t)|

Figure 1.13
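The continuous-time integrals in (1.3) and (1.4) can be approximated numerically. Here is a minimal Python sketch (not part of the original text; the signal, grid, and Riemann-sum approximation are illustrative choices):

```python
import numpy as np

t = np.linspace(-10, 10, 100001)
dt = t[1] - t[0]
f = np.exp(-np.abs(t))                    # f(t) = e^{-|t|}

def Lp_norm(f, dt, p):
    """(integral of |f|^p dt)^(1/p), approximated by a Riemann sum."""
    return (np.sum(np.abs(f) ** p) * dt) ** (1.0 / p)

E_f = Lp_norm(f, dt, 2) ** 2              # energy: E_f = (||f||_2)^2
print(E_f)                                # integral of e^{-2|t|} dt = 1 exactly
print(np.max(np.abs(f)))                  # L-infinity norm: sup|f(t)| = 1
```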
1.2.3 Discrete-Time Energy

Figure 1.14

    E_f = Σ_{n=−∞}^{∞} |f[n]|²

For infinite-length discrete-time signals,

    ‖f‖_p = ( Σ_{n=−∞}^{∞} |f[n]|^p )^{1/p},  1 ≤ p < ∞
    ‖f‖_∞ = max_n |f[n]|

and for length-N discrete-time signals,

    ‖f‖_p = ( Σ_{n=0}^{N−1} |f[n]|^p )^{1/p},  1 ≤ p < ∞
    ‖f‖_∞ = max_{n = 0, 1, ..., N−1} |f[n]|

1.2.4 Finite Norm Signals

What are the conditions on a signal for ‖f‖_p < ∞? Look at all four fundamental signal classes.

1.2.4.1 Discrete-Time and Finite Length

Figure 1.15

This is a length-N vector:

    f = (f[0], f[1], f[2], ..., f[N−1]) = (f_0, f_1, f_2, ..., f_{N−1})

where f ∈ C^N or f ∈ R^N, the N-dimensional complex or real Euclidean space.

Example 1.5
N = 3, f is a real signal.

Figure 1.16

Definition 1.1:
    lp[0, N−1] = { f ∈ C^N : ‖f‖_p < ∞ }
but from the previous discussion, lp[0, N−1] = C^N: every finite-length signal has finite norm.

1.2.4.2 Discrete-Time and Infinite Length

Figure 1.17

We can still interpret f as an infinite-length vector,

    f = (..., f[−1], f[0], f[1], f[2], ...)

but "C^∞" and "R^∞" don't make sense.

Definition 1.2:
    lp(Z) = { f : ‖f‖_p < ∞ }
where
    ‖f‖_p = ( Σ_{n=−∞}^{∞} |f[n]|^p )^{1/p},  1 ≤ p < ∞
    ‖f‖_∞ = max_{n∈Z} |f[n]|

What does it take for an f to be in lp(Z)?

Example 1.6

Exercise 1.2
Sketch an f ∈ lp(Z) and an f ∉ lp(Z). What characteristics does an f ∈ lp(Z) have, and what happens as we change p?

1.2.4.3 Continuous-Time and Finite Length

Figure 1.18

We will still refer to f(t) as a vector; more on this later.

Definition 1.3:
    Lp[T1, T2] = { f defined on [T1, T2] : ‖f‖_p < ∞ }
where
    ‖f‖_p = ( ∫_{T1}^{T2} |f(t)|^p dt )^{1/p},  1 ≤ p < ∞
    ‖f‖_∞ = ess sup |f(t)|,  T1 ≤ t ≤ T2

Exercise 1.3
What does it take for f to be in Lp[T1, T2]?

1.2.4.4 Continuous-Time and Infinite Length

Figure 1.19

We will still refer to f(t) as a vector.

Definition 1.4:
    Lp(R) = { f : ‖f‖_p < ∞ }
where
    ‖f‖_p = ( ∫_{−∞}^{∞} |f(t)|^p dt )^{1/p},  1 ≤ p < ∞
    ‖f‖_∞ = ess sup |f(t)|,  −∞ < t < ∞

Exercise 1.4
What does it take for an f to be in Lp(R)? Sketch an f ∈ Lp(R) and an f ∉ Lp(R).

Example 1.7

1.2.5 Power

What do we do when ‖f‖_p = ∞?

Example 1.8: Periodic Signal

Figure 1.20

Solution: Look at the "norm per unit time", i.e., the norm over one period: the norm of the infinite-length signal converted to a finite-length signal by windowing.

Figure 1.21

The measure is ‖f_T‖_p / T, where f_T is the windowed signal. For p = 2 this gives the L2 power, the "energy per unit time", i.e., the time average of energy:

    P_f = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |f(t)|² dt

This is useful when E_f = ∞.

Figure 1.22

To compute it:
1. compute the energy over a window of length T, Energy = (‖f‖_2)² on [−T/2, T/2];
2. look at lim_{T→∞} Energy / T.

P_f is often called the mean-square value of f; √P_f is called the root mean squared (RMS) value of f. (What are its units?)

"Energy signals" have finite energy: E_f < ∞ (which forces P_f = 0). "Power signals" have finite and nonzero power: P_f < ∞, P_f ≠ 0 (which forces E_f = ∞).

1.2.5.1 Conclusions

Energy signals are not power signals, and power signals are not energy signals. Why?

Exercise 1.5
Are all signals either energy or power signals?

Example 1.9
f(t) = t

Figure 1.23

The four fundamental classes of signals we will study depend on the independent (time) variable.

Figure 1.24

1.3 Signal Operations
(This content is available online at <http://cnx.org/content/m10125/2.9/>.)

This module will look at two signal operations, time shifting and time scaling. Signal operations are operations on the time variable of the signal. These operations are very common components of real-world systems and, as such, should be understood thoroughly when learning about signals and systems.

1.3.1 Time Shifting

Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting the amount of the shift to the time variable in the function. Subtracting a fixed amount from the time variable will shift the signal to the right (delay) by that amount, while adding to the time variable will shift the signal to the left (advance).

Figure 1.25: f(t − T) moves (delays) f to the right by T.

1.3.2 Time Scaling

Time scaling compresses and dilates a signal by multiplying the time variable by some amount. If that amount is greater than one, the signal becomes narrower and the operation is called compression, while if the amount is less than one, the signal becomes wider and is called dilation. It often takes people quite a while to get comfortable with these operations, as people's intuition is often that multiplication by an amount greater than one should dilate and by an amount less than one should compress.

Figure 1.26: f(at) compresses f by a.

Example 1.10
Actually plotting shifted and scaled signals can be quite counter-intuitive. This example shows a fool-proof way to practice this until your proper intuition is developed. Given f(t), plot f(at − b):

Figure 1.27: (a) Begin with f(t). (b) Then replace t with at to get f(at). (c) Finally, replace t with t − b/a to get f(a(t − b/a)) = f(at − b).
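The same substitution recipe from Example 1.10 can be carried out numerically. Here is a short Python sketch (not from the original text; the triangle pulse and the values of a and b are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot f(at - b) by substituting into the time axis, mirroring
# Example 1.10: first scale, then shift by b/a.
f = lambda t: np.maximum(0, 1 - np.abs(t))     # a unit triangle pulse
a, b = 2.0, 1.0

t = np.linspace(-3, 3, 601)
plt.plot(t, f(t), label="f(t)")
plt.plot(t, f(a * t), label="f(at)")                 # compress by a
plt.plot(t, f(a * (t - b / a)), label="f(at - b)")   # then shift by b/a
plt.legend()
plt.show()
```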
1.3.3 Time Reversal

A natural question to consider when learning about time scaling is: what happens when the time variable is multiplied by a negative number? The answer to this is time reversal. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 1.28: Reverse the time axis.

1.4 Useful Signals
(This content is available online at <http://cnx.org/content/m10058/2.14/>.)

Before looking at this module, hopefully you have some basic idea of what a signal is and what basic classifications and properties (Section 1.1) a signal can have. To review, a signal is merely a function defined with respect to an independent variable. This variable is often time, but could represent an index of a sequence, or any number of things in any number of dimensions. Most, if not all, signals that you will encounter in your studies and the real world can be created from the basic signals we discuss below. Because of this, these elementary signals are often referred to as the building blocks for all other signals.

1.4.1 Sinusoids

Probably the most important elemental signal that you will deal with is the real-valued sinusoid. In its continuous-time form, we write the general form as

    x(t) = A cos(ωt + φ)    (1.5)

where A is the amplitude, ω is the frequency, and φ represents the phase. Note that it is common to see ωt replaced with 2πft. Since sinusoidal signals are periodic, we can express the period of these, or any periodic signal, as

    T = 2π/ω    (1.6)

Figure 1.29: Sinusoid with A = 2, ω = 2, and φ = 0.

1.4.2 Complex Exponential Function

Maybe as important as the general sinusoid, the complex exponential function will become a critical part of your study of signals and systems. Its general form is written as

    f(t) = B e^{st}    (1.7)

where s, shown below, is a complex number in terms of σ, the phase constant, and ω, the frequency:

    s = σ + jω

Please look at the complex exponential module (Section 1.6) or the other elemental signals page ("Elemental Signals", <http://cnx.org/content/m0004/latest/#sec2>) for a much more in-depth look at this important signal.

1.4.3 Real Exponentials

Just as the name sounds, real exponentials contain no imaginary numbers and are expressed simply as

    f(t) = B e^{αt}    (1.8)

where both B and α are real parameters. Unlike the complex exponential that oscillates, the real exponential either decays or grows depending on the value of α:

- Decaying exponential, when α < 0
- Growing exponential, when α > 0

Figure 1.30: Examples of Real Exponentials. (a) Decaying Exponential (b) Growing Exponential
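A small Python sketch can verify the period formula (1.6) and generate the two real-exponential cases from (1.8) (not part of the original text; all parameter values are illustrative):

```python
import numpy as np

# Check that shifting a sinusoid by T = 2*pi/omega changes nothing.
A, w, phi = 2.0, 2.0, 0.5
x = lambda t: A * np.cos(w * t + phi)

T = 2 * np.pi / w
t = np.linspace(0, 10, 1000)
assert np.allclose(x(t + T), x(t))     # one full period later: same values

# Real exponentials (1.8): decaying for alpha < 0, growing for alpha > 0.
B = 1.0
decaying = B * np.exp(-0.5 * t)
growing  = B * np.exp(+0.5 * t)
```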
1.4.4 Unit Impulse Function

The unit impulse (Section 1.5) "function" (or Dirac delta function) is a signal that has infinite height and infinitesimal width. However, because of the way it is defined, it actually integrates to one. While in the engineering world this signal is quite nice and aids in the understanding of many concepts, some mathematicians have a problem with it being called a function, since it is not defined at t = 0. Engineers reconcile this problem by keeping it inside integrals, in order to keep it more nicely defined. The unit impulse is most commonly denoted as δ(t). Its most important property is shown in the following integral:

    ∫_{−∞}^{∞} δ(t) dt = 1    (1.9)

1.4.5 Unit-Step Function

Another very basic signal is the unit-step function, defined as

    u(t) = { 0 if t < 0
           { 1 if t ≥ 0    (1.10)

Figure 1.31: Basic Step Functions. (a) Continuous-Time Unit-Step Function (b) Discrete-Time Unit-Step Function

Note that the step function is discontinuous at the origin; however, it does not need to be defined there, as this does not matter in signal theory. The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.

1.4.6 Ramp Function

The ramp function is closely related to the unit step discussed above. Where the unit step goes from zero to one instantaneously, the ramp function better resembles a real-world signal, where there is some time needed for the signal to increase from zero to its set value, one in this case. We define a ramp function as follows:

    r(t) = { 0     if t < 0
           { t/t₀  if 0 ≤ t ≤ t₀
           { 1     if t > t₀    (1.11)

Figure 1.32: Ramp Function

1.5 The Impulse Function
(This content is available online at <http://cnx.org/content/m10059/2.20/>.)

In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or a signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential (Section 1.6), in a signals and systems course.

1.5.1 Dirac Delta Function

The Dirac delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse. This function is one that is infinitesimally narrow and infinitely tall, yet integrates to unity (see (1.12) below). Perhaps the simplest way to visualize this is as a rectangular pulse from a − ε/2 to a + ε/2 with a height of 1/ε. As we take the limit of this, lim_{ε→0}, we see that the width tends to zero and the height tends to infinity while the total area remains constant at one.

    ∫_{−∞}^{∞} δ(t) dt = 1    (1.12)

Figure 1.33: This is one way to visualize the Dirac Delta Function.

Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac delta with an arrow centered at the point where it is applied. If we wish to scale it, we write the scaling value next to the point of the arrow.

Figure 1.34: This is a unit impulse (no scaling).

1.5.1.1 The Sifting Property of the Impulse

The first step to understanding what the unit impulse function gives us is to examine what happens when we multiply another function by it:

    f(t) δ(t) = f(0) δ(t)    (1.13)

Since the impulse function is zero everywhere except the origin, we essentially just "pick off" the value at zero of the function we are multiplying by. At first glance this may not appear to give us much, since we already know that the impulse evaluated at zero is infinity, and anything times infinity is infinity. However, what happens if we integrate this?

Sifting Property:

    ∫_{−∞}^{∞} f(t) δ(t) dt = ∫_{−∞}^{∞} f(0) δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0)    (1.14)

It quickly becomes apparent that what we end up with is simply the function evaluated at zero. Had we used δ(t − T) instead of δ(t), we could have "sifted out" f(T). This is what we call the sifting property of the Dirac function, which is often used to define the unit impulse.
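The sifting property (1.14) can be illustrated numerically by replacing δ(t) with the narrow rectangular pulse described above and letting its width shrink. A minimal Python sketch (not from the original text; the signal, shift, and grid are illustrative):

```python
import numpy as np

t = np.linspace(-5, 5, 1000001)
dt = t[1] - t[0]
f = np.cos(t)

def delta_approx(t, eps):
    """Rectangular pulse of width eps and height 1/eps: unit area."""
    return (np.abs(t) < eps / 2) / eps

for eps in [1.0, 0.1, 0.01]:
    sifted = np.sum(f * delta_approx(t - 1.0, eps)) * dt
    print(eps, sifted)      # approaches f(1) = cos(1) = 0.5403 as eps -> 0
```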
The sifting property is very useful in developing the idea of convolution (Section 3.2), which is one of the fundamental principles of signal processing. By using convolution and the sifting property we can represent an approximation of any system's output if we know the system's impulse response and input. Click on the convolution link above for more information on this.

1.5.1.2 Other Impulse Properties

Below we briefly list a few other properties of the unit impulse, without going into detail on their proofs; we will leave that up to you to verify, as most are straightforward. Note that these properties hold for continuous and discrete time.

Unit Impulse Properties:
- δ(αt) = (1/|α|) δ(t)
- δ(t) = δ(−t)
- δ(t) = (d/dt) u(t), where u(t) is the unit step.

1.5.2 Discrete-Time Impulse (Unit Sample)

The extension of the unit impulse function to discrete time is quite trivial. All we really need to realize is that integration in continuous time equates to summation in discrete time. Therefore, we are looking for a signal that sums to one and is zero everywhere except at zero.

Discrete-Time Impulse:

    δ[n] = { 1 if n = 0
           { 0 otherwise    (1.15)

Figure 1.35: The graphical representation of the discrete-time impulse function.

Looking at the plot of any discrete signal, one can notice that all discrete signals are composed of a set of scaled, time-shifted unit samples. If we let the value of a sequence at each integer k be denoted s[k], and the unit sample delayed to occur at k be written as δ[n − k], we can write any signal as the sum of delayed unit samples that are scaled by the signal value, or weighted coefficients:

    s[n] = Σ_{k=−∞}^{∞} s[k] δ[n − k]    (1.16)

This decomposition is strictly a property of discrete-time signals and proves to be a very useful property.

note: Through the above reasoning, we have formed (1.16), which is the fundamental concept of discrete-time convolution (Section 4.2).
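Equation (1.16) is easy to verify directly on a short sequence. A Python sketch (not part of the original text; the signal values are arbitrary):

```python
import numpy as np

# Rebuild a short signal from scaled, shifted unit samples, as in (1.16).
s = np.array([3.0, -1.0, 0.5, 2.0])
N = len(s)

delta = lambda n: (n == 0).astype(float)   # discrete-time impulse (1.15)

n = np.arange(N)
rebuilt = sum(s[k] * delta(n - k) for k in range(N))
assert np.allclose(rebuilt, s)             # s[n] = sum_k s[k] * delta[n - k]
```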
1.5.3 The Impulse Response

The impulse response is exactly what its name implies: the response of an LTI system, such as a filter, when the system's input is the unit impulse (or unit sample). A system can be completely described by its impulse response, due to the idea mentioned above that all signals can be represented by a superposition of signals. An impulse response gives a description of a system equivalent to a transfer function ("Transfer Functions", <http://cnx.org/content/m0028/latest/>), since they are Laplace transforms (Section 13.1) of each other.

notation: Most texts use h(t) and h[n] to denote the continuous-time and discrete-time impulse responses, respectively.

1.6 The Complex Exponential
(This content is available online at <http://cnx.org/content/m10060/2.21/>.)

1.6.1 The Exponential Basics

The complex exponential is one of the most fundamental and important signals in signal and system analysis. Its importance comes from serving as a basis for periodic signals, as well as being able to characterize linear, time-invariant (Section 2.1) systems. Before proceeding, you should be familiar with the ideas and functions of complex numbers ("Complex Numbers", <http://cnx.org/content/m0081/latest/>).

1.6.1.1 Basic Exponential

For all numbers x, we can easily derive and define the exponential function from the Taylor series below:

    e^x = 1 + x/1! + x²/2! + x³/3! + ...    (1.17)

    e^x = Σ_{k=0}^{∞} x^k / k!    (1.18)

We can prove, using the ratio test, that this series does indeed converge. Therefore, we can state that the exponential function shown above is continuous and easily defined. From this definition, we can prove the following property for exponentials, which will be very useful, especially for the complex exponentials discussed in the next section:

    e^{x1 + x2} = e^{x1} e^{x2}    (1.19)

1.6.1.2 Complex Continuous-Time Exponential

Now, for all complex numbers s, we can define the complex continuous-time exponential signal as

    f(t) = A e^{st} = A e^{jωt}    (1.20)

where A is a constant, t is our independent variable for time, and, for this equation, s is imaginary: s = jω. Finally, from this equation we can reveal the ever-important Euler's identity (for more information on Euler, read this short biography: <http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Euler.html>):

    A e^{jωt} = A cos(ωt) + j (A sin(ωt))    (1.21)

From Euler's identity we can easily break the signal down into its real and imaginary components. We can also see how exponentials can be combined to represent any real signal: by modifying their frequency and phase, we can represent any signal through a superposition of many such signals, all capable of being represented by an exponential.

The above expressions do not include any information on phase, however. We can further generalize our expression for the exponential to cover sinusoids with any phase by making a final substitution for s, s = σ + jω, which leads us to

    f(t) = A e^{st} = A e^{(σ+jω)t} = A e^{σt} e^{jωt}    (1.22)

where we define S as the complex amplitude, or phasor, from the first two terms of the above equation:

    S = A e^{σt}    (1.23)

Going back to Euler's identity, we can rewrite the exponential as sinusoids, in which the phase term becomes much more apparent:

    f(t) = A e^{σt} (cos(ωt) + j sin(ωt))    (1.24)

As stated above, we can easily break this formula into its real and imaginary parts:

    Re(f(t)) = A e^{σt} cos(ωt)    (1.25)
    Im(f(t)) = A e^{σt} sin(ωt)    (1.26)

1.6.1.3 Complex Discrete-Time Exponential

Finally, we have reached the last form of the exponential signal that we will be interested in: the discrete-time exponential signal. We will not give as much detail about it as we did for its continuous-time counterpart, because they both follow the same properties and logic discussed above. Because it is discrete, a slightly different notation is used to represent its discrete nature:

    f[n] = B e^{snT} = B e^{jωnT}    (1.27)

where nT represents the discrete-time instants of our signal.

1.6.2 Euler's Relation

Along with Euler's identity, Euler also described a way to represent a complex exponential signal in terms of its real and imaginary parts through Euler's relation:

    cos(ωt) = (e^{jωt} + e^{−jωt}) / 2    (1.28)

    sin(ωt) = (e^{jωt} − e^{−jωt}) / (2j)    (1.29)

    e^{jωt} = cos(ωt) + j sin(ωt)    (1.30)
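Euler's relations (1.28)-(1.30) and the real/imaginary split in (1.25)-(1.26) can be checked numerically. A Python sketch (not from the original text; the parameters A, σ, ω are illustrative):

```python
import numpy as np

A, sigma, w = 1.5, -0.3, 4.0
t = np.linspace(0, 5, 1001)

f = A * np.exp((sigma + 1j * w) * t)       # f(t) = A e^{st}, s = sigma + j*w
assert np.allclose(f.real, A * np.exp(sigma * t) * np.cos(w * t))   # (1.25)
assert np.allclose(f.imag, A * np.exp(sigma * t) * np.sin(w * t))   # (1.26)

z = np.exp(1j * w * t)                     # e^{-jwt} = conj(e^{jwt}) here
assert np.allclose(np.cos(w * t), (z + np.conj(z)).real / 2)        # (1.28)
assert np.allclose(np.sin(w * t), ((z - np.conj(z)) / 2j).real)     # (1.29)
```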
1.6.3 Drawing the Complex Exponential

At this point, we have shown how the complex exponential can be broken up into its real part and its imaginary part. It is now worth looking at how we can draw each of these parts. We can see that both the real part and the imaginary part consist of a sinusoid times a real exponential. We also know that sinusoids oscillate between one and negative one. From this it becomes apparent that the real and imaginary parts of the complex exponential will each oscillate within a window defined by the real exponential part.

Figure 1.36: The shapes possible for the real part of a complex exponential. Notice that the oscillations are the result of a cosine, as there is a local maximum at t = 0. (a) If σ is negative, we have the case of a decaying exponential window. (b) If σ is positive, we have the case of a growing exponential window. (c) If σ is zero, we have the case of a constant window.

While σ determines the rate of decay/growth, ω determines the rate of the oscillations. This is apparent by noticing that ω is part of the argument to the sinusoidal part.

Exercise 1.6 (Solution at the end of the chapter.)
What do the imaginary parts of the complex exponentials drawn above look like?

Example 1.11
The following demonstration allows you to see how the argument changes the shape of the complex exponential. See "How to use the LabVIEW demos" (<http://cnx.org/content/m11550/latest/>) for instructions on how to use the demo.

1.6.4 The Complex Plane

It becomes extremely useful to view the complex variable s as a point in the complex plane ("The Complex Plane", <http://cnx.org/content/m10596/latest/>), the s-plane.

Figure 1.37: This is the s-plane. Notice that any time s lies in the right half plane, the complex exponential will grow through time, while any time it lies in the left half plane it will decay.

1.7 Discrete-Time Signals
(This content is available online at <http://cnx.org/content/m0009/2.24/>.)

So far, we have treated what are known as analog signals and systems. Mathematically, analog signals are functions having continuous quantities as their independent variables, such as space and time. Discrete-time signals ("Discrete-Time Signals and Systems", <http://cnx.org/content/m10342/latest/>) are functions defined on the integers; they are sequences. One of the fundamental results of signal theory ("The Sampling Theorem", <http://cnx.org/content/m0050/latest/>) details the conditions under which an analog signal can be converted into a discrete-time one and retrieved without error. This result is important because discrete-time signals can be manipulated by systems instantiated as computer programs. Subsequent modules describe how virtually all analog signal processing can be performed with software.

As important as such results are, discrete-time signals are more general, encompassing signals derived from analog ones and signals that aren't. For example, the characters forming a text file form a sequence, which is also a discrete-time signal. We must deal with such symbolic-valued signals and systems as well.

As with analog signals, we seek ways of decomposing real-valued discrete-time signals into simpler components. With this approach leading to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: we develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: what is the most parsimonious and compact way to represent information so that it can be extracted later?
1.7.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {..., −1, 0, 1, ...}. We usually draw discrete-time signals as stem plots to emphasize the fact that they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A delayed unit sample has the expression δ(n − m), and equals one when n = m.

Figure 1.38: Discrete-Time Cosine Signal. The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

1.7.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence:

    s(n) = e^{j2πfn}    (1.31)

1.7.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids, whose frequencies can be any real value, the frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This property can be easily understood by noting that adding an integer to the frequency of a discrete-time complex exponential has no effect on the signal's value:

    e^{j2π(f+m)n} = e^{j2πfn} e^{j2πmn} = e^{j2πfn}    (1.32)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one.
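The identity (1.32) means that two discrete-time sinusoids whose frequencies differ by an integer are sample-for-sample identical. A Python sketch (not part of the original text; f, the phase, and the number of samples are illustrative):

```python
import numpy as np

n = np.arange(32)
f = 0.2                                      # cycles per sample

s1 = np.cos(2 * np.pi * f * n + 0.3)
s2 = np.cos(2 * np.pi * (f + 1) * n + 0.3)   # frequency shifted by m = 1
assert np.allclose(s1, s2)                   # indistinguishable on the integers
```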
1.7.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

    δ(n) = { 1 if n = 0
           { 0 otherwise    (1.33)

Figure 1.39: Unit Sample. The unit sample.

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 1.38, reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m), and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

    s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)    (1.34)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and constructed with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design.

1.7.5 Symbolic-valued Signals

Another interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, ..., aK}, which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.

Solutions to Exercises in Chapter 1

Solution to Exercise 1.6
They look the same, except the oscillation is that of a sinusoid as opposed to a cosinusoid (i.e., it passes through the origin rather than having a local maximum at t = 0).

Chapter 2: Systems

2.1 System Classifications and Properties
(This content is available online at <http://cnx.org/content/m10084/2.20/>.)

2.1.1 Introduction

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems explained. As can be seen, the properties of a system provide an easy way to separate one system from another. Understanding these basic differences between systems, and their properties, is a fundamental concept used in all signal and system courses, such as digital signal processing (DSP). Once a set of systems can be identified as sharing particular properties, one no longer has to prove a certain characteristic of a system each time; it can simply be accepted due to the system's classification. Also remember that the classification presented here is neither exclusive (systems can belong to several different classifications) nor unique (there are other methods of classification; see "Introduction to Systems", <http://cnx.org/content/m0005/latest/>). Examples of simple systems can be found in "Simple Systems", <http://cnx.org/content/m0006/latest/>.

2.1.2 Classification of Systems

Along with the classification of systems below, it is also important to understand the Classification of Signals (Section 1.1).

2.1.2.1 Continuous vs. Discrete

This may be the simplest classification to understand, as the idea of discrete-time and continuous-time is one of the most fundamental properties in all of signals and systems. A system where the input and output signals are continuous is a continuous system, and one where the input and output signals are discrete is a discrete system.

2.1.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (homogeneity) and superposition (additivity), while a nonlinear system is any system that does not obey at least one of these.

To show that a system H obeys the scaling property is to show that

    H(k f(t)) = k H(f(t))    (2.1)

Figure 2.1: A block diagram demonstrating the scaling property of linearity.

To demonstrate that a system H obeys the superposition property of linearity is to show that

    H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t))    (2.2)

Figure 2.2: A block diagram demonstrating the superposition property of linearity.

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

    H(k1 f1(t) + k2 f2(t)) = k1 H(f1(t)) + k2 H(f2(t))    (2.3)
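The single-step test in (2.3) is easy to run empirically on candidate systems. A Python sketch (not from the original text; the two example systems, inputs, and constants are illustrative stand-ins, and a passed test on random inputs is evidence, not proof, of linearity):

```python
import numpy as np

H1 = lambda x: 3 * x - np.roll(x, 1)   # scaled plus delayed input: linear
H2 = lambda x: x ** 2                  # squaring: nonlinear

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal(64), rng.standard_normal(64)
k1, k2 = 2.0, -0.5

def passes_linearity_test(H):
    # Check H(k1*f1 + k2*f2) == k1*H(f1) + k2*H(f2), as in (2.3).
    return np.allclose(H(k1 * f1 + k2 * f2), k1 * H(f1) + k2 * H(f2))

print(passes_linearity_test(H1), passes_linearity_test(H2))  # True False
```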
2.1.2.3 Time Invariant vs. Time Variant

A time-invariant system is one whose behavior does not depend on when the input occurs: the shape of the output does not change with a delay of the input. That is to say that, for a system H where H(f(t)) = y(t), H is time invariant if for all T

    H(f(t − T)) = y(t − T)    (2.4)

Figure 2.3: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.

When this property does not hold for a system, it is said to be time variant, or time-varying.

2.1.2.4 Causal vs. Noncausal

A causal system is one that is nonanticipative; that is, the output may depend on current and past inputs, but not future inputs. All "real-time" systems must be causal, since they cannot have future inputs available to them. One may think the idea of future inputs does not make much physical sense; however, we have so far only been dealing with time as our dependent variable, which is not always the case. Imagine instead that we wanted to do image processing. Then the dependent variable might represent pixels to the left and right (the "future") of the current position on the image, and we would have a noncausal system.

Figure 2.4: (a) For a typical system to be causal... (b) ...the output at time t0, y(t0), can only depend on the portion of the input signal before t0.

2.1.2.5 Stable vs. Unstable

A stable system is one where the output does not diverge as long as the input does not diverge. There are many ways to say that a signal "diverges"; for example, it could have infinite energy. One particularly useful definition of divergence relates to whether the signal is bounded or not. A system is referred to as bounded input-bounded output (BIBO) stable if every possible bounded input produces a bounded output. Representing this in a mathematical way, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

    |y(t)| ≤ My < ∞    (2.5)

whenever we have an input to the system that can be described as

    |x(t)| ≤ Mx < ∞    (2.6)

Mx and My both represent finite positive numbers, and these relationships hold for all t. If these conditions are not met, i.e., a system's output grows without limit (diverges) from a bounded input, then the system is unstable. Note that the BIBO stability of a linear time-invariant (LTI) system is neatly described in terms of whether or not its impulse response is absolutely integrable (Section 3.4).
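For an LTI system, the impulse-response criterion just mentioned reduces the BIBO check to a single sum (or integral). A Python sketch for the discrete-time case (not from the original text; the two impulse responses are illustrative, and the finite truncation only suggests the behavior of the infinite sum):

```python
import numpy as np

# BIBO stability of an LTI system hinges on absolute summability of h[n].
n = np.arange(200)
h_stable   = 0.9 ** n      # sum of |0.9^n| converges to 10: BIBO stable
h_unstable = 1.1 ** n      # partial sums grow without bound: unstable

for h in (h_stable, h_unstable):
    print(np.sum(np.abs(h)))   # ~10 for the first, astronomically large for the second
```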
2.1.2.4 Causal vs. Noncausal

A causal system is one that is nonanticipative; that is, the output may depend on current and past inputs, but not future inputs. All "realtime" systems must be causal, since they cannot have future inputs available to them. One may think the idea of future inputs does not make much physical sense; however, we have only been dealing with time as our independent variable so far, which is not always the case. Imagine instead that we wanted to do image processing. Then the independent variable might represent pixels to the left and right (the "future") of the current position on the image, and we would have a noncausal system.

Figure 2.4: (a) For a typical system to be causal... (b) ...the output at time $t_0$, $y(t_0)$, can only depend on the portion of the input signal before $t_0$.

2.1.2.5 Stable vs. Unstable

A stable system is one where the output does not diverge as long as the input does not diverge. There are many ways to say that a signal "diverges"; for example, it could have infinite energy. One particularly useful definition of divergence relates to whether the signal is bounded or not. A system is referred to as bounded input-bounded output (BIBO) stable if every possible bounded input produces a bounded output. Representing this in a mathematical way, a stable system must have the following property, where $x(t)$ is the input and $y(t)$ is the output. The output must satisfy the condition

$|y(t)| \le M_y < \infty$    (2.5)

whenever we have an input to the system that can be described as

$|x(t)| \le M_x < \infty$    (2.6)

$M_x$ and $M_y$ both represent finite positive numbers, and these relationships hold for all $t$. If these conditions are not met, i.e., a system's output grows without limit (diverges) from a bounded input, then the system is unstable. Note that the BIBO stability of a linear time-invariant (LTI) system is neatly described in terms of whether or not its impulse response is absolutely integrable (Section 3.4).

2.2 Properties of Systems

(This content is available online at <http://cnx.org/content/m2102/2.17/>.)

2.2.1 Linear Systems

If a system is linear, this means that when an input to a given system is scaled by a value, the output of the system is scaled by the same amount.

Figure 2.5: Linear Scaling. In Figure 2.5(a), an input $x$ to the linear system $L$ gives the output $y$. If $x$ is scaled by a value $\alpha$ and passed through this same system, as in Figure 2.5(b), the output will also be scaled by $\alpha$.

A linear system also obeys the principle of superposition. This means that if two inputs are added together and passed through a linear system, the output will be the sum of the individual inputs' outputs.

Figure 2.6
Figure 2.7: Superposition Principle. If Figure 2.6 is true, then the principle of superposition says that Figure 2.7 is true as well. This holds for linear systems.

The scaling property mentioned above still holds in conjunction with the superposition principle. Therefore, if the inputs $x$ and $y$ are scaled by factors $\alpha$ and $\beta$, respectively, then the sum of these scaled inputs will give the sum of the individual scaled outputs.

Figure 2.8
Figure 2.9: Superposition Principle with Linear Scaling. Given Figure 2.8 for a linear system, Figure 2.9 holds as well.

2.2.2 Time-Invariant Systems

A time-invariant system has the property that a certain input will always give the same output, without regard to when the input was applied to the system.

Figure 2.10: Time-Invariant Systems. Figure 2.10(a) shows an input at time $t$ while Figure 2.10(b) shows the same input $t_0$ seconds later. In a time-invariant system both outputs would be identical except that the one in Figure 2.10(b) would be delayed by $t_0$. In this figure, the inputs $x(t)$ and $x(t - t_0)$ are passed through the system TI. Because the system TI is time-invariant, the inputs $x(t)$ and $x(t - t_0)$ produce the same output. The only difference is that the output due to $x(t - t_0)$ is shifted by a time $t_0$.

Whether a system is time-invariant or time-varying can be seen in the differential equation (or difference equation) describing it. A constant coefficient differential (or difference) equation means that the parameters of the system are not changing over time, and an input now will give the same result as the same input later. Time-invariant systems are modeled with constant coefficient equations.

2.2.3 Linear Time-Invariant (LTI) Systems

Certain systems are both linear and time-invariant, and are thus referred to as LTI systems.

Figure 2.11: Linear Time-Invariant Systems. This is a combination of the two cases above. Since the input to Figure 2.11(b) is a scaled, time-shifted version of the input in Figure 2.11(a), so is the output.

As LTI systems are a subset of linear systems, they obey the principle of superposition. In the figures below, we see the effect of applying time-invariance to the superposition definition in the linear systems section above.

Figure 2.12
Figure 2.13: Superposition in Linear Time-Invariant Systems. The principle of superposition applied to LTI systems.

2.2.3.1 LTI Systems in Series

If two or more LTI systems are in series with each other, their order can be interchanged without affecting the overall output of the system. Systems in series are also called cascaded systems.

Figure 2.14: Cascaded LTI Systems. The order of cascaded LTI systems can be interchanged without changing the overall effect.

2.2.3.2 LTI Systems in Parallel

If two or more LTI systems are in parallel with one another, an equivalent system is one that is defined as the sum of these individual systems.

Figure 2.15: Parallel LTI Systems. Parallel systems can be condensed into the sum of systems.

2.2.4 Causality

A system is causal if it does not depend on future values of the input to determine the output. This means that if the first input to a system comes at time $t_0$, then the system should not give any output until that time. An example of a noncausal system would be one that "sensed" an input coming and gave an output before the input arrived.

Figure 2.16: Non-causal System. In this non-causal system, an output is produced due to an input that occurs later in time.
A causal system is also characterized by an impulse response $h(t)$ that is zero for $t < 0$.

Chapter 3 Time Domain Analysis of Continuous Time Systems

3.1 CT Linear Systems and Differential Equations

(This content is available online at <http://cnx.org/content/m10855/2.7/>.)

3.1.1 Continuous-Time Linear Systems

Physically realizable, linear time-invariant systems can be described by a set of linear differential equations (LDEs):

Figure 3.1: Graphical description of a basic linear time-invariant system with an input, $f(t)$, and an output, $y(t)$.

$\frac{d^n}{dt^n}y(t) + a_{n-1}\frac{d^{n-1}}{dt^{n-1}}y(t) + \cdots + a_1\frac{d}{dt}y(t) + a_0 y(t) = b_m\frac{d^m}{dt^m}f(t) + \cdots + b_1\frac{d}{dt}f(t) + b_0 f(t)$

Equivalently,

$\sum_{i=0}^{n} a_i \frac{d^i y(t)}{dt^i} = \sum_{i=0}^{m} b_i \frac{d^i f(t)}{dt^i}$    (3.1)

with $a_n = 1$. It is easy to show that these equations define a system that is linear and time invariant. A natural question to ask, then, is how to find the system's output response $y(t)$ to an input $f(t)$. Recall that such a solution can be written as

$y(t) = y_i(t) + y_s(t)$

We refer to $y_i(t)$ as the zero-input response, the homogeneous solution due only to the initial conditions of the system. We refer to $y_s(t)$ as the zero-state response, the particular solution in response to the input $f(t)$. We now discuss how to solve for each of these components of the system's response.

3.1.1.1 Finding the Zero-Input Response

The zero-input response, $y_i(t)$, is the system response due to initial conditions only.

Example 3.1: Zero-Input Response
Close the switch in the circuit pictured in Figure 3.2 at time $t = 0$ and then leave everything else alone. The voltage response is shown in Figure 3.3.

Example 3.2: Zero-Input Response
Imagine a mass attached to a spring, as shown in Figure 3.4. When you pull the mass up and let it go, you have an example of a zero-input response. A plot of this response is shown in Figure 3.5.

There is no input, so we solve for $y_0(t)$ such that

$\sum_{i=0}^{n} a_i \frac{d^i y_0(t)}{dt^i} = 0, \quad a_n = 1$    (3.2)

If $D$ is the derivative operator, we can write the previous equation as

$\left(D^n + a_{n-1}D^{n-1} + \cdots + a_0\right) y_0(t) = 0$    (3.3)

Since we need the weighted sum of $y_0(t)$ and its derivatives, $y_0(t), \frac{d}{dt}y_0(t), \frac{d^2}{dt^2}y_0(t), \ldots$, to be $0$ for all $t$, they must all be of the same form. Only the exponential, $e^{st}$ where $s \in \mathbb{C}$, has this property (see your differential equations textbook for details). So we must assume that

$y_0(t) = ce^{st}, \quad c \neq 0$    (3.4)

for some $s$. Since $\frac{d}{dt}y_0(t) = cse^{st}$, $\frac{d^2}{dt^2}y_0(t) = cs^2 e^{st}$, ..., we have

$\left(D^n + a_{n-1}D^{n-1} + \cdots + a_0\right) y_0(t) = 0$
$c\left(s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0\right) e^{st} = 0$    (3.5)

(3.5) holds for all $t$ only when

$s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 = 0$    (3.6)

This equation is referred to as the characteristic equation of the system. The possible values of $s$, $\{s_1, s_2, \ldots, s_n\}$, are the roots of this polynomial:

$(s - s_1)(s - s_2)(s - s_3)\cdots(s - s_n) = 0$

i.e., possible solutions are $c_1 e^{s_1 t}, c_2 e^{s_2 t}, \ldots, c_n e^{s_n t}$. Since the system is linear, the general solution is of the form

$y_0(t) = c_1 e^{s_1 t} + c_2 e^{s_2 t} + \cdots + c_n e^{s_n t}$    (3.7)

Then, solve for the $\{c_1, \ldots, c_n\}$ using the initial conditions.

Example 3.3
See Lathi p. 108 for a good example!
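Since the zero-input response follows directly from the roots of the characteristic polynomial (3.6), the numeric step is routine. Below is a minimal MATLAB sketch with hypothetical coefficients (a second-order system invented for illustration, not taken from the text):

    % Characteristic roots of y'' + 3y' + 2y = 0 (hypothetical coefficients).
    s = roots([1 3 2])           % roots are s = -1 and s = -2
    % So y0(t) = c1*exp(-2t) + c2*exp(-t). With ICs y(0) = 1, y'(0) = 0,
    % solve the linear system [1 1; -2 -1]*[c1; c2] = [1; 0]:
    c = [1 1; -2 -1] \ [1; 0]    % c1 = -1, c2 = 2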
We generally assume that the ICs of a system are zero, which implies $y_i(t) = 0$. However, the method of solving for $y_i(t)$ will prove useful later on.

3.1.1.2 Finding the Zero-State Response

Solving a linear differential equation,

$\sum_{i=0}^{n} a_i \frac{d^i y(t)}{dt^i} = \sum_{i=0}^{m} b_i \frac{d^i f(t)}{dt^i}$    (3.8)

given a specific input $f(t)$, is a difficult task in general. More importantly, the method depends entirely on the nature of $f(t)$; if we change the input signal, we must completely re-solve the system of equations to find the system response. Convolution (Section 3.2) helps to bypass these difficulties. In the next section, we explain how convolution helps to determine the system's output, given only the input $f(t)$ and the system's impulse response (Section 1.5), $h(t)$.

Before deriving the convolution procedure, we show that a system's impulse response is easily derived from its linear differential equation (LDE). We will show the derivation for the LDE below, where $m < n$:

$\frac{d^n}{dt^n}y(t) + a_{n-1}\frac{d^{n-1}}{dt^{n-1}}y(t) + \cdots + a_1\frac{d}{dt}y(t) + a_0 y(t) = b_m\frac{d^m}{dt^m}f(t) + \cdots + b_1\frac{d}{dt}f(t) + b_0 f(t)$    (3.9)

We can rewrite (3.9) as

$Q_D[y(t)] = P_D[f(t)]$    (3.10)

where $Q_D[\cdot]$ is an operator that maps $y(t)$ to the left-hand side of (3.9),

$Q_D[y(t)] = \frac{d^n}{dt^n}y(t) + a_{n-1}\frac{d^{n-1}}{dt^{n-1}}y(t) + \cdots + a_1\frac{d}{dt}y(t) + a_0 y(t)$    (3.11)

and $P_D[\cdot]$ maps $f(t)$ to the right-hand side of (3.9). Lathi shows (in Appendix 2.1) that the impulse response of the system described by (3.9) is given by

$h(t) = b_n \delta(t) + P_D[y_n(t)]\,\mu(t)$    (3.12)

where for $m < n$ we have $b_n = 0$. Also, $y_n$ equals the zero-input response with initial conditions $y^{(n-1)}(0) = 1$, $y^{(n-2)}(0) = 0$, ..., $y(0) = 0$.

3.2 Continuous-Time Convolution

(This content is available online at <http://cnx.org/content/m10085/2.28/>.)

3.2.1 Motivation

Convolution helps to determine the effect a system has on an input signal. It can be shown that a linear time-invariant system (Section 2.1) is completely characterized by its impulse response. At first glance, this may appear to be of little use, since impulse functions are not well defined in real applications. However, the sifting property of impulses (Section 1.5.1.1: The Sifting Property of the Impulse) tells us that a signal can be decomposed into an infinite sum (integral) of scaled and shifted impulses. By knowing how a system affects a single impulse, and by understanding the way a signal is comprised of scaled and summed impulses, it seems reasonable that it should be possible to scale and sum the impulse responses of a system in order to determine what output signal results from a particular input. This is precisely what convolution does: convolution determines the system's output from knowledge of the input and the system's impulse response.

In the rest of this module, we will examine exactly how convolution is defined from the reasoning above. This will result in the convolution integral (see the next section) and its properties (Section 3.3). These concepts are very important in electrical engineering and will make any engineer's life a lot easier if the time is spent now to truly understand what is going on.

In order to fully understand convolution, you may find it useful to look at discrete-time convolution (Section 4.2) as well. It will also be helpful to experiment with the applets available on the internet (http://www.jhu.edu/~signals). These resources will offer different approaches to this crucial concept.
3.2.2 Convolution Integral

As mentioned above, the convolution integral provides an easy mathematical way to express the output of an LTI system based on an arbitrary signal, $x(t)$, and the system's impulse response, $h(t)$. The convolution integral is expressed as

$y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau$    (3.13)

Convolution is such an important tool that it is represented by the symbol $*$, and can be written as

$y(t) = x(t) * h(t)$    (3.14)

By making a simple change of variables in the convolution integral, $\tau' = t - \tau$, we can easily show that convolution is commutative:

$x(t) * h(t) = h(t) * x(t)$    (3.15)

For more information on the characteristics of the convolution integral, read about the Properties of Convolution (Section 3.3). We now present two distinct approaches for deriving the convolution integral. These derivations, along with a basic example, will help to build intuition about convolution.

3.2.3 Derivation I: The Short Approach

The derivation used here closely follows the one discussed in the Motivation (Section 3.2.1) section above. To begin, it is necessary to state the assumptions we will be making. In this instance, the only constraints on our system are that it be linear and time-invariant.

Brief Overview of Derivation Steps:
1. An impulse input leads to an impulse response output.
2. A shifted impulse input leads to a shifted impulse response output. This is due to the time-invariance of the system.
3. We now scale the impulse input to get a scaled impulse output. This uses the scalar multiplication property of linearity.
4. We can now "sum up" an infinite number of these scaled impulses to get a sum of an infinite number of scaled impulse responses. This uses the additivity attribute of linearity.
5. Now we recognize that this infinite sum is nothing more than an integral, so we convert both sides into integrals.
6. Recognizing that the input is the function $f(t)$, we also recognize that the output is exactly the convolution integral.

Figure 3.6: We begin with a system defined by its impulse response, $h(t)$.
Figure 3.7: We then consider a shifted version of the input impulse. Due to the time invariance of the system, we obtain a shifted version of the output impulse response.
Figure 3.8: Now we use the scaling part of linearity by scaling the system input by a value, $f(\tau)$, that is constant with respect to the system variable, $t$.
Figure 3.9: We can now use the additivity aspect of linearity to add an infinite number of these, one for each possible $\tau$. Since an infinite sum is exactly an integral, we end up with the integration $\int_{-\infty}^{\infty} f(\tau)\,\delta(t-\tau)\,d\tau \;\to\; \int_{-\infty}^{\infty} f(\tau)\,h(t-\tau)\,d\tau$, known as the convolution integral. Using the sifting property, we recognize the left-hand side simply as the input $f(t)$.

3.2.4 Derivation II: The Long Approach

This derivation is really not too different from the one above. It is, however, a little more rigorous and a little longer. Hopefully, if you think you "kind of" get the derivation above, this will help you gain a more complete understanding of convolution.

The first step in this derivation is to define a particular realization of the unit impulse function (Section 1.5). For this, we will use

$\delta_\Delta(t) = \begin{cases} \frac{1}{\Delta} & \text{if } -\frac{\Delta}{2} < t < \frac{\Delta}{2} \\ 0 & \text{otherwise} \end{cases}$

Figure 3.10: The realization of the unit impulse function that we will use for this derivation.
After defining our realization of the unit impulse, we can derive the convolution integral from the steps found in the table below. Note that the left column represents the input and the right column is the system's output given that input.

Table 3.1: Derivation II of the Convolution Integral

Input                                                          | Output
$\lim_{\Delta\to 0}\,\delta_\Delta(t)$                          → h → $\lim_{\Delta\to 0}\, h(t)$
$\lim_{\Delta\to 0}\,\delta_\Delta(t - n\Delta)$                → h → $\lim_{\Delta\to 0}\, h(t - n\Delta)$
$\lim_{\Delta\to 0}\, f(n\Delta)\,\delta_\Delta(t - n\Delta)\,\Delta$  → h → $\lim_{\Delta\to 0}\, f(n\Delta)\, h(t - n\Delta)\,\Delta$
$\lim_{\Delta\to 0}\,\sum_n f(n\Delta)\,\delta_\Delta(t - n\Delta)\,\Delta$  → h → $\lim_{\Delta\to 0}\,\sum_n f(n\Delta)\, h(t - n\Delta)\,\Delta$
$\int_{-\infty}^{\infty} f(\tau)\,\delta(t-\tau)\,d\tau = f(t)$  → h → $\int_{-\infty}^{\infty} f(\tau)\, h(t-\tau)\,d\tau = y(t)$

3.2.5 Implementation of Convolution

Taking a closer look at the convolution integral, we find that we are multiplying the input signal by the time-reversed impulse response and integrating. This gives us the value of the output at one given value of $t$. If we then shift the time-reversed impulse response by a small amount, we get the output for another value of $t$. Repeating this for every possible value of $t$ yields the total output function. While we would never actually do this computation by hand in this fashion, it does provide us with some insight into what is actually happening. We find that we are essentially reversing the impulse response function and sliding it across the input function, integrating as we go. This method, referred to as the graphical method, provides us with a much simpler way to solve for the output for simple (contrived) signals, while improving our intuition for the more complex cases where we rely on computers. In fact, Texas Instruments (http://www.ti.com) develops Digital Signal Processors (http://dspvillage.ti.com/docs/toolssoftwarehome.jhtml) which have special instruction sets for computations such as convolution.

Example 3.4
This demonstration illustrates the graphical method for convolution. See "How to use the LabVIEW demos" <http://cnx.org/content/m11550/latest/> for instructions on how to use the demo. [Media Object] This media object is a LabVIEW VI. Please view or download it at <ConvolutionTime.llb>.

3.2.6 Basic Example

Let us look at a basic continuous-time convolution example to help express some of the ideas mentioned above through a short example. We will convolve together two unit pulses, $x(t)$ and $h(t)$.

Figure 3.11: Here are the two basic signals that we will convolve together.

3.2.6.1 Reflect and Shift

Now we will take one of the functions and reflect it around the y-axis. Then we must shift the function, such that the origin (the point of the function that was originally on the origin) is labeled as point $t$. This step is shown in the figure below as $h(t - \tau)$. Since convolution is commutative, it will never matter which function is reflected and shifted; however, as the functions become more complicated, reflecting and shifting the "right one" will often make the problem much easier.

Figure 3.12: The reflected and shifted unit pulse.

3.2.6.2 Regions of Integration

Next, we want to look at the functions and divide the span of the functions into different limits of integration. These different regions can be understood by thinking about how we slide $h(t - \tau)$ over the other function. The limits come from the different regions of overlap that occur between the two functions. If the functions were more complex, we would need more limits so that the overlapping parts of both functions could be expressed in a single, linear integral. For this problem we will have the following four regions.
Compare these limits of integration to the sketches of $h(t - \tau)$ and $x(t)$ to see if you can understand why we have the four regions. Note that the $t$ in the limits of integration refers to the right-hand side of $h(t - \tau)$, labeled as $t$ between zero and one on the plot.

Four Limits of Integration
1. $t < 0$
2. $0 \le t < 1$
3. $1 \le t < 2$
4. $t \ge 2$

3.2.6.3 Using the Convolution Integral

Finally we are ready for a little math. Using the convolution integral, let us integrate the product of $x(\tau)\,h(t - \tau)$. For our first and fourth regions this is trivial, as the product is always $0$. The second region, $0 \le t < 1$, requires the following math:

$y(t) = \int_0^t 1\, d\tau = t$    (3.16)

The third region, $1 \le t < 2$, is solved in much the same manner. Take note of the changes in our integration, though. As we move $h(t - \tau)$ across our other function, the left-hand edge of the function, $t - 1$, becomes our lower limit for the integral. This is shown through our convolution integral as

$y(t) = \int_{t-1}^{1} 1\, d\tau = 1 - (t - 1) = 2 - t$    (3.17)

The above formulas show the method for calculating convolution; however, do not let the simplicity of this example confuse you when you work on other problems. The method will be the same; you will just have to deal with more math in more complicated integrals.

3.2.6.4 Convolution Results

Thus, we have the following results for our four regions:

$y(t) = \begin{cases} 0 & t < 0 \\ t & 0 \le t < 1 \\ 2 - t & 1 \le t < 2 \\ 0 & t \ge 2 \end{cases}$    (3.18)

Now that we have found the resulting function for each of the four regions, we can combine them together and graph the convolution of $x(t) * h(t)$.

Figure 3.13: Shows the system's response to the input, $x(t)$.
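The triangle in (3.18) can also be checked numerically: sampling both pulses and scaling MATLAB's conv by the sample spacing approximates the convolution integral. This is a minimal sketch; the step size is an arbitrary choice.

    % Numeric check of the unit-pulse example: conv scaled by dt
    % approximates the convolution integral.
    dt = 0.001;
    t  = 0:dt:1-dt;
    x  = ones(size(t));        % unit pulse on [0,1)
    h  = ones(size(t));        % unit pulse on [0,1)
    y  = conv(x, h) * dt;      % ~t on [0,1), ~2-t on [1,2)
    ty = (0:length(y)-1) * dt;
    plot(ty, y); xlabel('t'); ylabel('y(t)');  % triangle peaking at y(1) = 1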
3.3 Properties of Convolution

(This content is available online at <http://cnx.org/content/m10088/2.15/>.)

In this module we will study several of the most prevalent properties of convolution. Note that these properties apply to both continuous-time convolution (Section 3.2) and discrete-time convolution (Section 4.2). (Refer back to these two modules if you need a review of convolution.) Also, for the proofs of some of the properties, we will use continuous-time integrals, but we could prove them the same way using discrete-time summations.

3.3.1 Associativity
Theorem 3.1: Associative Law
$f_1(t) * (f_2(t) * f_3(t)) = (f_1(t) * f_2(t)) * f_3(t)$    (3.19)
Figure 3.14: Graphical implication of the associative property of convolution.

3.3.2 Commutativity
Theorem 3.2: Commutative Law
$y(t) = f(t) * h(t) = h(t) * f(t)$    (3.20)
Proof: To prove (3.20), all we need to do is make a simple change of variables in our convolution integral (or sum),
$y(t) = \int_{-\infty}^{\infty} f(\tau)\, h(t - \tau)\, d\tau$    (3.21)
By letting $\tau' = t - \tau$, we can easily show that convolution is commutative:
$y(t) = \int_{-\infty}^{\infty} f(t - \tau')\, h(\tau')\, d\tau' = \int_{-\infty}^{\infty} h(\tau')\, f(t - \tau')\, d\tau'$    (3.22)
$f(t) * h(t) = h(t) * f(t)$    (3.23)
Figure 3.15: The figure shows that either function can be regarded as the system's input while the other is the impulse response.

3.3.3 Distributivity
Theorem 3.3: Distributive Law
$f_1(t) * (f_2(t) + f_3(t)) = f_1(t) * f_2(t) + f_1(t) * f_3(t)$    (3.24)
Proof: The proof of this theorem can be taken directly from the definition of convolution and by using the linearity of the integral.
Figure 3.16

3.3.4 Time Shift
Theorem 3.4: Shift Property
For $c(t) = f(t) * h(t)$, then
$c(t - T) = f(t - T) * h(t)$    (3.25)
and
$c(t - T) = f(t) * h(t - T)$    (3.26)
Figure 3.17: Graphical demonstration of the shift property.

3.3.5 Convolution with an Impulse
Theorem 3.5: Convolving with the Unit Impulse
$f(t) * \delta(t) = f(t)$    (3.27)
Proof: For this proof, we will let $\delta(t)$ be the unit impulse located at the origin. Using the definition of convolution, we start with the convolution integral
$f(t) * \delta(t) = \int_{-\infty}^{\infty} \delta(\tau)\, f(t - \tau)\, d\tau$    (3.28)
From the definition of the unit impulse, we know that $\delta(\tau) = 0$ whenever $\tau \neq 0$. We use this fact to reduce the above equation to the following:
$f(t) * \delta(t) = \int_{-\infty}^{\infty} \delta(\tau)\, f(t)\, d\tau = f(t)\int_{-\infty}^{\infty} \delta(\tau)\, d\tau$    (3.29)
The integral of $\delta(\tau)$ only has a value when $\tau = 0$ (from the definition of the unit impulse), and its integral equals one. Thus we can simplify the equation to our theorem:
$f(t) * \delta(t) = f(t)$    (3.30)
Figure 3.18: The figures, and the equation above, reveal the identity property of the unit impulse.

3.3.6 Width
In continuous time, if $\mathrm{Duration}(f_1) = T_1$ and $\mathrm{Duration}(f_2) = T_2$, then
$\mathrm{Duration}(f_1 * f_2) = T_1 + T_2$    (3.31)
Figure 3.19: In continuous time, the duration of the convolution result equals the sum of the durations of the two signals that are convolved.
In discrete time, if $\mathrm{Duration}(f_1) = N_1$ and $\mathrm{Duration}(f_2) = N_2$, then
$\mathrm{Duration}(f_1 * f_2) = N_1 + N_2 - 1$    (3.32)

3.3.7 Causality
If $f$ and $h$ are both causal, then $f * h$ is also causal.

[Media Object] This media object is a LabVIEW VI. Please view or download it at <ConvTIMEDOM.llb>.

3.4 BIBO Stability

(This content is available online at <http://cnx.org/content/m10113/2.10/>.)

BIBO stands for bounded input, bounded output. BIBO stability is the condition that any bounded input yields a bounded output. This is to say that as long as we input a bounded signal, we are guaranteed to have a bounded output.

In order to understand this concept, we must first look more closely into exactly what we mean by bounded. A bounded signal is any signal such that there exists a value such that the absolute value of the signal is never greater than that value. Since this value is arbitrary, what we mean is that at no point can the signal tend to infinity.

Figure 3.20: A bounded signal is a signal for which there exists a constant $A$ such that $|f(t)| < A$.

Once we have identified what it means for a signal to be bounded, we must turn our attention to the condition a system must possess in order to guarantee that if any bounded signal is passed through the system, a bounded signal will arise at the output. It turns out that a continuous-time LTI (Section 2.1) system with impulse response $h(t)$ is BIBO stable if and only if its impulse response is absolutely integrable:

Continuous-Time Condition for BIBO Stability
$\int_{-\infty}^{\infty} |h(t)|\, dt < \infty$    (3.33)

To extend this concept to discrete time, we make the standard transition from integration to summation and get that the impulse response, $h(n)$, must be absolutely summable.

Discrete-Time Condition for BIBO Stability
$\sum_{n=-\infty}^{\infty} |h(n)| < \infty$    (3.34)
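Conditions (3.33) and (3.34) can be estimated numerically by truncating the integral or sum. The sketch below uses two hypothetical impulse responses chosen for illustration:

    % Numeric BIBO check (hypothetical impulse responses).
    dt = 0.001;  t = 0:dt:50;
    h_stable   = exp(-2*t);      % integral of |h| converges to 1/2 -> stable
    h_unstable = exp(0.1*t);     % integral diverges                -> unstable
    fprintf('stable:   %g\n', sum(abs(h_stable))*dt);    % ~0.5
    fprintf('unstable: %g\n', sum(abs(h_unstable))*dt);  % grows with the t-range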
3.4.1 Stability and Laplace

Stability is very easy to infer from the pole-zero plot (Section 13.6) of a transfer function. The only condition necessary to demonstrate stability is to show that the $j\omega$-axis is in the region of convergence.

Figure 3.21: (a) Example of a pole-zero plot for a stable continuous-time system. (b) Example of a pole-zero plot for an unstable continuous-time system.

3.4.2 Stability and the Z-Transform

Stability for discrete-time signals (Section 1.1) in the z-domain (Section 14.1) is about as easy to demonstrate as it is for continuous-time signals in the Laplace domain. However, instead of the region of convergence needing to contain the $j\omega$-axis, the ROC must contain the unit circle.

Figure 3.22: (a) A stable discrete-time system. (b) An unstable discrete-time system.

[Media Object] This media object is a LabVIEW VI. Please view or download it at <BIBO.llb>.

Chapter 4 Time Domain Analysis of Discrete Time Systems

4.1 Discrete-Time Systems in the Time-Domain

(This content is available online at <http://cnx.org/content/m10251/2.23/>.)

A discrete-time signal $s(n)$ is delayed by $n_0$ samples when we write $s(n - n_0)$, with $n_0 > 0$. Choosing $n_0$ to be negative advances the signal along the integers. As opposed to analog delays (see "Simple Systems": Delay, <http://cnx.org/content/m0006/latest/#delay>), discrete-time delays can only be integer valued. In the frequency domain, delaying a signal corresponds to a linear phase shift of the signal's discrete-time Fourier transform: $s(n - n_0) \leftrightarrow e^{-j2\pi f n_0} S\!\left(e^{j2\pi f}\right)$.

Linear discrete-time systems have the superposition property:

$S(a_1 x_1(n) + a_2 x_2(n)) = a_1 S(x_1(n)) + a_2 S(x_2(n))$    (4.1)

A discrete-time system is called shift-invariant (analogous to time-invariant analog systems; see "Simple Systems" <http://cnx.org/content/m0006/latest/#para4wra>) if delaying the input delays the corresponding output: if $S(x(n)) = y(n)$, then a shift-invariant system has the property

$S(x(n - n_0)) = y(n - n_0)$    (4.2)

We use the term shift-invariant to emphasize that delays can only have integer values in discrete time, while in analog signals, delays can be arbitrarily valued.

We want to concentrate on systems that are both linear and shift-invariant. It will be these that allow us the full power of frequency-domain analysis and implementations. Because we have no physical constraints in "constructing" such systems, we need only a mathematical specification. In analog systems, the differential equation specifies the input-output relationship in the time domain. The corresponding discrete-time specification is the difference equation:

$y(n) = a_1 y(n-1) + \cdots + a_p y(n-p) + b_0 x(n) + b_1 x(n-1) + \cdots + b_q x(n-q)$    (4.3)

Here, the output signal $y(n)$ is related to its past values $y(n - l)$, $l = \{1, \ldots, p\}$, and to the current and past values of the input signal $x(n)$. The system's characteristics are determined by the choices for the number of coefficients $p$ and $q$ and the coefficients' values $\{a_1, \ldots, a_p\}$ and $\{b_0, b_1, \ldots, b_q\}$.

Aside: there is an asymmetry in the coefficients. Where is $a_0$? This coefficient would multiply the $y(n)$ term in (4.3). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that $a_0$ is always one.

As opposed to differential equations, which only provide an implicit description of a system (we must somehow solve the differential equation), difference equations provide an explicit way of computing the output for any input: each output is computed from the previous output values and the current and previous inputs.
Difference equations are usually expressed in software with for loops. A MATLAB program that would compute the first 1000 values of the output has the form

    for n=1:1000
        y(n) = sum(a.*y(n-1:-1:n-p)) + sum(b.*x(n:-1:n-q));
    end

An important detail emerges when we consider making this program work; in fact, as written it has (at least) two bugs. What input and output values enter into the computation of $y(1)$? We need values for $y(0)$, $y(-1)$, ..., values we have not yet computed. To compute them, we would need more previous values of the output, which we have not yet computed. To compute these values, we would need even earlier values, ad infinitum. The way out of this predicament is to specify the system's initial conditions: we must provide the $p$ output values that occurred before the input started. These values can be arbitrary, but the choice does impact how the system responds to a given input. One choice gives rise to a linear system: make the initial conditions zero. The reason lies in the definition of a linear system (see "Simple Systems": Linear Systems, <http://cnx.org/content/m0006/latest/#linearsys>): the only way that the output to a sum of signals can be the sum of the individual outputs occurs when the initial conditions in each case are zero.

Exercise 4.1 (Solution on p. 90.)
The initial condition issue resolves making sense of the difference equation for inputs that start at some index. However, the program will not work because of a programming, not conceptual, error. What is it? How can it be "fixed?"

Example 4.1
Let's consider the simple system having $p = 1$ and $q = 0$:

$y(n) = ay(n-1) + bx(n)$    (4.4)

To compute the output at some index, this difference equation says we need to know what the previous output $y(n-1)$ and what the input signal are at that moment of time. In more detail, let's compute this system's output to a unit-sample input: $x(n) = \delta(n)$. Because the input is zero for negative indices, we start by trying to compute the output at $n = 0$:

$y(0) = ay(-1) + b$    (4.5)

What is the value of $y(-1)$? Because we have used an input that is zero for all negative indices, it is reasonable to assume that the output is also zero. Certainly, the difference equation would not describe a linear system if an input that is zero for all time did not produce a zero output. With this assumption, $y(-1) = 0$, leaving $y(0) = b$. For $n > 0$, the input unit-sample is zero, which leaves us with the difference equation $y(n) = ay(n-1)$, $n > 0$. We can envision how the filter responds to this input by making a table:

$y(n) = ay(n-1) + b\delta(n)$    (4.6)

Table 4.1
n:    -1, 0, 1,  2,    ..., n
x(n):  0, 1, 0,  0,    ..., 0
y(n):  0, b, ba, ba^2, ..., ba^n

Coefficient values determine how the output behaves. The parameter $b$ can be any value, and serves as a gain. The effect of the parameter $a$ is more complicated (Table 4.1). If it equals zero, the output simply equals the input times the gain $b$. For all non-zero values of $a$, the output lasts forever; such systems are said to be IIR (Infinite Impulse Response). The reason for this terminology is that the unit sample is also known as the impulse (especially in analog situations), and the system's response to the "impulse" lasts forever.
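Table 4.1 can be generated directly. The sketch below is a hypothetical illustration that also applies the index shift Exercise 4.1 asks for: MATLAB arrays start at 1, so y(k) here stores the output at time n = k - 1.

    % Simulate y(n) = a*y(n-1) + b*x(n) for a unit-sample input.
    a = 0.5;  b = 1;  N = 10;          % hypothetical parameter values
    x = [1, zeros(1, N-1)];            % unit sample
    y = zeros(1, N);
    y(1) = b*x(1);                     % y(0) = b, assuming y(-1) = 0 (at rest)
    for k = 2:N
        y(k) = a*y(k-1) + b*x(k);      % yields b, b*a, b*a^2, ... (Table 4.1)
    end
    disp(y)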
When $a$ is positive and less than one, the output is a decaying exponential. When $a = 1$, the output is a unit step. If $a$ is negative and greater than $-1$, the output oscillates while decaying exponentially. When $a = -1$, the output changes sign forever, alternating between $b$ and $-b$. More dramatic effects occur when $|a| > 1$; whether positive or negative, the output signal becomes larger and larger, growing exponentially.

Figure 4.1: The input to the simple example system, a unit sample $x(n)$, is shown at the top, with the outputs $y(n)$ for several system parameter values ($a = 0.5, b = 1$; $a = -0.5, b = 1$; $a = 1.1, b = 1$) shown below.

Positive values of $a$ are used in population models to describe how population size increases over time. Here, $n$ might correspond to generation. The difference equation says that the number in the next generation is some multiple of the previous one. If this multiple is less than one, the population becomes extinct; if greater than one, the population flourishes. The same difference equation also describes the effect of compound interest on deposits. Here, $n$ indexes the times at which compounding occurs (daily, monthly, etc.), $a$ equals the compound interest rate plus one, and $b = 1$ (the bank provides no gain). In signal processing applications, we typically require that the output remain bounded for any input. For our example, that means that we restrict $|a| < 1$ and choose values for it and the gain according to the application.

Exercise 4.2 (Solution on p. 90.)
Note that the difference equation (4.3) does not involve terms like $y(n+1)$ or $x(n+1)$ on the equation's right side. Can such terms also be included? Why or why not?

Example 4.2
A somewhat different system has no "a" coefficients. Consider the difference equation

$y(n) = \frac{1}{q}\left(x(n) + \cdots + x(n - q + 1)\right)$    (4.7)

Because this system's output depends only on current and previous input values, we need not be concerned with initial conditions. When the input is a unit sample, the output equals $\frac{1}{q}$ for $n = \{0, \ldots, q-1\}$, then equals zero thereafter. Such systems are said to be FIR (Finite Impulse Response) because their unit-sample responses have finite duration. Plotting this response (Figure 4.2) shows that the unit-sample response is a pulse of width $q$ and height $\frac{1}{q}$. This waveform is also known as a boxcar, hence the name boxcar filter given to this system. (We'll derive its frequency response and develop its filtering interpretation in the next section.) For now, note that the difference equation says that each output value equals the average of the input's current and previous $q - 1$ values. Thus, the output equals the running average of the input's previous $q$ values. Such a system could be used to produce the average weekly temperature ($q = 7$) that could be updated daily.

Figure 4.2: The plot shows the unit-sample response of a length-5 boxcar filter.

[Media Object] This media object is a LabVIEW VI. Please view or download it at <DiscreteTimeSys.llb>.
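The boxcar filter of Example 4.2 is one line with MATLAB's filter function, whose first argument holds the FIR taps. A minimal sketch matching the length-5 case of Figure 4.2:

    % Length-q boxcar (running average) via filter.
    q = 5;
    b = ones(1, q) / q;          % each tap is 1/q
    x = [1, zeros(1, 14)];       % unit sample
    y = filter(b, 1, x);         % unit-sample response: 1/q for q samples, then 0
    stem(0:14, y)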
4.2 Discrete-Time Convolution

(This content is available online at <http://cnx.org/content/m10087/2.19/>.)

4.2.1 Overview

Convolution is a concept that extends to all systems that are both linear and time-invariant (LTI) (Section 2.1). The idea of discrete-time convolution is exactly the same as that of continuous-time convolution (Section 3.2). For this reason, it may be useful to look at both versions to help your understanding of this extremely important concept. Recall that convolution is a very powerful tool in determining a system's output from knowledge of an arbitrary input and the system's impulse response. It will also be helpful to see convolution graphically with your own eyes and to play around with it some, so experiment with the applets available on the internet (http://www.jhu.edu/~signals). These resources will offer different approaches to this crucial concept.

4.2.2 Convolution Sum

As mentioned above, the convolution sum provides a concise, mathematical way to express the output of an LTI system based on an arbitrary discrete-time input signal and the system's response. The convolution sum is expressed as

$y[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]$    (4.8)

As with continuous time, convolution is represented by the symbol $*$, and can be written as

$y[n] = x[n] * h[n]$    (4.9)

By making a simple change of variables in the convolution sum, $k' = n - k$, we can easily show that convolution is commutative:

$x[n] * h[n] = h[n] * x[n]$    (4.10)

For more information on the characteristics of convolution, read about the Properties of Convolution (Section 3.3).

4.2.3 Derivation

We know that any discrete-time signal can be represented by a summation of scaled and shifted discrete-time impulses. Since we are assuming the system to be linear and time-invariant, it stands to reason that an input signal comprised of the sum of scaled and shifted impulses would give rise to an output comprised of a sum of scaled and shifted impulse responses. This is exactly what occurs in convolution. Below we present a more rigorous and mathematical look at the derivation. Letting $H$ be a DT LTI system, we start with the following equation and work our way down to the convolution sum:

$y[n] = H[x[n]] = H\!\left[\sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k]\right] = \sum_{k=-\infty}^{\infty} H\big[x[k]\,\delta[n-k]\big] = \sum_{k=-\infty}^{\infty} x[k]\, H\big[\delta[n-k]\big] = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]$    (4.11)

Let us take a quick look at the steps taken in the above derivation. After the initial equation, we use the DT sifting property (Section 1.5.1.1: The Sifting Property of the Impulse) to rewrite the function, $x[n]$, as a sum of the function times the unit impulse. Next, we can move the $H$ operator inside the summation because $H[\cdot]$ is a linear, DT system. Because of this linearity, and the fact that $x[k]$ is a constant with respect to $n$, we can pull it out and simply multiply it by $H[\delta[n-k]]$. Finally, we use the fact that $H[\cdot]$ is time invariant to reach our final state: the convolution sum!

A quick graphical example may help in demonstrating why convolution works.

Figure 4.3: A single impulse input yields the system's impulse response.
Figure 4.4: A scaled impulse input yields a scaled response, due to the scaling property of the system's linearity.
Figure 4.5: We now use the time-invariance property of the system to show that a delayed input results in an output of the same shape, only delayed by the same amount as the input.
Figure 4.6: We now use the additivity portion of the linearity property of the system to complete the picture. Since any discrete-time signal is just a sum of scaled and shifted discrete-time impulses, we can find the output from knowing the input and the impulse response.
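For finite-length sequences, the convolution sum (4.8) is computed by MATLAB's conv. A two-line check with made-up signals (note the output length matches the width property, $N_1 + N_2 - 1$):

    % Discrete-time convolution sum for finite-length signals.
    x = [1 2 3];          % input, starting at n = 0
    h = [1 1];            % impulse response, starting at n = 0
    y = conv(x, h)        % = [1 3 5 3], length 3 + 2 - 1 = 4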
4.2.4 Convolution Through Time (A Graphical Approach)

In this section we will develop a second graphical interpretation of discrete-time convolution. We begin by writing the convolution sum, allowing $x$ to be a causal, length-$m$ signal and $h$ to be the unit-sample response of a causal, length-$k$ LTI system. This gives us the finite summation

$y[n] = \sum_{l=0}^{m-1} x[l]\, h[n-l]$    (4.12)

Notice that for any given $n$ we have a sum of the products of $x_l$ and a time-delayed $h_{-l}$. This is to say that we multiply the terms of $x$ by the terms of a time-reversed $h$ and add them up.

Going back to the previous example:

Figure 4.7: This is the end result that we are looking to find.
Figure 4.8: Here we reverse the impulse response, $h$, and begin its traverse at time 0.
Figure 4.9: We continue the traverse. See that at time 1, we are multiplying two elements of the input signal by two elements of the impulse response.
Figure 4.10
Figure 4.11: If we follow this through one more step, $n = 4$, then we can see that we produce the same output as we saw in the initial example.

What we are doing in the above demonstration is reversing the impulse response in time and "walking it across" the input signal. Clearly, this yields the same result as scaling, shifting, and summing impulse responses. This approach of time-reversing and sliding across is a common approach to presenting convolution, since it demonstrates how convolution builds up an output through time.

4.3 Circular Convolution and the DFT

(This content is available online at <http://cnx.org/content/m10786/2.8/>.)

4.3.1 Introduction

You should be familiar with discrete-time convolution (Section 4.2), which tells us that given two discrete-time signals $x[n]$, the system's input, and $h[n]$, the system's response, we define the output of the system as

$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]$    (4.13)

When we are given two DFTs (finite-length sequences, usually of length $N$), we cannot just multiply them together as we do in the above convolution formula, often referred to as linear convolution. Because the DFTs are periodic, they have nonzero values for $n \ge N$, and thus the multiplication of these two DFTs will be nonzero for $n \ge N$. We need to define a new type of convolution operation that will result in our convolved signal being zero outside the range $n = \{0, 1, \ldots, N-1\}$. This idea led to the development of circular convolution, also called cyclic or periodic convolution.

4.3.2 Circular Convolution Formula

What happens when we multiply two DFTs together, where $Y[k]$ is the DFT of $y[n]$?

$Y[k] = F[k]\, H[k]$    (4.14)

when $0 \le k \le N-1$. Using the DFT synthesis formula for $y[n]$,

$y[n] = \frac{1}{N}\sum_{k=0}^{N-1} F[k]\, H[k]\, e^{j\frac{2\pi}{N}kn}$    (4.15)

and then applying the analysis formula $F[k] = \sum_{m=0}^{N-1} f[m]\, e^{-j\frac{2\pi}{N}km}$,

$y[n] = \frac{1}{N}\sum_{k=0}^{N-1}\sum_{m=0}^{N-1} f[m]\, e^{-j\frac{2\pi}{N}km}\, H[k]\, e^{j\frac{2\pi}{N}kn} = \sum_{m=0}^{N-1} f[m]\left(\frac{1}{N}\sum_{k=0}^{N-1} H[k]\, e^{j\frac{2\pi}{N}k(n-m)}\right)$    (4.16)

where we can reduce the inner summation in the above equation to $h[((n-m))_N] = \frac{1}{N}\sum_{k=0}^{N-1} H[k]\, e^{j\frac{2\pi}{N}k(n-m)}$, which gives

$y[n] = \sum_{m=0}^{N-1} f[m]\, h[((n-m))_N]$    (4.17)

which equals circular convolution! When we have $0 \le n \le N-1$ in the above, then we get

$y[n] \equiv (f[n] \circledast h[n])$

note: The notation $((\cdot))_N$ represents cyclic convolution "mod N."
4.3.2.1 Steps for Cyclic Convolution

Steps for cyclic convolution are the same as for the usual convolution, except all index calculations are done "mod N," that is, "on the wheel."

Steps for Cyclic Convolution
- Step 1: "Plot" $f[m]$ and $h[((-m))_N]$. (Figure 4.12)
- Step 2: "Spin" $h[((-m))_N]$ $n$ notches ACW (counter-clockwise) to get $h[((n-m))_N]$, i.e., simply rotate the sequence, $h[n]$, clockwise by $n$ steps. (Figure 4.13)
- Step 3: Pointwise multiply the $f[m]$ wheel and the $h[((n-m))_N]$ wheel and sum to get $y[n]$.
- Step 4: Repeat for all $0 \le n \le N-1$.

Example 4.3: Convolve ($N = 4$)
Figure 4.14: Two discrete-time signals to be convolved.
- $h[((-m))_N]$ (Figure 4.15): multiply by $f[m]$ and sum to yield $y[0] = 3$.
- $h[((1-m))_N]$ (Figure 4.16): multiply by $f[m]$ and sum to yield $y[1] = 5$.
- $h[((2-m))_N]$ (Figure 4.17): multiply by $f[m]$ and sum to yield $y[2] = 3$.
- $h[((3-m))_N]$ (Figure 4.18): multiply by $f[m]$ and sum to yield $y[3] = 1$.

Example 4.4
The following demonstration allows you to explore this algorithm for circular convolution. See "How to use the LabVIEW demos" <http://cnx.org/content/m11550/latest/> for instructions on how to use the demo. [Media Object] Please see http://cnx.org/content/m10786/latest/DTCircularConvolution.llb.

4.3.2.2 Alternative Algorithm

Alternative Circular Convolution Algorithm
- Step 1: Calculate the DFT of $f[n]$, which yields $F[k]$, and calculate the DFT of $h[n]$, which yields $H[k]$.
- Step 2: Pointwise multiply $Y[k] = F[k]\, H[k]$.
- Step 3: Inverse DFT $Y[k]$, which yields $y[n]$.

This seems like a roundabout way of doing things, but it turns out that there are extremely fast ways to calculate the DFT of a sequence. To circularly convolve two $N$-point sequences directly,

$y[n] = \sum_{m=0}^{N-1} f[m]\, h[((n-m))_N]$

for each $n$ we need $N$ multiplies and $N-1$ additions; $N$ points implies $N^2$ multiplications and $N(N-1)$ additions, i.e., $O(N^2)$ complexity.
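The alternative algorithm maps directly onto MATLAB's fft and ifft. A sketch with hypothetical length-4 signals (the real() call just strips round-off-level imaginary parts):

    % Circular convolution via the DFT (hypothetical signals).
    f = [1 2 3 4];
    h = [1 1 0 0];
    y = real(ifft(fft(f) .* fft(h)))    % = [5 3 5 7], cyclic convolution mod 4
    % Direct check of y[0]: sum over m of f[m]*h[((0-m)) mod 4]
    %   = f[0]*h[0] + f[1]*h[3] + f[2]*h[2] + f[3]*h[1] = 1 + 0 + 0 + 4 = 5.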
4.4 Linear Constant-Coefficient Difference Equations

(This content is available online at <http://cnx.org/content/m12325/1.3/>.)

Remember linear differential equations, e.g., $\frac{d}{dt}y(t) - y(t) = x(t)$? A difference equation is the discrete-time analogue of a differential equation. We simply use differences ($x[n] - x[n-1]$) rather than derivatives ($\frac{d}{dt}x(t)$). An important subclass of linear systems consists of those whose input $x[n]$ and output $y[n]$ obey an $N$-th order LCCDE:

$\sum_{K=0}^{N} a_K\, y[n-K] = \sum_{K=0}^{M} b_K\, x[n-K]$    (4.18)

Example 4.5: Moving average system
$y[n] = \frac{1}{M_1 + M_2 + 1}\sum_{K=-M_1}^{M_2} x[n-K]$
where we set $M_1 = 0$ and $M_2 = M$, so that $a_K = 1$ if $K = 0$ (0 otherwise) and $b_K = \frac{1}{M+1}$ if $0 \le K \le M$ (0 otherwise). How to implement (code / hardware)? IIR or FIR? For $M = 2$: $y[n] = \frac{1}{3}\sum_{K=0}^{2} x[n-K]$.

Example 4.6: Recursive system
$y[n] = \sum_{K=1}^{N} \alpha_K\, y[n-K] + x[n]$, i.e., $\sum_{K=0}^{N} a_K\, y[n-K] = x[n]$
where $a_K = 1$ if $K = 0$, $-\alpha_K$ if $1 \le K \le N$, and 0 otherwise. How to implement? IIR or FIR? For $N = 2$: $y[n] = \sum_{K=1}^{2} \alpha_K\, y[n-K] + x[n]$.

FIR ~ feed-forward ~ moving average. IIR ~ recursive ~ feedback.

The SOLUTION of a difference equation is similar to that of a differential equation. In particular, note that a single input-output pair ($x[n]$, $y_p[n]$) that solves the DE,

$\sum_{k=0}^{N} a_k\, y_p[n-k] = \sum_{k=0}^{M} b_k\, x[n-k]$

is not enough to characterize the solution. Add in zero to get the homogeneous equation $\sum_{k=0}^{N} a_k\, y_h[n-k] = 0$, so that

$\sum_{k=0}^{N} a_k\,\big(y_p[n-k] + y_h[n-k]\big) = \sum_{k=0}^{M} b_k\, x[n-k]$

General Solution: the particular ("forced") solution $y_p[n]$ plus the homogeneous ("unforced") solution $y_h[n]$:

$y[n] = y_p[n] + y_h[n]$    (4.19)

4.5 Solving Linear Constant-Coefficient Difference Equations

(This content is available online at <http://cnx.org/content/m12326/1.4/>.)

Step 1: Given the input $x[n]$, find a solution to
$\sum_{k=0}^{N} a_k\, y_p[n-k] = \sum_{k=0}^{M} b_k\, x[n-k]$
(note: just any old solution will do!); this is $y_p[n]$, the particular solution.
Step 2: Solve the homogeneous equation $\sum_{k=0}^{N} a_k\, y_h[n-k] = 0$ for $y_h[n]$, the homogeneous solution.
The complete solution is given by $y[n] = y_p[n] + y_h[n]$.

4.5.1 Solving the Homogeneous Equation

What does it mean? (Figure 4.19) Clearly $y_h[n]$ depends on the INITIAL CONDITIONS of the system $T$. Linearity, time-invariance, and causality will each depend on these conditions. In this course, we will emphasize the simplest case, when $T$ is "initially at rest" with "zero initial conditions": we will get LTI and causal solutions (although possibly at the expense of stability).

Example 4.7
Solve $y[n] - ay[n-1] = x[n]$, where $|a| < 1$, for $x[n] = \delta[n]$. Assume "zero initial conditions."
Step 1: Particular solution, for $n \ge 0$:
$y_p[0] = \delta[0] + a\, y_p[-1] = 1 + a\cdot 0 = 1$
$y_p[1] = \delta[1] + a\, y_p[0] = 0 + a\cdot 1 = a$
$y_p[2] = \delta[2] + a\, y_p[1] = a^2$
so $y_p[n] = a^n$, $n \ge 0$.    (4.20)
(Figure 4.20)
Step 2: Homogeneous solution. If $x[n] = 0$, then $y_h[n] - a\, y_h[n-1] = 0$, i.e., $y_h[n] = a\, y_h[n-1]$. A solution is given by
$y_h[n] = c\, a^n$ for all $n$.    (4.21)
(Figure 4.21)
Step 3: Reconcile: $y[n] = y_p[n] + y_h[n] = a^n u[n] + c\, a^n$. How to pick $c$? We need auxiliary conditions. (Figure 4.22)
If we desire a causal system, then $c = 0$ and $y[n] = a^n u[n]$. (Figure 4.23)
If we desire an anticausal system, then choose $c = -1$, so $y[n] = -(a^n u[-n])$. This does not assume the system is initially at rest! (Figure 4.24)

Notes
1. Solution 1 was causal and stable.
2. Solution 2 was anticausal and unstable.
In general, the linearity, time-invariance, and causality of a system implemented as a DE will depend on the auxiliary conditions.
Fact: If we assume that the system is initially at rest ("zero initial conditions"), then it will be LINEAR, TIME-INVARIANT, and CAUSAL.
Note: Setting the input $x = \delta_0$ (an impulse), setting all initial conditions to 0, and solving for $y_p$ yields $y_p = h$, the impulse response of this LSI system.
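Example 4.7's causal, at-rest solution can be cross-checked with MATLAB's filter, which implements an LCCDE with zero initial conditions. A minimal sketch; the value of a is hypothetical:

    % Check the causal solution y[n] = a^n u[n] of y[n] - a*y[n-1] = x[n].
    a = 0.5;  N = 8;
    x = [1, zeros(1, N-1)];            % x[n] = delta[n]
    y = filter(1, [1 -a], x)           % zero-IC solution of the LCCDE
    max(abs(y - a.^(0:N-1)))           % ~0: matches a^n for n = 0..N-1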
Example 4.8: Frequency Response of a "wire"
(Figure 4.25) The impulse response of a wire (Figure 4.26) is the unit sample itself:
$\delta_0[n] = \begin{cases} 1 & \text{if } n = 0 \\ 0 & \text{otherwise} \end{cases}$
so the frequency response is
$H = F\, h = \frac{1}{\sqrt{N}}\sum_{n=0}^{N-1} h[n]\, e^{-j\frac{2\pi}{N}kn} = \frac{1}{\sqrt{N}}\sum_{n=0}^{N-1} \delta_0[n]\, e^{-j\frac{2\pi}{N}kn} = \frac{1}{\sqrt{N}}$
which is flat (Figure 4.27).

Solutions to Exercises in Chapter 4

Solution to Exercise 4.1 (p. 70)
The indices can be negative, and this condition is not allowed in MATLAB. To fix it, we must start the signals later in the array.

Solution to Exercise 4.2 (p. 72)
Such terms would require the system to know what future input or output values would be before the current value was computed. Thus, such terms can cause difficulties.

Chapter 5 Linear Algebra Overview

5.1 Linear Algebra: The Basics

(This content is available online at <http://cnx.org/content/m10734/2.5/>.)

This brief tutorial on some key terms in linear algebra is not meant to replace or be very helpful to those of you trying to gain a deep insight into linear algebra. Rather, this brief introduction to some of the terms and ideas of linear algebra is meant to provide a little background to those trying to get a better understanding of, or learn about, eigenvectors and eigenfunctions, which play a big role in deriving a few important ideas on signals and systems. The goal of these concepts is to provide a background for signal decomposition and to lead up to the derivation of the Fourier series (Section 6.2).

5.1.1 Linear Independence

A set of vectors $\{x_1, x_2, \ldots, x_k\}$, $x_i \in \mathbb{C}^n$, is linearly independent if none of them can be written as a linear combination of the others.

Definition 5.1: Linearly Independent
For a given set of vectors, $\{x_1, x_2, \ldots, x_n\}$, they are linearly independent if
$c_1 x_1 + c_2 x_2 + \cdots + c_n x_n = 0$ only when $c_1 = c_2 = \cdots = c_n = 0$.

Example
We are given the following two vectors:
$x_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \quad x_2 = \begin{pmatrix} -6 \\ -4 \end{pmatrix}$
These are not linearly independent, as proven by the following statement, which, by inspection, can be seen to not adhere to the definition of linear independence stated above:
$x_2 = -2x_1 \;\Rightarrow\; 2x_1 + x_2 = 0$
Another approach to reveal vectors' (in)dependence is by graphing them. Looking at these two vectors geometrically (as in Figure 5.1), one can again prove that these vectors are not linearly independent.
Figure 5.1: Graphical representation of two vectors that are not linearly independent.

Example 5.1
We are given the following two vectors:
$x_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$
These are linearly independent since
$c_1 x_1 = -(c_2 x_2)$
only if $c_1 = c_2 = 0$. Based on the definition, this proof shows that these vectors are indeed linearly independent. Again, we could also graph these two vectors (see Figure 5.2) to check for linear independence.
Figure 5.2: Graphical representation of two vectors that are linearly independent.

Exercise 5.1 (Solution on p. 109.)
Are $\{x_1, x_2, x_3\}$ linearly independent?
$x_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad x_3 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$

As we have seen in the two above examples, often the independence of vectors can be easily seen through a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell whether or not these vectors are independent from Figure 5.3? Probably not, which is why the method used in the above solution becomes important.
Figure 5.3: Plot of the three vectors. It can be shown that a linear combination exists among the three, and therefore they are not linearly independent.
Hint: a set of $m$ vectors in $\mathbb{C}^n$ cannot be linearly independent if $m > n$.

5.1.2 Span

Definition 5.2: Span
The span of a set of vectors $\{x_1, x_2, \ldots, x_k\}$ is the set of vectors that can be written as a linear combination of $\{x_1, x_2, \ldots, x_k\}$ (see "Subspaces", Definition 2: "Span" <http://cnx.org/content/m10297/latest/#defn2>):
$\mathrm{span}(\{x_1, \ldots, x_k\}) = \{\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k,\ \alpha_i \in \mathbb{C}\}$

Example
Given the vector $x_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$, the span of $x_1$ is a line.

Example
Given the vectors $x_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$ and $x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, the span of these vectors is $\mathbb{C}^2$.
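The graphical test breaks down beyond two or three vectors. Numerically, a set of vectors is linearly independent exactly when the matrix having them as columns has full column rank, which is one way to read the hint above. A sketch using the vectors from the examples:

    % Linear independence via column rank.
    x1 = [3; 2];  x2 = [-6; -4];  x3 = [1; 2];  x4 = [-1; 0];
    rank([x1 x2])       % = 1 -> dependent (x2 = -2*x1)
    rank([x1 x3])       % = 2 -> independent
    rank([x1 x3 x4])    % = 2 < 3 columns -> the set in Exercise 5.1 is dependent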
5.1.3 Basis

Definition 5.3: Basis
A basis for $\mathbb{C}^n$ is a set of vectors that: (1) spans $\mathbb{C}^n$ and (2) is linearly independent.
Clearly, any set of $n$ linearly independent vectors is a basis for $\mathbb{C}^n$.

Example 5.2
We are given the following vector
$e_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$
where the 1 is always in the $i$-th place and the remaining values are zero. Then the basis for $\mathbb{C}^n$ is $\{e_i,\ i = 1, 2, \ldots, n\}$.
note: $\{e_i,\ i = 1, 2, \ldots, n\}$ is called the standard basis.

Example 5.3
$h_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad h_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$
$\{h_1, h_2\}$ is a basis for $\mathbb{C}^2$.
Figure 5.4: Plot of the basis for $\mathbb{C}^2$.

If $\{b_1, \ldots, b_n\}$ is a basis for $\mathbb{C}^n$, then we can express any $x \in \mathbb{C}^n$ as a linear combination of the $b_i$'s:
$x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n, \quad \alpha_i \in \mathbb{C}$

Example 5.4
Given the following vector,
$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$
writing $x$ in terms of $\{e_1, e_2\}$ gives us $x = e_1 + 2e_2$.

Exercise 5.2 (Solution on p. 109.)
Try to write $x$ in terms of $\{h_1, h_2\}$ (defined in the previous example).

In the two basis examples above, $x$ is the same vector in both cases, but we can express it in many different ways (we give only two out of many, many possibilities). You can take this even further by extending the idea of a basis to function spaces.
note: As mentioned in the introduction, these concepts of linear algebra will help prepare you to understand the Fourier series (Section 6.2), which tells us that we can express periodic functions, $f(t)$, in terms of their basis functions, $e^{j\omega_0 nt}$.

[Media Object] This media object is a LabVIEW VI. Please view or download it at <LinearAlgebraCalc3.llb>.
[Media Object] This media object is a LabVIEW VI. Please view or download it at <LinearTransform.llb>.

5.2 Eigenvectors and Eigenvalues

(This content is available online at <http://cnx.org/content/m10736/2.8/>.)

In this section, our linear systems will be $n \times n$ matrices of complex numbers. For a little background into some of the concepts that this module is based on, refer to the basics of linear algebra (Section 5.1).

5.2.1 Eigenvectors and Eigenvalues

Let $A$ be an $n \times n$ matrix, where $A$ is a linear operator on vectors in $\mathbb{C}^n$:

$Ax = b$    (5.1)

where $x$ and $b$ are $n \times 1$ vectors (Figure 5.5).

Figure 5.5: Illustration of linear system and vectors.

Definition 5.4: Eigenvector
An eigenvector of $A$ is a vector $v \in \mathbb{C}^n$ such that
$Av = \lambda v$    (5.2)
where $\lambda$ is called the corresponding eigenvalue. $A$ only changes the length of $v$, not its direction.

5.2.1.1 Graphical Model

Through Figure 5.6 and Figure 5.7, let us look at the difference between (5.1) and (5.2).

Figure 5.6: Represents (5.1), $Ax = b$.

If $v$ is an eigenvector of $A$, then only its length changes. See Figure 5.7 and notice how our vector's length is simply scaled by our variable, $\lambda$, called the eigenvalue:

Figure 5.7: Represents (5.2), $Av = \lambda v$.

note: When dealing with a matrix $A$, eigenvectors are the simplest possible vectors to operate on.

5.2.1.2 Examples

Exercise 5.3 (Solution on p. 109.)
From inspection and understanding of eigenvectors, find the two eigenvectors, $v_1$ and $v_2$, of
$A = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}$
Also, what are the corresponding eigenvalues, $\lambda_1$ and $\lambda_2$? Do not worry if you are having problems seeing these values from the information given so far; we will look at more rigorous ways to find these values soon.

Exercise 5.4 (Solution on p. 109.)
Show that these two vectors,
$v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$
are eigenvectors of $A$, where $A = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}$. Also, find the corresponding eigenvalues.

5.2.2 Calculating Eigenvalues and Eigenvectors

In the above examples, we relied on your understanding of the definition and on some basic observations to find and prove the values of the eigenvectors and eigenvalues. However, as you can probably tell, finding these values will not always be that easy. Below, we walk through a rigorous and mathematical approach to calculating the eigenvalues and eigenvectors of a matrix.
5.2.2.1 Finding Eigenvalues

We want to find $\lambda \in \mathbb{C}$ such that $Av = \lambda v$ for some $v \neq 0$, where $0$ is the "zero vector." We will start with (5.2) and work our way down until we find a way to explicitly calculate $\lambda$:

$Av = \lambda v$
$Av - \lambda v = 0$
$(A - \lambda I)v = 0$

In the previous step, we used the fact that $\lambda v = \lambda I v$, where $I$ is the identity matrix:

$I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$

So, $A - \lambda I$ is just a new matrix.

Example 5.5
Given the following matrix, $A$, we can find our new matrix, $A - \lambda I$:
$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad A - \lambda I = \begin{pmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{pmatrix}$

If $(A - \lambda I)v = 0$ for some $v \neq 0$, then $A - \lambda I$ is not invertible. This means:

$\det(A - \lambda I) = 0$

This determinant (shown directly above) turns out to be a polynomial expression (of order $n$). Look at the examples below to see what this means.

Example 5.6
Starting with matrix $A$ (shown below), we find the polynomial expression, where our eigenvalues will be the dependent variable:
$A = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}, \quad A - \lambda I = \begin{pmatrix} 3-\lambda & -1 \\ -1 & 3-\lambda \end{pmatrix}$
$\det(A - \lambda I) = (3-\lambda)^2 - (-1)^2 = \lambda^2 - 6\lambda + 8$
$\lambda = \{2, 4\}$

Example 5.7
Starting with matrix $A$ (shown below), we find the polynomial expression, where our eigenvalues will be the dependent variable:
$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$
$\det(A - \lambda I) = \lambda^2 - (a_{11} + a_{22})\lambda - a_{21}a_{12} + a_{11}a_{22}$

If you have not already noticed, calculating the eigenvalues is equivalent to calculating the roots of

$\det(A - \lambda I) = c_n\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0 = 0$

Conclusion: Therefore, by simply solving for the roots of our polynomial we can easily find the eigenvalues of our matrix.

5.2.2.2 Finding Eigenvectors

Given an eigenvalue, $\lambda_i$, the associated eigenvectors are given by

$Av = \lambda_i v, \quad A\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} \lambda_i v_1 \\ \vdots \\ \lambda_i v_n \end{pmatrix}$

a set of $n$ equations with $n$ unknowns. Simply solve the equations to find the eigenvectors.

5.2.3 Main Point

Say the eigenvectors of $A$, $\{v_1, v_2, \ldots, v_n\}$, span (Section 5.1.2) $\mathbb{C}^n$, meaning $\{v_1, v_2, \ldots, v_n\}$ are linearly independent (Section 5.1.1) and we can write any $x \in \mathbb{C}^n$ as

$x = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n$    (5.3)

where $\{\alpha_1, \alpha_2, \ldots, \alpha_n\} \in \mathbb{C}$. All that we are doing is rewriting $x$ in terms of eigenvectors of $A$. Then,

$Ax = A(\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n) = \alpha_1 Av_1 + \alpha_2 Av_2 + \cdots + \alpha_n Av_n = \alpha_1\lambda_1 v_1 + \alpha_2\lambda_2 v_2 + \cdots + \alpha_n\lambda_n v_n = b$

Therefore we can write

$x = \sum_i \alpha_i v_i$

and this leads us to the following depicted system:

Figure 5.8: Depiction of a system where we break our vector, $x$, into a sum of its eigenvectors.

where in Figure 5.8 we have

$b = \sum_i \alpha_i\lambda_i v_i$

Main point: by breaking up a vector, $x$, into a combination of eigenvectors, the calculation of $Ax$ is broken into "easy to swallow" pieces.

5.2.4 Practice Problem

Exercise 5.5 (Solution on p. 109.)
For the following matrix, $A$, and vector, $x$, solve for their product:
$A = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}, \quad x = \begin{pmatrix} 5 \\ 3 \end{pmatrix}$
Try solving it using two different methods: directly and using eigenvectors.

[Media Object] This media object is a LabVIEW VI. Please view or download it at <LinearAlgebraCalc3.llb>.
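The hand calculation in Example 5.6 can be confirmed with MATLAB's eig. Note that eig normalizes eigenvectors to unit length, so they may differ from $v_1$ and $v_2$ by a scale factor or sign:

    % Eigenvalues and eigenvectors of the matrix from Example 5.6.
    A = [3 -1; -1 3];
    [V, D] = eig(A)     % diag(D) lists the eigenvalues 2 and 4;
                        % the columns of V point along [1;1] and [1;-1].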
5.3 Matrix Diagonalization

From our understanding of eigenvalues and eigenvectors (Section 5.2) we have discovered several things about our operator matrix, $A$. We know that if the eigenvectors of $A$ span $\mathbb{C}^n$ and we know how to express any vector $x$ in terms of $\{v_1, v_2, \dots, v_n\}$, then we have the operator $A$ all figured out. If we have $A$ acting on $x$, then this is equal to $A$ acting on a combination of eigenvectors, which we know proves to be fairly easy! We are still left with two questions that need to be addressed:

1. When do the eigenvectors $\{v_1, v_2, \dots, v_n\}$ of $A$ span $\mathbb{C}^n$ (assuming $\{v_1, v_2, \dots, v_n\}$ are linearly independent)?
2. How do we express a given vector $x$ in terms of $\{v_1, v_2, \dots, v_n\}$?

5.3.1 Answer to Question #1

question: When do the eigenvectors $\{v_1, v_2, \dots, v_n\}$ of $A$ span $\mathbb{C}^n$?

If $A$ has $n$ distinct eigenvalues, $\lambda_i \neq \lambda_j$ for $i \neq j$ (where $i$ and $j$ are integers), then $A$ has $n$ linearly independent eigenvectors $\{v_1, v_2, \dots, v_n\}$, which then span $\mathbb{C}^n$.

aside: The proof of this statement is not very hard, but is not really interesting enough to include here. If you wish to research this idea further, read Strang, G., "Linear Algebra and its Application" for the proof.

Furthermore, $n$ distinct eigenvalues means that
$$\det(A - \lambda I) = c_n\lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0 = 0$$
has $n$ distinct roots.

5.3.2 Answer to Question #2

question: How do we express a given vector $x$ in terms of $\{v_1, v_2, \dots, v_n\}$?

We want to find $\{\alpha_1, \alpha_2, \dots, \alpha_n\} \in \mathbb{C}$ such that
$$x = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n \quad (5.4)$$
In order to find this set of variables, we begin by collecting the vectors $\{v_1, v_2, \dots, v_n\}$ as columns in an $n \times n$ matrix $V$:
$$V = \begin{pmatrix} \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \end{pmatrix}$$
Now (5.4) becomes
$$x = V\alpha, \qquad \alpha = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}$$
which gives us an easy form to solve for our variables in question:
$$\alpha = V^{-1}x$$
Note that $V$ is invertible since it has $n$ linearly independent columns.

5.3.2.1 Aside

Let us recall our knowledge of functions and their bases and examine the role of $V$. The equation $x = V\alpha$ says that $\alpha$ is just $x$ expressed in a different basis (Section 5.1.3: Basis). In the standard basis,
$$x = x_1\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix} + x_2\begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix} + \dots + x_n\begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}$$
while in the eigenvector basis,
$$x = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n$$
So $V^{-1}$ transforms $x$ from the standard basis into its coordinates $\alpha$ in the basis $\{v_1, v_2, \dots, v_n\}$, and $V$ transforms back.

5.3.3 Matrix Diagonalization and Output

We can also use the vectors $\{v_1, v_2, \dots, v_n\}$ to represent the output, $b$, of a system:
$$b = Ax = A(\alpha_1 v_1 + \dots + \alpha_n v_n) = \alpha_1\lambda_1 v_1 + \dots + \alpha_n\lambda_n v_n$$
In matrix form,
$$Ax = V\begin{pmatrix}\lambda_1\alpha_1\\ \vdots \\ \lambda_n\alpha_n\end{pmatrix} = V\Lambda\alpha = V\Lambda V^{-1}x$$
where $\Lambda$ is the matrix with the eigenvalues down the diagonal:
$$\Lambda = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n\end{pmatrix}$$
Finally, since this holds for every $x$, we are left with a final equation for $A$:
$$A = V\Lambda V^{-1}$$

5.3.3.1 Interpretation

For our interpretation, recall our key formulas $\alpha = V^{-1}x$ and $b = \sum_i \alpha_i\lambda_i v_i$. We can interpret operating on $x$ with $A$ as
$$\begin{pmatrix}x_1\\\vdots\\x_n\end{pmatrix} \to \begin{pmatrix}\alpha_1\\\vdots\\\alpha_n\end{pmatrix} \to \begin{pmatrix}\lambda_1\alpha_1\\\vdots\\\lambda_n\alpha_n\end{pmatrix} \to \begin{pmatrix}b_1\\\vdots\\b_n\end{pmatrix}$$
where the three steps (arrows) in the above illustration represent the following three operations:
1. Transform $x$ using $V^{-1}$, which yields $\alpha$
2. Multiplication by $\Lambda$, which yields $\Lambda\alpha$
3. Inverse transform using $V$, which gives us $b$

This is the paradigm we will use for LTI systems!

Figure 5.9: Simple illustration of LTI system.
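A minimal numerical sketch of the three-step interpretation (not part of the original module; NumPy assumed, and the matrix and vector are the ones from Exercise 5.5):

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])
x = np.array([5.0, 3.0])

lam, V = np.linalg.eig(A)       # columns of V are eigenvectors
Lam = np.diag(lam)

# 1) transform into the eigenvector basis, 2) scale by Lambda, 3) transform back
alpha = np.linalg.solve(V, x)   # alpha = V^{-1} x
b = V @ (Lam @ alpha)           # b = V Lambda V^{-1} x

assert np.allclose(b, A @ x)                            # same as direct multiplication
assert np.allclose(V @ Lam @ np.linalg.inv(V), A)       # A = V Lambda V^{-1}
print(b)                                                # [12.  4.]
```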
5.4 Eigen-stuff in a Nutshell

5.4.1 A Matrix and its Eigenvector

The reason we are stressing eigenvectors (Section 5.2) and their importance is because the action of a matrix $A$ on one of its eigenvectors $v$ is
1. extremely easy (and fast) to calculate:
$$Av = \lambda v \quad (5.5)$$
just multiply $v$ by $\lambda$;
2. easy to interpret: $A$ just scales $v$, keeping its direction constant and only altering the vector's length.

If only every vector were an eigenvector of $A$...

5.4.2 Using Eigenvectors' Span

Of course, not every vector can be, BUT, for certain matrices (including ones with distinct eigenvalues, $\lambda$'s), their eigenvectors span (Section 5.1.2: Span) $\mathbb{C}^n$, meaning that for any $x \in \mathbb{C}^n$ we can find $\{\alpha_1, \alpha_2, \dots, \alpha_n\} \in \mathbb{C}$ such that
$$x = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n \quad (5.6)$$
Given (5.6), we can rewrite $Ax = b$. This equation is modeled in our LTI system pictured below:

Figure 5.10: LTI system with $x = \sum_i \alpha_i v_i$ and $b = \sum_i \alpha_i \lambda_i v_i$.

The LTI system above represents our (5.5). Below is an illustration of the steps taken to go from $x$ to $b$:
$$x \to \alpha = V^{-1}x \to \Lambda V^{-1}x \to V\Lambda V^{-1}x = b$$
where the three steps (arrows) in the above illustration represent the following three operations:
1. Transform $x$ using $V^{-1}$, which yields $\alpha$
2. Action of $A$ in the new basis: a multiplication by $\Lambda$
3. Translate back to the old basis: an inverse transform using a multiplication by $V$, which gives us $b$

5.5 Eigenfunctions of LTI Systems

5.5.1 Introduction

Hopefully you are familiar with the notion of the eigenvectors of a "matrix system"; if not, do a quick review of eigen-stuff (Section 5.4). We can develop the same ideas for LTI systems acting on signals. A linear time invariant (LTI) system $H$ operating on a continuous input $f(t)$ to produce continuous time output $y(t)$,
$$H[f(t)] = y(t) \quad (5.7)$$

Figure 5.11: $H[f(t)] = y(t)$. $f$ and $y$ are continuous time (CT) signals and $H$ is an LTI operator.

is mathematically analogous to an $N \times N$ matrix $A$ operating on a vector $x \in \mathbb{C}^N$ to produce another vector $b \in \mathbb{C}^N$ (see Matrices and LTI Systems for an overview):
$$Ax = b \quad (5.8)$$

Figure 5.12: $Ax = b$ where $x$ and $b$ are in $\mathbb{C}^N$ and $A$ is an $N \times N$ matrix.

Just as an eigenvector (Section 5.2) of $A$ is a $v \in \mathbb{C}^N$ such that $Av = \lambda v$, $\lambda \in \mathbb{C}$,

Figure 5.13: $Av = \lambda v$, where $v \in \mathbb{C}^N$ is an eigenvector of $A$.

we can define an eigenfunction (or eigensignal) of an LTI system $H$ to be a signal $f(t)$ such that
$$H[f(t)] = \lambda f(t), \quad \lambda \in \mathbb{C} \quad (5.9)$$

Figure 5.14: $H[f(t)] = \lambda f(t)$, where $f$ is an eigenfunction of $H$.

Eigenfunctions are the simplest possible signals for $H$ to operate on: to calculate the output, we simply multiply the input by a complex number $\lambda$.

5.5.2 Eigenfunctions of any LTI System

The class of LTI systems has a set of eigenfunctions in common: the complex exponentials (Section 1.6) $e^{st}$, $s \in \mathbb{C}$, are eigenfunctions for all LTI systems:
$$H[e^{st}] = \lambda_s e^{st} \quad (5.10)$$

Figure 5.15: $H[e^{st}] = \lambda_s e^{st}$, where $H$ is an LTI system.

note: While $\{e^{st}, s \in \mathbb{C}\}$ are always eigenfunctions of an LTI system, they are not necessarily the only eigenfunctions.

We can prove (5.10) by expressing the output as a convolution (Section 3.2) of the input $e^{st}$ and the impulse response (Section 1.5) $h(t)$ of $H$:
$$H[e^{st}] = \int_{-\infty}^{\infty} h(\tau)e^{s(t-\tau)}d\tau = \int_{-\infty}^{\infty} h(\tau)e^{st}e^{-s\tau}d\tau = e^{st}\int_{-\infty}^{\infty} h(\tau)e^{-s\tau}d\tau \quad (5.11)$$
Since the expression on the right-hand side does not depend on $t$, it is a constant, $\lambda_s$. Therefore
$$H[e^{st}] = \lambda_s e^{st} \quad (5.12)$$
The eigenvalue $\lambda_s$ is a complex number that depends on the exponent $s$ and, of course, the system $H$. To make these dependencies explicit, we will use the notation $H(s) \equiv \lambda_s$.

Figure 5.16: $e^{st}$ is the eigenfunction and $H(s)$ are the eigenvalues.
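As a discretized sketch (not from the original module), the eigenfunction property can be observed numerically: convolve a sampled $e^{st}$ with a sampled impulse response and compare with $H(s)e^{st}$. The choices $h(t) = e^{-t}u(t)$ (so that $H(s) = \frac{1}{s+1}$ for $\mathrm{Re}(s) > -1$), the step size, and the value of $s$ are arbitrary assumptions for illustration; a short transient appears because the numerical input starts at $t = 0$ rather than $t = -\infty$.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 20, dt)
h = np.exp(-t)                        # impulse response h(t) = e^{-t} u(t)
s = 2j * np.pi * 0.5                  # test exponent: a pure sinusoid
f = np.exp(s * t)                     # input e^{st}

y = np.convolve(h, f)[:t.size] * dt   # numerical convolution

H_s = 1.0 / (s + 1.0)                 # analytic eigenvalue H(s) = 1/(s+1)

# After the start-up transient dies out, y(t) ~ H(s) e^{st}
mid = t.size // 2
print(np.allclose(y[mid:], H_s * f[mid:], atol=1e-2))   # True
```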
Since the action of an LTI operator on its eigenfunctions $e^{st}$ is easy to calculate and interpret, it is convenient to represent an arbitrary signal $f(t)$ as a linear combination of complex exponentials. The Fourier series (Section 6.2) gives us this representation for periodic continuous time signals, while the (slightly more complicated) Fourier transform lets us expand arbitrary continuous time signals.

5.6 Fourier Transform Properties

Table 5.1: Short Table of Fourier Transform Pairs

  $s(t)$                                                                 | $S(f)$
  $e^{-at}u(t)$                                                          | $\frac{1}{j2\pi f + a}$
  $e^{-a|t|}$                                                            | $\frac{2a}{4\pi^2 f^2 + a^2}$
  $p_\Delta(t) = \begin{cases}1 & |t| < \frac{\Delta}{2} \\ 0 & |t| > \frac{\Delta}{2}\end{cases}$ | $\frac{\sin(\pi f\Delta)}{\pi f}$
  $\frac{\sin(2\pi Wt)}{\pi t}$                                          | $S(f) = \begin{cases}1 & |f| < W \\ 0 & |f| > W\end{cases}$

Table 5.2: Fourier Transform Properties

  Property                        | Time Domain                          | Frequency Domain
  Linearity                       | $a_1 s_1(t) + a_2 s_2(t)$            | $a_1 S_1(f) + a_2 S_2(f)$
  Conjugate Symmetry              | $s(t) \in \mathbb{R}$                | $S(f) = S(-f)^*$
  Even Symmetry                   | $s(t) = s(-t)$                       | $S(f) = S(-f)$
  Odd Symmetry                    | $s(t) = -s(-t)$                      | $S(f) = -S(-f)$
  Scale Change                    | $s(at)$                              | $\frac{1}{|a|}S\!\left(\frac{f}{a}\right)$
  Time Delay                      | $s(t - \tau)$                        | $e^{-j2\pi f\tau}S(f)$
  Complex Modulation              | $e^{j2\pi f_0 t}s(t)$                | $S(f - f_0)$
  Amplitude Modulation by Cosine  | $s(t)\cos(2\pi f_0 t)$               | $\frac{S(f-f_0) + S(f+f_0)}{2}$
  Amplitude Modulation by Sine    | $s(t)\sin(2\pi f_0 t)$               | $\frac{S(f-f_0) - S(f+f_0)}{2j}$
  Differentiation                 | $\frac{d}{dt}s(t)$                   | $j2\pi f S(f)$
  Integration                     | $\int_{-\infty}^{t}s(\alpha)d\alpha$ | $\frac{1}{j2\pi f}S(f)$ if $S(0) = 0$
  Multiplication by $t$           | $t\,s(t)$                            | $-\frac{1}{j2\pi}\frac{d}{df}S(f)$
  Area                            | $\int_{-\infty}^{\infty}s(t)dt$      | $S(0)$
  Value at Origin                 | $s(0)$                               | $\int_{-\infty}^{\infty}S(f)df$
  Parseval's Theorem              | $\int_{-\infty}^{\infty}|s(t)|^2dt$  | $\int_{-\infty}^{\infty}|S(f)|^2df$

Solutions to Exercises in Chapter 5

Solution to Exercise 5.1
By playing around with the vectors and doing a little trial and error, we will discover the following relationship:
$$x_1 - x_2 + 2x_3 = 0$$
Thus we have found a linear combination of these three vectors that equals zero without setting the coefficients equal to zero. Therefore, these vectors are not linearly independent!

Solution to Exercise 5.2
$$x = \frac{3}{2}h_1 - \frac{1}{2}h_2$$

Solution to Exercise 5.3
The eigenvectors you found should be
$$v_1 = \begin{pmatrix}1\\0\end{pmatrix}, \qquad v_2 = \begin{pmatrix}0\\1\end{pmatrix}$$
and the corresponding eigenvalues are $\lambda_1 = 3$ and $\lambda_2 = -1$.

Solution to Exercise 5.4
In order to prove that these two vectors are eigenvectors, we will show that these statements meet the requirements stated in the definition (Definition: "eigenvector"):
$$Av_1 = \begin{pmatrix}3 & -1\\-1 & 3\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}2\\2\end{pmatrix}, \qquad Av_2 = \begin{pmatrix}3 & -1\\-1 & 3\end{pmatrix}\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}4\\-4\end{pmatrix}$$
These results show us that $A$ only scales the two vectors (i.e. changes their length), and thus it proves that (5.2) holds true for the following two eigenvalues that you were asked to find:
$$\lambda_1 = 2, \qquad \lambda_2 = 4$$
If you need more convincing, then one could also easily graph the vectors and their corresponding product with $A$ to see that the results are merely scaled versions of our original vectors, $v_1$ and $v_2$.

Solution to Exercise 5.5
Direct method (use basic matrix multiplication):
$$Ax = \begin{pmatrix}3 & -1\\-1 & 3\end{pmatrix}\begin{pmatrix}5\\3\end{pmatrix} = \begin{pmatrix}12\\4\end{pmatrix}$$
Eigenvectors (use the eigenvectors and eigenvalues we found earlier for this same matrix):
$$v_1 = \begin{pmatrix}1\\1\end{pmatrix}, \quad v_2 = \begin{pmatrix}1\\-1\end{pmatrix}, \quad \lambda_1 = 2, \quad \lambda_2 = 4$$
As shown in (5.3), we want to represent $x$ as a sum of its scaled eigenvectors. For this case, we have
$$x = 4v_1 + v_2 = 4\begin{pmatrix}1\\1\end{pmatrix} + \begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}5\\3\end{pmatrix}$$
Therefore we have
$$Ax = A(4v_1 + v_2) = 4\lambda_1 v_1 + \lambda_2 v_2 = 4 \cdot 2\begin{pmatrix}1\\1\end{pmatrix} + 4\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}12\\4\end{pmatrix}$$
Notice that this method using eigenvectors required no matrix multiplication. It may have seemed more complicated here, but just imagine $A$ being really big, or even just a few dimensions larger!
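As a quick numerical aside on Table 5.1 (not part of the original module), the first transform pair can be sanity-checked by discretizing the Fourier transform integral; the decay rate `a`, the grid, and the test frequencies below are arbitrary illustrative choices.

```python
import numpy as np

# Check the pair s(t) = e^{-a t} u(t)  <->  S(f) = 1 / (j 2 pi f + a)
a = 2.0
dt = 1e-4
t = np.arange(0, 50, dt)
s = np.exp(-a * t)

for f in [0.0, 0.5, 1.0, 3.0]:
    S_num = np.sum(s * np.exp(-2j * np.pi * f * t)) * dt   # Riemann sum
    S_ana = 1.0 / (2j * np.pi * f + a)
    print(f, abs(S_num - S_ana))    # differences on the order of dt
```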
Chapter 6: Continuous Time Fourier Series

6.1 Periodic Signals

Recall that a periodic function is a function that repeats itself exactly after some given period, or cycle. We represent the definition of a periodic function mathematically as
$$f(t) = f(t + mT), \quad m \in \mathbb{Z} \quad (6.1)$$
where $T > 0$ represents the period. Because of this, you may also see a signal referred to as a T-periodic signal. Any function that satisfies this equation is periodic.

We can think of periodic functions (with period $T$) in two different ways:

#1) as functions on all of $\mathbb{R}$:

Figure 6.1: Function over all of $\mathbb{R}$ where $f(t_0) = f(t_0 + T)$.

#2) or, we can cut out all of the redundancy, and think of them as functions on an interval $[0, T]$ (or, more generally, $[a, a+T]$). If we know the signal is T-periodic, then all the information of the signal is captured by this interval.

Figure 6.2: Remove the redundancy of the periodic function so that $f(t)$ is undefined outside $[0, T]$.

An aperiodic CT function $f(t)$ does not repeat for any $T \in \mathbb{R}$; i.e. there exists no $T$ such that (6.1) holds. Analogous definitions hold in discrete time, where the periodic view corresponds to a circle rather than a line.

6.2 Fourier Series: Eigenfunction Approach

6.2.1 Introduction

Since complex exponentials (Section 1.6) are eigenfunctions of linear time-invariant (LTI) systems (Section 5.5), calculating the output of an LTI system $H$ given $e^{st}$ as an input amounts to simple multiplication, where $H(s) \in \mathbb{C}$ is a constant (that depends on $s$). In Figure 6.3 below we have a simple exponential input that yields the output
$$y(t) = H(s)e^{st} \quad (6.2)$$

Figure 6.3: Simple LTI system.

Using this and the fact that $H$ is linear, calculating $y(t)$ for combinations of complex exponentials is also straightforward. This linearity property is depicted in the two equations below, showing the input to the linear system $H$ on the left side and the output, $y(t)$, on the right:
1. $c_1 e^{s_1 t} + c_2 e^{s_2 t} \to c_1 H(s_1)e^{s_1 t} + c_2 H(s_2)e^{s_2 t}$
2. $\sum_n c_n e^{s_n t} \to \sum_n c_n H(s_n)e^{s_n t}$

The action of $H$ on an input such as those in the two equations above is easy to explain: $H$ independently scales each exponential component $e^{s_n t}$ by a different complex number $H(s_n) \in \mathbb{C}$. As such, if we can write a function $f(t)$ as a combination of complex exponentials, it allows us to:
- easily calculate the output of $H$ given $f(t)$ as an input (provided we know the eigenvalues $H(s)$)
- interpret how $H$ manipulates $f(t)$

6.2.2 Fourier Series

Joseph Fourier demonstrated that an arbitrary T-periodic function (Section 6.1) $f(t)$ can be written as a linear combination of harmonic complex sinusoids
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 n t} \quad (6.3)$$
where $\omega_0 = \frac{2\pi}{T}$ is the fundamental frequency. For almost all $f(t)$ of practical interest, there exist $c_n$ to make (6.3) true. If $f(t)$ is finite energy ($f(t) \in L^2[0,T]$), then the equality in (6.3) holds in the sense of energy convergence; if $f(t)$ is continuous, then (6.3) holds pointwise. Also, if $f(t)$ meets some mild conditions (the Dirichlet conditions), then (6.3) holds pointwise everywhere except at points of discontinuity.

The $c_n$, called the Fourier coefficients, tell us "how much" of the sinusoid $e^{j\omega_0 n t}$ is in $f(t)$. (6.3) essentially breaks $f(t)$ down into pieces, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, (6.3) tells us that the set of harmonic complex exponentials $\{e^{j\omega_0 n t},\ n \in \mathbb{Z}\}$ form a basis for the space of T-periodic continuous time functions.
Below are a few examples that are intended to help you think about a given signal or function, $f(t)$, in terms of its exponential basis functions.

6.2.2.1 Examples

For each of the given functions below, break it down into its "simpler" parts and find its Fourier coefficients.

Exercise 6.1 (Solution at the end of this chapter.)
$f(t) = \cos(\omega_0 t)$

Exercise 6.2 (Solution at the end of this chapter.)
$f(t) = \sin(2\omega_0 t)$

Exercise 6.3 (Solution at the end of this chapter.)
$f(t) = 3 + 4\cos(\omega_0 t) + 2\cos(2\omega_0 t)$

6.2.3 Fourier Coefficients

In general, the Fourier coefficients of $f(t)$ can be calculated from (6.3) by solving for $c_n$, which requires a little algebraic manipulation (for the complete derivation see Section 6.3). The end result yields the following general equation for the Fourier coefficients:
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.4)$$
The sequence of complex numbers $\{c_n,\ n \in \mathbb{Z}\}$ is just an alternate representation of the function $f(t)$. Knowing the Fourier coefficients $c_n$ is the same as knowing $f(t)$ explicitly, and vice versa. Given a periodic function, we can transform it into its Fourier series representation using (6.4). Likewise, we can inverse transform a given sequence of complex numbers, $c_n$, using (6.3) to reconstruct the function $f(t)$.

Along with being a natural representation for signals being manipulated by LTI systems, the Fourier series provides a description of periodic signals that is convenient in many ways. By looking at the Fourier series of a signal $f(t)$, we can infer mathematical properties of $f(t)$ such as smoothness, existence of certain symmetries, as well as the physically meaningful frequency content.

6.2.3.1 Example: Using the Fourier Coefficient Equation

Here we will look at a rather simple example that almost requires the use of (6.4) to solve for the Fourier coefficients. Once you understand the formula, the solution becomes a straightforward calculus problem. Find the Fourier coefficients of the following function:

Exercise 6.4 (Solution at the end of this chapter.)
$$f(t) = \begin{cases} 1 & \text{if } |t| \leq T_1 \\ 0 & \text{otherwise} \end{cases}$$

6.2.4 Summary: Fourier Series Equations

Our first equation, (6.3), is the synthesis equation, which builds our function, $f(t)$, by combining sinusoids:
$$\text{Synthesis:}\qquad f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.5)$$
And our second equation, (6.4), termed the analysis equation, reveals how much of each sinusoid is in $f(t)$:
$$\text{Analysis:}\qquad c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.6)$$
where we have stated that $\omega_0 = \frac{2\pi}{T}$.

note: Understand that our interval of integration does not have to be $[0, T]$ in our analysis equation. We could use any interval $[a, a+T]$ of length $T$.
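As a discretized sketch (not part of the original module), the analysis and synthesis equations can be exercised numerically: approximate (6.6) by a Riemann sum, then rebuild the signal with (6.5). The test signal $e^{\cos(\omega_0 t)}$, the period, and the truncation level are arbitrary illustrative choices; any smooth periodic function would do.

```python
import numpy as np

T = 2.0
w0 = 2 * np.pi / T
M = 4096
t = np.arange(M) * T / M

f = np.exp(np.cos(w0 * t))     # an arbitrary smooth T-periodic signal

# Analysis: c_n = (1/T) * integral over one period, as a Riemann sum
N = 20
n = np.arange(-N, N + 1)
c = np.array([np.mean(f * np.exp(-1j * w0 * k * t)) for k in n])

# Synthesis: rebuild f(t) from the 2N+1 coefficients
f_hat = (c[:, None] * np.exp(1j * w0 * n[:, None] * t)).sum(axis=0)

print(np.max(np.abs(f - f_hat)))  # near machine precision: smooth signals converge fast
```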
6.3 Derivation of the Fourier Coefficients Equation

6.3.1 Introduction

You should already be familiar with the existence of the general Fourier series equation (Section 6.2.2: Fourier Series), which is written as
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.7)$$
What we are interested in here is how to determine the Fourier coefficients, $c_n$, given a function $f(t)$. Below we will walk through the steps of deriving the general equation for the Fourier coefficients of a given function.

6.3.2 Derivation

To solve (6.7) for $c_n$, we have to do a little algebraic manipulation. First of all, we will multiply both sides of (6.7) by $e^{-j\omega_0 kt}$, where $k \in \mathbb{Z}$:
$$f(t)e^{-j\omega_0 kt} = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt}e^{-j\omega_0 kt} \quad (6.8)$$
Now integrate both sides over a given period, $T$:
$$\int_0^T f(t)e^{-j\omega_0 kt}dt = \int_0^T \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt}e^{-j\omega_0 kt}dt \quad (6.9)$$
On the right-hand side we can switch the summation and integral along with pulling the constant out of the integral:
$$\int_0^T f(t)e^{-j\omega_0 kt}dt = \sum_{n=-\infty}^{\infty} c_n \int_0^T e^{j\omega_0 (n-k)t}dt \quad (6.10)$$
Now that we have made this seemingly more complicated, let us focus on just the integral, $\int_0^T e^{j\omega_0(n-k)t}dt$, on the right-hand side of the above equation. For this integral we will need to consider two cases: $n = k$ and $n \neq k$. For $n = k$ we will have
$$\int_0^T e^{j\omega_0(n-k)t}dt = T, \quad n = k \quad (6.11)$$
For $n \neq k$, we will have
$$\int_0^T e^{j\omega_0(n-k)t}dt = \int_0^T \cos(\omega_0(n-k)t)dt + j\int_0^T \sin(\omega_0(n-k)t)dt, \quad n \neq k \quad (6.12)$$
But $\cos(\omega_0(n-k)t)$ has an integer number of periods, $n-k$, between $0$ and $T$. Imagine a graph of the cosine; because it has an integer number of periods, there are equal areas above and below the x-axis of the graph. This statement holds true for $\sin(\omega_0(n-k)t)$ as well. What this means is
$$\int_0^T \cos(\omega_0(n-k)t)dt = 0 \quad (6.13)$$
as well as the integral involving the sine function. Therefore, we conclude the following about our integral of interest:
$$\int_0^T e^{j\omega_0(n-k)t}dt = \begin{cases} T & \text{if } n = k \\ 0 & \text{otherwise} \end{cases} \quad (6.14)$$
Now let us return our attention to our complicated equation, (6.10), to see if we can finish finding an equation for our Fourier coefficients. Using the facts that we have just proven above, we can see that the only time (6.10) will have a nonzero result is when $k$ and $n$ are equal:
$$\int_0^T f(t)e^{-j\omega_0 nt}dt = Tc_n, \quad n = k \quad (6.15)$$
Finally, we have our general equation for the Fourier coefficients:
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.16)$$

6.3.2.1 Steps for Finding the Fourier Coefficients

To find the Fourier coefficients of a periodic $f(t)$:
1. For a given $k$, multiply $f(t)$ by $e^{-j\omega_0 kt}$ and take the area under the curve (dividing by $T$).
2. Repeat step (1) for all $k \in \mathbb{Z}$.
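The key fact in this derivation is the orthogonality result (6.14). A minimal numerical check (not from the original module; the period and grid size are arbitrary choices):

```python
import numpy as np

# Numerical version of (6.14):
#   integral over one period of e^{j w0 (n-k) t} dt  =  T if n == k, else 0
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 100000, endpoint=False)
dt = T / t.size

for n in range(-3, 4):
    for k in range(-3, 4):
        val = np.sum(np.exp(1j * w0 * (n - k) * t)) * dt
        expected = T if n == k else 0.0
        assert abs(val - expected) < 1e-9
print("orthogonality verified")
```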
6.4 Fourier Series in a Nutshell

6.4.1 Introduction

The convolution integral (Section 3.2) is the fundamental expression relating the input and output of an LTI system. However, it has three shortcomings:
1. It can be tedious to calculate.
2. It offers only limited physical interpretation of what the system is actually doing.
3. It gives little insight on how to design systems to accomplish certain tasks.

The Fourier series (Section 6.2), along with the Fourier transform and Laplace transform, provides a way to address these three points. Central to all of these methods is the concept of an eigenfunction (Section 5.5) (or eigenvector (Section 5.3)). We will look at how we can rewrite any given signal, $f(t)$, in terms of complex exponentials (Section 1.6). In fact, by making our notions of signals and linear systems more mathematically abstract, we will be able to draw enlightening parallels between signals and systems and linear algebra (Section 5.1).

6.4.2 Eigenfunctions and LTI Systems

The action of an LTI system $H$ on one of its eigenfunctions $e^{st}$ is
1. extremely easy (and fast) to calculate:
$$H[e^{st}] = H(s)e^{st} \quad (6.17)$$
2. easy to interpret: $H$ just scales $e^{st}$, keeping its frequency constant.

If only every function were an eigenfunction of $H$...

6.4.2.1 LTI Systems

... of course, not every function can be, but for LTI systems, their eigenfunctions span (Section 5.1.2: Span) the space of periodic functions (Section 6.1), meaning that for (almost) any periodic function $f(t)$ we can find $\{c_n\}$, where $n \in \mathbb{Z}$ and $c_i \in \mathbb{C}$, such that
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.18)$$
Given (6.18), we can rewrite $H[f(t)] = y(t)$ as the following system:

Figure 6.4: Transfer functions modeled as an LTI system, where $f(t) = \sum_n c_n e^{j\omega_0 nt}$ and $y(t) = \sum_n c_n H(j\omega_0 n)e^{j\omega_0 nt}$.

This transformation from $f(t)$ to $y(t)$ can also be illustrated through the process below. Note that each arrow indicates an operation on our signal or coefficients:
$$f(t) \to \{c_n\} \to \{c_n H(j\omega_0 n)\} \to y(t) \quad (6.19)$$
where the three steps (arrows) in the above illustration represent the following three operations:
1. Transform with the analysis equation (Fourier coefficients, Section 6.3): $c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt$
2. Action of $H$ on the Fourier series (Section 6.2): a multiplication by $H(j\omega_0 n)$
3. Translate back to the old basis: an inverse transform using our synthesis equation from the Fourier series, $y(t) = \sum_{n=-\infty}^{\infty} c_n H(j\omega_0 n)e^{j\omega_0 nt}$

6.4.3 Physical Interpretation of the Fourier Series

The Fourier series coefficients $\{c_n\}$ of a signal $f(t)$, defined in (6.18), also have a very important physical interpretation: $c_n$ tells us "how much" of frequency $\omega_0 n$ is in the signal.

Signals that vary slowly over time, smooth signals, have large $c_n$ for small $n$.

Figure 6.5: We begin with our smooth signal $f(t)$ on the left, and then use the Fourier series to find our Fourier coefficients, shown in the figure on the right.

Signals that vary quickly with time, edgy or noisy signals, will have large $c_n$ for large $n$.

Figure 6.6: We begin with our noisy signal $f(t)$ on the left, and then use the Fourier series to find our Fourier coefficients, shown in the figure on the right.

Example 6.2: Periodic Pulse

We have the following pulse function, $f(t)$, over the interval $\left[-\frac{T}{2}, \frac{T}{2}\right]$, equal to $1$ for $|t| \leq T_1$ and $0$ elsewhere:

Figure 6.7: Periodic signal $f(t)$.

Using our formula for the Fourier coefficients,
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.20)$$
we can easily calculate our $c_n$. We will leave the calculation as an exercise for you! After solving, you will get the following results:
$$c_n = \begin{cases} \frac{2T_1}{T} & \text{if } n = 0 \\ \frac{\sin(\omega_0 nT_1)}{n\pi} & \text{if } n \neq 0 \end{cases} \quad (6.21)$$
For $T_1 = \frac{T}{8}$, see the figure below for our results:

Figure 6.8: Our Fourier coefficients when $T_1 = \frac{T}{8}$.

Our signal $f(t)$ is flat except for two edges (discontinuities). Because of this, the $c_n$ around $n = 0$ are large and $c_n$ gets smaller as $n$ approaches infinity.

question: Why does $c_n = 0$ for $n = \{\dots, -8, -4, 4, 8, \dots\}$? (What part of $e^{-j\omega_0 nt}$ lies over the pulse for these values of $n$?)
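A short numerical sketch of Example 6.2 (not from the original module): the closed-form coefficients (6.21) are compared against a direct Riemann-sum evaluation of the analysis equation. The period value and grid size are arbitrary illustrative choices.

```python
import numpy as np

T = 1.0
T1 = T / 8
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 200000, endpoint=False)
dt = T / t.size
f = (np.abs(t) <= T1).astype(float)     # the periodic pulse, one period

for n in range(0, 9):
    c_num = np.sum(f * np.exp(-1j * w0 * n * t)) * dt / T
    c_ana = 2 * T1 / T if n == 0 else np.sin(w0 * n * T1) / (n * np.pi)
    print(n, round(c_num.real, 5), round(c_ana, 5))   # n = 4 and n = 8 give zero
```

The zeros at multiples of $n = 4$ answer the question above: for those $n$, the complex exponential completes whole periods over the pulse, so the positive and negative areas cancel.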
6.5 Fourier Series Properties

We will begin by refreshing your memory of our basic Fourier series (Section 6.2) equations:
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.22)$$
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.23)$$
Let $\mathcal{F}\{\cdot\}$ denote the transformation from $f(t)$ to the Fourier coefficients:
$$\mathcal{F}\{f(t)\} = c_n, \quad n \in \mathbb{Z}$$
$\mathcal{F}\{\cdot\}$ maps complex valued functions to sequences of complex numbers.

6.5.1 Linearity

$\mathcal{F}\{\cdot\}$ is a linear transformation.

Theorem 6.1: If $\mathcal{F}\{f(t)\} = c_n$ and $\mathcal{F}\{g(t)\} = d_n$, then $\mathcal{F}\{\alpha f(t)\} = \alpha c_n$ for $\alpha \in \mathbb{C}$, and $\mathcal{F}\{f(t) + g(t)\} = c_n + d_n$.

Proof: Easy; just use the linearity of the integral:
$$\mathcal{F}\{f(t)+g(t)\} = \frac{1}{T}\int_0^T (f(t)+g(t))e^{-j\omega_0 nt}dt = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt + \frac{1}{T}\int_0^T g(t)e^{-j\omega_0 nt}dt = c_n + d_n \quad (6.24)$$

6.5.2 Shifting

Shifting in time equals a phase shift of the Fourier coefficients (Section 6.3).

Theorem 6.2: $\mathcal{F}\{f(t-t_0)\} = e^{-j\omega_0 nt_0}c_n$. If $c_n = |c_n|e^{j\angle c_n}$, then
$$|e^{-j\omega_0 nt_0}c_n| = |e^{-j\omega_0 nt_0}||c_n| = |c_n|, \qquad \angle\left(e^{-j\omega_0 nt_0}c_n\right) = \angle c_n - \omega_0 t_0 n$$

Proof: Substitute $\tilde t = t - t_0$:
$$\mathcal{F}\{f(t-t_0)\} = \frac{1}{T}\int_0^T f(t-t_0)e^{-j\omega_0 nt}dt = \frac{1}{T}\int_{-t_0}^{T-t_0} f(\tilde t)e^{-j\omega_0 n\tilde t}e^{-j\omega_0 nt_0}d\tilde t = e^{-j\omega_0 nt_0}c_n \quad (6.25)$$

6.5.3 Parseval's Relation

$$\int_0^T |f(t)|^2dt = T\sum_{n=-\infty}^{\infty} |c_n|^2 \quad (6.26)$$
Parseval's relation allows us to calculate the energy of a signal from its Fourier series.

note: Parseval tells us that the Fourier series maps $L^2([0,T])$ to $l^2(\mathbb{Z})$.

Figure 6.9

Exercise 6.5 (Solution at the end of this chapter.)
For $f(t)$ to have "finite energy," what do the $c_n$ do as $n \to \infty$?

Exercise 6.6 (Solution at the end of this chapter.)
If $c_n = \frac{1}{n}$ for $|n| > 0$, is $f \in L^2([0,T])$?

Exercise 6.7 (Solution at the end of this chapter.)
Now, if $c_n = \frac{1}{\sqrt{n}}$ for $|n| > 0$, is $f \in L^2([0,T])$?

The rate of decay of the Fourier series determines whether $f(t)$ has finite energy.

6.5.4 Differentiation in the Fourier Domain

$$\mathcal{F}\{f(t)\} = c_n \;\Rightarrow\; \mathcal{F}\left\{\frac{d}{dt}f(t)\right\} = jn\omega_0 c_n \quad (6.27)$$
Since
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.28)$$
then
$$\frac{d}{dt}f(t) = \sum_{n=-\infty}^{\infty} c_n\frac{d}{dt}e^{j\omega_0 nt} = \sum_{n=-\infty}^{\infty} c_n\, j\omega_0 n\, e^{j\omega_0 nt} \quad (6.29)$$
A differentiator attenuates the low frequencies in $f(t)$ and accentuates the high frequencies. It removes general trends and accentuates areas of sharp variation.

note: A common way to mathematically measure the smoothness of a function $f(t)$ is to see how many of its derivatives have finite energy. This is done by looking at the Fourier coefficients of the signal, specifically how fast they decay as $n \to \infty$. If $\mathcal{F}\{f(t)\} = c_n$ and $|c_n|$ has the form $\frac{1}{n^k}$, then $\mathcal{F}\left\{\frac{d^m}{dt^m}f(t)\right\} = (jn\omega_0)^m c_n$ and has magnitude of the form $\frac{n^m}{n^k}$. So for the $m$th derivative to have finite energy, we need
$$\sum\left|\frac{n^m}{n^k}\right|^2 < \infty$$
which holds when $2k - 2m > 1$, or
$$k > \frac{2m+1}{2}$$
Thus the decay rate of the Fourier series dictates smoothness.

6.5.5 Integration in the Fourier Domain

If
$$\mathcal{F}\{f(t)\} = c_n \quad (6.30)$$
then
$$\mathcal{F}\left\{\int_{-\infty}^{t} f(\tau)d\tau\right\} = \frac{1}{j\omega_0 n}c_n \quad (6.31)$$

note: If $c_0 \neq 0$, this expression doesn't make sense.

Integration accentuates low frequencies and attenuates high frequencies. Integrators bring out the general trends in signals and suppress short-term variation (which is noise in many cases). Integrators are much nicer than differentiators.

6.5.6 Signal Multiplication

Given a signal $f(t)$ with Fourier coefficients $c_n$ and a signal $g(t)$ with Fourier coefficients $d_n$, we can define a new signal, $y(t)$, where $y(t) = f(t)g(t)$. We find that the Fourier series representation of $y(t)$, $e_n$, is such that $e_n = \sum_{k=-\infty}^{\infty} c_k d_{n-k}$. This is to say that signal multiplication in the time domain is equivalent to discrete-time convolution (Section 4.2) in the frequency domain. The proof of this is as follows:
$$e_n = \frac{1}{T}\int_0^T f(t)g(t)e^{-j\omega_0 nt}dt = \frac{1}{T}\int_0^T \sum_{k=-\infty}^{\infty} c_k e^{j\omega_0 kt}\, g(t)e^{-j\omega_0 nt}dt = \sum_{k=-\infty}^{\infty} c_k\left(\frac{1}{T}\int_0^T g(t)e^{-j\omega_0(n-k)t}dt\right) = \sum_{k=-\infty}^{\infty} c_k d_{n-k} \quad (6.32)$$
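As a numerical sketch of the shifting property (not from the original module; the signal, period, and shift are arbitrary illustrative choices):

```python
import numpy as np

# Check F{f(t - t0)} = e^{-j w0 n t0} c_n numerically.
T = 2.0
w0 = 2 * np.pi / T
t0 = 0.3
M = 8192
t = np.arange(M) * T / M

def signal(tt):
    return np.cos(w0 * tt) + 0.5 * np.sin(3 * w0 * tt)

def coeff(sig, n):
    """Analysis equation (6.23) as a Riemann sum over one period."""
    return np.mean(sig * np.exp(-1j * w0 * n * t))

f = signal(t)
f_shift = signal(t - t0)

for n in [-3, -1, 1, 3]:
    lhs = coeff(f_shift, n)
    rhs = np.exp(-1j * w0 * n * t0) * coeff(f, n)
    assert np.isclose(lhs, rhs)
print("shift property verified")
```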
6.6 Symmetry Properties of the Fourier Series

6.6.1 Symmetry Properties

6.6.1.1 Real Signals

Real signals have a conjugate symmetric Fourier series.

Theorem 6.3: If $f(t)$ is real, i.e. $f(t) = f(t)^*$ (where $f(t)^*$ is the complex conjugate of $f(t)$), then $c_n = c_{-n}^*$, which implies that $\mathrm{Re}(c_n) = \mathrm{Re}(c_{-n})$, i.e. the real part of $c_n$ is even, and $\mathrm{Im}(c_n) = -\mathrm{Im}(c_{-n})$, i.e. the imaginary part of $c_n$ is odd. See Figure 6.10. It also implies that $|c_n| = |c_{-n}|$, i.e. the magnitude is even, and that $\angle c_n = -\angle c_{-n}$, i.e. the phase is odd. See Figure 6.11.

Proof:
$$c_{-n} = \frac{1}{T}\int_0^T f(t)e^{j\omega_0 nt}dt = \left(\frac{1}{T}\int_0^T f(t)^* e^{-j\omega_0 nt}dt\right)^* = \left(\frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt\right)^* = c_n^* \quad (6.33)$$
where we used $f(t) = f(t)^*$.

Figure 6.10: $\mathrm{Re}(c_n) = \mathrm{Re}(c_{-n})$, and $\mathrm{Im}(c_n) = -\mathrm{Im}(c_{-n})$.

Figure 6.11: $|c_n| = |c_{-n}|$, and $\angle c_n = -\angle c_{-n}$.

6.6.1.2 Real and Even Signals

Real and even signals have real and even Fourier series.

Theorem 6.4: If $f(t) = f(t)^*$ and $f(t) = f(-t)$, i.e. the signal is real and even, then $c_n = c_{-n}$ and $c_n = c_n^*$.

Proof:
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(t)e^{-j\omega_0 nt}dt = \frac{1}{T}\int_{-T/2}^{0} f(t)e^{-j\omega_0 nt}dt + \frac{1}{T}\int_0^{T/2} f(t)e^{-j\omega_0 nt}dt$$
$$= \frac{1}{T}\int_0^{T/2} f(-t)e^{j\omega_0 nt}dt + \frac{1}{T}\int_0^{T/2} f(t)e^{-j\omega_0 nt}dt = \frac{2}{T}\int_0^{T/2} f(t)\cos(\omega_0 nt)dt \quad (6.34)$$
$f(t)$ and $\cos(\omega_0 nt)$ are both real, which implies that $c_n$ is real. Also, $\cos(\omega_0 nt) = \cos(-\omega_0 nt)$, so $c_n = c_{-n}$. It is also easy to show that
$$f(t) = c_0 + 2\sum_{n=1}^{\infty} c_n\cos(\omega_0 nt)$$
since $f(t)$, $c_n$, and $\cos(\omega_0 nt)$ are all real and even.

6.6.1.3 Real and Odd Signals

Real and odd signals have Fourier series that are odd and purely imaginary.

Theorem 6.5: If $f(t) = -f(-t)$ and $f(t) = f(t)^*$, i.e. the signal is real and odd, then $c_n = -c_{-n}$ and $c_n = -c_n^*$, i.e. $c_n$ is odd and purely imaginary.

Proof: Do it at home.

If $f(t)$ is odd, then we can expand it in terms of $\sin(\omega_0 nt)$:
$$f(t) = \sum_{n=1}^{\infty} 2jc_n\sin(\omega_0 nt)$$
(a real quantity, since each $c_n$ is purely imaginary).

6.6.2 Summary

In summary, we can find $f_e(t)$, an even function, and $f_o(t)$, an odd function, such that
$$f(t) = f_e(t) + f_o(t) \quad (6.35)$$
which implies that, for any $f(t)$, we can find $\{a_n\}$ and $\{b_n\}$ such that
$$f(t) = \sum_{n=0}^{\infty} a_n\cos(\omega_0 nt) + \sum_{n=1}^{\infty} b_n\sin(\omega_0 nt) \quad (6.36)$$

Example 6.3: Triangle Wave

Figure 6.12: Triangle wave with $T = 1$ and $\omega_0 = 2\pi$.

$f(t)$ is real and odd.
$$c_n = \begin{cases} -\frac{4A}{j\pi^2 n^2} & \text{if } n = \{\dots,-11,-7,-3,1,5,9,\dots\} \\ \frac{4A}{j\pi^2 n^2} & \text{if } n = \{\dots,-9,-5,-1,3,7,11,\dots\} \\ 0 & \text{if } n = \{\dots,-4,-2,0,2,4,\dots\} \end{cases}$$
Does $c_n = -c_{-n}$?

Figure 6.13: The Fourier series of a triangle wave.

note: We can often gather information about the smoothness of a signal by examining its Fourier coefficients. Take a look at the examples above. The pulse and sawtooth waves are not continuous, and their Fourier series fall off like $\frac{1}{n}$. The triangle wave is continuous, but not differentiable, and its Fourier series falls off like $\frac{1}{n^2}$.
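A minimal numerical check of Theorem 6.3 (not part of the original module; the test signal is an arbitrary real linear combination of cosines, and the random seed and harmonic count are illustrative assumptions):

```python
import numpy as np

# Real signals have conjugate-symmetric Fourier coefficients: c_{-n} = c_n*.
T = 1.0
w0 = 2 * np.pi / T
t = np.arange(4096) * T / 4096

rng = np.random.default_rng(0)
weights = rng.standard_normal(8)
f = weights @ np.array([np.cos(k * w0 * t + 0.7 * k) for k in range(8)])  # real

coeff = lambda n: np.mean(f * np.exp(-1j * w0 * n * t))
for n in range(1, 6):
    assert np.isclose(coeff(-n), np.conj(coeff(n)))
print("c_{-n} = c_n* holds for this real signal")
```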
6.7 Circular Convolution Property of the Fourier Series

6.7.1 Signal Circular Convolution

Given a signal $f(t)$ with Fourier coefficients $c_n$ and a signal $g(t)$ with Fourier coefficients $d_n$, we can define a new signal, $v(t)$, where $v(t) = (f \circledast g)(t)$. Here $(f \circledast g)(t)$ is the circular convolution (Section 4.3) of two periodic signals, and it is equivalent to a convolution over one interval:
$$(f \circledast g)(t) = \frac{1}{T}\int_0^T f(\tau)g(t-\tau)d\tau$$
We find that the Fourier series representation of $v(t)$, $a_n$, is such that $a_n = c_n d_n$.

note: Circular convolution in the time domain is equivalent to multiplication of the Fourier coefficients.

This is proved as follows:
$$a_n = \frac{1}{T}\int_0^T v(t)e^{-j\omega_0 nt}dt = \frac{1}{T^2}\int_0^T\!\!\int_0^T f(\tau)g(t-\tau)d\tau\, e^{-j\omega_0 nt}dt = \frac{1}{T}\int_0^T f(\tau)\left(\frac{1}{T}\int_0^T g(t-\tau)e^{-j\omega_0 nt}dt\right)d\tau$$
Substituting $\nu = t - \tau$ in the inner integral,
$$= \frac{1}{T}\int_0^T f(\tau)\left(\frac{1}{T}\int_{-\tau}^{T-\tau} g(\nu)e^{-j\omega_0 n(\nu+\tau)}d\nu\right)d\tau = \frac{1}{T}\int_0^T f(\tau)e^{-j\omega_0 n\tau}\left(\frac{1}{T}\int_{-\tau}^{T-\tau} g(\nu)e^{-j\omega_0 n\nu}d\nu\right)d\tau = \frac{1}{T}\int_0^T f(\tau)d_n e^{-j\omega_0 n\tau}d\tau = c_n d_n \quad (6.37)$$

Example 6.4

Take a look at a square pulse with $T_1 = \frac{T}{4}$:

Figure 6.14

For this signal
$$c_n = \begin{cases} \frac{1}{2} & \text{if } n = 0 \\ \frac{1}{2}\cdot\frac{\sin\left(\frac{\pi}{2}n\right)}{\frac{\pi}{2}n} & \text{otherwise} \end{cases}$$

Exercise 6.8 (Solution at the end of this chapter.)
What signal has Fourier coefficients
$$a_n = c_n^2 = \frac{1}{4}\cdot\frac{\sin^2\left(\frac{\pi}{2}n\right)}{\left(\frac{\pi}{2}n\right)^2}\;?$$

6.8 Fourier Series and LTI Systems

6.8.1 Introducing the Fourier Series to LTI Systems

Before looking at this module, one should be familiar with the concepts of eigenfunctions and LTI systems (Section 5.5). Recall that for an LTI system $H$ we get the following relationship:

Figure 6.15: Input and output signals to our LTI system: $e^{st} \to H(s)e^{st}$.

where $e^{st}$ is an eigenfunction of $H$. Its corresponding eigenvalue (Section 5.2) $H(s)$ can be calculated using the impulse response (Section 1.5) $h(t)$:
$$H(s) = \int_{-\infty}^{\infty} h(\tau)e^{-s\tau}d\tau$$
So, using the Fourier series (Section 6.2) expansion for periodic (Section 6.1) $f(t)$, where we input
$$f(t) = \sum_n c_n e^{j\omega_0 nt}$$
into the system,

Figure 6.16: LTI system.

our output $y(t)$ will be
$$y(t) = \sum_n H(j\omega_0 n)c_n e^{j\omega_0 nt}$$
So we can see that by applying the Fourier series expansion equations, we can go from $f(t)$ to $c_n$ and vice versa, and we do the same for our output, $y(t)$.

6.8.2 Effects of the Fourier Series

We can think of an LTI system as shaping the frequency content of the input. Keep in mind the basic LTI system we presented above in Figure 6.16. The LTI system, $H$, simply multiplies all of our Fourier coefficients and scales them. Given the Fourier coefficients $\{c_n\}$ of the input and the eigenvalues of the system $\{H(j\omega_0 n)\}$, the Fourier series of the output is $\{H(j\omega_0 n)c_n\}$ (simple term-by-term multiplication).

note: The eigenvalues $H(j\omega_0 n)$ completely describe what an LTI system does to periodic signals with period $T = \frac{2\pi}{\omega_0}$.

Example 6.5
What does this system do?

Figure 6.17

Example 6.6
What about this system?

Figure 6.18

6.8.3 Examples

Example 6.7: RC Circuit

$$h(t) = \frac{1}{RC}e^{-t/(RC)}u(t)$$
What does this system do to the Fourier series of an input $f(t)$? Calculate the eigenvalues of this system:
$$H(s) = \int_{-\infty}^{\infty} h(\tau)e^{-s\tau}d\tau = \int_0^{\infty}\frac{1}{RC}e^{-\tau/(RC)}e^{-s\tau}d\tau = \frac{1}{RC}\int_0^{\infty}e^{-\tau\left(\frac{1}{RC}+s\right)}d\tau = \frac{1}{RC}\cdot\frac{1}{\frac{1}{RC}+s} = \frac{1}{1+RCs} \quad (6.38)$$
Now, say we feed the RC circuit a periodic (period $T = \frac{2\pi}{\omega_0}$) input $f(t)$. Look at the eigenvalues for $s = j\omega_0 n$:
$$|H(j\omega_0 n)| = \frac{1}{|1 + RCj\omega_0 n|} = \frac{1}{\sqrt{1 + R^2C^2\omega_0^2 n^2}}$$
The RC circuit is a lowpass system: it passes low frequencies ($n$ around $0$) and attenuates high frequencies (large $n$).
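As a brief numerical sketch of the RC circuit's eigenvalues (not part of the original module; the values of RC and T below are arbitrary illustrative choices):

```python
import numpy as np

# Eigenvalues H(j w0 n) of the RC circuit in Example 6.7.
RC = 0.05
T = 1.0
w0 = 2 * np.pi / T

n = np.arange(-10, 11)
H = 1.0 / (1.0 + 1j * RC * w0 * n)   # H(j w0 n) = 1 / (1 + j RC w0 n)

print(np.abs(H[n == 0]))   # ~1: the DC component passes through unchanged
print(np.abs(H[n == 10]))  # < 1: the 10th harmonic is attenuated
```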
Example 6.8: Square Pulse Wave through an RC Circuit

- Input signal: taking the Fourier series of $f(t)$ (the square pulse of Example 6.4),
$$c_n = \frac{1}{2}\cdot\frac{\sin\left(\frac{\pi}{2}n\right)}{\frac{\pi}{2}n}, \qquad c_0 = \frac{1}{2}$$
- System: eigenvalues
$$H(j\omega_0 n) = \frac{1}{1 + jRC\omega_0 n}$$
- Output signal: taking the Fourier series of $y(t)$,
$$d_n = H(j\omega_0 n)c_n = \frac{1}{1 + jRC\omega_0 n}\cdot\frac{1}{2}\cdot\frac{\sin\left(\frac{\pi}{2}n\right)}{\frac{\pi}{2}n}$$
$$y(t) = \sum_n d_n e^{j\omega_0 nt}$$

What can we infer about $y(t)$ from $\{d_n\}$?
1. Is $y(t)$ real?
2. Is $y(t)$ even symmetric? Odd symmetric?
3. Qualitatively, what does $y(t)$ look like? Is it "smoother" than $f(t)$? (Compare the decay rate of $d_n$ with that of $c_n$:)
$$|d_n| = \frac{1}{\sqrt{1 + (RC\omega_0)^2n^2}}\cdot\frac{1}{2}\cdot\left|\frac{\sin\left(\frac{\pi}{2}n\right)}{\frac{\pi}{2}n}\right|$$

6.9 Convergence of the Fourier Series

6.9.1 Introduction

Before looking at this module, hopefully you have become fully convinced of the fact that any periodic function, $f(t)$, can be represented as a sum of complex sinusoids (Section 1.4). If you are not, then try looking back at eigen-stuff in a nutshell (Section 5.4) or eigenfunctions of LTI systems (Section 5.5). We have shown that we can represent a signal as the sum of exponentials through the Fourier series (Section 6.2) equations below:
$$f(t) = \sum_n c_n e^{j\omega_0 nt} \quad (6.39)$$
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.40)$$
Joseph Fourier insisted that these equations were true, but could not prove it. Lagrange publicly ridiculed Fourier, and said that only continuous functions can be represented by (6.39) (indeed, he proved that (6.39) holds for continuous-time functions). However, we know now that the real truth lies between Fourier's and Lagrange's positions.

6.9.2 Understanding the Truth

Formulating our question mathematically, let
$$f_N(t) = \sum_{n=-N}^{N} c_n e^{j\omega_0 nt}$$
where $c_n$ equals the Fourier coefficients of $f(t)$ (see (6.40)). $f_N(t)$ is a "partial reconstruction" of $f(t)$ using the first $2N+1$ Fourier coefficients. $f_N(t)$ approximates $f(t)$, with the approximation getting better and better as $N$ gets large. Therefore, we can think of the set $\{f_N(t),\ N = \{0, 1, \dots\}\}$ as a sequence of functions, each one approximating $f(t)$ better than the one before.

The question is: does this sequence converge to $f(t)$? Does $f_N(t) \to f(t)$ as $N \to \infty$? We will try to answer this question by thinking about convergence in two different ways:
1. Looking at the energy of the error signal, $e_N(t) = f(t) - f_N(t)$
2. Looking at $\lim_{N\to\infty} f_N(t)$ at each point and comparing to $f(t)$

6.9.2.1 Approach #1

Let $e_N(t)$ be the difference (i.e. error) between the signal $f(t)$ and its partial reconstruction $f_N(t)$:
$$e_N(t) = f(t) - f_N(t) \quad (6.41)$$
If $f(t) \in L^2([0,T])$ (finite energy), then the energy of $e_N(t)$ goes to zero as $N \to \infty$:
$$\int_0^T |e_N(t)|^2dt \to 0 \quad (6.42)$$
We can prove this equation using Parseval's relation:
$$\lim_{N\to\infty}\int_0^T |f(t) - f_N(t)|^2dt = \lim_{N\to\infty} T\sum_{|n|>N} |c_n|^2 = 0$$
where the sum before the zero is the tail sum of the Fourier series, which approaches zero because $f(t) \in L^2([0,T])$. Since physical systems respond to energy, the Fourier series provides an adequate representation for all $f(t) \in L^2([0,T])$, i.e. all signals with finite energy over one period.
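The tail-sum argument can be watched numerically (a sketch, not from the original module): using the square-pulse coefficients from Example 6.4, the error energy $T\sum_{|n|>N}|c_n|^2$ shrinks as $N$ grows. The truncation point of the infinite sum is an arbitrary assumption that only introduces a tiny bias.

```python
import numpy as np

# Error energy of the partial reconstruction f_N for the square pulse:
# by Parseval it equals T * sum over |n| > N of |c_n|^2.
T = 1.0
n = np.arange(1, 200001)                           # truncate the infinite sum
cn2 = (np.sin(np.pi * n / 2) / (np.pi * n)) ** 2   # |c_n|^2 for n > 0

for N in [1, 10, 100, 1000]:
    tail = 2 * T * np.sum(cn2[N:])   # factor 2 covers n and -n
    print(N, tail)                   # strictly decreasing toward 0
```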
6.9.2.2 Approach #2

The fact that the energy of $e_N$ goes to zero says nothing about $f(t)$ and $\lim_{N\to\infty} f_N(t)$ being equal at a given point. Take the two functions graphed below for example:

Figure 6.19: Two functions, $f(t)$ and $g(t)$, that differ only at a single point.

Given these two functions, $f(t)$ and $g(t)$, we can see that they are equal everywhere except at one isolated point, and yet
$$\int_0^T |f(t) - g(t)|^2dt = 0$$
From this we can see the following relationships:
- energy convergence does not imply pointwise convergence
- pointwise convergence does imply convergence in $L^2([0,T])$

However, the reverse of the second statement does not hold true. It turns out that if $f(t)$ has a discontinuity (as can be seen in the figure of $g(t)$ above) at $t_0$, then
$$f(t_0) \neq \lim_{N\to\infty} f_N(t_0)$$
But as long as $f(t)$ meets some other fairly mild conditions, then
$$f(t') = \lim_{N\to\infty} f_N(t')$$
if $f(t)$ is continuous at $t = t'$.

6.10 Dirichlet Conditions

Named after the German mathematician Peter Dirichlet, the Dirichlet conditions are the sufficient conditions to guarantee existence and convergence of the Fourier series (Section 6.2) or the Fourier transform.

6.10.1 The Weak Dirichlet Condition for the Fourier Series

Condition 6.1: The Weak Dirichlet Condition
For the Fourier series to exist, the Fourier coefficients must be finite. The Weak Dirichlet Condition guarantees this existence. It essentially says that the integral of the absolute value of the signal must be finite. The limits of integration are different for the Fourier series case than for the Fourier transform case; this is a direct result of the differing definitions of the two.

Proof: The Fourier series exists (the coefficients are finite) if
$$\int_0^T |f(t)|dt < \infty \quad (6.43)$$
This can be shown from the requirement that the Fourier series coefficients be finite:
$$|c_n| = \left|\frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt\right| \leq \frac{1}{T}\int_0^T |f(t)||e^{-j\omega_0 nt}|dt \quad (6.44)$$
Remembering our complex exponentials (Section 1.6), we know that in the above equation $|e^{-j\omega_0 nt}| = 1$, which gives us
$$|c_n| \leq \frac{1}{T}\int_0^T |f(t)|dt < \infty \quad (6.45, 6.46)$$

note: If we have the function
$$f(t) = \frac{1}{t}, \quad 0 < t \leq T$$
then you should note that this function fails the above condition.

6.10.1.1 The Weak Dirichlet Condition for the Fourier Transform

Condition 6.2: The Fourier transform exists if
$$\int_{-\infty}^{\infty} |f(t)|dt < \infty \quad (6.47)$$
This can be derived the same way the weak Dirichlet condition for the Fourier series was derived: begin with the definition and show that the Fourier transform must be less than infinity everywhere.

6.10.2 The Strong Dirichlet Conditions

For the Fourier series to exist, the following two conditions must be satisfied (along with the Weak Dirichlet Condition):
1. In one period, $f(t)$ has only a finite number of minima and maxima.
2. In one period, $f(t)$ has only a finite number of discontinuities, and each one is finite.

These are what we refer to as the Strong Dirichlet Conditions. In theory we can think of signals that violate these conditions, $\sin(\log t)$ for instance. However, it is not possible to create a signal that violates these conditions in a lab. Therefore, any real-world signal will have a Fourier representation.

6.10.2.1 Example

Let us assume we have the following function and equality:
$$\tilde f(t) = \lim_{N\to\infty} f_N(t) \quad (6.48)$$
If $f(t)$ meets all three conditions of the Strong Dirichlet Conditions, then
$$\tilde f(\tau) = f(\tau)$$
at every $\tau$ at which $f(t)$ is continuous.
And where $f(t)$ is discontinuous, $\tilde f(t)$ is the average of the values on the right and left. See Figure 6.20 as an example:

Figure 6.20: Discontinuous functions, $f(t)$.

note: The functions that fail the Dirichlet conditions are pretty pathological; as engineers, we are not too interested in them.

6.11 Gibbs's Phenomena

6.11.1 Introduction

The Fourier series (Section 6.2) is the representation of continuous-time, periodic signals in terms of complex exponentials. The Dirichlet conditions (Section 6.10) suggest that discontinuous signals may have a Fourier series representation so long as there are a finite number of discontinuities. This seems counter-intuitive, however, as complex exponentials (Section 1.6) are continuous functions. It does not seem possible to exactly reconstruct a discontinuous function from a set of continuous ones. In fact, it is not. However, it can be done if we relax the condition of "exactly" and replace it with the idea of "almost everywhere". This is to say that the reconstruction is exactly the same as the original signal except at a finite number of points. These points, not necessarily surprisingly, occur at the points of discontinuity.

6.11.1.1 History

In the late 1800s, many machines were built to calculate Fourier coefficients and re-synthesize
$$f_N(t) = \sum_{n=-N}^{N} c_n e^{j\omega_0 nt} \quad (6.49)$$
Albert Michelson (an extraordinary experimental physicist) built a machine in 1898 that could compute $c_n$ up to $n = \pm 79$, and he re-synthesized
$$f_{79}(t) = \sum_{n=-79}^{79} c_n e^{j\omega_0 nt} \quad (6.50)$$
The machine performed very well on all tests except those involving discontinuous functions. When a square wave, like that shown in Figure 6.21, was input into the machine, "wiggles" around the discontinuities appeared, and even as the number of Fourier coefficients approached infinity, the wiggles never disappeared; these can be seen in the last plot in Figure 6.21. J. Willard Gibbs first explained this phenomenon in 1899, and therefore these discontinuous points are referred to as Gibbs Phenomenon.

6.11.2 Explanation

We begin this discussion by taking a signal with a finite number of discontinuities (like a square pulse) and finding its Fourier series representation. We then attempt to reconstruct it from these Fourier coefficients. What we find is that the more coefficients we use, the more the signal begins to resemble the original. However, around the discontinuities, we observe rippling that does not seem to subside. As we consider even more coefficients, we notice that the ripples narrow, but do not shorten. As we approach an infinite number of coefficients, this rippling still does not go away. This is when we apply the idea of almost everywhere. While these ripples remain (never dropping below 9% of the pulse height), the area inside them tends to zero, meaning that the energy of this ripple goes to zero. This means that their width is approaching zero and we can assert that the reconstruction is exactly the original except at the points of discontinuity. Since the Dirichlet conditions assert that there may only be a finite number of discontinuities, we can conclude that the principle of almost everywhere is met. This phenomenon is a specific case of nonuniform convergence.
Below we will use the square wave, along with its Fourier series representation, and show several figures that reveal this phenomenon more mathematically.

6.11.2.1 Square Wave

The Fourier series representation of a square signal below says that the left and right sides are "equal." In order to understand Gibbs Phenomenon we will need to redefine the way we look at equality.
$$s(t) = a_0 + \sum_{k=1}^{\infty} a_k\cos\left(\frac{2\pi kt}{T}\right) + \sum_{k=1}^{\infty} b_k\sin\left(\frac{2\pi kt}{T}\right) \quad (6.51)$$

6.11.2.2 Example

Figure 6.21 shows several Fourier series approximations of the square wave using a varied number of terms, denoted by $K$:

Figure 6.21: Fourier series approximation to $sq(t)$. The number of terms in the Fourier sum is indicated in each plot, and the square wave is shown as a dashed line over two periods.

When comparing the square wave to its Fourier series representation in Figure 6.21, it is not clear that the two are equal. The fact that the square wave's Fourier series requires more terms for a given representation accuracy is not important. However, close inspection of Figure 6.21 does reveal a potential issue: does the Fourier series really equal the square wave at all values of $t$? In particular, at each step-change in the square wave, the Fourier series exhibits a peak followed by rapid oscillations. As more terms are added to the series, the oscillations seem to become more rapid and smaller, but the peaks are not decreasing.

Consider this mathematical question intuitively: can a discontinuous function, like the square wave, be expressed as a sum, even an infinite one, of continuous ones? One should at least be suspicious, and in fact, it can't be thus expressed. This issue brought Fourier much criticism from the French Academy of Science (Laplace, Legendre, and Lagrange comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.
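The non-decreasing peaks are easy to reproduce numerically (a sketch, not from the original module; the 0-to-1 square wave, grid, and values of K are arbitrary illustrative choices):

```python
import numpy as np

# Partial Fourier sums of a unit-height square wave: the Gibbs overshoot
# near the jump settles at about 9% of the jump and does not shrink.
t = np.linspace(0, 1, 200001)

def partial_sum(K):
    """Sum the odd harmonics up to K of a 0-to-1 square wave."""
    s = np.full_like(t, 0.5)
    for k in range(1, K + 1, 2):
        s += (2 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

for K in [9, 99, 999]:
    overshoot = partial_sum(K).max() - 1.0
    print(K, round(overshoot, 4))   # approaches ~0.0895, about 9% of the jump
```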
The extraneous peaks in the square wave's Fourier series never disappear; they are termed Gibb's phenomenon, after the American physicist Josiah Willard Gibbs. They occur whenever the signal is discontinuous, and will always be present whenever the signal has jumps.

6.11.2.3 Redefining Equality

Let's return to the question of equality; how can the equal sign in the definition of the Fourier series be justified? The partial answer is that pointwise (each and every value of $t$) equality is not guaranteed. What mathematicians later in the nineteenth century showed was that the rms error of the Fourier series was always zero:
$$\lim_{K\to\infty}\mathrm{rms}(\epsilon_K) = 0 \quad (6.52)$$
What this means is that the difference between an actual signal and its Fourier series representation may not be zero, but the square of this quantity has zero integral! It is through the eyes of the rms value that we define equality: two signals $s_1(t)$, $s_2(t)$ are said to be equal pointwise if $s_1(t) = s_2(t)$ for all values of $t$; they are said to be equal in the mean square sense if $\mathrm{rms}(s_1 - s_2) = 0$.

For Fourier series, Gibb's phenomenon peaks have finite height and zero width: the error differs from zero only at isolated points, whenever the periodic signal contains discontinuities, and equals about 9% of the size of the discontinuity. The value of a function at a finite set of points does not affect its integral. This effect underlies the reason why defining the value of a discontinuous function at its discontinuity is meaningless. Whatever you pick for a value has no practical relevance for either the signal's spectrum or for how a system responds to the signal. The Fourier series value "at" the discontinuity is the average of the values on either side of the jump.

6.12 Fourier Series Wrap-Up

Below we will highlight some of the most important concepts about the Fourier series (Section 6.2) and our understanding of it through eigenfunctions and eigenvalues. Hopefully you are familiar with all of this material, so this document will simply serve as a refresher, but if not, then refer to the many links above for more information on the various ideas and topics.

1. We can represent a periodic function (Section 6.1) (or a function on an interval) $f(t)$ as a combination of complex exponentials (Section 1.6):
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j\omega_0 nt} \quad (6.53)$$
$$c_n = \frac{1}{T}\int_0^T f(t)e^{-j\omega_0 nt}dt \quad (6.54)$$
where the Fourier coefficients, $c_n$, equal how much of frequency $\omega_0 n$ is in the signal.
2. Since $e^{j\omega_0 nt}$ are eigenfunctions of LTI systems (Section 5.5), we can interpret the action of a system on a signal in terms of its eigenvalues (Section 5.2):
$$H(j\omega_0 n) = \int_{-\infty}^{\infty} h(t)e^{-j\omega_0 nt}dt \quad (6.55)$$
   - $|H(j\omega_0 n)|$ large ⇒ the system accentuates frequency $\omega_0 n$
   - $|H(j\omega_0 n)|$ small ⇒ the system attenuates frequency $\omega_0 n$
3. In addition, the $\{c_n\}$ of a periodic function $f(t)$ can tell us about:
   - symmetries in $f(t)$
   - smoothness of $f(t)$, where smoothness can be interpreted as the decay rate of $|c_n|$
4. We can approximate a function by re-synthesizing using only some of the Fourier coefficients (truncating the F.S.):
$$f_N(t) = \sum_{|n| \leq N} c_n e^{j\omega_0 nt} \quad (6.56)$$
This approximation works well where $f(t)$ is continuous, but not so well where $f(t)$ is discontinuous. This idea is explained by Gibb's Phenomena.

Solutions to Exercises in Chapter 6

Solution to Exercise 6.1
The tricky part of the problem is finding a way to represent the above function in terms of its basis, $e^{j\omega_0 nt}$. To do this, we will use our knowledge of Euler's Relation (Section 1.6.2) to represent our cosine function in terms of the exponential:
$$f(t) = \frac{1}{2}\left(e^{j\omega_0 t} + e^{-j\omega_0 t}\right)$$
Now from this form of our function and from (6.3), by inspection we can see that our Fourier coefficients will be
$$c_n = \begin{cases} \frac{1}{2} & \text{if } |n| = 1 \\ 0 & \text{otherwise} \end{cases}$$

Solution to Exercise 6.2
As done in the previous example, we will again use Euler's Relation (Section 1.6.2) to represent our sine function in terms of exponential functions:
$$f(t) = \frac{1}{2j}\left(e^{j2\omega_0 t} - e^{-j2\omega_0 t}\right)$$
And so our Fourier coefficients are
$$c_n = \begin{cases} -\frac{j}{2} & \text{if } n = 2 \\ \frac{j}{2} & \text{if } n = -2 \\ 0 & \text{otherwise} \end{cases}$$
Solution to Exercise 6.3
Once again we will use the same technique as was used in the previous two problems. The breakdown of our function yields
$$f(t) = 3 + 4\cdot\frac{1}{2}\left(e^{j\omega_0 t} + e^{-j\omega_0 t}\right) + 2\cdot\frac{1}{2}\left(e^{j2\omega_0 t} + e^{-j2\omega_0 t}\right)$$
And from this we can find our Fourier coefficients to be
$$c_n = \begin{cases} 3 & \text{if } n = 0 \\ 2 & \text{if } |n| = 1 \\ 1 & \text{if } |n| = 2 \\ 0 & \text{otherwise} \end{cases}$$

Solution to Exercise 6.4
We will begin by plugging our above function, $f(t)$, into (6.4). Our interval of integration will now change to match the interval specified by the function:
$$c_n = \frac{1}{T}\int_{-T_1}^{T_1} 1\cdot e^{-j\omega_0 nt}dt$$
Notice that we must consider two cases: $n = 0$ and $n \neq 0$. For $n = 0$ we can tell by inspection that we will get
$$c_n = \frac{2T_1}{T}, \quad n = 0$$
For $n \neq 0$, we will need to take a few more steps to solve. We can begin by looking at the basic integral of the exponential we have. Remembering our calculus, we are ready to integrate:
$$c_n = \frac{1}{T}\cdot\frac{1}{-j\omega_0 n}\left[e^{-j\omega_0 nt}\right]_{t=-T_1}^{T_1}$$
Let us now evaluate the exponential functions for the given limits and expand our equation to
$$c_n = \frac{1}{T}\cdot\frac{-1}{j\omega_0 n}\left(e^{-j\omega_0 nT_1} - e^{j\omega_0 nT_1}\right)$$
Now if we multiply the right side of our equation by $\frac{2j}{2j}$ and distribute our negative sign into the parentheses, we can utilize Euler's Relation (Section 1.6.2) to greatly simplify our expression into
$$c_n = \frac{1}{T}\cdot\frac{2j}{j\omega_0 n}\sin(\omega_0 nT_1)$$
Now, recall that earlier we defined $\omega_0 = \frac{2\pi}{T}$. We can solve this equation for $T$ and substitute in:
$$c_n = \frac{2j\omega_0}{j\omega_0 n\,2\pi}\sin(\omega_0 nT_1)$$
And finally, if we make a few simple cancellations we will arrive at our final answer for the Fourier coefficients of $f(t)$:
$$c_n = \frac{\sin(\omega_0 nT_1)}{n\pi}, \quad n \neq 0$$

Solution to Exercise 6.5
$(|c_n|)^2$ must be summable for $f(t)$ to have finite energy, so the $c_n$ must go to zero as $n \to \infty$.

Solution to Exercise 6.6
Yes, because $(|c_n|)^2 = \frac{1}{n^2}$, which is summable.

Solution to Exercise 6.7
No, because $(|c_n|)^2 = \frac{1}{n}$, which is not summable.

Solution to Exercise 6.8
The signal with Fourier coefficients $a_n = c_n^2$ is the square pulse circularly convolved with itself: a triangle pulse train.

Figure 6.22: A triangle pulse train.

Chapter 7: Discrete Fourier Transform

7.1 Fourier Analysis

Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are eigenfunctions (Section 5.5) of linear, time-invariant (LTI) (Section 2.1) systems. This is to say that if we pass any particular sinusoid through an LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of sinusoids, all we need to do is determine how any given system affects all possible sinusoids (its transfer function) and we have a complete understanding of the system. Furthermore, since we are able to define the passage of sinusoids through a system as multiplication of that sinusoid by the transfer function at the same frequency, we can convert the passage of any signal through a system from convolution (Section 3.3) (in time) to multiplication (in frequency). These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier series, continuous-time Fourier transform (Section 11.1), discrete-time Fourier transform (Section 10.4), and discrete Fourier transform. For this document, we will view the Laplace transform (Section 13.1) and Z-transform (Section 14.2) as simply extensions of the CTFT and DTFT, respectively. All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e. whether it is finite- or infinite-length and whether it is discrete- or continuous-time), there is an appropriate transform to convert the signal into the frequency domain.
Below is a table of the four Fourier transforms and when each is appropriate. It also includes the relevant convolution for the specified space.

Table 7.1: Table of Fourier Representations

  Transform                          | Time Domain        | Frequency Domain    | Convolution
  Continuous-Time Fourier Series     | $L^2([0,T))$       | $l^2(\mathbb{Z})$   | Continuous-Time Circular
  Continuous-Time Fourier Transform  | $L^2(\mathbb{R})$  | $L^2(\mathbb{R})$   | Continuous-Time Linear
  Discrete-Time Fourier Transform    | $l^2(\mathbb{Z})$  | $L^2([0,2\pi))$     | Discrete-Time Linear
  Discrete Fourier Transform         | $l^2([0,N-1])$     | $l^2([0,N-1])$      | Discrete-Time Circular

7.2 Fourier Analysis in Complex Spaces

7.2.1 Introduction

By now you should be familiar with the derivation of the Fourier series (Section 6.2) for continuous-time, periodic (Section 6.1) functions. This derivation leads us to the following equations, which you should be quite familiar with:
$$f(t) = \sum_n c_n e^{j\omega_0 nt} \quad (7.1)$$
$$c_n = \frac{1}{T}\int f(t)e^{-j\omega_0 nt}dt = \frac{1}{T}\langle f, e^{j\omega_0 nt}\rangle \quad (7.2)$$
where $c_n$ tells us the amount of frequency $\omega_0 n$ in $f(t)$.

In this module, we will derive a similar expansion for discrete-time, periodic functions. In doing so, we will derive the Discrete Time Fourier Series (DTFS), or the Discrete Fourier Transform (Section 10.2) (DFT).

7.2.2 Derivation of the DTFS

Much like a periodic, continuous-time function can be thought of as a function on the interval $[0, T]$:

Figure 7.1: We will just consider one interval of the periodic function throughout this section. (a) Periodic function (b) Function on the interval $[0, T]$

a periodic, discrete-time signal (with period $N$) can be thought of as a finite set of numbers. For example, say we have the following set of numbers that describe a periodic, discrete-time signal, where $N = 4$:
$$\{\dots, 3, 2, -2, 1, 3, \dots\}$$
We can represent this signal as either a periodic signal or as just a single interval as follows:

Figure 7.2: Here we can look at just one period of the signal, which has a vector length of four and is contained in $\mathbb{C}^4$. (a) Periodic function (b) Function on the interval $[0, N-1]$

note: The set of discrete-time signals with period $N$ equals $\mathbb{C}^N$.

Just like the continuous case, we are going to form a basis using harmonic sinusoids. Before we look into this, it will be worth our time to look at the discrete-time, complex sinusoids in a little more detail.

7.2.2.1 Complex Sinusoids

If you are familiar with the basic sinusoid signal and with complex exponentials (Section 1.6), then you should not have any problem understanding this section. In most texts, you will see the discrete-time, complex sinusoid noted as
$$e^{j\omega n}$$

Example 7.1

Figure 7.3: Complex sinusoid with frequency $\omega = 0$.
Example 7.2
Figure 7.4: Complex sinusoid with frequency ω = π/4.

7.2.2.1.1 In the Complex Plane

The complex sinusoid can be directly mapped onto the complex plane, which allows us to easily visualize changes to the complex sinusoid and extract certain properties. The absolute value of our complex sinusoid has the following characteristic:

|e^{jωn}| = 1, for all n  (7.3)

which tells us that our complex sinusoid only takes values on the unit circle. As for the angle, the following statement holds true:

∠e^{jωn} = ωn  (7.4)

As n increases, we can picture e^{jωn} taking the values we get moving counterclockwise around the unit circle. See Figure 7.5 for an illustration:

Figure 7.5: These images show that as n increases, the value of e^{jωn} moves around the unit circle counterclockwise. (a) n = 0. (b) n = 1. (c) n = 2.

note: For e^{jωn} to be periodic (Section 6.1), we need e^{jωN} = 1 for some N.

Example 7.3
For our first example, let us look at a periodic signal where ω = 2π/7 and N = 7.
Figure 7.6: (a) N = 7. (b) A plot of Re(e^{j(2π/7)n}).

Example 7.4
Now let us look at the results of plotting a non-periodic signal where ω = 1 and N = 7.
Figure 7.7: (a) N = 7. (b) A plot of Re(e^{jn}).

7.2.2.1.2 Aliasing

Our complex sinusoids have the following property:

e^{jωn} = e^{j(ω+2π)n}  (7.5)

Given this property, a sinusoid with frequency ω + 2π "aliases" to a sinusoid with frequency ω.

note: e^{jωn} is unique for each ω ∈ [0, 2π).

7.2.2.1.3 "Negative" Frequencies

If we are given a signal with frequency π < ω < 2π, then this signal will be represented on our complex plane as shown in Figure 7.8:

Figure 7.8: Plot of our complex sinusoid with a frequency greater than π.

From the above images, the value of our complex sinusoid on the complex plane may be more easily interpreted as cycling "backwards" (clockwise) around the unit circle: rotating counterclockwise by ω is the same as rotating clockwise by 2π − ω.

Example 7.5
Let us plot our complex sinusoid, e^{jωn}, where we have ω = 5π/4 and n = 1.
Figure 7.9: The above plot of our given frequency is identical to that of one where ω = −3π/4.
This plot is the same as a sinusoid with "negative" frequency −3π/4.

note: It makes more physical sense to choose [−π, π) as the interval for ω.

Remember that e^{jωn} and e^{−jωn} are conjugates. This gives us the following notation and property:

(e^{jωn})* = e^{−jωn}  (7.6)

The real parts of both exponentials in the above equation are the same; the imaginary parts are negatives of one another. This idea is the basic definition of a conjugate.

Now that we have looked over the concepts of complex sinusoids, let us turn our attention back to finding a basis for discrete-time, periodic signals. After looking at all the complex sinusoids, we must answer the question of which discrete-time sinusoids we need to represent periodic sequences with a period N.

note: Find a set of vectors b_k = e^{jω_k n}, n = {0, …, N − 1}, such that {b_k} is a basis for C^N.
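The periodicity condition in the note above is easy to test numerically. This short Python sketch (an addition for illustration, not from the original module) contrasts Example 7.3 with Example 7.4:

```python
import numpy as np

# Check the condition e^{jωN} = 1 for the two example frequencies with N = 7.
N = 7
for w, label in [(2 * np.pi / 7, "w = 2*pi/7"), (1.0, "w = 1")]:
    val = np.exp(1j * w * N)
    print(label, "-> e^{jwN} =", np.round(val, 6), " periodic:", np.isclose(val, 1))
# w = 2*pi/7 satisfies e^{jwN} = 1 (periodic with period 7); w = 1 does not.
```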
In answer to the above question, let us try the "harmonic" sinusoids with a fundamental frequency ω₀ = 2π/N:

Harmonic Sinusoid: e^{j(2π/N)kn}  (7.7)

Figure 7.10: Examples of our harmonic sinusoids. (a) Harmonic sinusoid with k = 0. (b) Imaginary part of the sinusoid, Im(e^{j(2π/N)1n}), with k = 1. (c) Imaginary part of the sinusoid, Im(e^{j(2π/N)2n}), with k = 2.

e^{j(2π/N)kn} is periodic with period N and has k "cycles" between n = 0 and n = N − 1.

Theorem 7.1:
If we let

b_k[n] = (1/√N) e^{j(2π/N)kn},  n = {0, …, N − 1}

where the exponential term is a vector in C^N, then {b_k}|_{k={0,…,N−1}} is an orthonormal basis (Section 15.7.3: Orthonormal Basis) for C^N.

Proof:
First of all, we must show that {b_k} is orthonormal, i.e. ⟨b_k, b_l⟩ = δ_kl:

⟨b_k, b_l⟩ = Σ_{n=0}^{N−1} b_k[n] b_l*[n] = (1/N) Σ_{n=0}^{N−1} e^{j(2π/N)kn} e^{−j(2π/N)ln}

so

⟨b_k, b_l⟩ = (1/N) Σ_{n=0}^{N−1} e^{j(2π/N)(k−l)n}  (7.8)

If l = k, then

⟨b_k, b_l⟩ = (1/N) Σ_{n=0}^{N−1} 1 = 1  (7.9)

If l ≠ k, then we must use the "partial summation formula" shown below (valid for any α ≠ 1):

Σ_{n=0}^{N−1} αⁿ = (1 − α^N)/(1 − α)

In our case α = e^{j(2π/N)(k−l)}, which is in exactly the form needed to utilize the partial summation formula:

⟨b_k, b_l⟩ = (1/N) (1 − e^{j(2π/N)(k−l)N}) / (1 − e^{j(2π/N)(k−l)}) = (1/N) (1 − 1)/(1 − e^{j(2π/N)(k−l)}) = 0

Therefore:

⟨b_k, b_l⟩ = 1 if k = l, 0 if k ≠ l  (7.10)

So {b_k} is an orthonormal set. {b_k} is also a basis (Section 5.1.3: Basis), since there are N vectors which are linearly independent (Section 5.1.1: Linear Independence) (orthogonality implies linear independence). And finally, we have shown that the harmonic sinusoids (1/√N) e^{j(2π/N)kn} form an orthonormal basis for C^N.

7.2.2.2 Discrete-Time Fourier Series (DTFS)

Using the steps shown above in the derivation and our previous understanding of Hilbert spaces (Section 15.3) and orthogonal expansions (Section 15.8), the rest of the derivation is automatic. Given a discrete-time, periodic signal (a vector in C^N) f[n], we can write:

f[n] = (1/√N) Σ_{k=0}^{N−1} c_k e^{j(2π/N)kn}  (7.11)

c_k = (1/√N) Σ_{n=0}^{N−1} f[n] e^{−j(2π/N)kn}  (7.12)

note: Most people collect both of the 1/√N terms into the expression for c_k. Here is the common form of the DTFS with the above note taken into account:

f[n] = Σ_{k=0}^{N−1} c_k e^{j(2π/N)kn}

c_k = (1/N) Σ_{n=0}^{N−1} f[n] e^{−j(2π/N)kn}

This is what the fft command in MATLAB does.
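To make the convention above concrete, here is a small Python/NumPy sketch (added for illustration; the one-period values come from the {…, 3, 2, −2, 1, …} example earlier in this section). With these conventions the DTFS coefficients are exactly fft(f)/N:

```python
import numpy as np

# Check the common-form DTFS pair against NumPy's FFT for the N = 4 example signal.
N = 4
f = np.array([3.0, 2.0, -2.0, 1.0])                 # one period of the example signal
n = np.arange(N)
c = np.array([(1 / N) * np.sum(f * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
print(np.allclose(c, np.fft.fft(f) / N))            # True: analysis matches fft/N

# Synthesis: recombining the harmonics recovers f[n].
f_rec = np.array([np.sum(c * np.exp(2j * np.pi * np.arange(N) * m / N)) for m in range(N)])
print(np.allclose(f_rec.real, f))                   # True
```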
7.3 Matrix Equation for the DTFS

The DTFS (Section 7.2.2.2: Discrete-Time Fourier Series (DTFS)) is just a change of basis (Section 5.1.3: Basis) in C^N. To start, we have f[n] in terms of the standard basis:

f[n] = f[0]e₀ + f[1]e₁ + ··· + f[N−1]e_{N−1} = Σ_{k=0}^{N−1} f[k] δ[k − n]  (7.13)

(f[0], f[1], f[2], …, f[N−1])ᵀ = f[0](1, 0, 0, …, 0)ᵀ + f[1](0, 1, 0, …, 0)ᵀ + f[2](0, 0, 1, …, 0)ᵀ + ··· + f[N−1](0, 0, …, 0, 1)ᵀ  (7.14)

Taking the DTFS, we can write f[n] in terms of the sinusoidal Fourier basis:

f[n] = Σ_{k=0}^{N−1} c_k e^{j(2π/N)kn}  (7.15)

(f[0], f[1], f[2], …, f[N−1])ᵀ = c₀(1, 1, 1, …, 1)ᵀ + c₁(1, e^{j2π/N}, e^{j4π/N}, …, e^{j(2π/N)(N−1)})ᵀ + c₂(1, e^{j4π/N}, e^{j8π/N}, …, e^{j(4π/N)(N−1)})ᵀ + ···  (7.16)

We can form the basis matrix (we'll call it W here instead of B) by stacking the basis vectors in as columns:

W = ( b₀[n] b₁[n] … b_{N−1}[n] ) =
[ 1  1           1           …  1
  1  e^{j2π/N}   e^{j4π/N}   …  e^{j(2π/N)(N−1)}
  1  e^{j4π/N}   e^{j8π/N}   …  e^{j(2π/N)2(N−1)}
  ⋮  ⋮           ⋮               ⋮
  1  e^{j(2π/N)(N−1)}  e^{j(2π/N)2(N−1)}  …  e^{j(2π/N)(N−1)(N−1)} ]  (7.17)

with b_k[n] = e^{j(2π/N)kn}.

note: The entry in the n-th row and k-th column is W_{n,k} = e^{j(2π/N)nk} = W_{k,n}.

So here we have an additional symmetry, W = Wᵀ, and hence Wᵀ = W and (1/N)W* = W⁻¹, the IDFT matrix (since the {b_k[n]} are orthogonal, (1/√N)W is unitary: ((1/√N)W)ᴴ((1/√N)W) = I).

We can now rewrite the DTFS equations in matrix form, where we have:
• f = signal (vector in C^N)
• c = DTFS coefficients (vector in C^N)

"synthesis" | f = Wc | f[n] = ⟨c, b_n*⟩
"analysis" | c = W⁻¹f = (1/N)W*f | c[k] = (1/N)⟨f, b_k⟩

Table 7.2

Finding (and inverting) the DTFS is just matrix multiplication. Everything in C^N is clean: no limits, no convergence questions, just good ole matrix arithmetic.

7.4 Periodic Extension to DTFS

7.4.1 Introduction

Now that we have an understanding of the discrete-time Fourier series (DTFS) (Section 7.2.2.2: Discrete-Time Fourier Series (DTFS)), we can consider the periodic extension of c[k] (the discrete-time Fourier coefficients). Figure 7.11 shows a simple illustration of how we can represent a sequence as a periodic signal mapped over an infinite number of intervals.

Figure 7.11: (a) vectors (b) periodic sequences

Exercise 7.1 (Solution on p. 162.)
Why does a periodic (Section 6.1) extension to the DTFS coefficients c[k] make sense?

7.4.2 Examples

Example 7.6: Discrete time square wave
Figure 7.12

Calculate the DTFS c[k] using:

c[k] = (1/N) Σ_{n=0}^{N−1} f[n] e^{−j(2π/N)kn}  (7.18)

Just like the continuous-time Fourier series, we can take the summation over any interval, so we have

c_k = (1/N) Σ_{n=−N₁}^{N₁} e^{−j(2π/N)kn}  (7.19)

Let m = n + N₁ (so we can get a geometric series starting at 0):

c_k = (1/N) Σ_{m=0}^{2N₁} e^{−j(2π/N)(m−N₁)k} = (1/N) e^{j(2π/N)N₁k} Σ_{m=0}^{2N₁} e^{−j(2π/N)mk}  (7.20)

Now, using the "partial summation formula"

Σ_{n=0}^{M} aⁿ = (1 − a^{M+1})/(1 − a)  (7.21)

c_k = (1/N) e^{j(2π/N)N₁k} · (1 − e^{−j(2π/N)(2N₁+1)k}) / (1 − e^{−j(2π/N)k})  (7.22)

Manipulate to make this look like a sinc function (factor out the half-angle exponentials and apply Euler's relation):

c_k = (1/N) · [ e^{j(2πk/N)(N₁+1/2)} − e^{−j(2πk/N)(N₁+1/2)} ] / [ e^{j(πk/N)} − e^{−j(πk/N)} ]
    = (1/N) · sin(2πk(N₁ + 1/2)/N) / sin(πk/N)  (7.23)

This ratio is called the digital sinc.

note: It's periodic!

Figures 7.13, 7.14, and 7.15 show the above function and coefficients for various values of N₁:
Figure 7.13: N₁ = 1. (a) Plot of f[n]. (b) Plot of c[k].
Figure 7.14: N₁ = 3. (a) Plot of f[n]. (b) Plot of c[k].
Figure 7.15: N₁ = 7. (a) Plot of f[n]. (b) Plot of c[k].
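A quick Python check of the digital-sinc result (an illustrative addition; N = 20 and N₁ = 3 are arbitrary choices consistent with the example):

```python
import numpy as np

# Verify the "digital sinc" DTFS coefficients (7.23) of the discrete square wave
# against a direct FFT-based computation of c[k].
N, N1 = 20, 3
n = np.arange(N)
f = np.where((n <= N1) | (n >= N - N1), 1.0, 0.0)    # one period: ones for |n| <= N1 (mod N)
c_fft = np.fft.fft(f) / N

k = np.arange(1, N)                                   # k = 0 separately: c_0 = (2*N1 + 1)/N
c_formula = np.sin(2 * np.pi * k * (N1 + 0.5) / N) / (N * np.sin(np.pi * k / N))
print(np.allclose(c_fft[1:].real, c_formula),
      np.isclose(c_fft[0].real, (2 * N1 + 1) / N))    # True True
```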
7.5 Circular Shifts

The many properties of the DFT (Section 7.2.2.2: Discrete-Time Fourier Series (DTFS)) become really straightforward (very similar to those of the Fourier series (Section 6.2)) once we have one concept down: circular shifts.

7.5.1 Circular shifts

We can picture periodic (Section 6.1) sequences as having their domain laid out on discrete points around a circle (Figure 7.16). Shifting by m, f(n + m), corresponds to rotating the cylinder m notches ACW (counterclockwise). For m = −2, we get the shift shown in Figures 7.17 and 7.18.

Figure 7.17: for m = −2
Figure 7.18

To cyclic shift, we follow these steps:

1) Write f(n) on a cylinder, ACW (Figure 7.19: N = 8).

2) To cyclic shift by m, spin the cylinder m spots ACW: f[n] → f[((n + m))_N] (Figure 7.20: m = −3).

Example 7.7
If f(n) = [0, 1, 2, 3, 4, 5, 6, 7], then f(((n − 3))_N) = [3, 4, 5, 6, 7, 0, 1, 2].
It's called circular shifting, since we're moving around the circle. The usual shifting is called "linear shifting" (shifting along a line).

7.5.1.1 Notes on circular shifting

f[((n + N))_N] = f[n]: spinning N spots is the same as spinning all the way around, or not spinning at all.

f[((n + m))_N] = f[((n − (N − m)))_N]: shifting ACW by m is equivalent to shifting CW by N − m (Figure 7.21).

f[((−n))_N]: this expression simply writes the values of f[n] clockwise (Figure 7.22: (a) f[n], (b) f[((−n))_N]).

7.5.2 Circular shifts and the DFT

Theorem 7.2: Circular Shifts and DFT
If f[n] ↔ F[k] (DFT), then

f[((n − m))_N] ↔ e^{−j(2π/N)km} F[k] (DFT)

(i.e., a circular shift in the time domain = a phase shift in the DFT).

Proof:

f[n] = (1/N) Σ_{k=0}^{N−1} F[k] e^{j(2π/N)kn}  (7.24)

so phase shifting the DFT:

(1/N) Σ_{k=0}^{N−1} (e^{−j(2π/N)km} F[k]) e^{j(2π/N)kn} = (1/N) Σ_{k=0}^{N−1} F[k] e^{j(2π/N)k(n−m)} = f[((n − m))_N]  (7.25)
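Theorem 7.2 is easy to confirm numerically. In this Python sketch (an illustrative addition; the random test vector and m = 3 are arbitrary), np.roll performs exactly the circular shift f[((n − m))_N]:

```python
import numpy as np

# Circular shift in time equals a phase shift in the DFT (Theorem 7.2).
N, m = 8, 3
f = np.random.randn(N)
F = np.fft.fft(f)
lhs = np.fft.fft(np.roll(f, m))                        # DFT of f[((n - m))_N]
rhs = np.exp(-2j * np.pi * np.arange(N) * m / N) * F   # phase-shifted DFT
print(np.allclose(lhs, rhs))                           # True
```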
7.6 Circular Convolution and the DFT

7.6.1 Introduction

You should be familiar with discrete-time convolution (Section 4.2), which tells us that given two discrete-time signals, x[n] (the system's input) and h[n] (the system's response), we define the output of the system as

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]  (7.26)

When we are given two DFTs (finite-length sequences, usually of length N), we cannot just multiply them together as we do in the above convolution formula, often referred to as linear convolution. Because the DFTs are periodic, they have nonzero values for n ≥ N, and thus the multiplication of these two DFTs will be nonzero for n ≥ N. We need to define a new type of convolution operation that will result in our convolved signal being zero outside of the range n = {0, 1, …, N − 1}. This idea led to the development of circular convolution, also called cyclic or periodic convolution.

7.6.2 Circular Convolution Formula

What happens when we multiply two DFTs together, where Y[k] is the DFT of y[n]?

Y[k] = F[k] H[k]  (7.27)

when 0 ≤ k ≤ N − 1. Using the DFT synthesis formula for y[n]:

y[n] = (1/N) Σ_{k=0}^{N−1} F[k] H[k] e^{j(2π/N)kn}  (7.28)

And then applying the analysis formula F[k] = Σ_{m=0}^{N−1} f[m] e^{−j(2π/N)km}:

y[n] = (1/N) Σ_{k=0}^{N−1} Σ_{m=0}^{N−1} f[m] e^{−j(2π/N)km} H[k] e^{j(2π/N)kn}
     = Σ_{m=0}^{N−1} f[m] ( (1/N) Σ_{k=0}^{N−1} H[k] e^{j(2π/N)k(n−m)} )  (7.29)

where we can recognize the inner summation in the above equation as

h[((n − m))_N] = (1/N) Σ_{k=0}^{N−1} H[k] e^{j(2π/N)k(n−m)}

which gives

y[n] = Σ_{m=0}^{N−1} f[m] h[((n − m))_N]  (7.30)

which equals circular convolution! When we have 0 ≤ n ≤ N − 1 in the above, then we get:

y[n] ≡ f[n] ⊛ h[n]

note: The notation ⊛ represents cyclic convolution "mod N".

7.6.2.1 Steps for Cyclic Convolution

Steps for cyclic convolution are the same as for the usual convolution, except that all index calculations are done "mod N" ("on the wheel"):

• Step 1: "Plot" f[m] and h[((−m))_N] (Figure 7.23).
• Step 2: "Spin" h[((−m))_N] n notches ACW (counterclockwise) to get h[((n − m))_N] (i.e., simply rotate the sequence, h[n], clockwise by n steps) (Figure 7.24).
• Step 3: Pointwise multiply the f[m] wheel and the h[((n − m))_N] wheel and sum to get y[n].
• Step 4: Repeat for all 0 ≤ n ≤ N − 1.

Example 7.8: Convolve (N = 4)
Figure 7.25: Two discrete-time signals to be convolved.
• h[((−m))_N] (Figure 7.26). Multiply by f[m] and sum to yield: y[0] = 3.
• h[((1 − m))_N] (Figure 7.27). Multiply by f[m] and sum to yield: y[1] = 5.
• h[((2 − m))_N] (Figure 7.28). Multiply by f[m] and sum to yield: y[2] = 3.
• h[((3 − m))_N] (Figure 7.29). Multiply by f[m] and sum to yield: y[3] = 1.

Example 7.9
A LabVIEW demonstration (DTCircularConvolution.llb, available online at http://cnx.org/content/m10786/latest/) allows you to explore this algorithm for circular convolution; see the "How to use the LabVIEW demos" module for instructions.

7.6.2.2 Alternative Algorithm

Alternative Circular Convolution Algorithm:
• Step 1: Calculate the DFT of f[n], which yields F[k], and calculate the DFT of h[n], which yields H[k].
• Step 2: Pointwise multiply: Y[k] = F[k] H[k].
• Step 3: Inverse DFT Y[k], which yields y[n].

This seems like a roundabout way of doing things, but it turns out that there are extremely fast ways to calculate the DFT of a sequence. To circularly convolve two N-point sequences directly:

y[n] = Σ_{m=0}^{N−1} f[m] h[((n − m))_N]

For each n: N multiplies and N − 1 additions. N points therefore implies N² multiplications and N(N − 1) additions, i.e., O(N²) complexity.
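Both routes are easy to compare in Python. The sketch below is an illustrative addition: the signals f and h are arbitrary length-4 examples, not the (figure-only) signals of Example 7.8:

```python
import numpy as np

# Circular convolution two ways: the direct "mod N" sum (7.30), and the
# alternative algorithm (multiply DFTs, then inverse DFT).
f = np.array([1.0, 2.0, 0.0, 1.0])
h = np.array([1.0, 1.0, 1.0, 0.0])
N = len(f)

y_direct = np.array([np.sum(f * h[(n - np.arange(N)) % N]) for n in range(N)])
y_dft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
print(y_direct, np.allclose(y_direct, y_dft))   # identical results
```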
Solutions to Exercises in Chapter 7

Solution to Exercise 7.1 (p. 150)
Aliasing: with b_k = e^{j(2π/N)kn},

b_{k+N} = e^{j(2π/N)(k+N)n} = e^{j(2π/N)kn} e^{j2πn} = e^{j(2π/N)kn} = b_k  (7.31)

→ the DTFS coefficients are also periodic with period N.

Chapter 8  Fast Fourier Transform (FFT)

8.1 DFT: Fast Fourier Transform

We now have a way of computing the spectrum for an arbitrary signal: the Discrete Fourier Transform computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing. The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.

For example, consider the formula for the discrete Fourier transform. For each frequency we choose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N − 1 additions. Consequently, each frequency requires 2N + 2(N − 1) = 4N − 2 basic computational steps. As we have N frequencies, the total number of computations is N(4N − 2).

In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term — here the 4N² term — as reflecting how much work is involved in making the computation. As multiplicative constants don't matter, since we are making a "proportional to" evaluation, we find the DFT is an O(N²) computational procedure. This notation is read "order N-squared". Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.

Exercise 8.1 (Solution on p. 168.)
In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that the negative frequency components (k = N/2 + 1, …, N − 1 in the DFT) can be computed from the corresponding positive frequency components. Does this symmetry change the DFT's complexity? Secondly, suppose the data are complex-valued; what is the DFT's complexity now? Finally, a less important but interesting question: suppose we want K frequency values instead of N; now what is the complexity?

8.2 The Fast Fourier Transform (FFT)

8.2.1 Introduction

The Fast Fourier Transform (FFT) is an efficient O(N log N) algorithm for calculating DFTs:
• originally discovered by Gauss in the early 1800s
• rediscovered by Cooley and Tukey at IBM in the 1960s
• C.S. Burrus, Rice University's very own Dean of Engineering, literally "wrote the book" on fast DFT algorithms.

The FFT exploits symmetries in the W matrix to take a "divide and conquer" approach. We won't talk about the actual FFT algorithm here; see the "Fast Fourier Transform (FFT)" notes if you are interested in reading a little more on the idea behind the FFT.

8.2.2 Speed Comparison

How much better is O(N log N) than O(N²)?

Figure 8.1: This figure shows how much more slowly the computation time of an O(N log N) process grows.

N       | 10  | 100   | 1000  | 10⁶     | 10⁹
N²      | 100 | 10⁴   | 10⁶   | 10¹²    | 10¹⁸
N log N | 10  | 200   | 3000  | 6 × 10⁶ | 9 × 10⁹

Table 8.1

Say you have a 1 MFLOP machine (a million "floating point" operations per second). Let N = 1 million = 10⁶. An O(N²) algorithm takes 10¹² flops → 10⁶ seconds ≈ 11.5 days. An O(N log N) algorithm takes 6 × 10⁶ flops → 6 seconds.

note: N = 1 million is not unreasonable: a 3-megapixel digital camera spits out 3 × 10⁶ numbers for each picture.

Example 8.1
For two N-point sequences f[n] and h[n], computing f[n] ⊛ h[n] directly costs O(N²) operations. Via the FFT: taking FFTs — O(N log N); multiplying FFTs — O(N); inverse FFT — O(N log N). The total complexity is O(N log N).

note: The FFT plus the digital computer were almost completely responsible for the "explosion" of DSP in the 60s.

note: Rice was (and still is) one of the places to do research in DSP.
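The speed gap is easy to feel on any machine. This Python sketch (an illustrative addition; N = 2048 is arbitrary and exact timings are machine-dependent) compares a naive O(N²) DFT, implemented as a matrix-vector product, against NumPy's O(N log N) FFT:

```python
import numpy as np
import time

# Naive O(N^2) DFT as a matrix-vector product vs. numpy's O(N log N) FFT.
N = 2048
x = np.random.randn(N)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)     # DFT matrix

t0 = time.perf_counter(); X_naive = W @ x;       t1 = time.perf_counter()
X_fft = np.fft.fft(x);                           t2 = time.perf_counter()
print(np.allclose(X_naive, X_fft),
      f"naive: {t1 - t0:.4f}s  fft: {t2 - t1:.6f}s")
```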
8.3 Deriving the Fast Fourier Transform

To derive the FFT, we assume that the signal's duration is a power of two: N = 2ˡ. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation:

S(k) = s(0) + s(2)e^{−j(2π·2k)/N} + ··· + s(N−2)e^{−j(2π(N−2)k)/N}
     + s(1)e^{−j(2πk)/N} + s(3)e^{−j(2π·3k)/N} + ··· + s(N−1)e^{−j(2π(N−1)k)/N}
     = [ s(0) + s(2)e^{−j(2πk)/(N/2)} + ··· + s(N−2)e^{−j(2π(N/2−1)k)/(N/2)} ]
     + [ s(1) + s(3)e^{−j(2πk)/(N/2)} + ··· + s(N−1)e^{−j(2π(N/2−1)k)/(N/2)} ] e^{−j(2πk)/N}  (8.1)

Each term in square brackets has the form of an N/2-length DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{−j2πk/N}. The half-length transforms are each evaluated at frequency indices k ∈ {0, …, N − 1}. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{−j2πk/N}, which is not periodic over N/2, to rewrite the length-N DFT. Figure 8.2 (Length-8 DFT decomposition) illustrates this decomposition. As it stands, we now compute two length-N/2 transforms (complexity 2O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.

Now for the fun. Because N = 2ˡ, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 8.2 (Length-8 DFT decomposition)). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 4 multiplications, giving a total number of computations equaling 8·(N/4) = 2N. This number of computations does not change from stage to stage. Because the number of stages — the number of times the length can be divided by two — equals log₂N, the complexity of the FFT is O(N log N).

Figure 8.2 (Length-8 DFT decomposition): The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. When these half-length transforms are successively decomposed, we are left with the diagram shown in the bottom panel that depicts the length-8 FFT computation.
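The even/odd split in (8.1) translates almost line for line into code. Below is a bare-bones Python sketch of the radix-2 decimation-in-time recursion (an educational addition, not an optimized implementation; it assumes the length is a power of two):

```python
import numpy as np

# Radix-2 decimation-in-time FFT following the even/odd split in (8.1).
def fft_recursive(s):
    N = len(s)
    if N == 1:
        return s.astype(complex)
    even = fft_recursive(s[0::2])            # N/2-point DFT of even-indexed samples
    odd = fft_recursive(s[1::2])             # N/2-point DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    half = twiddle * odd                     # the e^{-j 2π k / N} multiplication
    # Reuse each half-length output twice, exploiting the DFT's periodicity.
    return np.concatenate([even + half, even - half])

x = np.random.randn(8)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))   # True
```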
Doing an example will make the computational savings more obvious. Let's look at the details of a length-8 DFT. As shown in Figure 8.2 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 8.2 as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 8.3 (Butterfly)).

Figure 8.3 (Butterfly): The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities shown. Each butterfly requires one complex multiplication and two complex additions.

By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other, and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 8.2 (Length-8 DFT decomposition)). Although most of the complex multiplies are quite simple (multiplying by e^{−jπ} means negating real and imaginary parts), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and 2N = 16 additions for each stage, and log₂N = 3 stages, making the number of basic computations (3N/2)log₂N as predicted.

Exercise 8.2 (Solution on p. 168.)
Note that the orderings of the input sequence in the two parts of Figure 8.2 (Length-8 DFT decomposition) aren't quite the same. Why not? How is the ordering determined?

Other "fast" algorithms were discovered, all of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling 2⁴ and 3⁴ respectively), the number 18 is less so (2¹3²), and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of what the actual length of the data is.

Solutions to Exercises in Chapter 8

Solution to Exercise 8.1 (p. 163)
When the signal is real-valued, we may only need half the spectral values, but the complexity remains unchanged. If the data are complex-valued, which demands retaining all frequency values, the complexity is again the same. When only K frequencies are needed, the complexity is O(KN).

Solution to Exercise 8.2 (p. 167)
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.

Chapter 9  Convergence

9.1 Convergence of Sequences

9.1.1 What is a Sequence?

Definition 9.1: sequence
A sequence is a function g_n defined on the positive integers n. We often denote a sequence by {g_n}|_{n=1}^{∞}.

Example
A real number sequence: g_n = 1/n.

Example
A vector sequence: g_n = ( sin(nπ/2), cos(nπ/2) )ᵀ.

Example
A function sequence: g_n(t) = 1 if 0 ≤ t < 1/n, 0 otherwise.

note: A function can be thought of as an infinite-dimensional vector where for each value of t we have one dimension.

9.1.2 Convergence of Real Sequences

Definition 9.2: limit
A sequence {g_n}|_{n=1}^{∞} converges to a limit g ∈ R if for every ε > 0 there is an integer N such that

|g_i − g| < ε,  i ≥ N
Example 9.1 We are given the following convergent sequence: gn = Intuitively we can assume the following limit: 1 n (9.1) n→∞ lim gn = 0 > 0. n ≥ N we Let us choose have Let us prove this rigorously. Say that we are given a real number where N= 1 , x denotes the smallest integer larger than x. Then for |gn − 0| = Thus, 1 1 ≤ < n N n→∞ lim gn = 0 Example 9.2 Now let us look at the following non-convergent sequence 1 if n = even gn = −1 if n = odd This sequence oscillates between 1 and -1, so it will therefore never converge. 9.1.2.1 Problems For practice, say which of the following sequences converge and give their limits if they exist. 1. 2. 3. 4. 5. 6. gn = n 1 if n = even n gn = −1 if n = odd n 1 if n = power of 10 n gn = 1 otherwise n if n < 105 gn = 1 if n ≥ 105 gn = sin gn = j n n π n 171 9.2 Convergence of Vectors 9.2.1 Convergence of Vectors 2 We now discuss pointwise and norm convergence of vectors. Other types of convergence also exist, and one in particular, uniform convergence (Section 9.3), can also be studied. For this discussion , we will assume that the vectors belong to a normed vector space (Section 15.2). 9.2.1.1 Pointwise Convergence A sequence (Section 9.1) the corresponding {gn } |∞ converges pointwise to the limit g if each element of gn n=1 element in g. Below are few examples to try and help illustrate this idea. converges to Example 9.3 gn = gn [1] gn [2] = 1+ 2− 1 n 1 n First we nd the following limits for our two gn 's: n→∞ lim (gn [1]) = 1 lim (gn [2]) = 2 lim gn = g n→∞ Therefore we have the following, n→∞ pointwise, where g= 1 2 . Example 9.4 gn (t) = t , t∈R n t0 =0 n for all As done above, we rst want to examine the limit n→∞ where lim gn (t0 ) = lim n→∞ t0 ∈ R . Thus n→∞ lim gn = g pointwise where g (t) = 0 t ∈ R. 9.2.1.2 Norm Convergence The sequence (Section 9.1) lim gn − g = 0. Here · is the norm n→∞ (Section 15.2) of the corresponding vector space of gn 's. Intuitively this means the distance between vectors gn and g decreases to 0. converges to {gn } |∞ n=1 g in norm if Example 9.5 gn = 2 This 1+ 2− 1 n 1 n content is available online at <http://cnx.org/content/m10894/2.3/>. 172 CHAPTER 9. CONVERGENCE Let g= 1 2 gn − g = = = 1 n− 1 1 n2 + n2 √ 2 n 1+ 1 2 + 2− 12 n (9.2) Thus Example 9.6 gn (t) = Let n→∞ lim gn − g = 0 Therefore, gn → g in norm. t n if 0≤t≤1 0 otherwise 1 t2 dt 0 n2 t3 1 3n2 |n=0 1 3n2 in norm. g (t) = 0 for all t. gn (t) − g (t) = = = (9.3) Thus n→∞ lim gn (t) − g (t) = 0 Therefore, gn (t) → g (t) 9.2.2 Pointwise vs. Norm Convergence Theorem 9.1: Proof: For Rm , pointwise and norm convergence are equivalent. Pointwise ⇒ Norm gn [i] → g [i] Assuming the above, then m ( gn − g ) = i=1 Thus, 2 (gn [i] − g [i]) 2 n→∞ lim ( gn − g ) 2 = = = n→∞ m i=1 lim m i=1 2 (9.4) n→∞ lim 2 0 Proof: Norm ⇒ Pointwise gn − g → 0 lim m i=1 n→∞ 2 = = 0 m i=1 n→∞ lim 2 (9.5) Since each term is greater than or equal zero, all 'm' terms must be zero. Thus, n→∞ lim 2 = 0 173 forall i. Therefore, gn → g pointwise note: In innite dimensional spaces the above theorem is no longer true. We prove this with counter examples shown below. 9.2.2.1 Counter Examples Example 9.7: Pointwise [U+21CF] Norm We are given the following function: n if 0 < t < 1 n gn (t) = 0 otherwise Then n→∞ lim gn (t) = 0 This means that, gn (t) → g (t) where for all Now, t g (t) = 0. ( gn ) 2 = = ∞ −∞ 1 n (|gn (t) |) dt (9.6) 2 0 n2 dt = n→∞ Since the function norms blow up, they cannot converge to any function with nite norm. 
Example 9.8: Norm ⇏ Pointwise
We are given the following function: for 0 ≤ t < 1/n, g_n(t) = 1 if n is even and g_n(t) = −1 if n is odd; g_n(t) = 0 otherwise. Then

‖g_n − g‖² = ∫₀^{1/n} 1 dt = 1/n → 0

where g(t) = 0 for all t. Therefore, g_n → g in norm. However, at t = 0, g_n(t) oscillates between −1 and 1, and so it does not converge. Thus, g_n(t) does not converge pointwise.

9.2.2.2 Problems

Prove whether the following sequences are pointwise convergent, norm convergent, or both, and then state their limits.
1. g_n(t) = 1/(nt) if 0 < t, 0 if t ≤ 0
2. g_n(t) = e^{−nt} if t ≥ 0, 0 if t < 0

9.3 Uniform Convergence of Function Sequences

9.3.1 Uniform Convergence of Function Sequences

For this discussion, we will only consider functions g_n where R → R.

Definition 9.3: Uniform Convergence
The sequence (Section 9.1) {g_n}|_{n=1}^{∞} converges uniformly to the function g if for every ε > 0 there is an integer N such that n ≥ N implies

|g_n(t) − g(t)| ≤ ε  (9.7)

for all t ∈ R.

Obviously every uniformly convergent sequence is pointwise (Section 9.2) convergent. The difference between pointwise and uniform convergence is this: if {g_n} converges pointwise to g, then for every ε > 0 and for every t ∈ R there is an integer N, depending on ε and t, such that (9.7) holds if n ≥ N. If {g_n} converges uniformly to g, it is possible for each ε > 0 to find one integer N that will do for all t ∈ R.

Example 9.9
g_n(t) = 1/n, t ∈ R. Let ε > 0 be given. Then choose N = ⌈1/ε⌉. Obviously, |g_n(t) − 0| ≤ ε for all n ≥ N and all t. Thus, g_n(t) converges uniformly to 0.

Example 9.10
g_n(t) = t/n, t ∈ R. Obviously for any ε > 0 we cannot find a single integer N for which (9.7) holds with g(t) = 0 for all t. Thus g_n is not uniformly convergent. However, we do have g_n(t) → g(t) pointwise.

Conclusion: Uniform convergence always implies pointwise convergence, but pointwise convergence does not guarantee uniform convergence.

9.3.1.1 Problems

Rigorously prove whether the following functions converge pointwise, uniformly, or both.
1. g_n(t) = sin(t)/n
2. g_n(t) = e^{t/n}
3. g_n(t) = 1/(nt) if t > 0, 0 if t ≤ 0

Chapter 10  Discrete Time Fourier Transform (DTFT)

10.1 Discrete Fourier Transformation

10.1.1 N-point Discrete Fourier Transform (DFT)

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn},  k = {0, …, N − 1}  (10.1)

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn},  n = {0, …, N − 1}  (10.2)

Note that:

• X[k] is the DTFT evaluated at ω = (2π/N)k, k = {0, …, N − 1}.

• Zero-padding x[n] to M samples prior to the DFT yields an M-point uniformly sampled version of the DTFT:

X(e^{j(2π/M)k}) = Σ_{n=0}^{N−1} x[n] e^{−j(2π/M)kn}  (10.3)

X(e^{j(2π/M)k}) = Σ_{n=0}^{M−1} x_zp[n] e^{−j(2π/M)kn} = X_zp[k],  k = {0, …, M − 1}

where x_zp is x padded with M − N zeros.

• The N-point DFT is sufficient to reconstruct the entire DTFT of an N-point sequence:

X(e^{jω}) = Σ_{n=0}^{N−1} x[n] e^{−jωn}  (10.4)

Substituting the IDFT (10.2) for x[n] and interchanging sums:

X(e^{jω}) = Σ_{n=0}^{N−1} [ (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn} ] e^{−jωn}
         = Σ_{k=0}^{N−1} X[k] (1/N) Σ_{n=0}^{N−1} e^{−j(ω−(2π/N)k)n}
         = Σ_{k=0}^{N−1} X[k] · (1/N) · [ sin((ωN − 2πk)/2) / sin((ωN − 2πk)/(2N)) ] · e^{−j(ω−(2π/N)k)(N−1)/2}

Figure 10.1: The Dirichlet sinc, (1/N)·sin(ωN/2)/sin(ω/2), plotted for ω from 0 to 2π (with nulls at multiples of 2π/N).
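The zero-padding bullet above is worth seeing in code. In this Python sketch (an illustrative addition; N = 8 and M = 64 are arbitrary), the M-point DFT of the zero-padded signal lands exactly on a finer sampling of the same DTFT:

```python
import numpy as np

# Zero-padding in time samples the same DTFT more finely: the M-point DFT of the
# zero-padded signal equals the DTFT of the original N-point signal at ω = 2πk/M.
N, M = 8, 64
x = np.random.randn(N)
X_zp = np.fft.fft(x, n=M)                     # M-point DFT of x zero-padded to length M

w = 2 * np.pi * np.arange(M) / M
X_dtft = np.array([np.sum(x * np.exp(-1j * wk * np.arange(N))) for wk in w])
print(np.allclose(X_zp, X_dtft))              # True
```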
• The DFT has a convenient matrix representation. Defining W_N = e^{−j2π/N},

( X[0], X[1], …, X[N−1] )ᵀ =
[ W_N⁰ W_N⁰ W_N⁰ W_N⁰ …
  W_N⁰ W_N¹ W_N² W_N³ …
  W_N⁰ W_N² W_N⁴ W_N⁶ …
  ⋮    ⋮    ⋮    ⋮ ]  ( x[0], x[1], …, x[N−1] )ᵀ  (10.5)

that is, X = Wx, where the entry in the k-th row and n-th column is W_{k,n} = W_N^{kn} = e^{−j(2π/N)kn}.

W has the following properties:
• W is Vandermonde: the n-th column of W is a polynomial in W_N^n.
• W is symmetric: W = Wᵀ.
• (1/√N)W is unitary: ((1/√N)W)((1/√N)W)ᴴ = ((1/√N)W)ᴴ((1/√N)W) = I.
• (1/N)W* = W⁻¹, the IDFT matrix.
• For N a power of 2, the FFT can be used to compute the DFT using about (N/2)log₂N rather than N² operations:

N    | (N/2)log₂N | N²
16   | 32         | 256
64   | 192        | 4096
256  | 1024       | 65536
1024 | 5120       | 1048576

Table 10.1

10.2 Discrete Fourier Transform (DFT)

The discrete-time Fourier transform (and the continuous-time transform as well) can be evaluated when we have an analytic expression for the signal. Suppose we just have a signal, such as the speech signal used in the previous chapter, for which there is no formula. How then would you compute the spectrum? For example, how did we compute a spectrogram such as the one shown in the speech signal example? The Discrete Fourier Transform (DFT) allows the computation of spectra from discrete-time data. While in discrete time we can exactly calculate spectra, for analog signals no similar exact spectrum computation exists. For analog-signal spectra, one must build special devices, which turn out in most cases to consist of A/D converters and discrete-time computations. Certainly discrete-time spectral analysis is more flexible than continuous-time spectral analysis.

The formula for the DTFT is a sum, which conceptually can be easily computed save for two issues.

• Signal duration. The sum extends over the signal's duration, which must be finite to compute the signal's spectrum. It is exceedingly difficult to store an infinite-length signal in any case, so we'll assume that the signal extends over [0, N − 1].

• Continuous frequency. Subtler than the signal duration issue is the fact that the frequency variable is continuous: it may only need to span one period, like [−1/2, 1/2] or [0, 1], but the DTFT formula as it stands requires evaluating the spectrum at all frequencies within a period. Let's compute the spectrum at a few frequencies; the most obvious ones are the equally spaced ones, f = k/K, k ∈ {0, …, K − 1}.

We thus define the discrete Fourier transform (DFT) to be

S(k) = Σ_{n=0}^{N−1} s(n) e^{−j2πnk/K},  k ∈ {0, …, K − 1}  (10.6)

Here, S(k) is shorthand for S(e^{j2πk/K}).

We can compute the spectrum at as many equally spaced frequencies as we like. Note that you can think about this computationally motivated choice as sampling the spectrum; more about this interpretation later. The issue now is how many frequencies are enough to capture how the spectrum changes with frequency. One way of answering this question is determining an inverse discrete Fourier transform formula: given S(k), k = {0, …, K − 1}, how do we find s(n), n = {0, …, N − 1}? Presumably, the formula will be of the form s(n) = Σ_{k=0}^{K−1} S(k) e^{j2πnk/K}. Substituting the DFT formula into this prototype inverse transform yields

s(n) = Σ_{k=0}^{K−1} Σ_{m=0}^{N−1} s(m) e^{−j2πmk/K} e^{j2πnk/K}  (10.7)

Note that the orthogonality relation we use so often has a different character now:

Σ_{k=0}^{K−1} e^{−j2πkm/K} e^{j2πkn/K} = K if m = {n, n ± K, n ± 2K, …}, 0 otherwise  (10.8)
We obtain nonzero values whenever the two indices differ by a multiple of K. We can express this result as K Σ_l δ(m − n − lK). Thus, our formula becomes

s(n) = Σ_{m=0}^{N−1} s(m) K Σ_{l=−∞}^{∞} δ(m − n − lK)  (10.9)

The integers n and m both range over {0, …, N − 1}. To have an inverse transform, we need the sum to be a single unit sample for m, n in this range. If it did not, then s(n) would equal a sum of values, and we would not have a valid transform: once going into the frequency domain, we could not get back unambiguously! Clearly, the term l = 0 always provides a unit sample (we'll take care of the factor of K soon). If we evaluate the spectrum at fewer frequencies than the signal's duration, the term corresponding to m = n + K will also appear for some values of m, n = {0, …, N − 1}. This situation means that our prototype transform equals s(n) + s(n + K) for some values of n. The only way to eliminate this problem is to require K ≥ N: we must have at least as many frequency samples as the signal's duration. In this way, we can return from the frequency domain we entered via the DFT.

Exercise 10.1 (Solution on p. 189.)
When we have fewer frequency samples than the signal's duration, some discrete-time signal values equal the sum of the original signal values. Given the sampling interpretation of the spectrum, characterize this effect a different way.

Another way to understand this requirement is to use the theory of linear equations. If we write out the expression for the DFT as a set of linear equations,

s(0) + s(1) + ··· + s(N−1) = S(0)
s(0) + s(1)e^{−j2π/K} + ··· + s(N−1)e^{−j2π(N−1)/K} = S(1)
  ⋮
s(0) + s(1)e^{−j2π(K−1)/K} + ··· + s(N−1)e^{−j2π(N−1)(K−1)/K} = S(K−1)  (10.10)

we have K equations in N unknowns if we want to find the signal from its sampled spectrum. This requirement is impossible to fulfill if K < N; we must have K ≥ N. Our orthogonality relation essentially says that if we have a sufficient number of equations (frequency samples), the resulting set of equations can indeed be solved.

By convention, the number of DFT frequency values K is chosen to equal the signal's duration N. The discrete Fourier transform pair consists of

The Discrete Fourier Transform Pair

S(k) = Σ_{n=0}^{N−1} s(n) e^{−j2πnk/N}
s(n) = (1/N) Σ_{k=0}^{N−1} S(k) e^{j2πnk/N}  (10.11)

Example 10.1
A LabVIEW demonstration (DFTanalysis.llb, available in the online version of this module) performs DFT analysis of a signal.

Example 10.2
A LabVIEW demonstration (DFT_Component_Manipulation.llb, available in the online version of this module) synthesizes a signal from a DFT sequence.
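The time-domain aliasing behind Exercise 10.1 shows up directly if we deliberately take K < N. A Python sketch (an illustrative addition; s(n) = 1, …, 8 with K = 5 is an arbitrary example):

```python
import numpy as np

# Sampling the spectrum at K < N points aliases the signal in time:
# the K-point inverse returns s(n) + s(n + K) where both terms exist.
N, K = 8, 5
s = np.arange(1.0, N + 1)                         # s(n) = 1..8
S = np.array([np.sum(s * np.exp(-2j * np.pi * np.arange(N) * k / K)) for k in range(K)])
s_back = np.real(np.fft.ifft(S))                  # invert as if it were a K-point DFT
expected = np.array([s[n] + (s[n + K] if n + K < N else 0.0) for n in range(K)])
print(np.round(s_back, 6), expected)              # [7 9 11 4 5] both ways
```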
(a+jω )n+1 Condition a>0 a>0 a>0 a>0 a>0 u (t) e u (−t) e−(a|t|) te −(at) u (t) u (t) te n −(at) δ (t) 1 e jω0 t 1 2πδ (ω ) 2πδ (ω − ω0 ) π (δ (ω − ω0 ) + δ (ω + ω0 )) jπ (δ (ω + ω0 ) − δ (ω − ω0 )) πδ (ω ) + 1 jω 2 jω π 2 (δ (ω − ω0 ) + δ (ω + ω0 )) jω ω0 2 −ω 2 π 2j (δ (ω − ω0 ) − δ (ω + ω0 )) ω0 ω0 2 −ω 2 ω0 (a+jω )2 +ω0 2 a+jω (a+jω )2 +ω0 2 2τ sin(ωτ ) = 2τ sinc (ωt) ωτ cos (ω0 t) sin (ω0 t) u (t) sgn (t) cos (ω0 t) u (t) sin (ω0 t) u (t) e−(at) sin (ω0 t) u (t) e −(at) + + a>0 a>0 cos (ω0 t) u (t) u (t + τ ) − u (t − τ ) ω0 sin(ω0 t) = ω0 sinc (ω0 ) π ω0 t π t t u τ +1 −u τ +1 t t − τ +1 u τ −u t triag 2τ 2 ω0 t ω0 2π sinc 2 t τ t τ u (ω + ω0 ) − u (ω − ω0 ) + τ sinc2 −1 = ω ω0 ωτ 2 +1 ω ω0 u +1 ω ω0 +1 −u ω ω0 ω ω0 ω ω0 + −1 = − triag u −u ω 2ω0 continued on next page 183 e ∞ n=−∞ “ 2” − 2t 2 σ (δ (t − nT )) ω0 n=−∞ (δ (ω − nω0 )) “ 2 2” ω √ − σ2 σ 2πe Table 10.2 ∞ ω0 = 2π T 10.4 Discrete-Time Fourier Transform (DTFT) Discrete-Time Fourier Transform ∞ 6 X (ω ) = n=−∞ x (n) e−(jωn) (10.12) Inverse Discrete-Time Fourier Transform x (n) = 1 2π 2π X (ω ) ejωn dω 0 (10.13) 10.4.1 Relevant Spaces The Discrete-Time Fourier Transform 7 maps innite-length, discrete-time signals in l2 to nite-length (or 2π -periodic), continuous-frequency signals in L 2 . Figure 10.2: Mapping l2 (Z) in the time domain to L2 ([0, 2π)) in the frequency domain. 6 This content is available online at <http://cnx.org/content/m10108/2.12/>. 7 "Discrete-Time Fourier Transform (DTFT)" <http://cnx.org/content/m10247/latest/> 184 CHAPTER 10. DISCRETE TIME FOURIER TRANSFORM (DTFT) 8 10.5 Discrete-Time Fourier Transform Properties Discrete-Time Fourier Transform Properties Sequence Domain Linearity Conjugate Symmetry Even Symmetry Odd Symmetry Time Delay Complex Modulation Amplitude Modulation Frequency Domain a1 S1 ej 2πf + a2 S2 ej 2πf S ej 2πf = S e−(j 2πf ) S ej 2πf = S e−(j 2πf ) S ej 2πf = − S e−(j 2πf ) e−(j 2πf n0 ) S ej 2πf S ej 2π(f −f0 ) S (ej 2π(f −f0 ) )+S (ej 2π(f +f0 ) ) 2 S (ej 2π(f −f0 ) )−S (ej 2π(f +f0 ) ) 2j 1 d S ej 2πf −(2jπ ) df j 2π 0 1 2 a1 s1 (n) + a2 s2 (n) s (n) real ∗ s (n) = s (−n) s (n) = − (s (−n)) s (n − n0 ) ej 2πf0 n s (n) s (n) cos (2πf0 n) s (n) sin (2πf0 n) Multiplication by n Sum Value at Origin Parseval's Theorem ns (n) ∞ n=−∞ (s (n)) 2 Se 1 2 s (0) ∞ n=−∞ −( 1 ) 2 S ej 2πf df |S ej 2πf | df 2 (|s (n) |) −( 1 ) 2 Figure 10.3: Discrete-time Fourier transform properties and relations. 10.6 Discrete-Time Fourier Transform Pair to the discrete-time frequency sampled waveform that equals 9 When we obtain the discrete-time signal via sampling an analog signal, the Nyquist frequency corresponds 1 1 2 . To show this, note that a sinusoid at the Nyquist frequency 2Ts has a Sinusoid at Nyquist Frequency 1/2T 1 cos 2π 2Ts nTs = = cos (πn) (−1) n (10.14) −(j 2πn) 1 2 = e−(jπn) 2 equals e correspondence between analog and discrete-time frequency is established: The exponential in the DTFT at frequency = (−1) n , meaning that the Analog, Discrete-Time Frequency Relationship 8 This 9 This fD = fA Ts content is available online at <http://cnx.org/content/m0506/2.6/>. content is available online at <http://cnx.org/content/m0525/2.6/>. (10.15) 185 where gure 10 fD and fA represent discrete-time and analog frequency variables, respectively. The aliasing provides another way of deriving this result. 
As the duration of each pulse in the periodic sampling narrows, the amplitudes of the signal's spectral repetitions, which are governed by the Fourier signal pTs (t) series coecients of periodic with period , become increasingly equal. Thus, the sampled signal's spectrum becomes 1 1 1 . Thus, the Nyquist frequency corresponds to the frequency Ts 2Ts 2. The inverse discrete-time Fourier transform is easily derived from the following relationship: pTs (t) 11 1 if m = n e−(j 2πf m) e+jπf n df = 0 if m = n −( 1 ) 1 2 2 (10.16) Therefore, we nd that 1 2 −( 1 2 ) S ej 2πf e+j 2πf n df = = = 1 2 −( 1 ) 2 m m s (m) e−(j 2πf m) e+j 2πf n df 1 2 s (m) −( 1 ) 2 e(−(j 2πf ))(m−n) df (10.17) s (n) Fourier Transform Pairs in Discrete Time The Fourier transform pairs in discrete-time are S ej 2πf = n s (n) e−(j 2πf n) (10.18) Fourier Transform Pairs in Discrete Time 1 2 s (n) = −( 1 ) 2 S ej 2πf e+j 2πf n df (10.19) 10.7 DTFT Examples Example 10.3 an u (n) , where 12 Let's compute the discrete-time Fourier transform of the exponentially decaying sequence s (n) = u (n) is the unit-step sequence. Simply plugging the signal's expression into the Fourier transform formula, Fourier Transform Formula S ej 2πf = = ∞ n=−∞ ∞ n=0 an u (n) e−(j 2πf n) ae−(j 2πf ) n (10.20) This sum is a special case of the geometric series. decreases 0 1 to zero: |c0 | = A∆ . Thus, to maintain a mathematically viable Sampling Theorem, the amplitude A must increase as ∆ , T becoming innitely large as the pulse duration decreases. Practical systems use a small value of ∆ , say 0.1Ts and use ampliers to rescale the signal. 12 This content is available online at <http://cnx.org/content/m0524/2.11/>. 10 "The Sampling Theorem", Figure 2: aliasing <http://cnx.org/content/m0050/latest/#alias> 11 Examination of the periodic pulse signal reveals that as ∆ decreases, the value of c , the largest Fourier coecient, 186 CHAPTER 10. DISCRETE TIME FOURIER TRANSFORM (DTFT) Geometric Series |a| < 1 ∞ (α n ) = n=0 Thus, as long as 1 , |α | < 1 1−α (10.21) , we have our Fourier transform. S ej 2πf = 1 1 − ae−(j 2πf ) (10.22) Using Euler's relation, we can express the magnitude and phase of this spectrum. |S ej 2πf | = 1 (1 − acos (2πf )) + a2 sin2 (2πf ) 2 (10.23) ∠ S ej 2πf No matter what value of periodic function. dene it. When increases from = − arctan asin (2πf ) 1 − acos (2πf ) (10.24) a we choose, the above formulae clearly demonstrate the periodic Figure 10.4 shows indeed that the spectrum is a nature of the spectra of discrete-time signals. We need only consider the spectrum between a>0 to 1 1 2 and 2 to unambiguously , we have a lowpass spectrum  the spectrum diminishes as frequency − 1 2  with increasing a leading to a greater low frequency content; for we have a highpass spectrum (Figure 10.5). 0 a<0 , 2 |S(ej2πf)| 1 f -2 -1 45 0 ∠S(ej2πf) 1 2 -2 -1 -45 1 2 f Figure 10.4: The spectrum of the exponential signal (a = 0.5) is shown over the frequency range [−2, 2], clearly demonstrating the periodicity of all discrete-time spectra. The angle has units of degrees. 187 Spectral Magnitude (dB) 20 10 0 -10 90 45 0 a = 0.9 a = 0.5 a = –0.5 0.5 f Angle (degrees) a = –0.5 f 0.5 a = 0.5 -90 a = 0.9 -45 Figure 10.5: The spectra of several exponential signals are shown. What is the apparent relationship between the spectra for a = 0.5 and a = −0.5 ? Example 10.4 Analogous to the analog pulse signal, let's nd the spectrum of the length- N pulse sequence. 
1 if 0 ≤ n ≤ N − 1 s (n) = 0 otherwise The Fourier transform of this sequence has the form of a truncated geometric series. (10.25) N −1 Se j 2πf = n=0 e−(j 2πf n) (10.26) For the so-called nite geometric series, we know that Finite Geometric Series N +n0 −1 (αn ) = αn0 n=n0 for all values of α . Exercise 10.2 1 − αN 1−α (10.27) (Solution on p. 189.) Derive this formula for the nite geometric series sum. The "trick" is to consider the dierence between the series'; sum and the sum of the series multiplied by Applying this result yields (Figure 10.6.) α . S ej 2πf = = 1−e−(j 2πf N ) 1−e−(j 2πf ) e(−(jπf ))(N −1) sin(πf N ) sin(πf ) (10.28) 188 CHAPTER 10. DISCRETE TIME FOURIER TRANSFORM (DTFT) The ratio of sine functions has the generic form of function, dsinc (x) duration. sin(N x) sin(x) , which is known as the j 2πf . Thus, our transform can be concisely expressed as S e = e(−(jπf ))(N −1) dsinc (πf ) discrete-time sinc N , the pulse's . The discrete-time pulse's spectrum contains many ripples, the number of which increase with Spectral Magnitude 10 5 0 180 90 0 -90 -180 f 0.5 Angle (degrees) 0.5 f The spectrum of a length-ten pulse is shown. Can you explain the rather complicated appearance of the phase? Figure 10.6: 189 Solutions to Exercises in Chapter 10 Solution to Exercise 10.1 (p. 180) Solution to Exercise 10.2 (p. 187) α n=n0 This situation amounts to aliasing in the time-domain. N +n0 −1 N +n0 −1 (αn ) − n=n0 (αn ) = αN +n0 − αn0 (10.29) which, after manipulation, yields the geometric sum formula. 190 CHAPTER 10. DISCRETE TIME FOURIER TRANSFORM (DTFT) Chapter 11 Continuous Time Fourier Transform (CTFT) 11.1 Continuous-Time Fourier Transform (CTFT) 11.1.1 Introduction Due to the large number of continuous-time signals that are present, the Fourier series 2 1 provided us the rst glimpse of how me we may represent some of these signals in a general manner: as a superposition of a number of sinusoids. Now, we can look at a way to represent continuous-time nonperiodic signals using the same idea of superposition. Below we will present the we must now nd a way to include Continuous-Time Fourier Transform (CTFT), also referred to as just the Fourier Transform (FT). Because the CTFT now deals with nonperiodic signals, all frequencies in the general equations. 11.1.1.1 Equations Continuous-Time Fourier Transform ∞ F (Ω) = −∞ f (t) e−(j Ωt) dt (11.1) Inverse CTFT f (t) = 1 2π ∞ F (Ω) ej Ωt dΩ −∞ (11.2) warning: Do not be confused by notation - it is not uncommon to see the above formula written slightly dierent. One of the most common dierences among many professors is the way that the exponential is written. Above we used the radial frequency variable Ω in the exponential, where Ω = 2πf , but one will often see professors include the more explicit expression, 3 j 2πf t, in the exponential. Click here for an overview of the notation used in Connexion's DSP modules. The above equations for the CTFT and its inverse come directly from the Fourier series and our understanding of its coecients. For the CTFT we simply utilize integration rather than summation to be able to express the aperiodic signals. This should make sense since for the CTFT we are simply extending the 1 This content is available online at <http://cnx.org/content/m10098/2.10/>. 2 "Classic Fourier Series" <http://cnx.org/content/m0039/latest/> 3 "DSP notation" <http://cnx.org/content/m10161/latest/> 191 192 CHAPTER 11. 
CONTINUOUS TIME FOURIER TRANSFORM (CTFT) ideas of the Fourier series to include nonperiodic signals, and thus the entire frequency spectrum. Look at the Derivation of the Fourier Transform 4 for a more in depth look at this. 11.1.2 Relevant Spaces The Continuous-Time Fourier Transform maps innite-length, continuous-time signals in length, continuous-frequency signals in the spaces used in Fourier analysis. L2 to innite- L 2 . Review the Fourier Analysis (Section 7.1) for an overview of all Figure 11.1: Mapping L2 (R) in the time domain to L2 (R) in the frequency domain. For more information on the characteristics of the CTFT, please look at the module on Properties of the Fourier Transform (Section 11.2). 11.1.3 Example Problems Exercise 11.1 Find the Fourier Transform (CTFT) of the function (Solution on p. 196.) e−(αt) if t ≥ 0 f (t) = 0 otherwise (11.3) Exercise 11.2 Find the inverse Fourier transform of the square wave dened as (Solution on p. 196.) 1 if |Ω| ≤ M X (Ω) = 0 otherwise (11.4) 11.2 Properties of the Continuous-Time Fourier Transform 4 "Derivation of the Fourier Transform" <http://cnx.org/content/m0046/latest/> 5 This content is available online at <http://cnx.org/content/m10100/2.14/>. 5 This module will look at some of the basic properties of the Continuous-Time Fourier Transform (Section 11.1) (CTFT). The rst section contains a table that illustrates the properties, and the sections following 193 it discuss a few of the more interesting properties in more depth. In the table, click on the operation name to be taken to the properties explanation found later on this page. Look at this module (Section 5.6) for an expanded table of more Fourier transform properties. note: We will be discussing these properties for aperiodic, continuous-time signals but understand that very similar properties hold for discrete-time signals and periodic signals as well. 11.2.1 Table of CTFT Properties Operation Name Addition (Section 11.2.2.1: earity) Scalar Multiplication (SecLin- Signal ( f (t) ) f1 (t) + f2 (t) αf (t) F (t) f (αt) f (t − τ ) f (t) ejφt Transform ( F (ω) ) F1 (ω ) + F2 (ω ) αF (t) 2πf (−ω ) 1 |α| F ω α tion 11.2.2.1: Linearity) Symmetry Symmetry) Time Scaling (Section 11.2.2.3: Time Scaling) Time Shift (Section 11.2.2.4: (Section 11.2.2.2: F (ω ) e−(jωτ ) F (ω − φ) Time Shifting) Modulation (Section (Frequency Shift) 11.2.2.5: Modulation (Frequency Shift)) Convolution in Time (Sec- (f1 (t) , f2 (t)) f1 (t) f2 (t) dn dtn f F1 (t) F2 (t) 1 2π tion 11.2.2.6: Convolution) Convolution in Frequency (Section 11.2.2.6: Convolution) Dierentiation (Section 11.2.2.7: Time Dierentiation) Table 11.1 (F1 (t) , F2 (t)) n (t) (jω ) F (ω ) 11.2.2 Discussion of Fourier Transform Properties After glancing at the above table and getting a feel for the properties of the CTFT, we will now take a little more time to discuss some of the more interesting, and more useful, properties. 11.2.2.1 Linearity The combined addition and scalar multiplication properties in the table above demonstrate the basic property of linearity. What you should see is that if one takes the Fourier transform of a linear combination of signals then it will be the same as the linear combination of the Fourier transforms of each of the individual signals. This is crucial when using a table (Section 10.3) of transforms to nd the transform of a more complicated signal. 194 CHAPTER 11. 
CONTINUOUS TIME FOURIER TRANSFORM (CTFT) Example 11.1 We will begin with the following signal: z (t) = αf1 (t) + αf2 (t) combination of the terms is unaected by the transform. (11.5) Now, after we take the Fourier transform, shown in the equation below, notice that the linear Z (ω ) = αF1 (ω ) + αF2 (ω ) (11.6) 11.2.2.2 Symmetry Symmetry is a property that can make life quite easy when solving problems involving Fourier transforms. Basically what this property says is that since a rectangular function in time is a sinc function in frequency, then a sinc function in time will be a rectangular function in frequency. This is a direct result of the similarity between the forward CTFT and the inverse CTFT. The only dierence is the scaling by reversal. 2π and a frequency 11.2.2.3 Time Scaling This property deals with the eect on the frequency-domain representation of a signal if the time variable is altered. The most important concept to understand for the time scaling property is that signals that are narrow in time will be broad in frequency and vice versa. The simplest example of this is a delta function, a unit pulse frequency. The table above shows this idea for the general transformation from the time-domain to the frequencydomain of a signal. You should be able to easily notice that these equations show the relationship mentioned previously: if the time variable is increased then the frequency range will be decreased. 6 with a very small duration, in time that becomes an innite-length constant function in 11.2.2.4 Time Shifting Time shifting shows that a shift in time is equivalent to a linear phase shift in frequency. Since the frequency content depends only on the shape of a signal, which is unchanged in a time shift, then only the phase spectrum will be altered. This property can be easily proved using the Fourier Transform, so we will show the basic steps below: Example 11.2 We will begin by letting expression substituted in for z (t) = f (t − τ ). z (t). Z (ω ) = Now let us take the Fourier transform with the previous ∞ f (t − τ ) e−(jωt) dt −∞ (11.7) Through the calculations below, Now let us make a simple change of variables, where the frequency domain. σ = t − τ. you can see that only the variable in the exponential are altered thus only changing the phase in Z (ω ) = = =e ∞ f (σ ) e−(jω(σ+τ )t) dτ −∞ ∞ e−(jωτ ) −∞ f (σ ) e−(jωσ) dσ −(jωτ ) (11.8) F (ω ) 6 "Elemental Signals": Section Pulse <http://cnx.org/content/m0004/latest/#pulsedef> 195 11.2.2.5 Modulation (Frequency Shift) Modulation is absolutely imperative to communications applications. Being able to shift a signal to a dierent frequency, allows us to take advantage of dierent parts of the electromagnetic spectrum is what allows us to transmit television, radio and other applications through the same space without signicant interference. The proof of the frequency shift property is very similar to that of the time shift (Section 11.2.2.4: Time Shifting); however, here we would use the inverse Fourier transform in place of the Fourier transform. Since we went through the steps in the previous, time-shift proof, below we will just show the initial and nal step to this proof: z (t) = 1 2π ∞ F (ω − φ) ejωt dω −∞ (11.9) Now we would simply reduce this equation through another change of variables and simplify the terms. 
11.2.2.6 Convolution

Convolution is one of the big reasons for converting signals to the frequency domain, since convolution in time becomes multiplication in frequency. This property is also another excellent example of symmetry between time and frequency. It also shows that there may be little to gain by changing to the frequency domain when multiplication in time is involved.

We will introduce the convolution integral here, but if you have not seen it before or need to refresh your memory, look at the continuous-time convolution (Section 3.2) module for a more in-depth explanation and derivation.

$$y(t) = (f_1 * f_2)(t) = \int_{-\infty}^{\infty} f_1(\tau)\, f_2(t-\tau)\, d\tau \qquad (11.11)$$

11.2.2.7 Time Differentiation

Since LTI (Section 2.1) systems can be represented in terms of differential equations, it is apparent from this property that converting to the frequency domain may allow us to convert those complicated differential equations into simpler equations involving multiplication and addition. This is looked at in more detail during the study of the Laplace transform (Section 13.1).

Solutions to Exercises in Chapter 11

Solution to Exercise 11.1
In order to calculate the Fourier transform, all we need to use is (11.1) (the continuous-time Fourier transform), complex exponentials (Section 1.6), and basic calculus:

$$F(\Omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\Omega t}\, dt = \int_0^{\infty} e^{-\alpha t} e^{-j\Omega t}\, dt = \int_0^{\infty} e^{-t(\alpha + j\Omega)}\, dt = \frac{-1}{\alpha + j\Omega}(0 - 1) \qquad (11.12)$$

$$F(\Omega) = \frac{1}{\alpha + j\Omega} \qquad (11.13)$$

Solution to Exercise 11.2
Here we will use (11.2) (the inverse CTFT) to find the inverse FT:

$$x(t) = \frac{1}{2\pi} \int_{-M}^{M} e^{j\Omega t}\, d\Omega = \frac{1}{2\pi j t}\left( e^{jMt} - e^{-jMt} \right) = \frac{1}{\pi t} \sin(Mt) \qquad (11.14)$$

$$x(t) = \frac{M}{\pi}\, \mathrm{sinc}\!\left(\frac{Mt}{\pi}\right) \qquad (11.15)$$

Chapter 12 Sampling Theorem

12.1 Sampling

12.1.1 Introduction

The digital computer can process discrete-time signals using extremely flexible and powerful algorithms. However, most signals of interest are continuous-time, which is how they almost always appear in nature. This module introduces the idea of translating continuous-time problems into discrete-time ones through sampling, and you can read on to learn more of the details and importance of sampling.

Key Questions
- How do we turn a continuous-time signal into a discrete-time signal (sampling, A/D)?
- When can we reconstruct (Section 12.2) a CT signal exactly from its samples (reconstruction, D/A)?
- Manipulating the DT signal does what to the reconstructed signal?

12.1.2 Sampling

Sampling (and reconstruction) are best understood in the frequency domain. We'll start by looking at some examples.

Exercise 12.1 (Solution at the end of the chapter.)
What CT signal $f(t)$ has the CTFT (Section 11.1) shown below?

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\, d\omega$$

Figure 12.1: The CTFT of $f(t)$.

Hint: $F(j\omega) = F_1(j\omega) * F_2(j\omega)$, where the two parts of $F(j\omega)$ are shown in Figure 12.2.

Exercise 12.2 (Solution at the end of the chapter.)
What DT signal $f_s[n]$ has the DTFT (Section 10.4) shown below?

$$f_s[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_s\!\left(e^{j\omega}\right) e^{j\omega n}\, d\omega$$

Figure 12.3: DTFT that is a periodic (with period $2\pi$) version of $F(j\omega)$ in Figure 12.1.

Figure 12.4: $f(t)$ is the continuous-time signal above, and $f_s[n]$ is the discrete-time, sampled version of $f(t)$.

12.1.2.1 Generalization

Of course, the results from the above examples can be generalized to any $f(t)$ with $F(j\omega) = 0$ for $|\omega| > \pi$, i.e., any $f(t)$ that is bandlimited to $[-\pi, \pi]$.
Figure 12.5: $F(j\omega)$ is the CTFT of $f(t)$.

Figure 12.6: $F_s\!\left(e^{j\omega}\right)$ is the DTFT of $f_s[n]$, the signal sampled at the integers; it is a periodic (Section 6.1) (with period $2\pi$) version of $F(j\omega)$.

Conclusion: If $f(t)$ is bandlimited to $[-\pi, \pi]$, then the DTFT of the sampled version $f_s[n] = f(n)$ is just a periodic (with period $2\pi$) version of $F(j\omega)$.

12.1.3 Turning a Discrete Signal into a Continuous Signal

Now, let's look at turning a DT signal into a continuous-time signal. Let $f_s[n]$ be a DT signal with DTFT $F_s\!\left(e^{j\omega}\right)$ (Figure 12.7).

Now, set

$$f_{imp}(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \delta(t-n)$$

The CT signal $f_{imp}(t)$ is nonzero only on the integers, where there are impulses of height $f_s[n]$ (Figure 12.8).

Exercise 12.3 (Solution at the end of the chapter.)
What is the CTFT of $f_{imp}(t)$?

Now, given the samples $f_s[n]$ of a signal bandlimited to $[-\pi, \pi]$, our next step will be to see how we can reconstruct (Section 12.2) $f(t)$.

Figure 12.9: Block diagram showing the very basic steps used to reconstruct $f(t)$. Can we make our result equal $f(t)$ exactly?

12.2 Reconstruction

12.2.1 Introduction

The reconstruction process begins by taking a sampled signal, which is in discrete time, and performing a few operations in order to convert it into continuous time and, with any luck, into an exact copy of the original signal. A basic method used to reconstruct a $[-\pi, \pi]$ bandlimited signal from its samples on the integers is to do the following steps:

- turn the sample sequence $f_s[n]$ into an impulse train $f_{imp}(t)$
- lowpass filter $f_{imp}(t)$ to get the reconstruction $\tilde{f}(t)$ (cutoff frequency $= \pi$)

Figure 12.10: Reconstruction block diagram with lowpass filter (LPF). The lowpass filter's impulse response is $g(t)$.

The following equations allow us to reconstruct our signal (Figure 12.11):

$$\tilde{f}(t) = g(t) * f_{imp}(t) = g(t) * \sum_{n=-\infty}^{\infty} f_s[n]\, \delta(t-n) = \sum_{n=-\infty}^{\infty} f_s[n]\, (g(t) * \delta(t-n)) = \sum_{n=-\infty}^{\infty} f_s[n]\, g(t-n) \qquad (12.1)$$

12.2.1.1 Examples of Filters g

Example 12.1: Zero Order Hold
This type of "filter" is one of the most basic reconstruction filters. It simply holds the value that is in $f_s[n]$ for $\tau$ seconds. This creates a block- or step-like function where each value of $f_s[n]$ is simply dragged over to the next pulse. The equation below and the illustrations in Figure 12.12 depict how this reconstruction filter works (a numerical sketch appears after Example 12.2), with

$$g(t) = \begin{cases} 1 & \text{if } 0 < t < \tau \\ 0 & \text{otherwise} \end{cases}$$

$$\tilde{f}(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, g(t-n) \qquad (12.2)$$

Figure 12.12: Zero Order Hold.

Question: How does $\tilde{f}(t)$ reconstructed with a zero order hold compare to the original $f(t)$ in the frequency domain?

Example 12.2: Nth Order Hold
Here we will look at a few quick examples of variations on the zero order hold filter discussed in the previous example.

Figure 12.13: Nth Order Hold Examples (an nth order hold is equal to an nth order B-spline). (a) First Order Hold. (b) Second Order Hold. (c) ∞ Order Hold.
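Here is a minimal Python/NumPy sketch of the zero-order hold from Example 12.1 (our own illustration; the sampled signal is an arbitrary choice and $\tau$ is taken to be one sample period, as in the figure):

import numpy as np

n = np.arange(-5, 6)                 # integer sample times
fs_n = np.sinc(n / 2)                # samples of an arbitrary bandlimited signal
t = np.linspace(-5, 6, 1101)         # dense "continuous" time axis

# Zero-order hold with tau = 1: for n <= t < n+1, output the sample fs_n[n].
# (The small epsilon guards against floating-point floor() at integer times.)
idx = np.clip(np.floor(t + 1e-9).astype(int) - n[0], 0, n.size - 1)
f_tilde = fs_n[idx]                  # the staircase reconstruction of Figure 12.12
print(f_tilde[0], f_tilde[550])      # fs_n at n = -5, and the sample held at t = 0.5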
12.2.2 Ultimate Reconstruction Filter

Question: What is the ultimate reconstruction filter?

Recall our current reconstruction block diagram (Figure 12.14). Note that each of these signals has its own corresponding CTFT or DTFT.

If $G(j\omega)$ has the shape of the ideal lowpass filter shown in Figure 12.15, then $\tilde{f}(t) = f(t)$. Therefore, an ideal lowpass filter will give us perfect reconstruction! In the time domain, its impulse response is

$$g(t) = \frac{\sin(\pi t)}{\pi t} \qquad (12.3)$$

$$\tilde{f}(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, g(t-n) = \sum_{n=-\infty}^{\infty} f_s[n]\, \frac{\sin(\pi(t-n))}{\pi(t-n)} = f(t) \qquad (12.4)$$

12.2.3 Amazing Conclusions

If $f(t)$ is bandlimited to $[-\pi, \pi]$, it can be reconstructed perfectly from its samples on the integers, $f_s[n] = f(t)|_{t=n}$:

$$f(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \frac{\sin(\pi(t-n))}{\pi(t-n)} \qquad (12.5)$$

The above equation for perfect reconstruction deserves a closer look (Section 12.3), which you should continue to read in the following section to get a better understanding of reconstruction. Here are a few things to think about for now:

- What does $\frac{\sin(\pi(t-n))}{\pi(t-n)}$ equal at integers other than $n$?
- What is the support of $\frac{\sin(\pi(t-n))}{\pi(t-n)}$?

12.3 More on Reconstruction

12.3.1 Introduction

In the previous module on reconstruction (Section 12.2), we gave an introduction to how reconstruction works and briefly derived an equation used to perform perfect reconstruction. Let us now take a closer look at the perfect reconstruction formula:

$$f(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \frac{\sin(\pi(t-n))}{\pi(t-n)} \qquad (12.6)$$

We are writing $f(t)$ in terms of shifted and scaled sinc functions:

$$\left\{ \frac{\sin(\pi(t-n))}{\pi(t-n)} \right\}_{n \in \mathbb{Z}}$$

is a basis (Section 5.1.3) for the space of $[-\pi, \pi]$ bandlimited signals.

12.3.1.1 Derive Reconstruction Formulas

What is

$$\left\langle \frac{\sin(\pi(t-n))}{\pi(t-n)},\; \frac{\sin(\pi(t-k))}{\pi(t-k)} \right\rangle = ? \qquad (12.7)$$

This inner product (Section 15.3) can be hard to calculate in the time domain, so let's use the Plancherel theorem (Section 15.12):

$$\langle \mathrm{sinc}_n, \mathrm{sinc}_k \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-j\omega n}\, e^{j\omega k}\, d\omega \qquad (12.8)$$

If $n = k$:

$$\langle \mathrm{sinc}_n, \mathrm{sinc}_k \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-j\omega n}\, e^{j\omega n}\, d\omega = 1 \qquad (12.9)$$

If $n \neq k$:

$$\langle \mathrm{sinc}_n, \mathrm{sinc}_k \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j\omega(k-n)}\, d\omega = \frac{\sin(\pi(k-n))}{\pi(k-n)} = 0 \qquad (12.10)$$

note: In (12.10) we used the fact that the integral of a sinusoid over a complete period is 0, since $k - n$ is a nonzero integer.

So,

$$\left\langle \frac{\sin(\pi(t-n))}{\pi(t-n)},\; \frac{\sin(\pi(t-k))}{\pi(t-k)} \right\rangle = \begin{cases} 1 & \text{if } n = k \\ 0 & \text{if } n \neq k \end{cases} \qquad (12.11)$$

Therefore

$$\left\{ \frac{\sin(\pi(t-n))}{\pi(t-n)} \right\}_{n \in \mathbb{Z}}$$

is an orthonormal basis (Section 15.7.3) (ONB) for the space of $[-\pi, \pi]$ bandlimited functions.

Sampling: Sampling is the same as calculating ONB coefficients, which are inner products with sincs.

12.3.1.2 Summary

One last time, for $f(t)$ bandlimited to $[-\pi, \pi]$:

Synthesis: $\displaystyle f(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \frac{\sin(\pi(t-n))}{\pi(t-n)} \qquad (12.12)$

Analysis: $f_s[n] = f(t)|_{t=n} \qquad (12.13)$

In order to understand a little more about how we can reconstruct a signal exactly, it will be useful to examine the relationships between the Fourier transforms (CTFT and DTFT) in more depth (see "Examining Reconstruction Relations", http://cnx.org/content/m10799/latest/).
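The synthesis formula translates directly into code. The following Python/NumPy sketch (our illustration, assuming integer-spaced samples and truncating the infinite sum) rebuilds a bandlimited signal from its samples with shifted sincs and checks the error on a dense grid:

import numpy as np

f = lambda t: np.sinc(t / 2)          # bandlimited to [-pi/2, pi/2], inside [-pi, pi]
n = np.arange(-60, 61)                # truncation of the infinite sum (approximation)
fs_n = f(n)

t = np.linspace(-5, 5, 1001)
# f(t) = sum_n fs[n] sin(pi(t-n))/(pi(t-n)); note np.sinc(x) = sin(pi x)/(pi x)
f_rec = np.array([np.sum(fs_n * np.sinc(tt - n)) for tt in t])
print(np.max(np.abs(f_rec - f(t))))   # small; shrinks as more terms are kept

The residual error here comes entirely from truncating the sum; with all (infinitely many) samples the reconstruction would be exact, as (12.12) states.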
12.4 Nyquist Theorem

12.4.1 Introduction

Earlier you should have been exposed to the concepts behind sampling (Section 12.1) and the sampling theorem. While learning about these ideas, you should have begun to notice that if we sample at too low a rate, there is a chance that our original signal will not be uniquely defined by our sampled signal. If this happens, then there is no guarantee that we can correctly reconstruct (Section 12.2) the signal. As a result of this, the Nyquist Theorem was created. Below, we will discuss just what exactly this theorem tells us.

12.4.2 Nyquist Theorem

We will let $T$ equal our sampling period (distance between samples), and let $\Omega_s = \frac{2\pi}{T}$ (the sampling frequency in radians/sec). We have seen that if $f(t)$ is bandlimited to $[-\Omega_B, \Omega_B]$ and we sample with period

$$T < \frac{\pi}{\Omega_B} \;\Leftrightarrow\; \Omega_s > 2\Omega_B$$

then we can reconstruct $f(t)$ from its samples.

Theorem 12.1: Nyquist Theorem ("Fundamental Theorem of DSP")
If $f(t)$ is bandlimited to $[-\Omega_B, \Omega_B]$, we can reconstruct it perfectly from its samples $f_s[n] = f(nT)$ for $\Omega_s = \frac{2\pi}{T} > 2\Omega_B$.

$\Omega_N = 2\Omega_B$ is called the "Nyquist frequency" for $f(t)$. For perfect reconstruction to be possible,

$$\Omega_s \geq 2\Omega_B$$

where $\Omega_s$ is the sampling frequency and $\Omega_B$ is the highest frequency in the signal.

Figure 12.17: Illustration of the Nyquist frequency.

Example 12.3
- The human ear hears frequencies up to 20 kHz, so the CD sample rate is 44.1 kHz.
- A phone line passes frequencies up to 4 kHz, so the phone company samples at 8 kHz.

12.4.2.1 Reconstruction

The reconstruction formula in the time domain looks like

$$f(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \frac{\sin\!\left(\frac{\pi}{T}(t-nT)\right)}{\frac{\pi}{T}(t-nT)}$$

We can conclude, just as before, that

$$\left\{ \frac{\sin\!\left(\frac{\pi}{T}(t-nT)\right)}{\frac{\pi}{T}(t-nT)} \right\}_{n \in \mathbb{Z}}$$

is a basis (Section 5.1) for the space of $[-\Omega_B, \Omega_B]$ bandlimited functions, with $\Omega_B = \frac{\pi}{T}$. The expansion coefficients for this basis are calculated by sampling $f(t)$ at rate $\frac{2\pi}{T} = 2\Omega_B$.

note: The basis is also orthogonal. To make it orthonormal (Section 15.8), we need a normalization factor of $\sqrt{T}$.

12.4.2.2 The Big Question

Exercise 12.4 (Solution at the end of the chapter.)
What if $\Omega_s < 2\Omega_B$? What happens when we sample below the Nyquist rate?

12.5 Aliasing

12.5.1 Introduction

When considering the reconstruction (Section 12.2) of a signal, you should already be familiar with the idea of the Nyquist rate (Section 12.4). This concept allows us to find the sampling rate that will provide for perfect reconstruction of our signal. If we sample at too low a rate (below the Nyquist rate), then problems arise that make perfect reconstruction impossible; this problem is known as aliasing. Aliasing occurs when there is an overlap in the shifted, periodic copies of our original signal's FT, i.e. spectrum.

In the frequency domain, one will notice that part of the signal overlaps with the periodic copies next to it. In this overlap the frequency values are added together and the shape of the signal's spectrum is undesirably altered. This overlapping, or aliasing, makes it impossible to correctly determine the strength of the affected frequencies. Figure 12.18 provides a visual example of this phenomenon.

Figure 12.18: The spectrum of some bandlimited (to W Hz) signal is shown in the top plot. If the sampling interval $T_s$ is chosen too large relative to the bandwidth W, aliasing will occur. In the bottom plot, the sampling interval is chosen sufficiently small to avoid aliasing. Note that if the signal were not bandlimited, the component spectra would always overlap.

12.5.2 Aliasing and Sampling

If we sample too slowly, i.e.,

$$\Omega_s < 2\Omega_B \;\Leftrightarrow\; T > \frac{\pi}{\Omega_B}$$

we cannot recover the signal from its samples due to aliasing; the sketch below makes the resulting ambiguity concrete.
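Two sinusoids whose frequencies differ by a multiple of $\Omega_s$ produce exactly the same samples. A brief Python/NumPy sketch (our illustration; the frequencies are arbitrary choices):

import numpy as np

T = 1.0                                  # sampling period, so Omega_s = 2*pi
n = np.arange(50)
x1 = np.cos(2 * np.pi * 0.2 * n * T)     # Omega_1 = 0.4*pi, below Nyquist
x2 = np.cos(2 * np.pi * 1.2 * n * T)     # Omega_2 = Omega_1 + Omega_s: an alias
print(np.max(np.abs(x1 - x2)))           # ~0 (machine precision): identical samples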
Example 12.4
Let $f_1(t)$ have the CTFT shown in Figure 12.19.

Figure 12.19: In this figure, note the following relation: $\Omega_B - \frac{\Omega_s}{2} = a$.

Let $f_2(t)$ have the CTFT shown in Figure 12.20.

Figure 12.20: The horizontal portions of the spectrum result from overlap with shifted replicas, showing visual proof of aliasing.

Try to sketch and answer the following questions on your own:

- What does the DTFT of $f_{1,s}[n] = f_1(nT)$ look like?
- What does the DTFT of $f_{2,s}[n] = f_2(nT)$ look like?
- Do any other signals have the same DTFT as $f_{1,s}[n]$ and $f_{2,s}[n]$?

CONCLUSION: If we sample below the Nyquist frequency, there are many signals that could have produced the given sample sequence (Figure 12.21: These are all equal!).

Why the term "aliasing"? Because the same sample sequence can represent different CT signals (as opposed to when we sample above the Nyquist frequency, where the sample sequence represents a unique CT signal).

Figure 12.22: These two signals contain the same four samples, yet are very different signals.

Example 12.5
Consider the signal

$$f(t) = \cos(2\pi t)$$

Figure 12.23: The cosine function, $f(t) = \cos(2\pi t)$, and its CTFT.

Case 1: Sample at $\Omega_s = 8\pi$ rad/sec $\Rightarrow T = \frac{1}{4}$ sec.

note: Here $\Omega_s > 2\Omega_B$.

Case 2: Sample at $\Omega_s = \frac{8\pi}{3}$ rad/sec $\Rightarrow T = \frac{3}{4}$ sec.

note: Here $\Omega_s < 2\Omega_B$.

When we run the DTFT from Case 2 through the reconstruction steps, we realize that we end up with the following cosine:

$$\tilde{f}(t) = \cos\!\left(\frac{\pi}{2} t\right)$$

This is a "stretched out" version of our original. Clearly, our sampling rate was not high enough to ensure correct reconstruction from the samples.

You may have seen some effects of aliasing, such as a wagon wheel turning backwards in a western movie. Aliasing in images (http://ptolemy.eecs.berkeley.edu/eecs20/week13/moire.html) can result in Moire patterns; scanning an image at too low a frequency produces Moire artifacts (http://www.dvp.co.il/filter/moire.html).

12.6 Anti-Aliasing Filters

12.6.1 Introduction

The idea of aliasing (Section 12.5) has been described as the problem that occurs if a signal is not sampled (Section 12.1) at a high enough rate (for example, below the Nyquist frequency (Section 12.4)). But exactly what kind of distortion does aliasing produce (Figure 12.24)?

High frequencies in the original signal "fold back" into lower frequencies. High frequencies masquerading as lower frequencies produce highly undesirable artifacts in the reconstructed signal.

warning: We must avoid aliasing any way we can.

12.6.2 Avoiding Aliasing

What if it is impractical or impossible to sample at $\Omega_s > 2\Omega_B$? Filter out the frequencies above $\frac{\Omega_s}{2}$ before you sample. The best way to visualize doing this is to imagine the following simple steps:

1. Take the CTFT of the signal, $f(t)$.
2. Send this signal through a lowpass filter with cutoff $\omega_c = \frac{\Omega_s}{2}$.
3. We now have a graph of our signal in the frequency domain with all values of $|\omega| > \frac{\Omega_s}{2}$ equal to zero. Take the inverse CTFT to get back a continuous-time signal, $f_a(t)$.
4. And finally we are ready to sample our signal! (A rough code sketch of filter-then-sample follows the example below.)

Example 12.6
The sample rate for a CD is 44.1 kHz. Many musical instruments (e.g. a highhat) contain frequencies above 22 kHz (even though we cannot hear them). Because of this, we can filter the output signal from the instrument before we sample it, using a filter like the one in Figure 12.25.

Figure 12.25: This filter will cut off the higher, unnecessary frequencies, where $|\omega| > 2\pi \cdot 22\,\text{kHz}$.

Now the signal is ready to be sampled!
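Digitally, the same recipe appears whenever a high-rate signal is reduced to a lower rate: lowpass filter first, then subsample. The sketch below is our own rough illustration, with a Butterworth filter standing in for the ideal filter of Figure 12.25 and an assumed high "pseudo-analog" rate of four times the CD rate:

import numpy as np
from scipy import signal

fs_hi = 176_400                  # high "pseudo-analog" rate, 4x CD (assumption)
fs_cd = 44_100                   # CD rate
t = np.arange(0, 0.02, 1 / fs_hi)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)  # 30 kHz would alias

# 8th-order Butterworth lowpass with cutoff just below fs_cd / 2 = 22.05 kHz
sos = signal.butter(8, 21_000, btype="low", fs=fs_hi, output="sos")
x_filt = signal.sosfilt(sos, x)

x_sampled = x_filt[::fs_hi // fs_cd]   # "sample" at 44.1 kHz, now safely below Nyquist

A real Butterworth filter only approximates the ideal brick wall, which is why practical systems leave a guard band between the cutoff and $\frac{\Omega_s}{2}$.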
Example 12.7: Another Example
Speech bandwidth extends beyond $\pm 2\pi \cdot 20$ kHz, but speech is perfectly intelligible when lowpass filtered to a $\pm 2\pi \cdot 4$ kHz range. Because of this, we can take a normal speech signal and pass it through a filter like the one shown in Figure 12.25, where we now set $\omega_c = 2\pi \cdot 4$ kHz. The signal we receive from this filter only contains values for $|\omega| \leq 8000\pi$. Now we can sample at $\Omega_s = 16000\pi$ rad/sec, i.e. 8 kHz, the standard telephony rate.

12.7 Discrete Time Processing of Continuous Time Signals

Figure 12.26: DSP system.

How is the CTFT of $y(t)$ related to the CTFT of $f(t)$ in the DSP system of Figure 12.26?

Let $G(j\omega)$ be the reconstruction filter's frequency response. Then

$$Y(j\omega) = G(j\omega)\, Y_{imp}(j\omega)$$

where $Y_{imp}(j\omega)$ is the CTFT of the impulse sequence created from $y_s[n]$. So,

$$Y(j\omega) = G(j\omega)\, Y_s\!\left(e^{j\omega T}\right) = G(j\omega)\, H\!\left(e^{j\omega T}\right) F_s\!\left(e^{j\omega T}\right)$$

and

$$Y(j\omega) = G(j\omega)\, H\!\left(e^{j\omega T}\right) \frac{1}{T} \sum_{r=-\infty}^{\infty} F\!\left(j\left(\omega - \frac{2\pi r}{T}\right)\right)$$

Now, let's assume that $f(t)$ is bandlimited to $\left[-\frac{\Omega_s}{2}, \frac{\Omega_s}{2}\right] = \left[-\frac{\pi}{T}, \frac{\pi}{T}\right]$ and that $G(j\omega)$ is a perfect reconstruction filter. Then

$$Y(j\omega) = \begin{cases} F(j\omega)\, H\!\left(e^{j\omega T}\right) & \text{if } |\omega| \leq \frac{\pi}{T} \\ 0 & \text{otherwise} \end{cases}$$

note: $Y(j\omega)$ has the same "bandlimit" as $F(j\omega)$.

So, for bandlimited signals, and with a high enough sampling rate and a perfect reconstruction filter (Figure 12.27: FTs of the original (analog) signal f(t) and the sampled version of f(t)), the DSP system is equivalent to using an analog LTI filter (Figure 12.28: Implementing a discrete-time filter H in analog), where

$$H_a(j\omega) = \begin{cases} H\!\left(e^{j\omega T}\right) & \text{if } |\omega| \leq \frac{\pi}{T} \\ 0 & \text{otherwise} \end{cases}$$

So, by being careful, we can implement LTI systems for bandlimited signals on our computer!

Important note: $H_a(j\omega)$ is the filter induced by our system. $H_a(j\omega)$ is LTI only if
- $h$, the DT system, is LTI, and
- $F(j\omega)$, the input, is bandlimited and the sample rate is high enough.

Solutions to Exercises in Chapter 12

Solution to Exercise 12.1

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\, d\omega$$

Since $F(j\omega) = 0$ outside of $[-2, 2]$,

$$f(t) = \frac{1}{2\pi} \int_{-2}^{2} F(j\omega)\, e^{j\omega t}\, d\omega$$

Solution to Exercise 12.2
Since we only use one interval to reconstruct $f_s[n]$ from its DTFT, we have

$$f_s[n] = \frac{1}{2\pi} \int_{-2}^{2} F_s\!\left(e^{j\omega}\right) e^{j\omega n}\, d\omega$$

Since $F(j\omega) = F_s\!\left(e^{j\omega}\right)$ on $[-2, 2]$,

$$f_s[n] = f(t)|_{t=n}$$

i.e. $f_s[n]$ is a sampled version of $f(t)$.

Solution to Exercise 12.3

$$f_{imp}(t) = \sum_{n=-\infty}^{\infty} f_s[n]\, \delta(t-n)$$

$$F_{imp}(j\omega) = \int_{-\infty}^{\infty} f_{imp}(t)\, e^{-j\omega t}\, dt = \sum_{n=-\infty}^{\infty} f_s[n] \int_{-\infty}^{\infty} \delta(t-n)\, e^{-j\omega t}\, dt = \sum_{n=-\infty}^{\infty} f_s[n]\, e^{-j\omega n} = F_s\!\left(e^{j\omega}\right) \qquad (12.14)$$

So, the CTFT of $f_{imp}(t)$ is equal to the DTFT of $f_s[n]$.

note: We used the sifting property to show $\int_{-\infty}^{\infty} \delta(t-n)\, e^{-j\omega t}\, dt = e^{-j\omega n}$.

Solution to Exercise 12.4
Go through the steps (see Figure 12.29). Finally, what will happen to $F_s\!\left(e^{j\omega}\right)$ now? To answer this final question, we will need to look into the concept of aliasing (Section 12.5).

Chapter 13 Laplace Transform and System Design

13.1 The Laplace Transform

The Laplace transform is a generalization of the continuous-time Fourier transform (Section 11.1). However, instead of using complex sinusoids (Section 7.2) of the form $e^{j\omega t}$, as the CTFT does, the Laplace transform uses the more general $e^{st}$, where $s = \sigma + j\omega$. Although Laplace transforms are rarely solved by integration (tables (Section 13.3) and computers (e.g. Matlab) are much more common), we will provide the bilateral Laplace transform pair here. These define the forward and inverse Laplace transformations.
Notice the similarities between the forward and inverse transforms below. This will give rise to many of the same symmetries found in Fourier analysis (Section 7.1).

Laplace Transform

$$F(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\, dt \qquad (13.1)$$

Inverse Laplace Transform

$$f(t) = \frac{1}{2\pi j} \int_{c-j\infty}^{c+j\infty} F(s)\, e^{st}\, ds \qquad (13.2)$$

13.1.1 Finding the Laplace and Inverse Laplace Transforms

13.1.1.1 Solving the Integral

Probably the most difficult and least used method for finding the Laplace transform of a signal is solving the integral. Although it is technically possible, it is extremely time consuming. Given how easy the next two methods are, we will not say more about it here. The integrals are primarily there in order to understand where the following methods originate from.

13.1.1.2 Using a Computer

Using a computer to find Laplace transforms is relatively painless. Matlab has two functions, laplace and ilaplace, both part of the symbolic toolbox, that will find the Laplace and inverse Laplace transforms respectively. This method is generally preferred for more complicated functions. Simpler and more contrived functions are usually found easily enough by using tables (Section 13.1.1.3).

13.1.1.3 Using Tables

When first learning about the Laplace transform, tables are the most common means of finding it. With enough practice, the tables themselves may become unnecessary, as the common transforms become second nature. For the purpose of this section, we will focus on the inverse Laplace transform, since most design applications begin in the Laplace domain and give rise to a result in the time domain. The method is as follows:

1. Write the function you wish to transform, $H(s)$, as a sum of other functions, $H(s) = \sum_{i=1}^{m} H_i(s)$, where each of the $H_i$ is known from a table (Section 13.3).
2. Invert each $H_i(s)$ to get its $h_i(t)$.
3. Sum up the $h_i(t)$ to get $h(t) = \sum_{i=1}^{m} h_i(t)$.

Example 13.1
Compute $h(t)$ for $H(s) = \frac{1}{s+5}$, $\mathrm{Re}(s) > -5$. This can be solved directly from the table (Section 13.3): $h(t) = e^{-5t}\, u(t)$.

Example 13.2
Find the time-domain representation, $h(t)$, of $H(s) = \frac{25}{s+10}$, $\mathrm{Re}(s) > -10$. To solve this, we first notice that $H(s)$ can also be written as $25 \cdot \frac{1}{s+10}$. We can then go to the table (Section 13.3) to find $h(t) = 25\, e^{-10t}\, u(t)$.

Example 13.3
We can now extend the two previous examples by finding $h(t)$ for $H(s) = \frac{1}{s+5} + \frac{25}{s+10}$, $\mathrm{Re}(s) > -5$. To do this, we take advantage of the additive property of linearity and the three-step method described above to yield the result $h(t) = e^{-5t}\, u(t) + 25\, e^{-10t}\, u(t)$.

For more complicated examples, it may be more difficult to break up the transfer function into parts that exist in a table. In this case, it is often necessary to use partial fraction expansion ("Partial Fraction Expansion", http://cnx.org/content/m2111/latest/) to get the transfer function into a more usable form.
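The text mentions MATLAB's laplace and ilaplace; in Python, SymPy plays the same role. Here is a sketch (our own illustration, assuming SymPy is available) reproducing Example 13.1 and the inverse transform of Example 13.3:

import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Forward transform of e^{-5t}: should give 1/(s + 5) with ROC Re(s) > -5
F = sp.laplace_transform(sp.exp(-5 * t), t, s)
print(F)   # (1/(s + 5), -5, True)

# Inverse transform of Example 13.3's H(s)
h = sp.inverse_laplace_transform(1 / (s + 5) + 25 / (s + 10), s, t)
print(h)   # exp(-5*t)*Heaviside(t) + 25*exp(-10*t)*Heaviside(t), up to SymPy's formatting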
2 "Partial Fraction Expansion" <http://cnx.org/content/m2111/latest/> 221 real and imaginary sample plots (a) Figure 13.1: (b) Real and imaginary parts of H (s) are now each 3-dimensional surfaces. (a) The Real part of H (s) (b) The Imaginary part of H (s) magnitude and phase sample plots (a) Figure 13.2: (b) Magnitude and phase of H (s) are also each 3-dimensional surfaces. This representation is more common than real and imaginary parts. (a) The Magnitude of H (s) (b) The Phase of H (s) While these are legitimate ways of looking at a signal in the Laplace domain, it is quite dicult to draw and/or analyze. For this reason, a simpler method has been developed. Although it will not be discussed in detail here, the method of Poles and Zeros (Section 13.6) is much easier to understand and is the way both the Laplace transform and its discrete-time counterpart the Z-transform (Section 14.1) are represented graphically. 13.2 Properties of the Laplace Transform 3 This 3 content is available online at <http://cnx.org/content/m10117/2.10/>. 222 CHAPTER 13. LAPLACE TRANSFORM AND SYSTEM DESIGN Property Linearity Time Shifting Frequency Shifting (modulation) Signal αx1 (t) + βx2 (t) x (t − τ ) eηt x (t) Laplace Transform αX1 (s) + βX2 (s) e −(sτ ) Region of Convergence At least ROC1 ( ROC2 s−η X (s) ROC Shifted X (s − η ) ROC must be in the region of convergence) Time Scaling x (αt) (1 − |α|) X (s − α) Scaled ROC ( s−α must be in the region of convergence) Conjugation Convolution Time Dierentiation Frequency Dierentiation Integration in Time x (t) ∗ X (s ) ∗∗ ROC At least At least x1 (t) ∗ x2 (t) d dt x (t) X1 (t) X2 (t) sX (s) d ds X ROC1 ROC ROC2 (−t) x (t) t −∞ (s) ROC At least x (τ ) dτ (1 − s) X (s) ROC Table 13.1 Re (s) > 0 223 13.3 Table of Common Laplace Transforms Signal δ (t) δ (t − T ) u (t) − (u (−t)) tu (t) t u (t) − (t u (−t)) e −(λt) n n 4 Laplace Transform Region of Convergence 1 e −(sT ) 1 s 1 s 1 s2 n! sn+1 n! sn+1 1 s+λ 1 s+λ 1 (s−λ)2 n! (s+λ)n+1 n! (s+λ)n+1 s s2 +b2 b s2 +b2 s+a (s+a)2 +b2 b (s+a)2 +b2 n All All s s Re (s) > 0 Re (s) < 0 Re (s) > 0 Re (s) > 0 Re (s) < 0 Re (s) > −λ Re (s) < −λ Re (s) > −λ Re (s) > −λ Re (s) < −λ Re (s) > 0 Re (s) > 0 Re (s) > −a Re (s) > −a All Table 13.2 u (t) u (−t) u (t) u (t) − e−(λt) te −(λt) n −(λt) te − tn e−(λt) u (−t) cos (bt) u (t) sin (bt) u (t) e −(at) cos (bt) u (t) e−(at) sin (bt) u (t) dn dtn δ (t) s s 13.4 Region of Convergence for the Laplace Transform system's output to converge lie in the 5 With the Laplace transform (Section 13.1), the s-plane represents a set of signals (complex exponentials (Section 1.6)). For any given LTI (Section 2.1) system, some of these signals may cause the output of the system to converge, while others cause the output to diverge ("blow up"). The set of signals that cause the nd this region of convergence for any continuous-time, LTI system. region of convergence (ROC). This module will discuss how to Laplace Transform Recall the denition of the Laplace transform, ∞ H (s) = −∞ h (t) e−(st) dt h (t) = e−(at) u (t), e−((a+s)t) dt 0 we get the equation, (13.3) If we consider a causal (Section 1.1), complex exponential, ∞ ∞ e−(at) e−(st) dt = 0 (13.4) 4 This 5 This content is available online at <http://cnx.org/content/m10111/2.11/>. content is available online at <http://cnx.org/content/m10114/2.9/>. 224 CHAPTER 13. 
LAPLACE TRANSFORM AND SYSTEM DESIGN Evaluating this, we get −1 s+a t→∞ lim e−((s+a)t) − 1 (13.5) Notice that this equation will tend to innity when this happens, we take one more step by using s=σ lim e−((s+a)t) tends to innity. To understand when t→∞ + jω to realize this equation as (13.6) t→∞ Recognizing that lim e−(jωt) e−((σ+a)t) e−(σ(a)t) e−(jωt) is sinusoidal, it becomes apparent that is going to determine whether this blows up or not. What we nd is that if which will cause it to go to zero as σ+a is positive, the exponential will be to a negative power, t tends to innity. On the other hand, if σ+a is negative or zero, the exponential will not be to a negative power, which will prevent it from tending to zero and the system will not converge. What all of this tells us is that for a causal signal, we have convergence when Condition for Convergence Re (s) > −a (13.7) Although we will not go through the process again for anticausal signals, we could. In doing so, we would nd that the necessary condition for convergence is when Necessary Condition for Anti-Causal Convergence Re (s) < −a (13.8) 13.4.1 Graphical Understanding of ROC Perhaps the best way to look at the region of convergence is to view it in the s-plane. What we observe is that for a single pole, the region of convergence lies to the right of it for causal signals and to the left for anti-causal signals. (a) (b) Figure 13.3: (a) The Region of Convergence for a causal signal. (b) The Region of Convergence for an anti-causal signal. 225 Once we have recognized this, the natural question becomes: What do we do when we have multiple poles? The simple answer is that we take the intersection of all of the regions of convergence of the respective poles. Example 13.4 Find H (s) and state the region of convergence for h (t) = e−(at) u (t) + e−(bt) u (−t) Breaking this up into its two terms, we get transfer functions and respective regions of convergence of H1 (s) = and 1 , Re (s) > −a s+a −1 , Re (s) < −b s+b −b > Re (s) > −a. If (13.9) H2 (s) = (13.10) Combining these, we get a region of convergence of a > b, we can represent this graphically. Otherwise, there will be no region of convergence. Figure 13.4: The Region of Convergence of h (t) if a > b. 226 CHAPTER 13. LAPLACE TRANSFORM AND SYSTEM DESIGN 6 13.5 The Inverse Laplace Transform 13.5.1 To Come In The Transfer Function 7 we shall establish that the inverse Laplace transform of a function h is L−1 (h) (t) = where 1 2π ∞ e(c+yj )t h ((c + yj ) t) dy −∞ (13.11) j≡ √ −1 and the real number c is chosen so that all of the singularities of h lie to the left of the line of integration. 13.5.2 Proceeding with the Inverse Laplace Transform With the inverse Laplace transform one may express the solution of x = Bx + g , as (13.12) x (t) = L−1 (sI − B ) As an example, let us take the rst component of −1 (L {g} + x (0)) namely L {x}, Lx1 (s) = We dene: 0.19 s2 + 1.5s + 0.27 s+ 14 6 (s3 + 1.655s2 + 0.4078s + 0.0039) . Denition 13.1: poles Also called singularities, these are the points s at which Lx1 (s) blows up. These are clearly the roots of its denominator, namely √ −1/100, All four being negative, it suces to take −329/400 ± c=0 73 16 , and − 1/6. (13.13) and so the integration in (13.11) proceeds up the imaginary axis. We don't suppose the reader to have already encountered integration in the complex plane but hope that this example might provide the motivation necessary for a brief overview of such. 
Before that however we note that MATLAB has digested the calculus we wish to develop. Referring again to b3.m we note that the 8 ilaplace for details command produces −t −t 6 x1 (t) = 211.35e 100 − (0.0554t3 + 4.5464t2 + 1.085t + 474.19) e √ √ −(329t) 73 73 e 400 262.842cosh 16 t + 262.836sinh 16 t 6 This content is available online at <http://cnx.org/content/m10170/2.8/>. 7 "Eigenvalue Problem: The Transfer Function" <http://cnx.org/content/m10490/latest/> 8 http://www.caam.rice.edu/∼caam335/cox/lectures/b3.m + 227 Figure 13.5: The 3 potentials associated with the RC circuit model gure9 . The other potentials, see the gure above, possess similar expressions. Please note that each of the poles of L {x1 } appear as exponents in degrees is determined by the order of the respective pole. 10 x1 and that the coecients of the exponentials are polynomials whose 13.6 Poles and Zeros 13.6.1 Introduction It is quite dicult to qualitatively analyze the Laplace transform (Section 13.1) and Z-transform (Section 14.1), since mappings of their magnitude and phase or real part and imaginary part result in multiple mappings of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's Z-domain, of 11 poles and zeros to try to gain a qualitative idea of what a system does. Given a continuous-time transfer function in the Laplace domain, H (s), or a discrete-time one in the H (z ), a zero is any value of s or z such that the transfer function is zero, and a pole is any value s or z Denition 13.2: zeros 1. The value(s) for such that the transfer function is innite. To dene them precisely: z where the numerator of the transfer function equals zero denominator of the transfer function equals zero 2. The complex frequencies that make the overall gain of the lter transfer function zero. Denition 13.3: poles 1. The value(s) for z where the 2. The complex frequencies that make the overall gain of the lter transfer function innite. 9 "Nerve Fibers and the Dynamic Strang Quartet", Figure 1: An RC model of a nerve ber <http://cnx.org/content/m10168/latest/#RC_model_g> 10 This content is available online at <http://cnx.org/content/m10112/2.12/>. 11 "Transfer Functions" <http://cnx.org/content/m0028/latest/> 228 CHAPTER 13. LAPLACE TRANSFORM AND SYSTEM DESIGN 13.6.2 Pole/Zero Plots When we plot these in the appropriate s- or z-plane, we represent zeros with "o" and poles with "x". Refer to this module (Section 14.7) for a detailed looking at plotting the poles and zeros of a z-transform on the Z-plane. Example 13.5 Find the poles and zeros for the transfer function s-plane. H (s) = s2 +6s+8 and plot the results in the s2 +2 The rst thing we recognize is that this transfer function will equal zero whenever the top, s2 + 6s + 8, equals zero. To nd where this equals zero, we factor this to get, (s + 2) (s + 4). This yields zeros at s = −2 and s = −4. Had this function been more complicated, it might have been necessary to use the quadratic formula. For poles, we must recognize that the transfer function will be innite whenever the bottom part is zero. That is when yields √ s+j 2 2 to √ s + 2 is zero. To nd this, we again look√ factor the equation. √ s − j 2 . This yields purely imaginary roots of +j 2 and − j 2 This Plotting this gives Figure 13.6 (Pole and Zero Plot) Pole and Zero Plot Figure 13.6: Sample pole-zero plot Now that we have found and plotted the poles and zeros, we must ask what it is that this plot gives us. 
Basically what we can gather from this is that the magnitude of the transfer function will be larger when it is closer to the poles and smaller when it is closer to the zeros. (Section 3.4). This provides us with a qualitative understanding of what the system does at various frequencies and is crucial to the discussion of stability 229 13.6.3 Repeated Poles and Zeros It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H (z ) = z 2 will have two zeros at the origin and the continuous-time function H (s) = 1 s25 will have 25 poles at the origin. 13.6.4 Pole-Zero Cancellation An easy mistake to make with regards to poles and zeros is to think that a function like same as s + 3. known as pole-zero cancellation. In theory they are equivalent, as the pole and zero at s=1 (s+3)(s−1) is the s−1 cancel each other out in what is However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur a tremendous amount of volatility is created in that area, since there is a change from innity at the pole to zero at the zero in a very small range of signals. This is generally a very bad way to try to eliminate a pole. A much better way is to use to a better place. control theory to move the pole 230 CHAPTER 13. LAPLACE TRANSFORM AND SYSTEM DESIGN Chapter 14 Z-Transform and Digital Filtering 14.1 The Z Transform: Denition The 1 14.1.1 Basic Denition of the Z-Transform z-transform of a sequence is dened as ∞ X (z ) = n=−∞ Sometimes this equation is referred to as the as x [n] z −n (14.1) bilateral z-transform. ∞ At times the z-transform is dened X (z ) = n=0 which is known as the x [n] z −n (14.2) unilateral z-transform. ∞ There is a close relationship between the z-transform and the signal, which is dened as Fourier transform of a discrete time Xe Notice that that when the jω = n=−∞ x [n] e−(jωn) e−(jωn) (14.3) z −n is replaced with the z-transform reduces to the Fourier Transform. When the Fourier Transform exists, z=e jω , which is to have the magnitude of z equal to unity. 14.1.2 The Complex Plane In order to get further insight into the relationship between the Fourier Transform and the Z-Transform it is useful to look at the complex plane or z-plane. Take a look at the complex plane: 1 This content is available online at <http://cnx.org/content/m10549/2.9/>. 231 232 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Z-Plane Figure 14.1 The Z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by rejω , and the angle from the positive, real axis around the plane is denoted by only where ω . X (z ) is dened everywhere on this plane. |z | = 1, which is referred to as the unit circle. So for X ejω on the other hand is dened example, ω = 1 at z = 1 and ω = π at z = −1. This is useful because, by representing the Fourier transform as the z-transform on the unit circle, the periodicity of Fourier transform is easily seen. 14.1.3 Region of Convergence The region of convergence, known as the converges. Since the z-transform is a Stated dierently, ROC, ∞ is important to understand because it denes the region where the z-transform exists. 
The ROC for a given power series, n=−∞ x [n] , is dened as the range of z for which the z-transform −n it converges when x [n] z is absolutely summable. (14.4) |x [n] z −n | < ∞ must be satised for convergence. transforms of This is best illustrated by looking at the dierent ROC's of the z- αn u [n] and αn u [n − 1]. Example 14.1 For x [n] = α n u [n] (14.5) 233 Figure 14.2: x [n] = αn u [n] where α = 0.5. X (z ) = = = = ∞ −n ) n=−∞ (x [n] z ∞ n −n ) n=−∞ (α u [n] z ∞ n −n ) n=0 (α z ∞ −1 n αz n=0 (14.6) This sequence is an example of a right-sided exponential sequence because it is nonzero for It only converges when n ≥ 0. |αz −1 | < 1. When it converges, X (z ) = = 1 1−αz −1 z z −α (14.7) If |αz −1 | ≥ 1, then the series, ∞ n=0 αz −1 n does not converge. Thus the ROC is the range of (14.8) values where |αz −1 | < 1 or, equivalently, |z | > |α| (14.9) 234 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.3: ROC for x [n] = αn u [n] where α = 0.5 Example 14.2 For x [n] = (− (αn )) u [−n − 1] (14.10) 235 Figure 14.4: x [n] = (− (αn )) u [−n − 1] where α = 0.5. X (z ) = = = = = = ∞ −n ) n=−∞ (x [n] z ∞ n n=−∞ ((− (α )) u [−n −1 n −n − ) n=−∞ (α z −n −1 α−1 z − n=−∞ n ∞ − α −1 z n=1 n ∞ 1 − n=0 α−1 z − 1] z −n ) (14.11) The ROC in this case is the range of values where |α−1 z | < 1 or, equivalently, (14.12) |z | < |α| If the ROC is satised, then (14.13) X (z ) = = 1− z z −α 1 1−α−1 z (14.14) 236 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.5: ROC for x [n] = (− (αn )) u [−n − 1] 14.2 Table of Common z-Transforms also species the region of convergence (Section 14.3). note: 2 The table below provides a number of unilateral and bilateral z-transforms (Section 14.1). The table The notation for z found in the table below may dier from that found in other tables. For example, the basic z-transform of which are equivalent: u [n] can be written as either of the following two expressions, z 1 = z−1 1 − z −1 (14.15) 2 This content is available online at <http://cnx.org/content/m10119/2.14/>. 237 Signal δ [n − k ] u [n] − (u [−n − 1]) nu [n] n2 u [n] n u [n] (− (αn )) u [−n − 1] α u [n] nα u [n] n α u [n] Qm k=1 (n−k +1) α n u [n] αm m! Z-Transform z −k z z −1 z z −1 z (z −1)2 z (z +1) (z −1)3 z (z 2 +4z +1) (z −1)4 z z −α z z −α αz (z −α)2 αz (z +α) (z −α)3 z (z −α)m+1 z (z −γ cos(α)) z 2 −(2γ cos(α))z +γ 2 zγ sin(α) z 2 −(2γ cos(α))z +γ 2 ROC Allz |z | > 1 |z | < 1 |z | > 1 |z | > 1 |z | > 1 |z | < |α| |z | > |α| |z | > |α| |z | > |α| 3 n n 2n γ n cos (αn) u [n] γ n sin (αn) u [n] |z | > |γ | |z | > |γ | Table 14.1 14.3 Region of Convergence for the Z-transform 14.3.1 The Region of Convergence The region of convergence, known as the 3 ROC, where the z-transform (Section 14.1) exists. The z-transform of a sequence is dened as ∞ is important to understand because it denes the region X (z ) = n=−∞ The ROC for a given z-transform is a x [n] z −n z for which the z-transform converges. (14.16) power series, it converges when x [n] z −n is absolutely summable. ∞ x [n] , is dened as the range of Since the Stated dierently, |x [n] z −n | < ∞ n=−∞ must be satised for convergence. (14.17) 14.3.2 Properties of the Region of Convergencec The Region of Convergence has a number of properties that are dependent on the characteristics of the signal, x [n]. By denition a pole is a where • The ROC cannot contain any poles. must be nite for all X (z ) is innite. Since X (z ) z for convergence, there cannot be a pole in the ROC. 3 This content is available online at <http://cnx.org/content/m10622/2.5/>. 
238 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING • If x [n] is a nite-duration sequence, then the ROC is the entire z-plane, except possibly z = 0 or |z | = ∞. A nite-duration sequence is a sequence that is nonzero in a nite interval When n1 ≤ n ≤ n2 . As long as each value of x [n] is nite then the sequence will be absolutely summable. n2 > 0 there will be a z −1 term and thus the ROC will not include z = 0. When n1 < 0 then the sum will be innite and thus the ROC will not include |z | = ∞. On the other hand, when n2 ≤ 0 then the ROC will include z = 0, and when n1 ≥ 0 the ROC will include |z | = ∞. With these constraints, the only signal, then, whose ROC is the entire z-plane is x [n] = cδ [n]. Figure 14.6: An example of a nite duration sequence. The next properties apply to innite duration sequences. when As noted above, the z-transform converges |X (z ) | < ∞. So we can write ∞ ∞ ∞ |X (z ) | = | n=−∞ x [n] z −n |≤ n=−∞ |x [n] z −n |= n=−∞ |x [n] |(|z |) −n (14.18) We can then split the innite sum into positive-time and negative-time portions. So |X (z ) | ≤ N (z ) + P (z ) where (14.19) −1 N (z ) = n=−∞ and |x [n] |(|z |) −n (14.20) ∞ P (z ) = n=0 In order for |x [n] |(|z |) −n (14.21) |X (z ) | to be nite, |x [n] | must be bounded. Let us then set |x (n) | ≤ C1 r1 n (14.22) 239 for n<0 and |x (n) | ≤ C2 r2 n for (14.23) n≥0 From this some further properties can be derived: • If x [n] is a right-sided sequence, then the ROC extends outward from the outermost pole in X (z ). A right-sided sequence is a sequence where x [n] = 0 for n < n1 < ∞. Looking at the positive-time portion from the above derivation, it follows that ∞ ∞ P (z ) ≤ C2 n=0 Thus in order for this sum to converge, of the form r2 n (|z |) |z | > r2 , −n = C2 n=0 r2 |z | n (14.24) and therefore the ROC of a right-sided sequence is |z | > r2 . Figure 14.7: A right-sided sequence. 240 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.8: The ROC of a right-sided sequence. • If x [n] is a left-sided sequence, then the ROC extends inward from the innermost pole in X (z ). A right-sided sequence is a sequence where x [n] = 0 for n > n2 > −∞. Looking at the negative-time portion from the above derivation, it follows that −1 −1 N (z ) ≤ C1 n=−∞ r1 n (|z |) −n = C1 n=−∞ r1 |z | n ∞ = C1 k=1 |z | r1 k (14.25) Thus in order for this sum to converge, the form |z | < r1 , and therefore the ROC of a left-sided sequence is of |z | < r1 . Figure 14.9: A left-sided sequence. 241 Figure 14.10: The ROC of a left-sided sequence. • If x [n] is a two-sided sequence, the ROC will be a ring in the z-plane that is bounded on the interior and exterior by a pole. A two-sided sequence is an sequence with innite duration in the positive and negative directions. From the derivation of the above two properties, it follows that if r2 < |z | < r2 converges, then both the positive-time and negative-time portions converge and X (z ) converges as well. Therefore the ROC of a two-sided sequence is of the form r2 < |z | < r2 . thus Figure 14.11: A two-sided sequence. 242 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.12: The ROC of a two-sided sequence. 14.3.3 Examples To gain further insight it is good to look at a couple of examples. Example 14.3 Lets take x1 [n] = The z-transform of 1 2 n u [n] + 1 4 |z | > n u [n] 1 2. (14.26) 1n z u [n] is z− 1 with an ROC at 2 2 Figure 14.13: The ROC of ` 1 ´n 2 u [n] 243 The z-transform of −1 n z u [n] is z+ 1 with an ROC at 4 4 |z | > −1 4. 
Figure 14.14: The ROC of ` −1 ´n 4 u [n] Due to linearity, X1 [z ] = = z z + z+ 1 z− 1 2 4 1 2z (z − 8 ) (14.27) ( z− 1 2 )( z+ 1 4 ) and By observation it is clear that there are two zeros, at Following the obove properties, the ROC is 0 |z | > 1 2. 1 −1 1 8 , and two poles, at 2 , and 4 . 244 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.15: The ROC of x1 [n] = ` 1 ´n 2 u [n] + ` −1 ´n 4 u [n] Example 14.4 Now take x2 [n] = The z-transform and ROC of z-transorm of −1 4 n u [n] − 1 2 n u [−n − 1] (14.28) − 1n 2 u [−n −1 n u [n] was shown in the example above (Example 14.3). The 4 1 z − 1] is z− 1 with an ROC at |z | > 2 . 2 245 Figure 14.16: The ROC of − ` `` 1 ´n ´´ 2 u [−n − 1] Once again, by linearity, X2 [z ] = = z z + z− 1 z+ 1 4 2 1 z (2z − 8 ) (14.29) (z+ 1 )(z− 1 ) 4 2 0 and By observation it is again clear that there are two zeros, at in ths case though, the ROC is |z | < 1 2. 1 1 −1 16 , and two poles, at 2 , and 4 . 246 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Figure 14.17: The ROC of x2 [n] = ` −1 ´n 4 u [n] − ` 1 ´n 2 u [−n − 1]. 14.4 Inverse Z-Transform 4 When using the z-transform (Section 14.1) ∞ X (z ) = n=−∞ it is often useful to be able to nd x [n] z −n (14.30) x [n] given X (z ). There are at least 4 dierent methods to do this: 1. Inspection (Section 14.4.1: Inspection Method) 2. Partial-Fraction Expansion (Section 14.4.2: Partial-Fraction Expansion Method) 3. Power Series Expansion (Section 14.4.3: Power Series Expansion Method) 4. Contour Integration (Section 14.4.4: Contour Integration Method) 14.4.1 Inspection Method This "method" is to basically become familiar with the z-transform pair tables (Section 14.2) and then "reverse engineer". Example 14.5 When given X (z ) = with an ROC (Section 14.3) of z z−α |z | > α 4 This content is available online at <http://cnx.org/content/m10651/2.4/>. 247 we could determine "by inspection" that x [n] = α n u [n] 14.4.2 Partial-Fraction Expansion Method When dealing with linear time-invariant systems the z-transform often in the form X (z ) = = B (z ) A(z PM) Pk=0 N (bk z−k ) z −k ) (14.31) k=0 (ak This can also expressed as X (z ) = where If a0 b0 and M k=1 N k=1 1 − ck z −1 (1 − dk z −1 ) (14.32) ck represents the nonzero zeros of X (z ) M < N then X (z ) can be represented as dk represents the nonzero poles. N X (z ) = k=1 Ak 1 − dk z −1 (14.33) This form allows for easy inversions of each term of the sum using the inspection method (Section 14.4.1: Inspection Method) and the transform table (Section 14.2). Thus if the numerator is a polynomial then it is necessary to use partial-fraction expansion expressed as 5 to put X (z ) in the above form. If M ≥N then X (z ) can be M −N X (z ) = r =0 Br z −r + N −1 ' −k k=0 bk z N −k ) k=0 (ak z (14.34) Example 14.6 Find the inverse z-transform of X (z ) = where the ROC is 1 + 2z −1 + z −2 1 + (−3z −1 ) + 2z −2 so we have to use long division to get |z | > 2. In this case M = N = 2, X (z ) = Next factor the denominator. 1 7 −1 1 2 + 2z + 2 1 + (−3z −1 ) + 2z −2 X (z ) = 2 + Now do partial-fraction expansion. (−1) + 5z −1 (1 − 2z −1 ) (1 − z −1 ) X (z ) = |z | > 2, 9 1 A1 A2 1 −4 2 + + =+ + 2 1 − 2z −1 1 − z −1 2 1 − 2z −1 1 − z −1 Now each term can be inverted using the inspection method and the z-transform table. Thus, since the ROC is x [n] = 5 "Partial 1 9 δ [n] + 2n u [n] + (−4u [n]) 2 2 Fraction Expansion" <http://cnx.org/content/m2111/latest/> 248 CHAPTER 14. 
Example 14.4
Now take

$$x_2[n] = \left(\frac{-1}{4}\right)^n u[n] - \left(\frac{1}{2}\right)^n u[-n-1] \qquad (14.28)$$

The z-transform and ROC of $\left(\frac{-1}{4}\right)^n u[n]$ were shown in the example above (Example 14.3). The z-transform of $-\left(\frac{1}{2}\right)^n u[-n-1]$ is $\frac{z}{z - \frac{1}{2}}$ with an ROC of $|z| < \frac{1}{2}$ (Figure 14.16).

Once again, by linearity,

$$X_2(z) = \frac{z}{z + \frac{1}{4}} + \frac{z}{z - \frac{1}{2}} = \frac{2z\left(z - \frac{1}{8}\right)}{\left(z + \frac{1}{4}\right)\left(z - \frac{1}{2}\right)} \qquad (14.29)$$

By observation, it is again clear that there are two zeros, at $0$ and $\frac{1}{8}$, and two poles, at $\frac{1}{2}$ and $-\frac{1}{4}$. In this case, though, the ROC is the ring $\frac{1}{4} < |z| < \frac{1}{2}$ (Figure 14.17).

14.4 Inverse Z-Transform

When using the z-transform (Section 14.1)

$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n} \qquad (14.30)$$

it is often useful to be able to find $x[n]$ given $X(z)$. There are at least four different methods to do this:

1. Inspection (Section 14.4.1)
2. Partial-fraction expansion (Section 14.4.2)
3. Power series expansion (Section 14.4.3)
4. Contour integration (Section 14.4.4)

14.4.1 Inspection Method

This "method" is to basically become familiar with the z-transform pair tables (Section 14.2) and then "reverse engineer."

Example 14.5
When given

$$X(z) = \frac{z}{z - \alpha}$$

with an ROC (Section 14.3) of $|z| > |\alpha|$, we can determine "by inspection" that $x[n] = \alpha^n u[n]$.

14.4.2 Partial-Fraction Expansion Method

When dealing with linear time-invariant systems, the z-transform is often of the form

$$X(z) = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} \qquad (14.31)$$

This can also be expressed as

$$X(z) = \frac{b_0}{a_0} \cdot \frac{\prod_{k=1}^{M} \left(1 - c_k z^{-1}\right)}{\prod_{k=1}^{N} \left(1 - d_k z^{-1}\right)} \qquad (14.32)$$

where $c_k$ represents the nonzero zeros of $X(z)$ and $d_k$ represents the nonzero poles. If $M < N$, then $X(z)$ can be represented as

$$X(z) = \sum_{k=1}^{N} \frac{A_k}{1 - d_k z^{-1}} \qquad (14.33)$$

This form allows for easy inversion of each term of the sum using the inspection method (Section 14.4.1) and the transform table (Section 14.2). If the numerator is a polynomial, it is necessary to use partial-fraction expansion ("Partial Fraction Expansion", http://cnx.org/content/m2111/latest/) to put $X(z)$ in the above form. If $M \geq N$, then $X(z)$ can be expressed as

$$X(z) = \sum_{r=0}^{M-N} B_r z^{-r} + \frac{\sum_{k=0}^{N-1} b_k' z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} \qquad (14.34)$$

Example 14.6
Find the inverse z-transform of

$$X(z) = \frac{1 + 2z^{-1} + z^{-2}}{1 - 3z^{-1} + 2z^{-2}}$$

where the ROC is $|z| > 2$. In this case $M = N = 2$, so we have to use long division to get

$$X(z) = \frac{1}{2} + \frac{\frac{1}{2} + \frac{7}{2} z^{-1}}{1 - 3z^{-1} + 2z^{-2}}$$

Next, factor the denominator:

$$X(z) = \frac{1}{2} + \frac{\frac{1}{2} + \frac{7}{2} z^{-1}}{\left(1 - 2z^{-1}\right)\left(1 - z^{-1}\right)}$$

Now do partial-fraction expansion:

$$X(z) = \frac{1}{2} + \frac{A_1}{1 - 2z^{-1}} + \frac{A_2}{1 - z^{-1}} = \frac{1}{2} + \frac{\frac{9}{2}}{1 - 2z^{-1}} + \frac{-4}{1 - z^{-1}}$$

Now each term can be inverted using the inspection method and the z-transform table. Thus, since the ROC is $|z| > 2$,

$$x[n] = \frac{1}{2}\, \delta[n] + \frac{9}{2}\, 2^n u[n] - 4\, u[n]$$
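SciPy automates exactly this partial-fraction computation through scipy.signal.residuez, which works on the $z^{-1}$-ordered coefficient form used above. A sketch checking Example 14.6 (the ordering of the returned poles is up to SciPy):

from scipy import signal

b = [1, 2, 1]        # 1 + 2z^{-1} + z^{-2}
a = [1, -3, 2]       # 1 - 3z^{-1} + 2z^{-2}
r, p, k = signal.residuez(b, a)
print(r)   # residues ~[4.5, -4.0]: the 9/2 and -4 found above
print(p)   # poles    ~[2.0, 1.0]
print(k)   # direct term ~[0.5]:   the (1/2) delta[n] term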
14.5.2.2 Discontinuities Because we are dealing with division of two polynomials, we must be aware of the values of the variable that will cause the denominator of our fraction to be zero. When this happens, the rational function becomes undened, i.e. we have a discontinuity in the function. Because we have already solved for our roots, it is very easy to see when this occurs. When the variable in the denominator equals any of the roots of the denominator, the function becomes undened. 6 This content is available online at <http://cnx.org/content/m10593/2.7/>. 250 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING Example 14.9 Continuing to look at our rational function above, (14.38), we can see that the function will have discontinuities at the following points: x = {−3, 1} In respect to the Cartesian plane, we say that the discontinuities are the values along the x-axis where the function in undened. These discontinuities often appear as the values where the function is undened. vertical asymptotes on the graph to represent 14.5.2.3 Domain Using the roots that we found above, the Denition 14.2: domain Example domain of the rational function can be easily dened. The group, or set, of values that are dened by a given function. Using the rational function above, (14.38), the domain can be dened as any real number x where x does not equal 1 or negative 3. Written out mathematical, we get the following: { x ∈ R |x = −3 and x = 1} (14.40) 14.5.2.4 Intercepts The x-intercept is dened as the point(s) where f (x), i.e. function will have an x-intercept wherever The to zero and solving the rational function. the output of the rational functions, equals zero. Because we have already found the roots of the equation this process is very simple. From algebra, we know that the output will be zero whenever the numerator of the rational function is equal to zero. Therefore, the y-intercept occurs whenever x equals zero. x equals one of the roots of the numerator. This can be found by setting all the values of x equal 14.5.3 Rational Functions and the Z-Transform As we have stated above, all z-transforms can be written as rational functions, which have become the most common way of representing the z-transform. Because of this, we can use the properties above, especially those of the roots, in order to reveal certain characteristics about the signal or LTI system described by the z-transform. Below is the general form of the z-transform written as a rational function: X (z ) = b0 + b1 z −1 + · · · + bM z −M a0 + a1 z −1 + · · · + aN z −N (14.41) If you have already looked at the module about Understanding Pole/Zero Plots and the Z-transform (Section 14.7), you should see how the roots of the rational function play an important role in understanding the z-transform. The equation above, (14.41), can be expressed in factored form just as was done for the simple rational function above, see (14.39). Thus, we can easily nd the roots of the numerator and denominator of the z-transform. The following two relationships become apparent: Relationship of Roots to Poles and Zeros • • The roots of the numerator in the rational function will be the The roots of the denominator in the rational function will be the zeros of the z-transform poles of the z-transform 251 14.5.4 Conclusion Once we have used our knowledge of rational functions to nd its roots, we can manipulate a z-transform in a number of useful ways. 
We can apply this knowledge to representing an LTI system graphically through a Pole/Zero Plot (Section 14.7), or to analyze and design a digital lter through Filter Design from the Z-Transform (Section 14.8). 14.6 Dierence Equation 14.6.1 Introduction 7 One of the most important concepts of DSP is to be able to properly represent the input/output relationship to a given LTI system. A linear constant-coecient dierence equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. manipulating a system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a dierence equation help in understanding and Denition 14.3: dierence equation An equation that shows the relationship between consecutive values of a sequence and the dierences among them. They are often rearranged as a recursive formula so that a systems output can be computed from the input signal and past outputs. Example y [n] + 7y [n − 1] + 2y [n − 2] = x [n] − 4x [n − 1] (14.42) 14.6.2 General Formulas from the Dierence Equation As stated briey in the denition above, a dierence equation is a very useful tool in describing and calculating the output of the system described by the formula for a given sample equation is its ability to help easily nd the transform, from the dierence equation. n. The key property of the dierence H (z ), of a system. In the following two subsections, we will look at the general form of the dierence equation and the general conversion to a z-transform directly 14.6.2.1 Dierence Equation The general form of a linear, constant-coecient dierence equation (LCCDE), is shown below: N M (ak y [n − k ]) = k=0 k=0 (bk x [n − k ]) (14.43) We can also write the general form to easily express a recursive output, which looks like this: N M y [n] = − k=1 From this equation, note that of (ak y [n − k ]) + k=0 (bk x [n − k ]) (14.44) N represents the order of the dierence equation and corresponds to the memory of the system being initial conditions, must be known. y [n − k ] represents the outputs and x [n − k ] represents the inputs. The value represented. Because this equation relies on past values of the output, in order to compute a numerical solution, certain past outputs, referred to as the 7 This content is available online at <http://cnx.org/content/m10595/2.5/>. 252 CHAPTER 14. Z-TRANSFORM AND DIGITAL FILTERING 14.6.2.2 Conversion to Z-Transform Using the above formula, (14.43), we can easily generalize the equation. Below are the steps taken to convert any dierence equation into its transfer function, i.e. 8 transfer function, H (z ), for any dierence zof all the terms in (14.43). Then we use transform. The rst step involves taking the Fourier Transform the linearity property to pull the transform inside the summation and the time-shifting property of the z-transform to change the time-shifting terms to exponentials. Once this is done, we arrive at the following equation: a0 = 1. N M Y (z ) = − k=1 ak Y (z ) z −k H (z ) = = Y (z ) X (z ) PM + k=0 bk X (z ) z −k (14.45) −k ) k=0 (bk z PN 1+ k=1 (ak z −k ) (14.46) 14.6.2.3 Conversion to Frequency Response Once the z-transform has been calculated from the dierence equation, we can go one step further to dene the frequency response of the system, or lter, that is being represented by the dierence equation. note: Remember that the reason we are dealing with these formulas is to be able to aid us in lter design. A LCCDE is one of the easiest ways to represent FIR lters. 
14.6.2.3 Conversion to Frequency Response

Once the z-transform has been calculated from the difference equation, we can go one step further to define the frequency response of the system, or filter, that is being represented by the difference equation.

note: Remember that the reason we are dealing with these formulas is to be able to aid us in filter design. An LCCDE is one of the easiest ways to represent a digital filter. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE.

Below is the general formula for the frequency response of a z-transform. The conversion is simply a matter of taking the z-transform formula, $H(z)$, and replacing every instance of $z$ with $e^{jw}$:

$$H(w) = H(z)|_{z = e^{jw}} = \frac{\sum_{k=0}^{M} b_k e^{-jwk}}{\sum_{k=0}^{N} a_k e^{-jwk}} \qquad (14.47)$$

Once you understand the derivation of this formula, look at the module concerning Filter Design from the Z-Transform (Section 14.8) for a look into how all of these ideas of the Z-transform (Section 14.1), Difference Equation, and Pole/Zero Plots (Section 14.7) play a role in filter design.

14.6.3 Example

Example 14.10: Finding Difference Equation
Below is a basic example showing the opposite of the steps above: given a transfer function one can easily calculate the system's difference equation.

$$H(z) = \frac{(z+1)^2}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)} \qquad (14.48)$$

Given this transfer function of a time-domain filter, we want to find the difference equation. To begin with, expand both polynomials and divide them by the highest order $z$:

$$H(z) = \frac{(z+1)(z+1)}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)} = \frac{z^2 + 2z + 1}{z^2 + \frac{1}{4}z - \frac{3}{8}} = \frac{1 + 2z^{-1} + z^{-2}}{1 + \frac{1}{4}z^{-1} - \frac{3}{8}z^{-2}} \qquad (14.49)$$

From this transfer function, the coefficients of the two polynomials will be our $a_k$ and $b_k$ values found in the general difference equation formula, (14.43). Using these coefficients and the above form of the transfer function, we can easily write the difference equation:

$$x[n] + 2x[n-1] + x[n-2] = y[n] + \frac{1}{4}y[n-1] - \frac{3}{8}y[n-2] \qquad (14.50)$$

In our final step, we can rewrite the difference equation in its more common form, showing the recursive nature of the system:

$$y[n] = x[n] + 2x[n-1] + x[n-2] - \frac{1}{4}y[n-1] + \frac{3}{8}y[n-2] \qquad (14.51)$$
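The polynomial expansion in (14.49) can be verified with MATLAB's conv, which multiplies polynomials; this is a quick sketch, not part of the original module.

b = conv([1 1], [1 1])       % (z+1)(z+1)     -> [1 2 1]
a = conv([1 -1/2], [1 3/4])  % (z-1/2)(z+3/4) -> [1 0.25 -0.375]
% Dividing both by z^2 gives the coefficients of (14.49), and hence the
% a_k and b_k of the difference equation (14.50).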
14.6.4 Solving a LCCDE

In order for a linear constant-coefficient difference equation to be useful in analyzing a LTI system, we must be able to find the system's output based upon a known input, $x(n)$, and a set of initial conditions. Two common methods exist for solving a LCCDE: the direct method and the indirect method, the latter being based on the z-transform. Below we will briefly discuss the formulas for solving a LCCDE using each of these methods.

14.6.4.1 Direct Method

The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation:

$$y(n) = y_h(n) + y_p(n) \qquad (14.52)$$

The first part, $y_h(n)$, is referred to as the homogeneous solution and the second part, $y_p(n)$, is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.

14.6.4.1.1 Homogeneous Solution

We begin by assuming that the input is zero, $x(n) = 0$. Now we simply need to solve the homogeneous difference equation:

$$\sum_{k=0}^{N} a_k y[n-k] = 0 \qquad (14.53)$$

In order to solve this, we will make the assumption that the solution is in the form of an exponential. We will use lambda, $\lambda$, to represent our exponential terms. We now have to solve the following equation:

$$\sum_{k=0}^{N} a_k \lambda^{n-k} = 0 \qquad (14.54)$$

We can expand this equation out and factor out all of the lambda terms. This will give us a large polynomial in parentheses, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If there are all distinct roots, then the general solution to the equation will be as follows:

$$y_h(n) = C_1 \lambda_1^{\,n} + C_2 \lambda_2^{\,n} + \cdots + C_N \lambda_N^{\,n} \qquad (14.55)$$

However, if the characteristic equation contains multiple roots then the above general solution will be slightly different. Below we have the modified version for an equation where $\lambda_1$ has $K$ multiple roots:

$$y_h(n) = \left(C_{1,0} + C_{1,1} n + C_{1,2} n^2 + \cdots + C_{1,K-1} n^{K-1}\right)\lambda_1^{\,n} + C_2 \lambda_2^{\,n} + \cdots + C_N \lambda_N^{\,n} \qquad (14.56)$$

14.6.4.1.2 Particular Solution

The particular solution, $y_p(n)$, will be any solution that will solve the general difference equation:

$$\sum_{k=0}^{N} a_k y_p(n-k) = \sum_{k=0}^{M} b_k x(n-k) \qquad (14.57)$$

In order to solve, our guess for the solution to $y_p(n)$ will take on the form of the input, $x(n)$. After guessing at a solution to the above equation involving the particular solution, one only needs to plug the solution into the difference equation and solve it out.

14.6.4.2 Indirect Method

The indirect method utilizes the relationship between the difference equation and the z-transform, discussed earlier (Section 14.6.2: General Formulas from the Difference Equation), to find a solution. The basic idea is to convert the difference equation into a z-transform, as described above (Section 14.6.2.2: Conversion to Z-Transform), to get the resulting output, $Y(z)$. Then by inverse transforming this and using partial-fraction expansion, we can arrive at the solution.
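A minimal sketch (not in the original module) of the direct method's first step: the characteristic roots are just the roots of the $a_k$ polynomial. For the example difference equation (14.42):

lambda = roots([1 7 2])   % roots of the characteristic polynomial lambda^2 + 7*lambda + 2
% With distinct roots, yh(n) = C1*lambda(1)^n + C2*lambda(2)^n as in (14.55);
% the constants C1, C2 are then fixed by the initial conditions.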
14.7 Understanding Pole/Zero Plots on the Z-Plane

14.7.1 Introduction to Poles and Zeros of the Z-Transform

Once the Z-transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The Z-transform will have the below structure, based on Rational Functions (Section 14.5):

$$X(z) = \frac{P(z)}{Q(z)} \qquad (14.58)$$

The two polynomials, $P(z)$ and $Q(z)$, allow us to find the poles and zeros (Section 13.6) of the Z-Transform.

[9] This content is available online at <http://cnx.org/content/m10556/2.8/>.

Definition 14.4: zeros
1. The value(s) for $z$ where $P(z) = 0$.
2. The complex frequencies that make the overall gain of the filter transfer function zero.

Definition 14.5: poles
1. The value(s) for $z$ where $Q(z) = 0$.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

Example 14.11
Below is a simple transfer function with the poles and zeros shown below it.

$$H(z) = \frac{z+1}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

The zeros are: $\{-1\}$. The poles are: $\left\{\frac{1}{2}, -\frac{3}{4}\right\}$.

14.7.2 The Z-Plane

Once the poles and zeros have been found for a given Z-Transform, they can be plotted onto the Z-Plane. The Z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable $z$. The position on the complex plane is given by $re^{j\theta}$, and the angle from the positive, real axis around the plane is denoted by $\theta$. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the Z-Plane, and examples of plotting zeros and poles onto the plane can be found in the following section.

Figure 14.18: Z-Plane

14.7.3 Examples of Pole/Zero Plots

This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the Z-Plane.

Example 14.12: Simple Pole/Zero Plot

$$H(z) = \frac{z}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

The zeros are: $\{0\}$. The poles are: $\left\{\frac{1}{2}, -\frac{3}{4}\right\}$.

Figure 14.19: Pole/Zero Plot - Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at $\frac{1}{2}$ and $-\frac{3}{4}$.

Example 14.13: Complex Pole/Zero Plot

$$H(z) = \frac{(z - j)(z + j)}{(z + 1)\left(z - \frac{1}{2} - \frac{1}{2}j\right)\left(z - \frac{1}{2} + \frac{1}{2}j\right)}$$

The zeros are: $\{j, -j\}$. The poles are: $\left\{-1,\ \frac{1}{2} + \frac{1}{2}j,\ \frac{1}{2} - \frac{1}{2}j\right\}$.

Figure 14.20: Pole/Zero Plot - Using the zeros and poles found from the transfer function, the zeros are mapped to $\pm j$, and the poles are placed at $-1$, $\frac{1}{2} + \frac{1}{2}j$ and $\frac{1}{2} - \frac{1}{2}j$.

MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the Z-Plane.

% Set up vector for zeros
z = [j ; -j];
% Set up vector for poles
p = [-1 ; .5+.5j ; .5-.5j];

figure(1);
zplane(z,p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');

14.7.4 Pole/Zero Plot and Region of Convergence

The region of convergence (ROC) for $X(z)$ in the complex Z-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.

Filter Properties from ROC
• If the ROC extends outward from the outermost pole, then the system is causal.
• If the ROC includes the unit circle, then the system is stable.

Below is a pole/zero plot with a possible ROC of the Z-transform in the Simple Pole/Zero Plot (Example 14.12: Simple Pole/Zero Plot) discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.

Example 14.14

$$H(z) = \frac{z}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

Figure 14.21: Region of Convergence for the Pole/Zero Plot - The shaded area represents the chosen ROC for the transfer function.

14.7.5 Frequency Response and the Z-Plane

The reason it is helpful to understand and create these pole/zero plots is due to their ability to help us easily design a filter. Based on the location of the poles and zeros, the magnitude response of the filter can be quickly understood. Also, by starting with the pole/zero plot, one can design a filter and obtain its transfer function very easily. Refer to this module (Section 14.8) for information on the relationship between the pole/zero plot and the frequency response.
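The two ROC conditions are easy to check numerically when the chosen ROC extends outward from the outermost pole; below is a minimal sketch (not part of the original module) for Example 14.14.

p = [1/2; -3/4];                   % poles of H(z) = z/((z - 1/2)(z + 3/4))
stable_and_causal = all(abs(p) < 1)
% If every pole lies inside the unit circle, the outward-extending ROC
% includes the unit circle, so the filter is both causal and stable.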
14.8 Filter Design using the Pole/Zero Plot of a Z-Transform

14.8.1 Estimating Frequency Response from Z-Plane

One of the motivating factors for analyzing the pole/zero plots is due to their relationship to the frequency response of the system. Based on the position of the poles and zeros, one can quickly determine the frequency response. This is a result of the correspondence between the frequency response and the transfer function evaluated on the unit circle in the pole/zero plots. The frequency response, or DTFT, of the system is defined as:

$$H(w) = H(z)|_{z=e^{jw}} = \frac{\sum_{k=0}^{M} b_k e^{-jwk}}{\sum_{k=0}^{N} a_k e^{-jwk}} \qquad (14.59)$$

Next, by factoring the transfer function into poles and zeros and multiplying the numerator and denominator by $e^{jw}$, we arrive at the following equation:

$$H(w) = \left|\frac{b_0}{a_0}\right| \frac{\prod_{k=1}^{M} |e^{jw} - c_k|}{\prod_{k=1}^{N} |e^{jw} - d_k|} \qquad (14.60)$$

[10] This content is available online at <http://cnx.org/content/m10548/2.9/>.

From (14.60) we have the frequency response in a form that can be used to interpret physical characteristics about the filter's frequency response. The numerator and denominator contain a product of terms of the form $|e^{jw} - h|$, where $h$ is either a zero, denoted by $c_k$, or a pole, denoted by $d_k$. Vectors are commonly used to represent the term and its parts on the complex plane. The pole or zero, $h$, is a vector from the origin to its location anywhere on the complex plane, and $e^{jw}$ is a vector from the origin to its location on the unit circle. The vector connecting these two points, $|e^{jw} - h|$, connects the pole or zero location to a place on the unit circle dependent on the value of $w$. From this, we can begin to understand how the magnitude of the frequency response is a ratio of the distances to the poles and zeros present in the z-plane as $w$ goes from zero to pi. These characteristics allow us to interpret $|H(w)|$ as follows:

$$|H(w)| = \left|\frac{b_0}{a_0}\right| \frac{\text{"distances from zeros"}}{\text{"distances from poles"}} \qquad (14.61)$$

In conclusion, using the distances from the unit circle to the poles and zeros, we can plot the frequency response of the system. As $w$ goes from $0$ to $2\pi$, the following two properties, taken from the above equations, specify how one should draw $|H(w)|$.

While moving around the unit circle...
1. if close to a zero, then the magnitude is small. If a zero is on the unit circle, then the frequency response is zero at that point.
2. if close to a pole, then the magnitude is large. If a pole is on the unit circle, then the frequency response goes to infinity at that point.

14.8.2 Drawing Frequency Response from Pole/Zero Plot

Let us now look at several examples of determining the magnitude of the frequency response from the pole/zero plot of a z-transform. If you have forgotten or are unfamiliar with pole/zero plots, please refer back to the Pole/Zero Plots (Section 14.7) module.

Example 14.15
In this first example we will take a look at the very simple z-transform shown below:

$$H(z) = z + 1 = 1 + z^{-1}$$
$$H(w) = 1 + e^{-jw}$$

For this example, some of the vectors represented by $|e^{jw} - h|$, for random values of $w$, are explicitly drawn onto the complex plane shown in the figure below. These vectors show how the amplitude of the frequency response changes as $w$ goes from $0$ to $2\pi$, and also show the physical meaning of the terms in (14.60) above. One can see that when $w = 0$, the vector is the longest and thus the frequency response will have its largest amplitude here. As $w$ approaches $\pi$, the length of the vectors decrease as does the amplitude of $|H(w)|$. Since there are no poles in the transform, there is only this one vector term rather than a ratio as seen in (14.60).

Figure 14.22: (a) Pole/Zero Plot (b) Frequency Response: |H(w)| - The first figure represents the pole/zero plot with a few representative vectors graphed, while the second shows the frequency response with a peak at +2, graphed between plus and minus $\pi$.
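The same magnitude response can be computed numerically with MATLAB's freqz; a short sketch (not from the original module):

[H, w] = freqz([1 1], 1, 512);   % b = [1 1] for H(z) = 1 + z^-1, a = 1
plot(w, abs(H));                 % |H(w)| peaks at 2 for w = 0, is 0 at w = pi
xlabel('w (rad/sample)'); ylabel('|H(w)|');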
Example 14.16
For this example, a more complex transfer function is analyzed in order to represent the system's frequency response.

$$H(z) = \frac{z}{z - \frac{1}{2}} = \frac{1}{1 - \frac{1}{2}z^{-1}}$$
$$H(w) = \frac{1}{1 - \frac{1}{2}e^{-jw}}$$

Below we can see the two figures described by the above equations. Figure 14.23(a) (Pole/Zero Plot) represents the basic pole/zero plot of the z-transform, and Figure 14.23(b) (Frequency Response: |H(w)|) shows the magnitude of the frequency response. From the formulas and statements in the previous section, we can see that when $w = 0$ the frequency response will peak, since it is at this value of $w$ that the pole is closest to the unit circle. The ratio from (14.60) helps us see the mathematics behind this conclusion and the relationship between the distances from the unit circle and the poles and zeros. As $w$ moves from $0$ to $\pi$, we see how the zero begins to mask the effects of the pole and thus force the frequency response closer to $0$.

Figure 14.23: (a) Pole/Zero Plot (b) Frequency Response: |H(w)| - The first figure represents the pole/zero plot while the second shows the frequency response with a peak at +2, graphed between plus and minus $\pi$.

Chapter 15 Appendix: Hilbert Spaces and Orthogonal Expansions

15.1 Vector Spaces

15.1.1 Introduction

Definition 15.1: Vector space
A linear vector space $S$ is a collection of "vectors" such that (1) if $f_1 \in S$ then $\alpha f_1 \in S$ for all scalars $\alpha$ (where $\alpha \in \mathbb{R}$ or $\alpha \in \mathbb{C}$) and (2) if $f_1 \in S$, $f_2 \in S$, then $f_1 + f_2 \in S$.

[1] This content is available online at <http://cnx.org/content/m10767/2.5/>.

To define an abstract linear vector space, we need:
• A set of things called "vectors" ($X$)
• A set of things called "scalars" ($A$)
• A vector addition operator ($+$)
• A scalar multiplication operator ($*$)

The operators need to have all the properties given below. Closure is usually the most important to show.

15.1.2 Vector Spaces

If the scalars $\alpha$ are real, $S$ is called a real vector space. If the scalars $\alpha$ are complex, $S$ is called a complex vector space. If the "vectors" in $S$ are functions of a continuous variable, we sometimes call $S$ a linear function space.

15.1.2.1 Properties

We define a set $V$ to be a vector space if:
1. $x + y = y + x$ for each $x$ and $y$ in $V$
2. $x + (y + z) = (x + y) + z$ for each $x$, $y$, and $z$ in $V$
3. There is a unique "zero vector" such that $x + 0 = x$ for each $x$ in $V$
4. For each $x$ in $V$ there is a unique vector $-x$ such that $x + (-x) = 0$
5. $1x = x$
6. $(c_1 c_2) x = c_1 (c_2 x)$ for each $x$ in $V$ and $c_1$ and $c_2$ in $\mathbb{C}$
7. $c(x + y) = cx + cy$ for each $x$ and $y$ in $V$ and $c$ in $\mathbb{C}$
8. $(c_1 + c_2) x = c_1 x + c_2 x$ for each $x$ in $V$ and $c_1$ and $c_2$ in $\mathbb{C}$

15.1.2.2 Examples

• $\mathbb{R}^n$ = real vector space
• $\mathbb{C}^n$ = complex vector space
• $L^1(\mathbb{R}) = \left\{ f(t) \,\middle|\, \int_{-\infty}^{\infty} |f(t)|\,dt < \infty \right\}$ is a vector space
• $L^\infty(\mathbb{R}) = \{ f(t) \mid f(t) \text{ is bounded} \}$ is a vector space
• $L^2(\mathbb{R}) = \left\{ f(t) \,\middle|\, \int_{-\infty}^{\infty} (|f(t)|)^2\,dt < \infty \right\}$ = finite energy signals, is a vector space
• $L^2([0,T])$ = finite energy functions on the interval $[0, T]$
• $\ell^1(\mathbb{Z})$, $\ell^2(\mathbb{Z})$, $\ell^\infty(\mathbb{Z})$ are vector spaces
• The collection of functions piecewise constant between the integers is a vector space (Figure 15.1)
• $\mathbb{R}^{+2} = \left\{ (x_0, x_1)^T \mid x_0 > 0 \text{ and } x_1 > 0 \right\}$ is not a vector space: $(1, 1)^T \in \mathbb{R}^{+2}$, but $\alpha (1, 1)^T \notin \mathbb{R}^{+2}$ for $\alpha < 0$.
• $D = \{ z \in \mathbb{C},\ |z| \leq 1 \}$ is not a vector space: $z_1 = 1 \in D$ and $z_2 = j \in D$, but $z_1 + z_2 \notin D$ since $|z_1 + z_2| = \sqrt{2} > 1$.

note: Vector spaces can be collections of functions, collections of sequences, as well as collections of traditional vectors (i.e. finite lists of numbers).
15.2 Norms

15.2.1 Introduction

Much of the language in this section will be familiar to you - you should have previously been exposed to the concepts of
• inner products (Section 15.3)
• orthogonality
• basis expansions (Section 15.8)
in the context of $\mathbb{R}^n$. We're going to take what we know about vectors and apply it to functions (continuous time signals).

[2] This content is available online at <http://cnx.org/content/m10768/2.5/>.

15.2.2 Norms

The norm of a vector is a real number that represents the "size" of the vector.

Example 15.1
In $\mathbb{R}^2$, we can define a norm to be a vector's geometric length (Figure 15.2): for $x = (x_0, x_1)^T$,

$$\|x\| = \sqrt{x_0^2 + x_1^2}$$

Mathematically, a norm $\|\cdot\|$ is just a function (taking a vector and returning a real number) that satisfies three rules. To be a norm, $\|\cdot\|$ must satisfy:
1. the norm of every nonzero vector is positive: $\|x\| > 0$, $x \in S$, $x \neq 0$
2. scaling a vector scales the norm by the same amount: $\|\alpha x\| = |\alpha| \|x\|$ for all vectors $x$ and scalars $\alpha$
3. Triangle Property: $\|x + y\| \leq \|x\| + \|y\|$ for all vectors $x$, $y$. "The size of the sum of two vectors is less than or equal to the sum of their sizes."

A vector space (Section 15.1) with a well defined norm is called a normed vector space or normed linear space.

15.2.2.1 Examples

Example 15.2
$\mathbb{R}^n$ (or $\mathbb{C}^n$) with the norm $\|x\|_1 = \sum_{i=0}^{n-1} |x_i|$, for $x = (x_0, \ldots, x_{n-1})^T$, is called $\ell^1([0, n-1])$.

Figure 15.3: Collection of all $x \in \mathbb{R}^2$ with $\|x\|_1 = 1$

Example 15.3
$\mathbb{R}^n$ (or $\mathbb{C}^n$) with the norm $\|x\|_2 = \left( \sum_{i=0}^{n-1} (|x_i|)^2 \right)^{1/2}$ is called $\ell^2([0, n-1])$ (the usual "Euclidean" norm).

Figure 15.4: Collection of all $x \in \mathbb{R}^2$ with $\|x\|_2 = 1$

Example 15.4
$\mathbb{R}^n$ (or $\mathbb{C}^n$) with the norm $\|x\|_\infty = \max_i |x_i|$ is called $\ell^\infty([0, n-1])$.

Figure 15.5: Collection of all $x \in \mathbb{R}^2$ with $\|x\|_\infty = 1$

15.2.2.2 Spaces of Sequences and Functions

We can define similar norms for spaces of sequences and functions.

Discrete time signals = sequences of numbers: $x[n] = \{\ldots, x_{-2}, x_{-1}, x_0, x_1, x_2, \ldots\}$
• $\|x\|_1 = \sum_{i=-\infty}^{\infty} |x[i]|$; $x[n] \in \ell^1(\mathbb{Z}) \Rightarrow \|x\|_1 < \infty$
• $\|x\|_2 = \left( \sum_{i=-\infty}^{\infty} (|x[i]|)^2 \right)^{1/2}$; $x[n] \in \ell^2(\mathbb{Z}) \Rightarrow \|x\|_2 < \infty$
• $\|x\|_p = \left( \sum_{i=-\infty}^{\infty} (|x[i]|)^p \right)^{1/p}$; $x[n] \in \ell^p(\mathbb{Z}) \Rightarrow \|x\|_p < \infty$
• $\|x\|_\infty = \sup_i |x[i]|$; $x[n] \in \ell^\infty(\mathbb{Z}) \Rightarrow \|x\|_\infty < \infty$

For continuous time functions:
• $\|f(t)\|_p = \left( \int_{-\infty}^{\infty} (|f(t)|)^p\,dt \right)^{1/p}$; $f(t) \in L^p(\mathbb{R}) \Rightarrow \|f(t)\|_p < \infty$
• (On the interval) $\|f(t)\|_p = \left( \int_0^T (|f(t)|)^p\,dt \right)^{1/p}$; $f(t) \in L^p([0,T]) \Rightarrow \|f(t)\|_p < \infty$
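For finite vectors, these norms are one-liners in MATLAB; a quick sketch (not part of the original module):

x = [3; -4];
norm(x, 1)      % l1 norm:   |3| + |-4| = 7
norm(x, 2)      % l2 norm:   sqrt(9 + 16) = 5
norm(x, Inf)    % linf norm: max(|3|, |-4|) = 4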
15.3 Inner Products

15.3.1 Definition: Inner Product

You may have run across inner products, also called dot products, on $\mathbb{R}^n$ before in some of your math or science courses. If not, we define the inner product as follows, given we have some $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$.

[4] This content is available online at <http://cnx.org/content/m10755/2.7/>.

Definition 15.2: inner product
The inner product is defined mathematically as:

$$\langle x, y \rangle = y^T x = \begin{pmatrix} y_0 & y_1 & \ldots & y_{n-1} \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix} = \sum_{i=0}^{n-1} x_i y_i \qquad (15.1)$$

15.3.1.1 Inner Product in 2-D

If we have $x \in \mathbb{R}^2$ and $y \in \mathbb{R}^2$, then we can write the inner product as

$$\langle x, y \rangle = \|x\| \|y\| \cos(\theta) \qquad (15.2)$$

where $\theta$ is the angle between $x$ and $y$ (Figure 15.6). Geometrically, the inner product tells us about the strength of $x$ in the direction of $y$.

Example 15.5
For example, if $\|x\| = 1$, then $\langle x, y \rangle = \|y\| \cos(\theta)$ (Figure 15.7).

The following characteristics are revealed by the inner product:
• $\langle x, y \rangle$ measures the length of the projection of $y$ onto $x$.
• $\langle x, y \rangle$ is maximum (for given $\|x\|$, $\|y\|$) when $x$ and $y$ are in the same direction ($\theta = 0 \Rightarrow \cos(\theta) = 1$).
• $\langle x, y \rangle$ is zero when $\cos(\theta) = 0 \Rightarrow \theta = 90^\circ$, i.e. $x$ and $y$ are orthogonal.

15.3.1.2 Inner Product Rules

In general, an inner product on a complex vector space is just a function (taking two vectors and returning a complex number) that satisfies certain rules:
• Conjugate Symmetry: $\langle x, y \rangle = \langle y, x \rangle^*$
• Scaling: $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$
• Additivity: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$
• "Positivity": $\langle x, x \rangle > 0$, $x \neq 0$

Definition 15.3: orthogonal
We say that $x$ and $y$ are orthogonal if: $\langle x, y \rangle = 0$
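A quick numerical sketch (not from the original module) of the inner product and the orthogonality test:

x = [1; 1]; y = [1; -1];
ip = y' * x                                % <x,y> = 0, so x and y are orthogonal
costheta = (y' * x) / (norm(x) * norm(y))  % cos(theta), from (15.2)
% For complex vectors, y' is the conjugate transpose, matching the
% complex inner product rules above.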
15.4 Hilbert Spaces

15.4.1 Hilbert Spaces

A vector space $S$ with a valid inner product (Section 15.3) defined on it is called an inner product space, which is also a normed linear space. A Hilbert space is an inner product space that is complete with respect to the norm defined using the inner product. Hilbert spaces are named after David Hilbert, who developed this idea through his studies of integral equations. We define our valid norm using the inner product as:

$$\|x\| = \sqrt{\langle x, x \rangle} \qquad (15.3)$$

Hilbert spaces are useful in studying and generalizing the concepts of Fourier expansion and Fourier transforms, and are very important to the study of quantum mechanics. Hilbert spaces are studied under the functional analysis branch of mathematics.

[6] This content is available online at <http://cnx.org/content/m10840/2.5/>.
[7] http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Hilbert.html

15.4.1.1 Examples of Hilbert Spaces

Below we will list a few examples of Hilbert spaces. You can verify that these are valid inner products at home.

• For $\mathbb{C}^n$:
$$\langle x, y \rangle = \begin{pmatrix} y_0^* & y_1^* & \ldots & y_{n-1}^* \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix} = \sum_{i=0}^{n-1} x_i y_i^*$$

• Space of finite energy complex functions, $L^2(\mathbb{R})$:
$$\langle f, g \rangle = \int_{-\infty}^{\infty} f(t) g(t)^*\,dt$$

• Space of square-summable sequences, $\ell^2(\mathbb{Z})$:
$$\langle x, y \rangle = \sum_{i=-\infty}^{\infty} x[i] y[i]^*$$

[8] "Hilbert Spaces" <http://cnx.org/content/m10434/latest/>

15.5 Cauchy-Schwarz Inequality

15.5.1 Introduction

Recall in $\mathbb{R}^2$, $\langle x, y \rangle = \|x\| \|y\| \cos(\theta)$, so

$$|\langle x, y \rangle| \leq \|x\| \|y\| \qquad (15.4)$$

The same relation holds for inner product spaces (Section 15.3) in general...

[9] This content is available online at <http://cnx.org/content/m10757/2.6/>.

15.5.1.1 Cauchy-Schwarz Inequality

Definition 15.4: Cauchy-Schwarz Inequality
For $x$, $y$ in an inner product space,
$$|\langle x, y \rangle| \leq \|x\| \|y\|$$
with equality holding if and only if $x$ and $y$ are linearly dependent (Section 5.1.1: Linear Independence), i.e. $x = \alpha y$ for some scalar $\alpha$.

15.5.2 Matched Filter Detector

Also referred to as Cauchy-Schwarz's "Killer App."

15.5.2.1 Concept behind Matched Filter

If we are given two vectors, $f$ and $g$, then the Cauchy-Schwarz Inequality (CSI) is maximized when $f = \alpha g$. This tells us:
• $f$ is in the same "direction" as $g$.
• if $f$ and $g$ are functions, $f = \alpha g$ means $f$ and $g$ have the same shape.

For example, say we are in a situation where we have a set of signals, defined as $\{g_1(t), g_2(t), \ldots, g_k(t)\}$, and we want to be able to tell which, if any, of these signals resemble another given signal $f(t)$.

Strategy: In order to find the signal(s) that resemble $f(t)$ we will take the inner products. If $g_i(t)$ resembles $f(t)$, then the absolute value of the inner product, $|\langle f(t), g_i(t) \rangle|$, will be large.

This idea of being able to measure and rank the "likeness" of two signals leads us to the Matched Filter Detector.

15.5.2.2 Comparing Signals

The simplest use of the Matched Filter would be to take a set of "candidate" signals, say our set of $\{g_1(t), g_2(t), \ldots, g_k(t)\}$, and try to match it to a "template" signal, $f(t)$. For example, say we are given the template (Figure 15.8: Template Signal - Our signal we wish to find a match of) and candidate signals (Figure 15.9: Candidate Signals).

Clearly by looking at these we can see which signal will provide the better match to our template signal. Now if our only question was which function was a closer match to $f(t)$, then we can easily come up with the answer based on inspection - $g_2(t)$. However, this will not always be the case. Also, we may want to develop a method, or algorithm, that could automate these comparisons. Or perhaps we wish to have a quantitative value expressing just how similar the signals are. To address these issues, we will lay out a more formal approach to comparing the signals, which will, as mentioned above, be based on the inner product.

In order to see which of our candidate signals, $g_1(t)$ or $g_2(t)$, best resembles $f(t)$ we need to perform the following steps:
• Normalize the $g_i(t)$
• Take the inner product with $f(t)$
• Find the biggest!

Or, putting it mathematically:

$$\text{Best candidate} = \underset{i}{\operatorname{argmax}} \frac{|\langle f, g_i \rangle|}{\|g_i\|} \qquad (15.5)$$

15.5.2.3 Finding a Pattern

Extending these thoughts of using the Matched Filter to find similarities among signals, we can use the same idea to search for a pattern in a long signal. The idea is simply to repeatedly perform the same calculation as we did previously; however, now instead of calculating on different signals we will simply perform the inner product with different shifted versions of our "pattern" signal. For example, say we have the following two signals - a pattern signal (Figure 15.10: Pattern Signal - The pattern we are looking for in our long signal, having a length $T$) and a long signal (Figure 15.11: Long Signal - the long signal that contains a piece that resembles our pattern signal).

Here we will look at two shifts of our pattern signal, shifting the signal by $s_1$ and $s_2$. These two possibilities yield the following calculations and results:

• Shift of $s_1$:
$$\frac{\int_{s_1}^{s_1+T} g(t) f(t - s_1)\,dt}{\int_{s_1}^{s_1+T} (|g(t)|)^2\,dt} = \text{"large"} \qquad (15.6)$$

• Shift of $s_2$:
$$\frac{\int_{s_2}^{s_2+T} g(t) f(t - s_2)\,dt}{\int_{s_2}^{s_2+T} (|g(t)|)^2\,dt} = \text{"small"} \qquad (15.7)$$

Therefore, we can define a generalized equation for our matched filter:

$$m(s) = \text{matched filter} \qquad (15.8)$$

$$m(s) = \frac{\int_s^{s+T} g(t) f(t - s)\,dt}{\|g(t)\|_{L^2([s, s+T])}} \qquad (15.9)$$

where the numerator in (15.9) is the convolution $g(t) * f(-t)$. Now, in order to decide whether or not the result from our matched filter detector is high enough to indicate an acceptable match between the two signals, we define some threshold. If $m(s_0) \geq \text{threshold}$, then we have a match at location $s_0$.
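For sampled signals the matched filter is a sliding inner product. Below is a hedged sketch (not part of the original module); the pattern g and the long signal f are hypothetical stand-ins for Figures 15.10 and 15.11.

g = [0 1 2 1 0];                        % hypothetical pattern of length T
f = [randn(1,20) g randn(1,20)];        % hypothetical long signal containing it
T = length(g);
m = zeros(1, length(f) - T + 1);
for s = 1:length(m)
    m(s) = g * f(s:s+T-1)' / (g * g');  % normalized inner product, as in (15.6)
end
[mmax, s0] = max(abs(m))                % best match location; compare mmax to a threshold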
15.5.2.4 Practical Examples

15.5.2.4.1 Image Detection

In 2-D, this concept is used to match images together, such as verifying fingerprints for security or matching photos of someone. For example, this idea could be used for the ever-popular "Where's Waldo?" books! If we are given the template (Figure 15.12(a)) and a piece of a "Where's Waldo?" book (Figure 15.12(b)), then we could easily develop a program to find the closest resemblance to the image of Waldo's head in the larger picture. We would simply implement our same matched filter algorithm: take the inner products at each shift and see how large our resulting answers are. This idea was implemented on this same picture for a Signals and Systems Project at Rice University (click the link to learn more).

Figure 15.12: Example of "Where's Waldo?" picture. Our Matched Filter Detector can be implemented to find a possible match for Waldo.

[10] http://www.owlnet.rice.edu/~elec301/Projects99/waldo/process.html

15.5.2.4.2 Communications Systems

Matched filter detectors are also commonly used in Communications Systems. In fact, they are the optimal detectors in Gaussian noise. Signals in the real world are often distorted by the environment around them, so there is a constant struggle to develop ways to be able to receive a distorted signal and then filter it in some way to determine what the original signal was. Matched filters provide one way to compare a received signal with two possible original ("template") signals and determine which one is the closest match to the received signal.

[11] "Structure of Communication Systems" <http://cnx.org/content/m0002/latest/>

For example, below we have a simplified example of Frequency Shift Keying (FSK), where we have the following coding for '1' and '0' (Figure 15.13: Frequency Shift Keying for '1' and '0').

[12] "Frequency Shift Keying" <http://cnx.org/content/m0545/latest/>

Based on the above coding, we can create digital signals based on 0's and 1's by putting together the above two "codes" in an infinite number of ways. For this example we will transmit a basic 3-bit number, 101, which is displayed in Figure 15.14 (The bit stream "101" coded with the above FSK).

Now, the signal picture above represents our original signal that will be transmitted over some communication system, which will inevitably pass through the "communications channel," the part of the system that will distort and alter our signal. As long as the noise is not too great, our matched filter should keep us from having to worry about these changes to our transmitted signal. Once this signal has been received, we will pass the noisy signal through a simple system, similar to the simplified version shown in Figure 15.15 (Block diagram of matched filter detector).

Figure 15.15 basically shows that our noisy signal will be passed in (we will assume that it passes in one "bit" at a time) and this signal will be split and passed to two different matched filter detectors. Each one will compare the noisy, received signal to one of the two codes we defined for '1' and '0.' Then this value will be passed on, and whichever value is higher (i.e. whichever FSK code signal the noisy signal most resembles) will be the value that the receiver takes. For example, the first bit that will be sent through will be a '1', so the upper level of the block diagram will have a higher value, thus denoting that a '1' was sent by the signal, even though the signal may appear very noisy and distorted.
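A hedged sketch (not from the original module) of the decision rule in Figure 15.15; the two code waveforms and their frequencies are hypothetical stand-ins for the FSK codes in Figure 15.13.

n = 0:99;
code1 = cos(2*pi*0.10*n);            % hypothetical waveform coding a '1'
code0 = cos(2*pi*0.25*n);            % hypothetical waveform coding a '0'
r = code1 + 0.5*randn(1,100);        % received noisy bit (a '1' was sent)
if abs(r * code1') > abs(r * code0') % compare the two matched filter outputs
    bit = 1                          % receiver decides '1'
else
    bit = 0
end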
15.5.3 Proof of CSI

Here we will look at the proof of our Cauchy-Schwarz Inequality (CSI) for a real vector space.

Theorem 15.1: CSI for Real Vector Space
For $f \in$ Hilbert Space $S$ and $g \in$ Hilbert Space $S$, show:
$$|\langle f, g \rangle| \leq \|f\| \|g\| \qquad (15.10)$$
with equality if and only if $g = \alpha f$.

Proof:
• If $g = \alpha f$, show $|\langle f, g \rangle| = \|f\| \|g\|$:
$$|\langle f, g \rangle| = |\langle f, \alpha f \rangle| = |\alpha| |\langle f, f \rangle| = |\alpha| (\|f\|)^2 = \|f\| (|\alpha| \|f\|) = \|f\| \|g\|$$
This verifies our above statement of the CSI!

• If $g \neq \alpha f$, show $|\langle f, g \rangle| < \|f\| \|g\|$, where we have $\beta f + g \neq 0$ for all $\beta \in \mathbb{R}$:
$$0 < (\|\beta f + g\|)^2 = \langle \beta f + g, \beta f + g \rangle = \beta^2 \langle f, f \rangle + 2\beta \langle f, g \rangle + \langle g, g \rangle = \beta^2 (\|f\|)^2 + 2\beta \langle f, g \rangle + (\|g\|)^2$$
And we get a quadratic in $\beta$. Visually, this quadratic polynomial in $\beta$ is $> 0$ for all $\beta$; equivalently, the polynomial has no real roots and its discriminant is less than 0. A quadratic $a\beta^2 + b\beta + c$ has discriminant $b^2 - 4ac$, where here we have:
$$a = (\|f\|)^2, \quad b = 2\langle f, g \rangle, \quad c = (\|g\|)^2$$
Therefore, we can plug these values into the discriminant to get:
$$4(|\langle f, g \rangle|)^2 - 4(\|f\|)^2 (\|g\|)^2 < 0 \qquad (15.11)$$
$$|\langle f, g \rangle| < \|f\| \|g\| \qquad (15.12)$$
And finally we have proven the Cauchy-Schwarz Inequality formula for real vector spaces.

Question: What changes do we have to make to the proof for a complex vector space? (Try to figure this out at home.)

15.6 Common Hilbert Spaces

15.6.1 Common Hilbert Spaces

Below we will look at the four most common Hilbert spaces (Section 15.3) that you will have to deal with when discussing and manipulating signals and systems.

[13] This content is available online at <http://cnx.org/content/m10759/2.6/>.

15.6.1.1

$\mathbb{R}^n$ (real scalars) and $\mathbb{C}^n$ (complex scalars), also called $\ell^2([0, n-1])$: $x = (x_0, x_1, \ldots, x_{n-1})^T$ is a list of numbers (finite sequence). The inner products (Section 15.3) for our two spaces are as follows:

• Inner product $\mathbb{R}^n$:
$$\langle x, y \rangle = y^T x = \sum_{i=0}^{n-1} x_i y_i \qquad (15.13)$$

• Inner product $\mathbb{C}^n$:
$$\langle x, y \rangle = \sum_{i=0}^{n-1} x_i y_i^* \qquad (15.14)$$

Model for: discrete time signals on the interval $[0, n-1]$ or periodic (with period $n$) discrete time signals (Figure 15.16).

15.6.1.2

$f \in L^2([a, b])$ is a finite energy function on $[a, b]$. Inner product:
$$\langle f, g \rangle = \int_a^b f(t) g(t)^*\,dt \qquad (15.15)$$
Model for: continuous time signals on the interval $[a, b]$ or periodic (with period $T = b - a$) continuous time signals.

15.6.1.3

$x \in \ell^2(\mathbb{Z})$ is an infinite sequence of numbers that's square-summable. Inner product:
$$\langle x, y \rangle = \sum_{i=-\infty}^{\infty} x[i] y[i]^* \qquad (15.16)$$
Model for: discrete time, non-periodic signals.

15.6.1.4

$f \in L^2(\mathbb{R})$ is a finite energy function on all of $\mathbb{R}$. Inner product:
$$\langle f, g \rangle = \int_{-\infty}^{\infty} f(t) g(t)^*\,dt \qquad (15.17)$$
Model for: continuous time, non-periodic signals.

15.6.2 Associated Fourier Analysis

Each of these 4 Hilbert spaces has a type of Fourier analysis associated with it:
• $L^2([a, b])$ → Fourier series
The concept of normalized 1 bi You can always normalize a basis: just multiply each basis vector by a constant, such as 1 1 {b0 , b1 } = , 1 −1 Normalized with 2 norm: 1 1 b0 = √ 2 1 ∼ 1 1 b1 = √ 2 −1 ∼ Normalized with 1 norm: 1 1 2 1 ∼ 1 1 b1 = 2 −1 ∼ b0 = 15.7.2 Orthogonal Basis Denition 15.6: Orthogonal Basis a basis {bi } in which the elements are mutually orthogonal < bi , bj >= 0 , i = j note: The concept of orthogonal basis applies only to Hilbert Spaces. 14 This content is available online at <http://cnx.org/content/m10772/2.6/>. 281 Example 15.7 Standard basis for R2 , also referred to as ([0, 1]): 1 b0 = 0 b1 = 0 1 2 1 < b0 , b1 >= i=0 (b0 [i] b1 [i]) = 1 × 0 + 0 × 1 = 0 Example 15.8 Now we have the following basis and relationship: 1 1 , = {h0 , h1 } 1 −1 < h0 , h1 >= 1 × 1 + 1 × (−1) = 0 15.7.3 Orthonormal Basis Pulling the previous two sections (denitions) together, we arrive at the most important and useful basis type: Denition 15.7: Orthonormal Basis a basis that is both normalized and orthogonal bi = 1 , i ∈ Z < bi , bj > , i = j Notation: We can shorten these two statements into one: < bi , bj >= δij where 1 if i = j δij = 0 if i = j is referred to as the Kronecker delta function (Section 1.5) and is also often written as Where δij δ [i − j ]. 282 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS Example 15.9: Orthonormal Basis Example #1 1 0 , {b0 , b2 } = 0 1 Example 15.10: Orthonormal Basis Example #2 1 1 {b0 , b2 } = , 1 −1 Example 15.11: Orthonormal Basis Example #3 1 1 1 1 {b0 , b2 } = √ , √ 2 2 1 −1 15.7.3.1 Beauty of Orthonormal Bases Orthonormal bases are very easy to deal with! If {bi } is an orthonormal basis, we can write for any x (15.20) x= i It is easy to nd the (αi bi ) αi : < x, bi > = = < k (αk bk ) , bi > k (αk < bk , bi >) (15.21) where in the above equation we can use our knowledge of the delta function to reduce this equation: < bk , bi >= δik 1 if i = k = 0 if i = k (15.22) < x, bi >= αi Therefore, we can conclude the following important equation for x: (15.23) x= i The (< x, bi > bi ) bi 's) αi 's are easy to compute (no interaction between the Example 15.12 Given the following basis: 1 1 1 1 {b0 , b1 } = √ , √ 2 2 1 −1 represent x= 3 2 283 Example 15.13: Slightly Modied Fourier Series We are given the basis 1 √ ejω0 nt |∞ −∞ n= T on L2 ([0, T ]) where T= 2π ω0 . ∞ f (t) = n=−∞ 1 < f, ejω0 nt > ejω0 nt √ T L2 as Where we can calculate the above inner product in 1 < f, ejω0 nt >= √ T T 0 1 ∗ f (t) ejω0 nt dt = √ T T f (t) e−(jω0 nt) dt 0 15.7.3.2 Orthonormal Basis Expansions in a Hilbert Space Let {bi } be an orthonormal basis for a Hilbert space H. Then, for any x∈H we can write (15.24) x= i where (αi bi ) αi =< x, bi >. x in term of the • "Analysis": decomposing bi αi =< x, bi > (15.25) • "Synthesis": building x up out of a weighted combination of the bi (15.26) x= i [Media Object] 15 (αi bi ) 15.8 Orthonormal Basis Expansions 15.8.1 Main Idea 16 When working with signals many times it is helpful to break up a signal into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors (Section 5.2) and there use in decomposing a signal into one of its possible basis. By doing this we are able to simplify our calculations of signals and systems through eigenfunctions of LTI systems (Section 5.5). Now we would like to look at an alternative way to represent signals, through the use of basis. orthonormal We can think of orthonormal basis as a set of building blocks we use to construct functions. 
We will build up the signal/vector as a weighted sum of basis elements. Example 15.14 ejω0 nt for all −∞ < n < T In our Fourier series (Section 6.2) equation, f (t) = representation of 1 The complex sinusoids √ ∞ form an orthonormal basis for L2 ([0, T ]). ∞ jω0 nt , the {cn } are just another n=−∞ cn e f (t). 15 This media object is a LabVIEW VI. Please view or download it at <ONB.llb> 16 This content is available online at <http://cnx.org/content/m10760/2.5/>. 284 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS note: For signals/vectors in a Hilbert Space, the expansion coecients are easy to nd. 15.8.2 Alternate Representation Recall our denition of a 1. The 2. The basis: A set of vectors {bi } in a vector space S is a basis if bi bi are linearly independent. span (Section 5.1.2: Span) S. That is, we can nd {αi }, where αi ∈ C (scalars) such that (15.27) x= i where (αi bi ) , x ∈ S and x is a vector in S, α is a scalar in C, b is a vector in S. Condition Condition 2 in the above denition says we can 1 ensures that the decomposition is note: decompose any vector in terms of the {bi }. unique (think about this at home). x. The {αi } provide an alternate representation of Example 15.15 Let us look at simple example in R2 , where we have the following vector: x= T T 1 2 Standard Basis: {e0 , e1 } = (1, 0) , (0, 1) x = e0 + 2 e1 Alternate Basis: {h0 , h1 } = (1, 1) , (1, −1) T T x= In general, given a basis −1 3 h0 + h1 2 2 how do we nd the {b0 , b1 } and a vector x ∈ R2 , α0 and α1 such that (15.28) x = α0 b0 + α1 b1 15.8.3 Finding the Alphas Now let us address the question posed above about nding (15.28) so that we can stack our αi 's in general for R2 . We start by rewriting bi 's as columns in a 2×2 matrix. x = α0 . . . b0 . . . + α1 b1 (15.29) x = b0 . . . α0 b1 α1 . . . (15.30) 285 Example 15.16 Here is a simple example, which shows a little more detail about the above equations. x [0] x [1] = α0 = b0 [0] b0 [1] + α1 b1 [0] b1 [1] (15.31) α0 b0 [0] + α1 b1 [0] α0 b0 [1] + α1 b1 [1] b0 [0] b1 [0] b0 [1] b1 [1] x [0] x [1] = α0 α1 (15.32) 15.8.3.1 Simplifying our Equation To make notation simpler, we dene the following two items from the above equations: • Basis Matrix: . . . . . . B = b0 . . . b1 . . . • Coecient Vector: α= α0 α1 This gives us the following, concise equation: x = Bα which is equivalent to (15.33) x= Example 15.17 Given a standard basis, 1 i=0 (αi bi ). 1 0 , , 0 1 then we have the following basis matrix: B= 0 1 1 0 To get the αi 's, we solve for the coecient vector in (15.33) α = B −1 x Where (15.34) B −1 is the inverse matrix 17 of B. 17 "Matrix Inversion" <http://cnx.org/content/m2113/latest/> 286 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS 15.8.3.2 Examples Example 15.18 Let us look at the standard basis rst and try to calculate α from it. B= Where 1 0 0 1 =I α let us nd the inverse of I is the identity matrix. In order to solve for B rst (which is obviously very trivial in this case): B −1 = Therefore we get, 1 0 0 1 α = B −1 x = x Example 15.19 Let us look at a ever-so-slightly more complicated basis of our basis matrix and inverse basis matrix becomes: 1 1 , = {h0 , h1 } 1 −1 Then B= B −1 = and for this example it is given that 1 1 1 2 1 2 1 −1 1 2 −1 2 x= Now we solve for 3 2 α α = B −1 x = 1 2 1 2 1 2 −1 2 3 2 = 2.5 0.5 and we get x = 2.5h0 + 0.5h1 Exercise 15.1 Now we are given the following basis matrix and (Solution on p. 301.) 
1 3 {b0 , b1 } = , 2 0 x= 3 2 x in terms of x: For this problem, make a sketch of the bases and then represent b0 and b1 . 287 note: A change of basis simply looks at the standard basis to our new basis, x from a "dierent perspective." B −1 transforms x from {b0 , b1 }. Notice that this is a totally mechanical procedure. 15.8.4 Extending the Dimension and Space We can also extend all these ideas past just urally to higher (> 2) dimensions. such that Rn and Cn . This procedure extends natn Given a basis {b0 , b1 , . . . , bn−1 } for R , we want to nd {α0 , α1 , . . . , αn−1 } and look at them in R2 x = α0 b0 + α1 b1 + · · · + αn−1 bn−1 Again, we will set up a basis matrix (15.35) B= b0 b1 b2 ... bn−1 where the columns equal the basis vectors and it will always be an n×n matrix (although the above matrix does not appear to be square since we left terms in vector notation). We can then proceed to rewrite (15.33) x= b0 b1 ... bn−1 α0 . . . = Bα αn−1 and α = B −1 x 15.9 Function Space Let 18 We can also nd basis vectors (Section 15.8) for vector spaces (Section 15.1) other than Rn . P2 is a v.s. Pn be the vector space of n-th order polynomials on (-1, 1) with real coecients (verify at home). Example 15.20 P2 = {all quadratic polynomials}. Let {b0 (t) , b1 (t) , b2 (t)} span P2 , i.e. you can write any b0 (t) = 1, b1 (t) = t, b2 (t) = t2 . f (t) ∈ P2 as f (t) = α0 b0 (t) + α1 b1 (t) + α2 b2 (t) for some Note: αi ∈ R. is 3 dimensional. P2 f (t) = t2 − 3t − 4 Alternate basis {b0 (t) , b1 (t) , b2 (t)} = write 1, t, 1 3t2 − 1 2 f (t) in terms of this new basis 3 d0 (t) = b0 (t), d1 (t) = b1 (t), d2 (t) = 2 b2 (t) − 1 b0 (t). 2 f (t) = t2 − 3t − 4 = 4b0 (t) − 3b1 (t) + b2 (t) 18 This content is available online at <http://cnx.org/content/m10770/2.5/>. 288 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS f (t) = β0 d0 (t) + β1 d1 (t) + β2 d2 (t) = β0 b0 (t) + β1 b1 (t) + β2 f (t) = so 3 1 b2 (t) − b0 (t) 2 2 β0 − 1 2 3 b0 (t) + β1 b1 (t) + β2 b2 (t) 2 β0 − 1 =4 2 β 1 = −3 3 β2 = 1 2 then we get 2 f (t) = 4.5d0 (t) − 3d1 (t) + d2 (t) 3 Example 15.21 ejω0 nt |∞ −∞ n= is a basis for L2 ([0, T ]), T = 2π ω0 , f (t) = n Cn ejω0 nt . "change of basis" formula We calculate the expansion coecients with Cn = 1 T T f (t) e−(jω0 nt) dt 0 (15.36) note: There are an innite number of elements in the basis set, that means L2 ([0, T ]) is innite dimensional (scary!). Innite-dimensional spaces are hard to visualize. We can get a handle on the intuition by recognizing they share many of the same mathematical properties with nite dimensional spaces. Many concepts apply to both (like "basis expansion"). Some don't (change of basis isn't a nice matrix formula). 15.10 Haar Wavelet Basis 15.10.1 Introduction 19 Fourier series (Section 6.2) is a useful orthonormal representation (Section 15.8) on phenomena (Section 6.11)). properties. L2 ([0, T ]) especiallly for inputs into LTI systems. However, it is ill suited for some applications, i.e. image processing (recall Gibb's Wavelets, discovered in the last 15 years, are another kind of basis for L2 ([0, T ]) and have many nice content is available online at <http://cnx.org/content/m10764/2.7/>. 19 This 289 15.10.2 Basis Comparisons Fourier series - cn give frequency information. Basis functions last the entire interval. Figure 15.18: Fourier basis functions Wavelets - basis functions give frequency info but are local in time. 290 CHAPTER 15. 
15.10 Haar Wavelet Basis

15.10.1 Introduction

Fourier series (Section 6.2) is a useful orthonormal representation (Section 15.8) on $L^2([0, T])$, especially for inputs into LTI systems. However, it is ill suited for some applications, i.e. image processing (recall Gibb's phenomena (Section 6.11)).

Wavelets, discovered in the last 15 years, are another kind of basis for $L^2([0, T])$ and have many nice properties.

[19] This content is available online at <http://cnx.org/content/m10764/2.7/>.

15.10.2 Basis Comparisons

Fourier series - the $c_n$ give frequency information. Basis functions last the entire interval. (Figure 15.18: Fourier basis functions)

Wavelets - basis functions give frequency info but are local in time. (Figure 15.19: Wavelet basis functions)

In the Fourier basis, the basis functions are harmonic multiples of $e^{j\omega_0 t}$ (Figure 15.20: basis = $\left\{ \frac{1}{\sqrt{T}} e^{j\omega_0 n t} \right\}$).

In the Haar wavelet basis, the basis functions are scaled and translated versions of a "mother wavelet" $\psi(t)$ (Figure 15.21).

[20] "The Haar System as an Example of DWT" <http://cnx.org/content/m10437/latest/>

Basis functions $\{\psi_{j,k}(t)\}$ are indexed by a scale $j$ and a shift $k$. Let $\phi(t) = 1$ for $0 \leq t < T$. Then the basis is

$$\left\{ \phi(t),\ 2^{\frac{j}{2}} \psi\left(2^j t - k\right) \,\middle|\, j \in \mathbb{Z},\ k = 0, 1, 2, \ldots, 2^j - 1 \right\}$$

where

$$\psi(t) = \begin{cases} 1 & \text{if } 0 \leq t < \frac{T}{2} \\ -1 & \text{if } \frac{T}{2} \leq t < T \end{cases} \qquad (15.37)$$

(Figures 15.22 and 15.23 show $\phi(t)$ and $\psi(t)$.)

Let $\psi_{j,k}(t) = 2^{\frac{j}{2}} \psi(2^j t - k)$ (Figure 15.24). Larger $j$ → "skinnier" basis function, $j = \{0, 1, 2, \ldots\}$; there are $2^j$ shifts at each scale: $k = 0, 1, \ldots, 2^j - 1$.

Check: each $\psi_{j,k}(t)$ has unit energy (Figure 15.25):

$$\int \psi_{j,k}^2(t)\,dt = 1 \quad \Rightarrow \quad \|\psi_{j,k}(t)\|_2 = 1 \qquad (15.38)$$

Any two basis functions are orthogonal (Figure 15.26: integral of product = 0; (a) same scale, (b) different scale). Also, $\{\psi_{j,k}, \phi\}$ span $L^2([0, T])$.

15.10.3 Haar Wavelet Transform

Using what we know about Hilbert spaces (Section 15.3): for any $f(t) \in L^2([0, T])$, we can write

Synthesis:
$$f(t) = \sum_j \sum_k w_{j,k} \psi_{j,k}(t) + c_0 \phi(t) \qquad (15.39)$$

Analysis:
$$w_{j,k} = \int_0^T f(t) \psi_{j,k}(t)\,dt \qquad (15.40)$$
$$c_0 = \int_0^T f(t) \phi(t)\,dt \qquad (15.41)$$

note: The $w_{j,k}$ are real.

The Haar transform is super useful, especially in image compression.

Example 15.22
A demonstration (originally an interactive LabVIEW applet) lets you create a signal by combining Haar basis functions, illustrating the synthesis equation of the Haar Wavelet Transform.
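On a sampled grid the Haar functions and their properties are easy to check; a hedged sketch (not from the original module), taking T = 1 and approximating integrals by sums:

Npts = 64; t = (0:Npts-1)'/Npts;                     % grid on [0, 1), so T = 1
psi = @(s) (s >= 0 & s < 1/2) - (s >= 1/2 & s < 1);  % mother wavelet, as in (15.37)
psijk = @(jj, k) 2^(jj/2) * psi(2^jj * t - k);       % psi_{j,k}(t)
sum(psijk(1,0).^2) / Npts                            % ~1: unit energy, as in (15.38)
sum(psijk(1,0) .* psijk(1,1)) / Npts                 % ~0: orthogonal (disjoint support)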
Let Plancharel Theorem The inner product of two vectors/signals is the same as the 24 2 inner product of their expansion {bi } be an orthonormal basis for a Hilbert Space H. x ∈ H, y ∈ H x= y= then (αi bi ) (βi bi ) (αi βi ∗ ) to < x, y >H = Example Applying the Fourier Series, we can go from f (t) ∗ {cn } ∞ and g (t) to {dn } T 0 f (t) g (t) dt = n=−∞ (cn dn ∗ ) inner product in time-domain = inner product of Fourier coecients. Proof: x= y= < x, y >H =< (αi bi ) , (βj bj ) >= αi < bi , (αi bi ) (βj bj ) (βj bj ) >= αi (βj ∗ ) < bi , bj >= (αi βi ∗ ) by using inner product rules (p. 269) note: < bi , bj >= 0 when i=j and < bi , bj >= 1 2 . when i=j 2 . If Hilbert space H has a ONB, then inner products are equivalent to inner products in All H with ONB are somehow equivalent to Point of interest: square-summable sequences are important 24 This content is available online at <http://cnx.org/content/m10769/2.6/>. 298 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS 15.12.2 Parseval's Theorem Theorem 15.3: Let Parseval's Theorem ONB Energy of a signal = sum of squares of it's expansion coecients x ∈ H , {bi } x= Then (αi bi ) = (|αi |) 2 (x 2 H) Proof: Directly from Plancharel (x H) 2 = < x, x >H = (αi αi ∗ ) = (|αi |) 2 Example 15.23 1 Fourier Series √ T ejw0 nt 1 f (t) = √ T T 0 1 cn √ ejw0 nt T ∞ 2 (|f (t) |) dt = n=−∞ [Media Object] 25 (|cn |) 2 15.13 Approximation and Projections in Hilbert Space 15.13.1 Introduction Given a line 'l' and a point 'p' in the plane, what's the closest point 'm' to 'p' on 'l' ? 26 Figure 15.27: Figure of point 'p' and line 'l' mentioned above. Same problem: Let x and v be vectors in minimized? (what point in span{v} best approximates x?) R2 . Say v = 1. For what value of α is x − αv 2 25 This media object is a LabVIEW VI. Please view or download it at <Parsevals Theorem.llb> 26 This content is available online at <http://cnx.org/content/m10766/2.8/>. 299 Figure 15.28 The condition is that x− α v ^ and αv are orthogonal. 15.13.2 Calculating α How to calculate α? x− α v ) ^ is perpendicular to every vector in span{v}, so ^ ^ We know that ( < x− α v, βv >= 0 , ∀β ^ β ∗ < x, v > − α β ∗ < v, v >= 0 because < v, v >= 1, so < x, v > − α= 0 ⇒α=< x, v > Closest vector in span{v} = ^ ^ < x, v > v , where < x, v > v is the projection of x onto v. We can do the same thing in higher dimensions. Exercise 15.2 Let (Solution on p. 301.) best approximates x. i.e., Example 15.24 that V ⊂H be a subspace of a Hilbert space (Section 15.3) H. Let x∈H be given. Find the y∈V x−y is minimized. x ∈ R3 , 1 a 0 V = span 0 , 1 , x = b . 0 0 c 2 So, 1 0 a y= (< x, bi > bi ) = a 0 + b 1 = b i=1 0 0 0 Example 15.25 V = {space of periodic signals with frequency no greater than the signal in V that best approximates f ? 3w0 }. Given periodic f(t), what is 1 1. { √ T ejw0 kt , k = -3, -2, ..., 2, 3} is an ONB for V 300 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS 2. 3 1 jw0 kt > ejw0 kt is the closest signal in V to f(t) k=−3 < f (t) , e T using only 7 terms of its Fourier series (Section 6.2). g (t) = ⇒ reconstruct f(t) Example 15.26 Let V = {functions piecewise constant between the integers} 1. ONB for V. 1 if i − 1 ≤ t < i bi = 0 otherwise where {bi } is an ONB. Best piecewise constant approximation? ∞ g (t) = i=−∞ ∞ (< f, bi > bi ) i < f, bi >= −∞ f (t) bi (t) dt = i −1 f (t) dt Example 15.27 This demonstration explores approximation using a Fourier basis and a Haar Wavelet basis.See here 27 for instructions on how to use the demo. 
This media object is a LabVIEW VI. Please view or download it at <Approximation.llb> 27 "How to use the LabVIEW demos" <http://cnx.org/content/m11550/latest/> 301 Solutions to Exercises in Chapter 15 Solution to Exercise 15.1 (p. 286) In order to represent x in terms of b0 and b1 we will follow the same steps we used in the above example. B= B −1 = 1 3 0 1 3 2 0 1 2 −1 6 α = B −1 x = And now we can write 1 2 3 x in terms of b0 and b1 . 2 x = b0 + b1 3 And we can easily substitute in our known values of Solution to Exercise 15.2 (p. 299) 2. Project b0 and b1 to verify our results. 1. Find an orthonormal basis (Section 15.7.3: Orthonormal Basis) {b1 , . . . , bk } for V x onto V using k y= i=1 then (< x, bi > bi ) ⊥ V( y is the closest point in V to x and (x-y) < x − y, v >= 0 , ∀v ∈ V 302 CHAPTER 15. APPENDIX: HILBERT SPACES AND ORTHOGONAL EXPANSIONS Chapter 16 Homework Sets 16.1 Homework #1 note: 1 Noon, Thursday, September 5, 2002 16.1.1 Assignment 1 Homework, tests, and solutions from previous oerings of this course are o limits, under the honor code. 16.1.1.1 Problem 1 Form a study group of 3-4 members. With your group, discuss and synthesize the major themes of this week of lectures. Turn in a one page summary of your discussion. You need turn in only one summary per group, but include the names of all group members. Please do not write up just a "table of contents." 16.1.1.2 Problem 2 Construct a WWW page (with your picture) and email Mike Wakin ([email protected]) your name (as you want it to appear on the class web page) and the URL. If you need assistance setting up your page or taking/scanning a picture (both are easy!), ask your classmates. 16.1.1.3 Problem 3: Learning Styles Follow this learning styles link 2 (also found on the Elec 301 web page 3 ) and learn about the basics of learning styles. Write a short summary of what you learned. Also, complete the "Index of learning styles" self-scoring test on the web and bring your results to class. 16.1.1.4 Problem 4 Make sure you know the material in Lathi, Chapter B, Sections 1-4, 6.1, 6.2, 7. Specically, be sure to review topics such as: • • • complex arithmetic (adding, multiplying, powers) nding (complex) roots of polynomials complex plane 4 and plotting roots 1 This content is available online at <http://cnx.org/content/m10826/2.9/>. 2 http://www2.ncsu.edu/unity/lockers/users/f/felder/public/Learning_Styles.html 3 http://www-dsp.rice.edu/courses/elec301/ 4 "The Complex Plane" <http://cnx.org/content/m10596/latest/> 303 304 CHAPTER 16. vectors (adding, inner products) HOMEWORK SETS • 16.1.1.5 Problem 5: Complex Number Applet Reacquaint yourself with complex numbers 5 by going to the course applets web page 6 and clicking on the Complex Numbers applet (may take a few seconds to load). (a) Change the default add function to exponential (exp). Click on the complex plane to get a blue arrow, which is your complex number is equal to z. Click again anywhere on the complex plane to get a yellow arrow, which ez . Now drag the tip of the blue arrow along the unit circle on with |z | = 1 (smaller circle). For which values of your ndings. z on the unit circle does ez also lie on the unit circle? Why? (b) Experiment with the functions absolute (abs), real part (re), and imaginary part (im) and report 16.1.1.6 Problem 6: Complex Arithmetic Reduce the following to the Cartesian form, a + jb. −1−j √ (a) 2 1+2j (b) 3+4j √ 1+ 3j (c) √ √3−j (d) (e) Do 20 not use your calculator! 
j jj 16.1.1.7 Problem 7: Roots of Polynomials Find the roots of each of the following polynomials (show your work). Use MATLAB to check your answer with the (a) roots command and to plot the roots in the complex plane. Mark the root locations with an 'o'. Put all of the roots on the same plot and identify the corresponding polynomial (a, b, etc...). z 2 − 4z 2 (b) z − 4z + 4 2 (c) z − 4z + 8 2 (d) z + 8 2 (e) z + 4z + 8 2 (f ) 2z + 4z + 8 16.1.1.8 Problem 8: Nth Roots of Unity is called an Nth Root of Unity. e j 2π N (a) Why? (b) Let (c) Let z=e 7 . j 4π z=e 7 . j 2π Draw Draw z, z2, . . . , z7 z, z2, . . . , z7 in the complex plane. in the complex plane. 16.1.1.9 Problem 9: Writing Vectors in Terms of Other Vectors 2 2 A pair of vectors u ∈ C and v ∈ C are called linearly independent if αu + βv = 0 It is a fact that we can write any vector in if and only if as a α=β=0 C2 weighted sum (or linear combination) of any two β are complex-valued. linearly independent vectors, where the weights α and 5 "Complex Numbers" <http://cnx.org/content/m0081/latest/> 6 http://www.dsp.rice.edu/courses/elec301/applets.shtml 305 (a) Write 3 + 4j 6 + 2j as a linear combination of 1 2 1 2 and −5 3 −5 3 . That is, nd α and β such that 3 + 4j 6 + 2j x1 = α +β a linear combination of (b) More generally, write the answer for a given x as as x2 α (x) and β (x). A x= 1 2 and −5 3 . We will denote (c) Write the answer to (a) in matrix form, i.e. nd a 2×2 matrix A such that x1 x2 = α (x) β (x) u and (d) Repeat (b) and (c) for a general set of linearly independent vectors v. 16.1.1.10 Problem 10: Fun with Fractals A Julia set J is obtained by characterizing points in the complex plane. Specically, let f (x) = x2 + µ with µ complex, and dene g0 (x) = x g1 (x) = f (g0 (x)) = f (x) g2 (x) = f (g1 (x)) = f (f (x)) . . . gn (x) = f (gn−1 (x)) Then for each x in the complex plane, we say x∈J if the sequence {|g0 (x) |, |g1 (x) |, |g2 (x) |, . . . } does not tend to innity. J. µ, Notice that if x ∈ J, then each element of the sequence {g0 (x) , g1 (x) , g2 (x) , . . . } also belongs to For most values of the boundary of a Julia set is a fractal curve - it contains "jagged" detail no matter how far you zoom in on it. The well-known Mandelbrot set contains all values of corresponding Julia set is connected. (a) Let (b) Let µ for which the µ = −1. Is x = 1 in J ? µ = 0. What conditions on x ensure that x belongs to J? and plot the results using the threshold (c) Create an approximate picture of a Julia set in MATLAB. The easiest way is to create a matrix of complex numbers, decide for each number whether it belongs to command. To determine whether a number belongs to iterations of we say that J, J, it is helpful to dene a limit g . For a given x, if the magnitude |gn (x) | remains below some x belongs to J . The code below will help you get started: imagesc N on the number of M for all 0 ≤ n ≤ N , 306 CHAPTER 16. HOMEWORK SETS N = 100; % Max # of iterations M = 2; % Magnitude threshold mu = -0.75; % Julia parameter realVals = [-1.6:0.01:1.6]; imagVals = [-1.2:0.01:1.2]; xVals = ones(length(imagVals),1) * realVals + ... j*imagVals'*ones(1,length(realVals)); Jmap = ones(size(xVals)); g = xVals; % Start with g0 % % % % Insert code here to fill in elements of Jmap. Leave a '1' in locations where x belongs to J, insert '0' in the locations otherwise. It is not necessary to store all 100 iterations of g! 
imagesc(realVals, imagVals, Jmap);
colormap gray;
xlabel('Re(x)');
ylabel('Imag(x)');

This creates the following picture for µ = -0.75, N = 100, and M = 2.

Figure 16.1: Example image where the x-axis is Re(x) and the y-axis is Im(x).

Using the same values for N, M, and x, create a picture of the Julia set for µ = -0.391 - 0.587j. Print out this picture and hand it in with your MATLAB code.

note: Try assigning different color values to Jmap. For example, let Jmap indicate the first iteration when the magnitude exceeds M. Tip: try imagesc(log(Jmap)) and colormap jet for a neat picture.

16.2 Homework #1 Solutions
(This content is available online at <http://cnx.org/content/m10830/2.4/>.)

16.2.1 Problem #1
No solutions provided.

16.2.2 Problem #2
No solutions provided.

16.2.3 Problem #3
No solutions provided.

16.2.4 Problem #4
No solutions provided.

16.2.5 Problem #5

16.2.5.1 Part (a)
e^z lies on the unit circle for z = ±j. When z = ±j,
e^z = e^(±j) = cos(±1) + j sin(±1)
|e^(±j)| = (cos^2(±1) + sin^2(±1))^(1/2) = 1    (16.1)
which gives us the unit circle!
Think of it this way: for z = σ + jθ, you want σ = 0 so that e^(σ+jθ) reduces as
e^(σ+jθ) = e^σ e^(jθ) = e^0 e^(jθ) = e^(jθ)
We know by Euler's formula (Section 1.6.2: Euler's Relation) that
e^(jθ) = cos(θ) + j sin(θ)
The magnitude of this is given by √(sin^2(θ) + cos^2(θ)), which is 1 (which implies that e^(jθ) is on the unit circle).
So, we want to pick a z = jθ that is on the unit circle (from the problem statement), and we have to choose θ = ±1 to get unit magnitude, i.e. z = ±j.

16.2.5.2 Part (b)
• | · | gives the magnitude of a complex number
• Re(·) gives the real part of a complex number
• Im(·) gives the imaginary part of a complex number

16.2.6 Problem #6

16.2.6.1 Part (a)
((-1 - j)/√2)^20 = ((√2 e^(j5π/4))/√2)^20 = (e^(j5π/4))^20 = e^(j25π) = e^(jπ) = -1

16.2.6.2 Part (b)
(1 + 2j)/(3 + 4j) = ((1 + 2j)/(3 + 4j)) ((3 - 4j)/(3 - 4j)) = (3 + 6j - 4j + 8)/(9 + 16) = (11 + 2j)/25 = 11/25 + j (2/25)

16.2.6.3 Part (c)
(1 + √3 j)/(√3 - j) = (2 e^(jπ/3))/(2 e^(-jπ/6)) = e^(jπ/2) = j

16.2.6.4 Part (d)
√j = (e^(jπ/2))^(1/2) = e^(jπ/4) = cos(π/4) + j sin(π/4) = √2/2 + j √2/2

16.2.6.5 Part (e)
j^j = (e^(jπ/2))^j = e^(j^2 π/2) = e^(-π/2)

16.2.7 Problem #7

16.2.7.1 Part (a)
z^2 - 4z = z(z - 4)
Roots: z = {0, 4}

16.2.7.2 Part (b)
z^2 - 4z + 4 = (z - 2)^2
Roots: z = {2, 2}

16.2.7.3 Part (c)
z^2 - 4z + 8
Roots: z = (4 ± √(16 - 32))/2 = 2 ± 2j

16.2.7.4 Part (d)
z^2 + 8
Roots: z = ±√(-32)/2 = ±2√2 j

16.2.7.5 Part (e)
z^2 + 4z + 8
Roots: z = (-4 ± √(16 - 32))/2 = -2 ± 2j

16.2.7.6 Part (f)
2z^2 + 4z + 8
Roots: z = (-4 ± √(16 - 64))/4 = -1 ± √3 j

16.2.7.7 Matlab Code and Plot

%%%%%%%%%%%%%%%
%%%%% PROBLEM 7
%%%%%%%%%%%%%%%

rootsA = roots([1 -4 0])
rootsB = roots([1 -4 4])
rootsC = roots([1 -4 8])
rootsD = roots([1 0 8])
rootsE = roots([1 4 8])
rootsF = roots([2 4 8])

zplane([rootsA; rootsB; rootsC; rootsD; rootsE; rootsF]);
gtext('a')
gtext('a')
gtext('b')
gtext('b')
gtext('c')
gtext('c')
gtext('d')
gtext('d')
gtext('e')
gtext('e')
gtext('f')
gtext('f')

Figure 16.2: Plot of all the roots.

16.2.8 Problem #8

16.2.8.1 Part (a)
Raise e^(j2π/N) to the Nth power:
(e^(j2π/N))^N = e^(j2π) = 1
note: Similarly, 1^(1/N) = (e^(j2π))^(1/N) = e^(j2π/N).

16.2.8.2 Part (b)
For z = e^(j2π/7),
z^k = (e^(j2π/7))^k = e^(j2πk/7)
We will have points on the unit circle with angles 2π(1/7), 2π(2/7), ..., 2π(7/7).
The code used to plot these in MATLAB can be found below, followed by the plot.

%%%%%%%%%%%%%%%
%%%%% PROBLEM 8
%%%%%%%%%%%%%%%

%%% Part (b)

figure(1); clf; hold on;
th = [0:0.01:2*pi];
unitCirc = exp(j*th);
plot(unitCirc,'--');
for k = 1:7
    z = exp(j*2*pi*k/7);
    plot(z,'o');
    text(1.2*real(z),1.2*imag(z),strcat('z^',num2str(k)));
end
xlabel('real part'); ylabel('imag part');
title('Powers of exp(j2\pi/7) on the unit circle');
axis([-1.5 1.5 -1.5 1.5]); axis square;

Figure 16.3: MATLAB plot of part (b).

16.2.8.3 Part (c)
For z = e^(j4π/7),
z^k = (e^(j4π/7))^k = e^(j2π(2k/7))
where we have
{z, z^2, ..., z^7} = {e^(j2π(2/7)), e^(j2π(4/7)), e^(j2π(6/7)), e^(j2π(1/7)), e^(j2π(3/7)), e^(j2π(5/7)), 1}
The code used to plot these in MATLAB can be found below, followed by the plot.

%%% Part (c)

figure(1); clf; hold on;
th = [0:0.01:2*pi];
unitCirc = exp(j*th);
plot(unitCirc,'--');
for k = 1:7
    z = exp(j*4*pi*k/7);
    plot(z,'o');
    text(1.2*real(z),1.2*imag(z),strcat('z^',num2str(k)));
end
xlabel('real part'); ylabel('imag part');
title('Powers of exp(j4\pi/7) on the unit circle');
axis([-1.5 1.5 -1.5 1.5]); axis square;

Figure 16.4: MATLAB plot of part (c).

16.2.9 Problem #9

16.2.9.1 Part (a)
To solve
[3 + 4j; 6 + 2j] = α [1; 2] + β [-5; 3]
for α and β we must solve the following system of equations:
α - 5β = 3 + 4j
2α + 3β = 6 + 2j
If we multiply the top equation by -2 we will get the following, which allows us to cancel out the alpha terms:
-2α + 10β = -6 - 8j
2α + 3β = 6 + 2j
And now we have
13β = -6j
β = -6j/13
And to solve for α we have the following equation:
α = 3 + 4j + 5β
  = 3 + 4j + 5(-6j/13)
  = 3 + (22/13)j    (16.2)

16.2.9.2 Part (b)
[x1; x2] = α [1; 2] + β [-5; 3]
x1 = α - 5β
x2 = 2α + 3β
Solving for α and β we get:
α(x) = (3 x1 + 5 x2)/13
β(x) = (-2 x1 + x2)/13

16.2.9.3 Part (c)
[α(x); β(x)] = [3/13  5/13; -2/13  1/13] [x1; x2]

16.2.9.4 Part (d)
Write u = [u1; u2] and v = [v1; v2]. Then solve
[x1; x2] = α [u1; u2] + β [v1; v2]
which corresponds to the system of equations
x1 = α u1 + β v1
x2 = α u2 + β v2
Solving for α and β we get
α(x) = (v2 x1 - v1 x2)/(u1 v2 - u2 v1)
β(x) = (u2 x1 - u1 x2)/(v1 u2 - u1 v2)
For the matrix A we get
A = 1/(u1 v2 - u2 v1) [v2  -v1; -u2  u1]

16.2.10 Problem #10

16.2.10.1 Part (a)
If µ = -1, then f(x) = x^2 - 1. Examine the sequence {g0(x), g1(x), ...} for x = 1:
g0(x) = 1
g1(x) = 1^2 - 1 = 0
g2(x) = 0^2 - 1 = -1
g3(x) = (-1)^2 - 1 = 0
g4(x) = 0^2 - 1 = -1
...
The magnitude sequence remains bounded, so x = 1 belongs to J.

16.2.10.2 Part (b)
If µ = 0, then f(x) = x^2. So we have
g0(x) = x
g1(x) = x^2
g2(x) = (x^2)^2 = x^4
...
gn(x) = x^(2^n)
Writing x = r e^(jθ), we have gn(x) = x^(2^n) = r^(2^n) e^(jθ 2^n), and so
|gn(x)| = r^(2^n)
The magnitude sequence blows up if and only if r > 1. Thus x belongs to J if and only if |x| ≤ 1. So J corresponds to the unit disk.

16.2.10.3 Part (c)

%%%%%%%%%%%%%%%%
%%%%% PROBLEM 10
%%%%%%%%%%%%%%%%

%%% Part (c) - solution code

N = 100;   % Max # of iterations
M = 2;     % Magnitude threshold
mu = -0.391 - 0.587*j;  % Julia parameter
realVals = [-1.6:0.01:1.6];
imagVals = [-1.2:0.01:1.2];
xVals = ones(length(imagVals),1)*realVals + ...
    j*imagVals'*ones(1,length(realVals));
Jmap = ones(size(xVals));
g = xVals;  % Start with g0
for n = 1:N
    g = g.^2 + mu;
    big = (abs(g) > M);
    Jmap = Jmap.*(1-big);
end
imagesc(realVals,imagVals,Jmap); colormap gray;
xlabel('Re(x)'); ylabel('Imag(x)');

Figure 16.5: MATLAB plot of part (c).
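The bounded orbit found in Part (a) is easy to confirm numerically. The fragment below is an editorial addition, not part of the original solutions; it iterates g for µ = -1 starting at x = 1 and prints the magnitudes, which cycle between 0 and 1 rather than blowing up.

% Editorial check of Problem 10(a): iterate g for mu = -1, x = 1.
mu = -1;
g = 1;               % g0(1) = 1
mags = zeros(1,10);
for n = 1:10
    g = g^2 + mu;    % g_n = f(g_(n-1))
    mags(n) = abs(g);
end
mags                 % expect [0 1 0 1 0 1 0 1 0 1], i.e. bounded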
16.2.10.4 Just for Fun Solution

%%% Just for fun code

N = 100;   % Max # of iterations
M = 2;     % Magnitude threshold
mu = -0.391 - 0.587*j;  % Julia parameter
realVals = [-1.6:0.005:1.6];
imagVals = [-1.2:0.005:1.2];
xVals = ones(length(imagVals),1)*realVals + ...
    j*imagVals'*ones(1,length(realVals));
Jmap = zeros(size(xVals));  % Now, we put zeros in the 'middle',
                            % for a cool effect.
g = xVals;  % Start with g0
for n = 1:N
    g = g.^2 + mu;
    big = (abs(g) > M);
    notAlreadyBig = (Jmap == 0);
    Jmap = Jmap + n*(big.*notAlreadyBig);
end
imagesc(realVals,imagVals,log(Jmap)); colormap jet;
xlabel('Re(x)'); ylabel('Imag(x)');

Figure 16.6: MATLAB plot.

Chapter 17
Viewing Embedded LabVIEW Content
(This content is available online at <http://cnx.org/content/m13753/1.3/>.)

In order to view LabVIEW content embedded in Connexions modules, you must install the LabVIEW Run-time Engine on your computer. The following are sets of instructions for installing the software on different platforms.

note: Embedded LabVIEW content is currently supported only under Windows 2000/XP. Also, you must have version 8.0.1 of the LabVIEW Run-time Engine to run much of the embedded content in Connexions.

17.1 Installing the LabVIEW Run-time Engine on Microsoft Windows 2000/XP
1. Point your web browser to the LabVIEW Run-time Engine download page at: http://digital.ni.com/softlib.nsf/websearch/077b51e8d15604bd8625711c006240e7
2. If you're not logged in to NI, click the link to continue the download process at the bottom of the page.
3. Login or create a profile with NI to continue.
4. Once logged in, click the LabVIEW_8.0.1_Runtime_Engine.exe link and save the file to disk.
5. Once the file has downloaded, double click it and follow the steps to install the run-time engine.
6. Download the LabVIEW Browser Plug-in at: http://zone.ni.com/devzone/conceptd.nsf/webmain/7DBFD404C6AD0B24862570BB0072F83B/$FILE/LVBrowserPlugin.ini
7. Put the LVBrowserPlugin.ini file in the My Documents\LabVIEW Data folder. (You may have to create this folder if it doesn't already exist.)
8. Restart your web browser to complete the installation of the plug-in.

Glossary

B  Basis
A basis for C^n is a set of vectors that: (1) spans C^n and (2) is linearly independent.

C  Cauchy-Schwarz Inequality
For x, y in an inner product space,
|<x, y>| ≤ ||x|| ||y||
with equality holding if and only if x and y are linearly dependent (see http://cnx.org/content/m10757/latest/), i.e. x = αy for some scalar α.

D  difference equation
An equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.
Example: y[n] + 7y[n-1] + 2y[n-2] = x[n] - 4x[n-1]   (14.42)

domain
The group, or set, of values that are defined by a given function.
Example: Using the rational function above, (14.38), the domain can be defined as any real number x where x does not equal 1 or negative 3. Written out mathematically, we get the following:
{x ∈ R : x ≠ -3 and x ≠ 1}   (14.40)

E  eigenvector
An eigenvector of A is a vector v ∈ C^n such that
Av = λv   (5.2)
where λ is called the corresponding eigenvalue. A only changes the length of v, not its direction.
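As a quick numerical illustration of this definition (an editorial addition, not part of the original glossary), MATLAB's eig returns eigenvector/eigenvalue pairs that can be checked directly against Av = λv:

% Editorial illustration of the eigenvector definition (not from the
% original text). eig returns eigenvectors (columns of V) and
% eigenvalues (diagonal of D).
A = [2 1; 1 2];
[V, D] = eig(A);
v = V(:,1);        % one eigenvector of A
lambda = D(1,1);   % its corresponding eigenvalue
A*v - lambda*v     % should be the zero vector (up to round-off)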
I  inner product
The inner product is defined mathematically as:
<x, y> = y^T x = [y0 y1 ... y_(n-1)] [x0; x1; ...; x_(n-1)] = sum_(i=0)^(n-1) x_i y_i   (15.1)
(for complex vectors, the entries of y are conjugated: <x, y> = sum_i x_i conj(y_i))

L  lp[0, N-1]
lp[0, N-1] = {f ∈ C^N : ||f||_p < ∞}
but from previous discussion lp[0, N-1] = C^N.

Lp(R)
Lp(R) = {f : ||f||_p < ∞}
where
||f||_p = (integral_(-∞)^(∞) |f(t)|^p dt)^(1/p),  1 ≤ p < ∞
||f||_∞ = ess sup |f(t)|, where -∞ < t < ∞

Lp[T1, T2]
Lp[T1, T2] = {f defined on [T1, T2] : ||f||_p < ∞}
where
||f||_p = (integral_(T1)^(T2) |f(t)|^p dt)^(1/p),  1 ≤ p < ∞
||f||_∞ = ess sup |f(t)|, where T1 ≤ t ≤ T2

lp(Z)
lp(Z) = {f : ||f||_p < ∞}
where
||f||_p = (sum_(n=-∞)^(∞) |f[n]|^p)^(1/p),  1 ≤ p < ∞
||f||_∞ = max_(n ∈ Z) |f[n]|

limit
A sequence {gn} (n = 1 to ∞) converges to a limit g ∈ R if for every ε > 0 there is an integer N such that
|g_i - g| < ε,  i ≥ N
We usually denote a limit by writing lim_(i→∞) g_i = g or g_i → g.

Linearly Independent
For a given set of vectors, {x1, x2, ..., xn}, they are linearly independent if
c1 x1 + c2 x2 + ... + cn xn = 0
only when c1 = c2 = ... = cn = 0.
Example: We are given the following two vectors:
x1 = [3; 2]
x2 = [-6; -4]
These are not linearly independent, as proven by the following statement, which, by inspection, can be seen not to adhere to the definition of linear independence stated above:
x2 = -2 x1  ⇒  2 x1 + x2 = 0
Another approach to revealing a vector set's (in)dependence is by graphing the vectors. Looking at these two vectors geometrically (as in http://cnx.org/content/m10734/latest/), one can again prove that these vectors are not linearly independent.

N  Normalized Basis
A basis (http://cnx.org/content/m10772/latest/) {bi} where each bi has unit norm:
||bi|| = 1,  i ∈ Z   (15.19)

O  Orthogonal Basis
A basis {bi} in which the elements are mutually orthogonal:
<bi, bj> = 0,  i ≠ j

orthogonal
We say that x and y are orthogonal if:
<x, y> = 0

Orthonormal Basis
A basis that is both normalized and orthogonal:
||bi|| = 1,  i ∈ Z
<bi, bj> = 0,  i ≠ j

P  poles
Also called singularities, these are the points s at which Lx1(s) blows up.

poles
1. The value(s) for z where Q(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

poles
1. The value(s) for z where the denominator of the transfer function equals zero.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

R  rational function
For any two polynomials, A and B, their quotient is called a rational function.
Example: Below is a simple example of a basic rational function, f(x). Note that the numerator and denominator can be polynomials of any order, but the rational function is undefined when the denominator equals zero.
f(x) = (x^2 - 4)/(2x^2 + x - 3)   (14.38)

S  sequence
A sequence is a function gn defined on the positive integers n. We often denote a sequence by {gn} (n = 1 to ∞).
Example: A real number sequence: gn = 1/n
Example: A vector sequence: gn = [sin(nπ/2); cos(nπ/2)]
Example: A function sequence: gn(t) = 1 if 0 ≤ t < 1/n, and 0 otherwise
note: A function can be thought of as an infinite dimensional vector where for each value of t we have one dimension.

Span
The span (http://cnx.org/content/m10734/latest/) of a set of vectors {x1, x2, ..., xk} is the set of vectors that can be written as a linear combination of {x1, x2, ..., xk}:
span({x1, ..., xk}) = {α1 x1 + α2 x2 + ... + αk xk, αi ∈ C}
Example: Given the vector x1 = [3; 2], the span of x1 is a line.
Example: Given the vectors x1 = [3; 2] and x2 = [1; 2], the span of these vectors is C^2.

U  Uniform Convergence
The sequence {gn} (n = 1 to ∞) converges uniformly (http://cnx.org/content/m10895/latest/) to function g if for every ε > 0 there is an integer N such that n ≥ N implies
|gn(t) - g(t)| ≤ ε   (9.7)
for all t ∈ R.
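Example: The sequence gn(t) = sin(t)/n converges uniformly to g(t) = 0, since |gn(t) - g(t)| ≤ 1/n for every t ∈ R; given ε > 0, any N > 1/ε gives |gn(t) - g(t)| ≤ ε for all n ≥ N and all t simultaneously.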
V  Vector space
A linear vector space S is a collection of "vectors" such that (1) if f1 ∈ S then αf1 ∈ S for all scalars α (where α ∈ R or α ∈ C) and (2) if f1 ∈ S, f2 ∈ S, then f1 + f2 ∈ S.

Z  zeros
1. The value(s) for z where P(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.

zeros
1. The value(s) for z where the numerator of the transfer function equals zero.
2. The complex frequencies that make the overall gain of the filter transfer function zero.
Signals and Systems
This course deals with signals, systems, and transforms, from their theoretical mathematical foundations to practical implementation in circuits and computer algorithms.
At the conclusion of ELEC 301, you should have a deep understanding of the mathematics and practical issues of signals in continuous and discrete time, linear time invariant systems, convolution, and Fourier transforms.

About Connexions
Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities. Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.