One normally differentiates at this point between so-called "energy signals", where the right side of (1.12) is finite and there are no existence issues, and more practical "power signals", where the average power over some time interval of interest T is instead finite:

P = \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2 \, dt < \infty

Parseval's theorem can be extended to apply to power signals with the definition of the power spectral density (PSD) S(\omega), which is the power density per unit frequency. Let us write an expression for the power in the limiting case of T \to \infty, incorporating Parseval's theorem above:

\lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} |f(t)|^2 \, dt = \lim_{T\to\infty} \frac{1}{2\pi T} \int_{-\infty}^{\infty} |F(\omega)|^2 \, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega) \, d\omega

S(\omega) \equiv \lim_{T\to\infty} \frac{1}{T} |F(\omega)|^2        (1.13)

This is the conventional definition of the PSD, the spectrum of a signal. In practice, an approximation is made in which T is finite but long enough to capture all of the behavior of interest in the signal. If the signal is stationary, the choice of the particular time interval of analysis is arbitrary.

Sampling theorem

Using a Fourier series, we saw in (1.1) how a continuous, periodic function can be expressed in terms of a discrete set of coefficients. If the function is also bandlimited, only a finite number of those coefficients are nonzero, and only these need be known to specify the function completely. The question arises: can the function be expressed directly in terms of a finite set of samples? This is tantamount to asking whether the integral in (1.2) can be evaluated exactly using discrete mathematics. That the answer to both questions is "yes" is the foundation for the way data are acquired and processed.

Discrete sampling of a signal can be viewed as multiplication with a train of Dirac delta functions separated by an interval T. The sampling function in this case is

s(t) = T \sum_{n=-\infty}^{\infty} \delta(t - nT) = \sum_{n=-\infty}^{\infty} e^{j 2\pi n t / T}

where the second form is the equivalent Fourier series representation. Multiplying this by the sampled function x(t) yields the samples y(t):

y(t) = \sum_{n=-\infty}^{\infty} x(t) \, e^{j 2\pi n t / T}

Y(\omega) = \sum_{n=-\infty}^{\infty} X\!\left(\omega - \frac{2\pi n}{T}\right)

where the second line this time is the frequency-domain representation (see Figure 1.5). Evidently, the spectrum of the sampled signal |Y(\omega)|^2 is an endless succession of replicas of the spectrum of the original signal, spaced by intervals of the sampling frequency \omega_\circ = 2\pi/T. The replicas can be removed easily by low-pass filtering. None of the original information is lost so long as the replicas do not overlap or alias onto one another. Overlap can be avoided so long as the sampling frequency is at least as great as the total bandwidth of the signal, a number bounded by twice the maximum frequency component. This requirement defines the Nyquist sampling frequency: twice the maximum frequency component of the signal.
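As a concrete illustration of the finite-T approximation to (1.13), the short sketch below estimates the PSD of a test signal with a periodogram, |F(\omega)|^2 / T, and checks Parseval's relation numerically. The particular signal, the 1000 Hz sampling rate, and the NumPy-based workflow are illustrative assumptions, not part of the text.

import numpy as np

# Hypothetical test signal (illustrative choice): a 50 Hz tone plus a weaker 120 Hz tone,
# sampled at fs = 1000 Hz over a finite analysis interval T_a = 2 s.
fs = 1000.0
T_a = 2.0
t = np.arange(0.0, T_a, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Periodogram estimate of the PSD: S(f) ~ |F(f)|^2 / T_a, the finite-T form of (1.13).
X = np.fft.rfft(x) / fs                      # approximates the continuous-time transform F
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis, 0 to fs/2
S = np.abs(X) ** 2 / T_a                     # power per unit frequency (signal^2 / Hz)

# Parseval check: time-averaged power should equal the integral of S over frequency.
power_time = np.mean(np.abs(x) ** 2)                # 0.5 + 0.125 = 0.625
power_freq = 2 * np.sum(S) * (freqs[1] - freqs[0])  # factor 2: rfft keeps f >= 0 only
print(power_time, power_freq)                       # both close to 0.625

The aliasing statement at the end of the section can also be verified directly: a 900 Hz tone sampled at 1000 Hz (below its Nyquist rate of 1800 Hz) produces exactly the same sample sequence as a 100 Hz tone of opposite sign. The specific frequencies are again illustrative choices.

# Aliasing demonstration: fs = 1000 Hz < 2 * 900 Hz, so the 900 Hz tone folds down to 100 Hz.
alias = np.sin(2 * np.pi * 900 * t)   # samples of the undersampled 900 Hz tone
low = np.sin(2 * np.pi * 100 * t)     # samples of a 100 Hz tone
print(np.max(np.abs(alias + low)))    # ~0: the two sample sequences are negatives of each other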