EE603 Class Notes    09/04/09    John Stensby

Chapter 9: Commonly Used Models: Narrow-Band Gaussian Noise and Shot Noise


Narrow-band, wide-sense-stationary (WSS) Gaussian noise η(t) is often used as a noise model in communication systems. For example, η(t) might be the noise component in the output of a radio receiver's intermediate frequency (IF) filter/amplifier. In these applications, sample functions of η(t) are expressed as

   η(t) = ηc(t)cos ωct − ηs(t)sin ωct,                                        (9-1)

where ωc is termed the center frequency (for example, ωc could be the actual center frequency of the above-mentioned IF filter). The quantities ηc(t) and ηs(t) are termed the quadrature components (sometimes, ηc(t) is known as the in-phase component and ηs(t) is termed the quadrature component), and they are assumed to be real-valued.

Narrow-band noise η(t) can also be represented in terms of its envelope R(t) and phase φ(t). This representation is given as

   η(t) = R(t)cos(ωct + φ(t)),                                                (9-2)

where

   R(t) ≡ √(ηc²(t) + ηs²(t))
                                                                              (9-3)
   φ(t) ≡ tan⁻¹(ηs(t)/ηc(t)).

Normally, it is assumed that R(t) ≥ 0 and −π < φ(t) ≤ π for all time.

Note the initial assumptions placed on η(t). The assumptions of Gaussian and WSS behavior are easily understood. The narrow-band attribute of η(t) means that ηc(t), ηs(t), R(t) and φ(t) are low-pass processes; these low-pass processes vary slowly compared to cos ωct; they are on a vastly different time scale from cos ωct. Many periods of cos ωct occur before there is notable change in ηc(t), ηs(t), R(t) or φ(t).

[Fig. 9-1: Example spectrum Sη(ω) (watts/Hz) of narrow-band noise, concentrated near ±ωc (rad/sec).]

A second interpretation can be given for the term narrow-band. This interpretation is made in terms of the power spectrum of η(t), denoted as Sη(ω). By the Wiener-Khinchine theorem, Sη(ω) is the Fourier transform of Rη(τ), the autocorrelation function of WSS η(t).

Updates at http://www.ece.uah.edu/courses/ee420-500/
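The equivalence of the quadrature representation (9-1) and the envelope/phase representation (9-2)-(9-3) can be sketched numerically. The following is an illustrative Python/NumPy sketch; the sample rate, center frequency, and slowly varying components are made-up example values, not anything taken from the notes.

```python
import numpy as np

# Sample a narrow-band waveform on a fine time grid (illustrative values)
fs = 2000.0                      # sample rate, Hz
t = np.arange(0, 2, 1 / fs)      # 2 seconds of time
wc = 2 * np.pi * 100.0           # center frequency, rad/s (100 Hz)

# Slowly varying quadrature components (deterministic stand-ins for the
# low-pass processes eta_c(t) and eta_s(t))
eta_c = np.cos(2 * np.pi * 3 * t)
eta_s = 0.5 * np.sin(2 * np.pi * 5 * t)

# Quadrature representation (9-1)
eta = eta_c * np.cos(wc * t) - eta_s * np.sin(wc * t)

# Envelope and phase per (9-3); arctan2 keeps the phase in (-pi, pi]
R = np.sqrt(eta_c**2 + eta_s**2)
phi = np.arctan2(eta_s, eta_c)

# Envelope/phase representation (9-2) reproduces the same waveform
eta_ep = R * np.cos(wc * t + phi)
print(np.max(np.abs(eta - eta_ep)))
```

The two representations agree to machine precision, since (9-2) is just (9-1) rewritten with a trigonometric identity.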
Since η(t) is real-valued, the spectral density Sη(ω) satisfies

   Sη(ω) ≥ 0
                                                                              (9-4)
   Sη(ω) = Sη(−ω).

Figure 9-1 depicts an example spectrum of a narrow-band process. The narrow-band attribute means that Sη(ω) is zero except for a narrow band of frequencies around ±ωc; process η(t) has a bandwidth (however it might be defined) that is small compared to the center frequency ωc.

Power spectrum Sη(ω) may, or may not, have ±ωc as axes of local symmetry. If ωc is an axis of local symmetry, then

   Sη(ω + ωc) = Sη(−ω + ωc)                                                   (9-5)

for 0 < ω < ωc, and the process is said to be a symmetrical band-pass process (Fig. 9-1 depicts a symmetrical band-pass process). It must be emphasized that the symmetry stated by the second of (9-4) is always true (i.e., the power spectrum is even); however, the symmetry stated by (9-5) may, or may not, be true. As will be shown in what follows, the analysis of narrow-band noise is simplified if (9-5) is true.

To avoid confusion when reviewing the engineering literature on narrow-band noise, the reader should remember that different authors use slightly different definitions for the cross-correlation of jointly-stationary, real-valued random processes x(t) and y(t). As used here, the cross-correlation of x and y is defined as Rxy(τ) ≡ E[x(t+τ)y(t)]. However, when defining Rxy, some authors shift (by τ) the time variable of the function y instead of the function x. Fortunately, this possible discrepancy is accounted for easily when comparing the work of different authors.

η(t) Has Zero Mean

The mean of η(t) must be zero. This conclusion follows directly from

   E[η(t)] = E[ηc(t)]cos ωct − E[ηs(t)]sin ωct.                               (9-6)

The WSS assumption means that E[η(t)] must be time invariant (constant). Inspection of (9-6) leads to the conclusion that E[ηc] = E[ηs] = 0, so that E[η] = 0.
Quadrature Components in Terms of η and η̂

Let the Hilbert transform of WSS noise η(t) be denoted in the usual way by the use of a circumflex; that is, η̂(t) denotes the Hilbert transform of η(t) (see Appendix 9A for a discussion of the Hilbert transform). The Hilbert transform is a linear, time-invariant filtering operation applied to η(t); hence, from the results developed in Chapter 7, η̂(t) is WSS.

In what follows, some simple properties of the cross-correlation of η(t) and η̂(t) are needed. Recall that η̂(t) is the output of a linear, time-invariant system that is driven by η(t). Also recall that techniques are given in Chapter 7 for expressing the cross-correlation between a system input and output. Using this approach, it can be shown easily that

   Rηη̂(τ) ≡ E[η(t+τ)η̂(t)] = −R̂η(τ)

   Rη̂η(τ) ≡ E[η̂(t+τ)η(t)] = R̂η(τ)
                                                                              (9-7)
   Rηη̂(0) = Rη̂η(0) = 0

   Rη̂(τ) = Rη(τ).

Equation (9-1) can be used to express η̂(t). The Hilbert transform of the noise signal can be expressed as

   η̂(t) = [ηc(t)cos ωct − ηs(t)sin ωct]^

        = ηc(t)[cos ωct]^ − ηs(t)[sin ωct]^                                   (9-8)

        = ηc(t)sin ωct + ηs(t)cos ωct.

This result follows from the fact that ωc is much higher than any frequency component in ηc or ηs, so that the Hilbert transform is applied only to the high-frequency sinusoidal factors (see Appendix 9A).

The quadrature components can be expressed in terms of η and η̂. This can be done by solving (9-1) and (9-8) to obtain

   ηc(t) = η(t)cos ωct + η̂(t)sin ωct
                                                                              (9-9)
   ηs(t) = η̂(t)cos ωct − η(t)sin ωct.

These equations express the quadrature components as linear combinations of Gaussian η and η̂. Hence, the components ηc and ηs are Gaussian. In what follows, Equation (9-9) will be used to calculate the autocorrelation and cross-correlation functions of the quadrature components.
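Equation (9-9) can be exercised numerically. SciPy's `scipy.signal.hilbert` returns the analytic signal η(t) + jη̂(t), so its imaginary part supplies η̂(t). In the sketch below (illustrative parameter values; tones are chosen with an integer number of cycles in the record so the FFT-based Hilbert transform is essentially exact), the quadrature components are recovered from η and η̂ alone.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0
t = np.arange(0, 2, 1 / fs)      # 2 s record: every tone below has integer cycles
wc = 2 * np.pi * 100.0           # center frequency, rad/s

# Slowly varying quadrature components (illustrative deterministic stand-ins)
eta_c = np.cos(2 * np.pi * 3 * t)
eta_s = 0.5 * np.sin(2 * np.pi * 5 * t)
eta = eta_c * np.cos(wc * t) - eta_s * np.sin(wc * t)    # (9-1)

# scipy's hilbert() returns the analytic signal eta + j*eta_hat
eta_hat = np.imag(hilbert(eta))

# Recover the quadrature components via (9-9)
eta_c_rec = eta * np.cos(wc * t) + eta_hat * np.sin(wc * t)
eta_s_rec = eta_hat * np.cos(wc * t) - eta * np.sin(wc * t)

err_c = np.max(np.abs(eta_c_rec - eta_c))
err_s = np.max(np.abs(eta_s_rec - eta_s))
print(err_c, err_s)
```

In practice (finite records, leakage), small edge errors appear, but the recovery is exact here by construction.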
It will be shown that the quadrature components are WSS and that ηc and ηs are jointly WSS. Furthermore, WSS process η(t) is a symmetrical band-pass process if, and only if, ηc and ηs are uncorrelated for all time shifts.

Relationships Between the Autocorrelation Functions Rη, Rηc and Rηs

It is easy to compute, in terms of Rη, the autocorrelation of the quadrature components. Use (9-9) and compute the autocorrelation

   Rηc(τ) = E[ηc(t)ηc(t+τ)]

          = E[η(t)η(t+τ)]cos ωct cos ωc(t+τ) + E[η̂(t)η(t+τ)]sin ωct cos ωc(t+τ)
                                                                              (9-10)
          + E[η(t)η̂(t+τ)]cos ωct sin ωc(t+τ) + E[η̂(t)η̂(t+τ)]sin ωct sin ωc(t+τ).

This last result can be simplified by using (9-7) to obtain

   Rηc(τ) = Rη(τ)[cos ωct cos ωc(t+τ) + sin ωct sin ωc(t+τ)]

          + R̂η(τ)[cos ωct sin ωc(t+τ) − sin ωct cos ωc(t+τ)],

a result that can be expressed as

   Rηc(τ) = Rη(τ)cos ωcτ + R̂η(τ)sin ωcτ.                                     (9-11)

The same procedure can be used to compute an identical result for Rηs; this leads to the conclusion that

   Rηc(τ) ≡ Rηs(τ)                                                            (9-12)

for all τ. A somewhat non-intuitive result can be obtained from (9-11) and (9-12). Set τ = 0 in the last two equations to conclude that

   Rη(0) = Rηc(0) = Rηs(0),                                                   (9-13)

an observation that leads to

   E[η²(t)] = E[ηc²(t)] = E[ηs²(t)]
                                                                              (9-14)
   Avg Pwr in η(t) = Avg Pwr in ηc(t) = Avg Pwr in ηs(t).

The frequency-domain counterpart of (9-11) relates the spectrums Sη, Sηc and Sηs. Take the Fourier transform of (9-11) to obtain

   Sηc(ω) = Sηs(ω) = ½[Sη(ω + ωc) + Sη(ω − ωc)]
                                                                              (9-15)
                   − ½[sgn(ω − ωc)Sη(ω − ωc) − sgn(ω + ωc)Sη(ω + ωc)].
Since ηc and ηs are low-pass processes, Equation (9-15) can be simplified to produce

   Sηc(ω) = Sηs(ω) = Sη(ω + ωc) + Sη(ω − ωc),   −ωc ≤ ω ≤ ωc
                                                                              (9-16)
                   = 0,                           otherwise,

a relationship that is easier to grasp and remember than is (9-11). Equation (9-16) provides an easy method for obtaining Sηc and/or Sηs given only Sη. First, make two copies of Sη(ω). Shift the first copy to the left by ωc, and shift the second copy to the right by ωc. Add together both shifted copies, and truncate the sum to the interval −ωc ≤ ω ≤ ωc to get Sηc. This "shift and add" procedure for creating Sηc is illustrated by Fig. 9-2.

[Fig. 9-2: Creation of Sηc from shifting and adding copies of Sη: the plots show Sη(ω), the shifted copies Sη(ω+ωc) and Sη(ω−ωc) for |ω| < ωc, and their sum Sηc(ω) = Sη(ω+ωc) + Sη(ω−ωc), |ω| < ωc.]

Given only Sη(ω), it is always possible to determine Sηc (which is equal to Sηs) in this manner. The converse is not true; given only Sηc, it is not always possible to create Sη(ω) (Why? Think about the fact that Sηc(ω) must be even, but Sη(ω) may not satisfy (9-5)).

The Cross-Correlation Rηcηs

It is easy to compute the cross-correlation of the quadrature components. From (9-9) it follows that

   Rηcηs(τ) = E[ηc(t+τ)ηs(t)]

            = E[η(t+τ)η̂(t)]cos ωc(t+τ)cos ωct − E[η(t+τ)η(t)]cos ωc(t+τ)sin ωct
                                                                              (9-17)
            + E[η̂(t+τ)η̂(t)]sin ωc(t+τ)cos ωct − E[η̂(t+τ)η(t)]sin ωc(t+τ)sin ωct.

By using (9-7), Equation (9-17) can be simplified to obtain

   Rηcηs(τ) = Rη(τ)[−sin ωct cos ωc(t+τ) + cos ωct sin ωc(t+τ)]

            − R̂η(τ)[cos ωct cos ωc(t+τ) + sin ωct sin ωc(t+τ)],

a result that can be written as

   Rηcηs(τ) = Rη(τ)sin ωcτ − R̂η(τ)cos ωcτ.                                   (9-18)
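The "shift and add" recipe of (9-16) can be checked numerically on a grid. The band-pass spectrum below is a made-up symmetric example (triangles at ±ωc); the check confirms that the low-pass spectrum obtained by shifting and adding carries the same average power as Sη, consistent with (9-14).

```python
import numpy as np

wc = 100.0                                   # center frequency, rad/s (illustrative)
w = np.linspace(-200.0, 200.0, 8001)         # frequency grid covering the band
dw = w[1] - w[0]

# Example band-pass spectrum: unit-height triangles of half-width 20 at +/- wc
def S_eta(v):
    return np.maximum(0.0, 1.0 - np.abs(np.abs(v) - wc) / 20.0)

# "Shift and add" recipe of (9-16): shift copies by +/- wc, add, truncate
S_c = np.where(np.abs(w) <= wc, S_eta(w + wc) + S_eta(w - wc), 0.0)

# Average powers (1/2pi)*integral of each spectrum agree, as (9-14) requires
p_eta = S_eta(w).sum() * dw / (2 * np.pi)
p_c = S_c.sum() * dw / (2 * np.pi)
print(p_eta, p_c)
```

The truncation to |ω| ≤ ωc discards nothing here because the shifted copies are low-pass, which is exactly the narrow-band assumption behind (9-16).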
The cross-correlation of the quadrature components is an odd function of τ. This follows directly from inspection of (9-18) and the fact that an even function has an odd Hilbert transform. Finally, the fact that this cross-correlation is odd implies that Rηcηs(0) = 0; taken at the same time, the samples of ηc and ηs are uncorrelated and, being jointly Gaussian, independent. However, as discussed below, the quadrature components ηc(t1) and ηs(t2) may be correlated for t1 ≠ t2.

The autocorrelation Rη of the narrow-band noise can be expressed in terms of the autocorrelation and cross-correlation of the quadrature components ηc and ηs. This important result follows from using (9-11) and (9-18) in

   Rηc(τ)cos ωcτ + Rηcηs(τ)sin ωcτ = [Rη(τ)cos ωcτ + R̂η(τ)sin ωcτ]cos ωcτ
                                                                              (9-19)
                                   + [Rη(τ)sin ωcτ − R̂η(τ)cos ωcτ]sin ωcτ.

However, Rη results from simplification of the right-hand side of (9-19), and the desired relationship

   Rη(τ) = Rηc(τ)cos ωcτ + Rηcηs(τ)sin ωcτ                                    (9-20)

follows. Comparison of (9-16) with the Fourier transform of (9-20) reveals an "unsymmetrical" aspect of the relationship between Sη, Sηc and Sηs. In all cases, both Sηc and Sηs can be obtained by simple translations of Sη, as shown by (9-16). However, in general, Sη cannot be expressed in terms of a similar, simple translation of Sηc (or Sηs), a conclusion reached by inspection of the Fourier transform of (9-20). But, as shown next, there is an important special case where Rηcηs(τ) is identically zero for all τ, and Sη can be expressed as a simple translation of Sηc.

Symmetrical Band-Pass Processes

Narrow-band process η(t) is said to be a symmetrical band-pass process if

   Sη(ω + ωc) = Sη(−ω + ωc)                                                   (9-21)

for 0 < ω < ωc. Such a band-pass process has its center frequency ωc as an axis of local symmetry.
In nature, symmetry usually leads to simplifications, and this is true of Gaussian narrow-band noise. In what follows, we show that the local symmetry stated by (9-21) is equivalent to the condition Rηcηs(τ) = 0 for all τ (not just at τ = 0). The desired result follows from inspecting the Fourier transform of (9-18); this transform is the cross spectrum of the quadrature components, and it vanishes when the narrow-band process has spectral symmetry as defined by (9-21).

To compute this cross spectrum, first note the Fourier transform pairs

   Rη(τ) ↔ Sη(ω)
                                                                              (9-22)
   R̂η(τ) ↔ −jSgn(ω)Sη(ω),

where

   Sgn(ω) ≡ +1   for ω > 0
                                                                              (9-23)
          ≡ −1   for ω < 0

is the commonly used "sign" function. Now, use Equation (9-22) and the Frequency Shifting Theorem to obtain the Fourier transform pairs

   Rη(τ)sin ωcτ ↔ (1/2j)[Sη(ω − ωc) − Sη(ω + ωc)]
                                                                              (9-24)
   R̂η(τ)cos ωcτ ↔ (1/2j)[Sgn(ω − ωc)Sη(ω − ωc) + Sgn(ω + ωc)Sη(ω + ωc)].

Finally, use this last equation and (9-18) to compute the cross spectrum

   Sηcηs(ω) = F[Rηcηs(τ)]
                                                                              (9-25)
            = (1/2j){Sη(ω − ωc)[1 − Sgn(ω − ωc)] − Sη(ω + ωc)[1 + Sgn(ω + ωc)]}.

[Figure 9-3: Plots of a) Sη(ω), b) Sη(ω−ωc)[1−Sgn(ω−ωc)] and c) Sη(ω+ωc)[1+Sgn(ω+ωc)]. Symmetrical band-pass processes have ηc(t1) and ηs(t2) uncorrelated for all t1 and t2.]

Figure 9-3 depicts example plots useful for visualizing important properties of (9-25). From parts b) and c) of this figure, note that the products on the right-hand side of (9-25) are low-pass functions of ω. Then it is easily seen that

   Sηcηs(ω) = 0,                               ω > ωc

            = −j[Sη(ω − ωc) − Sη(ω + ωc)],     −ωc < ω < ωc                   (9-26)

            = 0,                               ω < −ωc.
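Equation (9-26) can be evaluated numerically for concrete spectra. Both example spectra below are made-up shapes for illustration: one is symmetric about ωc, the other is not. The cross spectrum (apart from the factor −j) vanishes identically in the symmetric case and not otherwise.

```python
import numpy as np

wc = 100.0
w = np.linspace(-99.0, 99.0, 3961)       # low-pass band |w| < wc

# Spectrum symmetric about +/- wc: unit triangles of half-width 20
def S_sym(v):
    return np.maximum(0.0, 1.0 - np.abs(np.abs(v) - wc) / 20.0)

# Asymmetric spectrum: a lopsided ramp on |v| in (wc - 10, wc + 30)
def S_asym(v):
    u = np.abs(v) - wc
    return np.where((u > -10.0) & (u < 30.0), 1.0 - (u + 10.0) / 40.0, 0.0)

# (9-26) inside the band, dropping the -j factor: S(w - wc) - S(w + wc)
cross_sym = S_sym(w - wc) - S_sym(w + wc)
cross_asym = S_asym(w - wc) - S_asym(w + wc)
print(np.max(np.abs(cross_sym)), np.max(np.abs(cross_asym)))
```

The symmetric case gives an identically zero cross spectrum, hence uncorrelated quadrature components at all lags; the asymmetric case does not.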
Finally, note that Sηcηs(ω) = 0 is equivalent to the narrow-band process η satisfying the symmetry condition (9-21). Since the cross spectrum is the Fourier transform of the cross-correlation, this last statement implies that, for all t1 and t2 (not just t1 = t2), ηc(t1) and ηs(t2) are uncorrelated if and only if (9-21) holds. On Fig. 9-3, symmetry implies that the spectral components labeled with U can be obtained from those labeled with L by a simple folding operation.

System analysis is simplified greatly if the noise encountered has a symmetrical spectrum. Under these conditions, the quadrature components are uncorrelated, and (9-20) simplifies to

   Rη(τ) = Rηc(τ)cos ωcτ.                                                     (9-27)

Also, the spectrum Sη of the noise is obtained easily by scaling and translating Sηc ≡ F[Rηc], as shown by

   Sη(ω) = ½[Sηc(ω − ωc) + Sηc(ω + ωc)].                                      (9-28)

This result follows directly by taking the Fourier transform of (9-27). Hence, when the process is symmetrical, it is possible to express Sη in terms of simple translations of Sηc (see the comment after (9-20)). Finally, for a symmetrical band-pass process, Equation (9-16) simplifies to

   Sηc(ω) = Sηs(ω) = 2Sη(ω + ωc),   −ωc ≤ ω ≤ ωc
                                                                              (9-29)
                   = 0,              otherwise.

Example 9-1: Figure 9-4 depicts a simple RLC band-pass filter that is driven by white Gaussian noise with a double-sided spectral density of N0/2 watts/Hz.

[Figure 9-4: A simple band-pass filter driven by white Gaussian noise (WGN), input density S(ω) = N0/2 watts/Hz, output η.]

The spectral density of the output is given by

   Sη(ω) = (N0/2)|Hbp(jω)|² = (N0/2)|2α0(jω)/[(α0 + jω)² + ωc²]|²,            (9-30)

where α0 = R/2L, ωc = (ωn² − α0²)^½ and ωn = 1/(LC)^½.
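The lack of local symmetry of (9-30) about ωc can be seen by evaluating the spectrum at equal offsets above and below the center frequency. A sketch, with ωc normalized to 1 and N0/2 set to 1 (both illustrative scalings):

```python
import numpy as np

# Output spectrum (9-30), with omega_c normalized to 1 and N0/2 = 1
def S_out(w, alpha0):
    H = 2 * alpha0 * (1j * w) / ((alpha0 + 1j * w) ** 2 + 1.0)
    return np.abs(H) ** 2

# Asymmetry about the center frequency: compare S(1 + d) with S(1 - d)
d = 0.3
def asym(alpha0):
    return abs(S_out(1.0 + d, alpha0) - S_out(1.0 - d, alpha0))

print(asym(0.5), asym(0.01))   # broad filter vs. sharp (high-Q) filter
```

The broad filter (α0′ = 0.5) is visibly asymmetric about the center frequency, while the sharp filter (α0′ = 0.01) is nearly symmetric, matching the high-Q limit discussed in the example.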
In this result, frequency can be normalized, and (9-30) can be written as

   Sη(ω′) = (N0/2)|2α0′(jω′)/[(α0′ + jω′)² + 1]|²,                            (9-31)

where α0′ = α0/ωc and ω′ = ω/ωc. Figure 9-5 illustrates a plot of the output spectrum for α0′ = .5; note that the output process is not symmetrical. Figure 9-6 depicts the spectrum for α0′ = .1 (a much "sharper" filter than the α0′ = .5 case). As the circuit Q becomes large (i.e., α0′ becomes small), the filter approximates a symmetrical filter, and the output process approximates a symmetrical band-pass process.

[Figure 9-5: Output spectrum Sη(ω′) for α0′ = .5.    Figure 9-6: Output spectrum Sη(ω′) for α0′ = .1.]

Envelope and Phase of Narrow-Band Noise

Zero-mean quadrature components ηc(t) and ηs(t) are jointly Gaussian, and they have the same variance σ² = Rη(0) = Rηc(0) = Rηs(0). Also, taken at the same time t, they are independent. Hence, taken at the same time, processes ηc(t) and ηs(t) are described by the joint density

   f(ηc, ηs) = (1/2πσ²) exp[−(ηc² + ηs²)/2σ²].                                (9-32)

We are guilty of a common abuse of notation. Here, symbols ηc and ηs are used to denote random processes, and sometimes they are used as algebraic variables, as in (9-32). However, the intended use of ηc and ηs should always be clear from context.

The narrow-band noise signal can be represented as

   η(t) = ηc(t)cos ωct − ηs(t)sin ωct = Γ1(t)cos(ωct + ϕ1(t)),                (9-33)

where

   Γ1(t) = √(ηc²(t) + ηs²(t))
                                                                              (9-34)
   ϕ1(t) = tan⁻¹(ηs(t)/ηc(t)),   −π < ϕ1 ≤ π,

are the envelope and phase, respectively. Note that (9-34) describes a transformation of ηc(t) and ηs(t).
The inverse of this transformation is given by

   ηc = Γ1 cos ϕ1
                                                                              (9-35)
   ηs = Γ1 sin ϕ1.

The joint density of Γ1 and ϕ1 can be found by using standard techniques. Since (9-35) is the inverse of (9-33) and (9-34), we can write

   f(Γ1, ϕ1) = f(ηc, ηs) |det ∂(ηc, ηs)/∂(Γ1, ϕ1)|,   evaluated at ηc = Γ1cos ϕ1, ηs = Γ1sin ϕ1,
                                                                              (9-36)
   ∂(ηc, ηs)/∂(Γ1, ϕ1) = [ cos ϕ1   −Γ1 sin ϕ1 ]
                         [ sin ϕ1    Γ1 cos ϕ1 ]

(again, the notation is abusive). Finally, substitute (9-32) into (9-36) to obtain

   f(Γ1, ϕ1) = (Γ1/2πσ²) exp[−(Γ1²/2σ²)(sin²ϕ1 + cos²ϕ1)]
                                                                              (9-37)
             = (Γ1/2πσ²) exp[−Γ1²/2σ²],

for Γ1 ≥ 0 and −π < ϕ1 ≤ π. Finally, note that (9-37) can be factored as

   f(Γ1, ϕ1) = f(Γ1)f(ϕ1),                                                    (9-38)

where

   f(Γ1) = (Γ1/σ²) exp[−Γ1²/2σ²] U(Γ1)                                        (9-39)

describes a Rayleigh-distributed envelope, and

   f(ϕ1) = 1/2π,   −π < ϕ1 ≤ π,                                               (9-40)

describes a uniformly distributed phase. Note that the envelope and phase are independent.

[Fig. 9-7: A hypothetical sample function of narrow-band Gaussian noise. The envelope is Rayleigh and the phase is uniform.]

Envelope and Phase of a Sinusoidal Signal Plus Noise - the Rice Density Function

Many communication problems involve deterministic signals embedded in random noise. The simplest such combination of signal and noise is that of a constant-frequency sinusoid added to narrow-band Gaussian noise. In the 1940s, Stephen O. Rice analyzed this combination and published his results in the paper "Statistical Properties of a Sine Wave Plus Random Noise," Bell System Technical Journal, 27, pp. 109-157, January 1948. His work is outlined in this section.
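Before taking up the signal-plus-noise case, the Rayleigh envelope (9-39) and uniform phase (9-40) can be checked by Monte Carlo. This is an illustrative Python/NumPy sketch; σ and the sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n = 200_000

# iid zero-mean Gaussian quadrature samples (same time t), variance sigma^2
eta_c = rng.normal(0.0, sigma, n)
eta_s = rng.normal(0.0, sigma, n)

gamma1 = np.hypot(eta_c, eta_s)        # envelope, (9-34)
phi1 = np.arctan2(eta_s, eta_c)        # phase in (-pi, pi]

# Rayleigh envelope: E[Gamma1] = sigma*sqrt(pi/2), E[Gamma1^2] = 2*sigma^2
print(gamma1.mean(), sigma * np.sqrt(np.pi / 2))
print((gamma1**2).mean(), 2 * sigma**2)

# Uniform phase on (-pi, pi]: mean ~ 0, variance ~ pi^2/3
print(phi1.mean(), phi1.var(), np.pi**2 / 3)
```

The sample moments land on the Rayleigh and uniform values to within Monte Carlo error, and (not shown) histograms of gamma1 and phi1 trace out (9-39) and (9-40).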
Consider the sinusoid

   s(t) = A0 cos(ωct + θ0) = A0 cos θ0 cos ωct − A0 sin θ0 sin ωct,           (9-41)

where A0, ωc, and θ0 are known constants. To signal s(t) we add noise η(t) given by (9-1), a zero-mean WSS band-pass process with power σ² = E[η²] = E[ηc²] = E[ηs²]. This sum of signal and noise can be written as

   s(t) + η(t) = [A0 cos θ0 + ηc(t)]cos ωct − [A0 sin θ0 + ηs(t)]sin ωct
                                                                              (9-42)
               = Γ2(t)cos[ωct + ϕ2(t)],

where

   Γ2(t) = √([A0 cos θ0 + ηc(t)]² + [A0 sin θ0 + ηs(t)]²)
                                                                              (9-43)
   ϕ2(t) = tan⁻¹[(A0 sin θ0 + ηs(t))/(A0 cos θ0 + ηc(t))],   −π < ϕ2 ≤ π,

are the envelope and phase, respectively, of the signal-plus-noise process. Note that the quantity (A0/√2)²/σ² is the signal-to-noise ratio, a ratio of powers.

Equation (9-43) represents a transformation from the components ηc and ηs into the envelope Γ2 and phase ϕ2. The inverse of this transformation is given by

   ηc(t) = Γ2(t)cos ϕ2(t) − A0 cos θ0
                                                                              (9-44)
   ηs(t) = Γ2(t)sin ϕ2(t) − A0 sin θ0.

Note that the constants A0cos θ0 and A0sin θ0 only influence the means of ηc and ηs. In the remainder of this section, we describe the statistical properties of envelope Γ2 and phase ϕ2.

At the same time t, processes ηc(t) and ηs(t) are statistically independent (however, for τ ≠ 0, ηc(t) and ηs(t+τ) may be dependent). Hence, for ηc(t) and ηs(t) we can write the joint density

   f(ηc, ηs) = exp[−(ηc² + ηs²)/2σ²]/2πσ²                                     (9-45)

(we choose to abuse notation for our convenience: ηc and ηs are used to denote both random processes and, as in (9-45), algebraic variables). The joint density f(Γ2, ϕ2) can be found by transforming (9-45).
To accomplish this, the Jacobian

   ∂(ηc, ηs)/∂(Γ2, ϕ2) = [ cos ϕ2   −Γ2 sin ϕ2 ]
                         [ sin ϕ2    Γ2 cos ϕ2 ]                              (9-46)

can be used to write the joint density

   f(Γ2, ϕ2) = f(ηc, ηs) |det ∂(ηc, ηs)/∂(Γ2, ϕ2)|,   evaluated at ηc = Γ2cos ϕ2 − A0cos θ0, ηs = Γ2sin ϕ2 − A0sin θ0,
                                                                              (9-47)
   f(Γ2, ϕ2) = (Γ2/2πσ²) exp{−(1/2σ²)[Γ2² − 2A0Γ2 cos(ϕ2 − θ0) + A0²]} U(Γ2).

Now, the marginal density f(Γ2) can be found by integrating out the ϕ2 variable to obtain

   f(Γ2) = ∫₀²π f(Γ2, ϕ2) dϕ2
                                                                              (9-48)
         = (Γ2/σ²) exp{−(1/2σ²)[Γ2² + A0²]} U(Γ2) · (1/2π)∫₀²π exp{(A0Γ2/σ²)cos(ϕ2 − θ0)} dϕ2.

This result can be written by using the tabulated function

   I0(β) ≡ (1/2π)∫₀²π exp{β cos θ} dθ,                                        (9-49)

the modified Bessel function of order zero. Now, use definition (9-49) in (9-48) to write

   f(Γ2) = (Γ2/σ²) I0(Γ2A0/σ²) exp{−(1/2σ²)[Γ2² + A0²]} U(Γ2),                (9-50)

a result known as the Rice probability density. As expected, θ0 does not enter into f(Γ2). Equation (9-50) is an important result. It is the density function that statistically describes the envelope Γ2 at time t; for various values of A0/σ, the function σf(Γ2) is plotted in Figure 9-8 (the quantity (A0/√2)²/σ² is the signal-to-noise ratio). For A0/σ = 0, the case of no sinusoid, only noise, the density is Rayleigh. For large A0/σ, the density becomes Gaussian. To observe this asymptotic behavior, note that for large β the approximation

   I0(β) ≈ e^β/√(2πβ),   β >> 1,                                              (9-51)

becomes valid. Hence, for large Γ2A0/σ², Equation (9-50) can be approximated by

   f(Γ2) ≈ √(Γ2/2πA0σ²) exp{−(1/2σ²)[Γ2 − A0]²} U(Γ2).                        (9-52)
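The Rice density (9-50) is easy to evaluate with SciPy's modified Bessel function `scipy.special.i0`. The sketch below (illustrative σ, grid, and A0 values) checks that (9-50) integrates to one and that the A0 = 0 case collapses to the Rayleigh density (9-39).

```python
import numpy as np
from scipy.special import i0

# Rice density (9-50)
def rice_pdf(g, A0, sigma):
    g = np.asarray(g, dtype=float)
    return (g / sigma**2) * i0(g * A0 / sigma**2) * np.exp(-(g**2 + A0**2) / (2 * sigma**2))

sigma = 1.0
g = np.linspace(0.0, 12.0, 24001)
dg = g[1] - g[0]

# Normalization check for several signal amplitudes
areas = [rice_pdf(g, A0, sigma).sum() * dg for A0 in (0.0, 1.0, 3.0)]
print(areas)

# A0 = 0 reduces to the Rayleigh density (9-39), since I0(0) = 1
rayleigh = (g / sigma**2) * np.exp(-g**2 / (2 * sigma**2))
print(np.max(np.abs(rice_pdf(g, 0.0, sigma) - rayleigh)))
```

For large arguments i0 can overflow; in that regime the Gaussian approximation (9-52) (or an exponentially scaled Bessel routine) is the practical choice.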
[Figure 9-8: Rice density σf(Γ2) vs. Γ2/σ for sinusoid plus noise, plotted for A0/σ = 0, 1, 2, 3, 4. Note that f is approximately Rayleigh for small, positive A0/σ; density f is approximately Gaussian for large A0/σ.]

For A0 >> σ, this function has a very sharp peak at Γ2 = A0, and it falls off rapidly from its peak value. Under these conditions, the approximation

   f(Γ2) ≈ (1/√(2πσ²)) exp{−(1/2σ²)[Γ2 − A0]²}                                (9-53)

holds for values of Γ2 near A0 (i.e., Γ2 ≈ A0), where f(Γ2) is significant. Hence, for large A0/σ, envelope Γ2 is approximately Gaussian distributed.

The marginal density f(ϕ2) can be found by integrating Γ2 out of (9-47). Before integrating, complete the square in Γ2, and express (9-47) as

   f(Γ2, ϕ2) = (Γ2/2πσ²) exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} exp{−(A0²/2σ²)sin²(ϕ2 − θ0)} U(Γ2).   (9-54)

Now, integrate Γ2 out of (9-54) to obtain

   f(ϕ2) = ∫₀^∞ f(Γ2, ϕ2) dΓ2
                                                                              (9-55)
         = exp{−(A0²/2σ²)sin²(ϕ2 − θ0)} ∫₀^∞ (Γ2/2πσ²) exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2.
On the right-hand side of (9-55), the integral can be split into two integrals by writing Γ2 = [Γ2 − A0 cos(ϕ2 − θ0)] + A0 cos(ϕ2 − θ0):

   ∫₀^∞ (Γ2/2πσ²) exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2

      = ∫₀^∞ ([Γ2 − A0 cos(ϕ2 − θ0)]/2πσ²) exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2          (9-56)

      + (A0 cos(ϕ2 − θ0)/2πσ²) ∫₀^∞ exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2.

After a change of variable, the first integral on the right-hand side of (9-56) can be evaluated as

   ∫₀^∞ ([Γ2 − A0 cos(ϕ2 − θ0)]/2πσ²) exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2
                                                                              (9-57)
      = (1/2π) exp[−(A0²/2σ²)cos²(ϕ2 − θ0)].

After the change of variable ν = [Γ2 − A0 cos(ϕ2 − θ0)]/σ, the second integral on the right-hand side of (9-56) can be expressed as

   (A0 cos(ϕ2 − θ0)/2πσ²) ∫₀^∞ exp{−(1/2σ²)[Γ2 − A0 cos(ϕ2 − θ0)]²} dΓ2

      = (A0 cos(ϕ2 − θ0)/√(2πσ²)) · (1/√(2π)) ∫ from −(A0/σ)cos(ϕ2−θ0) to ∞ of exp{−ν²/2} dν   (9-58)

      = (A0 cos(ϕ2 − θ0)/√(2πσ²)) · F((A0/σ)cos[ϕ2 − θ0]),

where

   F(x) ≡ (1/√(2π)) ∫ from −∞ to x of exp{−ν²/2} dν

is the distribution function of a zero-mean, unit-variance Gaussian random variable (the identity 1 − F(−x) = F(x) was used to obtain (9-58)).

Finally, we are in a position to write f(ϕ2), the density function for the instantaneous phase. This density follows from using (9-57) and (9-58) in (9-55):

   f(ϕ2) = (1/2π) exp[−A0²/2σ²]
                                                                              (9-59)
         + (A0 cos(ϕ2 − θ0)/√(2πσ²)) exp{−(A0²/2σ²)sin²(ϕ2 − θ0)} F((A0/σ)cos[ϕ2 − θ0]),

the density function for the phase of a sinusoid embedded in narrow-band noise. For various values of SNR and for θ0 = 0, density f(ϕ2) is plotted in Fig. 9-9. For an SNR of zero (i.e., A0 = 0), the phase is uniform.
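A numerical sketch of the phase density (9-59), assuming Python with SciPy; the standard Gaussian distribution function F is supplied by `scipy.stats.norm.cdf`, and the σ, A0 values are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Phase density (9-59); F is the standard Gaussian CDF
def phase_pdf(phi, A0, sigma, theta0=0.0):
    a = A0 / sigma
    c = np.cos(phi - theta0)
    s = np.sin(phi - theta0)
    return (np.exp(-a**2 / 2) / (2 * np.pi)
            + (A0 * c / np.sqrt(2 * np.pi * sigma**2))
            * np.exp(-(a * s)**2 / 2) * norm.cdf(a * c))

phi = np.linspace(-np.pi, np.pi, 20001)
dphi = phi[1] - phi[0]

# Normalization over (-pi, pi]; A0 = 0 reduces to the uniform density 1/(2*pi)
area_snr0 = phase_pdf(phi, 0.0, 1.0).sum() * dphi
area_snr4 = phase_pdf(phi, 4.0, 1.0).sum() * dphi
print(area_snr0, area_snr4)
print(phase_pdf(0.0, 4.0, 1.0), phase_pdf(np.pi, 4.0, 1.0))   # peaked at theta0 = 0
```

Both cases integrate to one, and the A0/σ = 4 curve is sharply peaked at θ0, matching the behavior plotted in Fig. 9-9.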
As the SNR A0²/σ² increases, the density becomes more sharply peaked (in general, the density peaks at θ0, the phase of the sinusoid). As the SNR A0²/σ² approaches infinity, the density of the phase approaches a delta function at θ0.

[Figure 9-9: Density function f(ϕ2) for the phase of signal plus noise A0cos(ωct + θ0) + {ηc(t)cos ωct − ηs(t)sin ωct} for the case θ0 = 0, plotted for A0/σ = 0, 1, 2, 4.]

Shot Noise

Shot noise results from filtering a large number of independent impulses that occur at random times. For example, in a temperature-limited vacuum diode, independent electrons reach the anode at independent times to produce a shot noise process in the diode output circuit. A similar phenomenon occurs in diffusion-limited pn junctions.

To understand shot noise, you must first understand Poisson point processes and Poisson impulses. Recall the definition and properties of the Poisson point process discussed in Chapters 2 and 7 (also, see Appendix 9-B). The Poisson points occur at times ti with an average density of λd points per unit length. In an interval of length τ, the number of points is distributed with a Poisson density with parameter λdτ. Use this Poisson process to form a sequence of Poisson impulses, a sequence of impulses located at the Poisson points and expressed as

   z(t) = Σi δ(t − ti),                                                       (9-60)

where the ti are the Poisson points. Note that z(t) is a generalized random process; like the delta function, it can be characterized only by its behavior under an integral sign. When z(t) is integrated, the result is the Poisson random process

   x(t) = ∫₀^t z(τ)dτ =  n(0, t),    t > 0

                      =  0,          t = 0                                    (9-61)

                      = −n(t, 0),    t < 0,

where n(t1, t2) is the number of Poisson points in the interval (t1, t2].
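The counting property quoted above is easy to verify by simulation. In the sketch below (illustrative rate and interval), Poisson points are generated from iid exponential inter-arrival times, and the count n(0, t) is checked against the Poisson mean and variance λd·t.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 5.0           # average density lambda_d, points per unit time
t = 2.0             # count the points falling in (0, t]
trials = 20_000

# Poisson points from iid exponential inter-arrival times with rate lambda_d;
# 40 arrivals per trial is far more than ever land inside (0, 2] here
inter = rng.exponential(1 / lam, size=(trials, 40))
arrivals = np.cumsum(inter, axis=1)
counts = (arrivals <= t).sum(axis=1)

# n(0, t) is Poisson with parameter lam*t: mean = variance = lam*t
print(counts.mean(), counts.var(), lam * t)
```

Both sample moments land on λd·t = 10 to within Monte Carlo error, illustrating the defining property of the Poisson point process.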
Likewise, by passing the Poisson process x(t) through a generalized differentiator (as illustrated by Fig. 9-10), it is possible to obtain z(t).

[Figure 9-10: Differentiate the Poisson process x(t) to get the Poisson impulses z(t).]

The mean of z(t) is simply the derivative of the mean value of x(t). Since E[x(t)] = λdt, we can write

   ηz = E[z(t)] = (d/dt)E[x(t)] = λd.                                         (9-62)

This formal result needs a physical interpretation. One possible interpretation is to view ηz as

   ηz = limit(t→∞) (1/t)∫ from −t/2 to t/2 of z(τ)dτ
                                                                              (9-63)
      = limit(t→∞) (1/t)(λdt + random fluctuation) = λd.

For large t, the integral in (9-63) fluctuates around mean λdt with a variance of λdt (both the mean and the variance of the number of Poisson points in (−t/2, t/2] equal λdt). But the integral is multiplied by 1/t; the product has a mean of λd and a variance like λd/t. Hence, as t becomes large, the random temporal fluctuations become insignificant compared to λd, the infinite-time-interval average ηz.

Important correlations involving z(t) can be calculated easily. Because Rx(t1, t2) = λd²t1t2 + λd min(t1, t2) (see Chapter 7), we obtain

   Rxz(t1, t2) = ∂Rx(t1, t2)/∂t2 = λd²t1 + λdU(t1 − t2)
                                                                              (9-64)
   Rz(t1, t2) = ∂Rxz(t1, t2)/∂t1 = λd² + λdδ(t1 − t2).

Hence, z(t) is WSS with autocorrelation Rz(τ) = λd² + λdδ(τ). The Fourier transform of Rz(τ) yields

   Sz(ω) = λd + 2πλd²δ(ω),                                                    (9-65)

the power spectrum of the Poisson impulse process.

Let h(t) be a real-valued function of time, and define

   s(t) = Σi h(t − ti),                                                       (9-66)

a sum known as shot noise. The basic idea is illustrated by Fig. 9-11. A sequence of δ-functions described by (9-60) (i.e., process z(t)) is input to a system with impulse response h(t) to form the output shot noise process s(t).
The idea is simple: process s(t) is the output of a system activated by a sequence of impulses (that model electrons arriving at an anode, for example) that occur at the random Poisson points ti.

[Figure 9-11: Converting Poisson impulses z(t) into shot noise s(t) = h(t)∗z(t).]

The elementary properties of shot noise s(t) are determined easily. Using the method discussed in Chapter 7, we obtain the mean

   ηs = E[s(t)] = E[z(t) ∗ h(t)] = h(t) ∗ E[z(t)] = λd∫₀^∞ h(t)dt = λdH(0).   (9-67)

Shot noise s(t) has the power spectrum

   Ss(ω) = |H(ω)|²Sz(ω) = 2πλd²H²(0)δ(ω) + λd|H(ω)|² = 2πηs²δ(ω) + λd|H(ω)|².   (9-68)

Finally, the autocorrelation is

   Rs(τ) = F⁻¹[Ss(ω)] = λd²H²(0) + (λd/2π)∫ from −∞ to ∞ of |H(ω)|²e^(jωτ)dω = λd²H²(0) + λdρ(τ),   (9-69)

where

   ρ(τ) = (1/2π)∫ from −∞ to ∞ of |H(ω)|²e^(jωτ)dω = ∫ from −∞ to ∞ of h(t)h(t + τ)dt.   (9-70)

From (9-67) and (9-69), shot noise has a mean and variance of

   ηs = λdH(0)
                                                                              (9-71)
   σs² = [λd²H²(0) + λdρ(0)] − [λdH(0)]² = λdρ(0) = (λd/2π)∫ from −∞ to ∞ of |H(ω)|²dω,

respectively (Equation (9-71) is known as Campbell's theorem).

Example: Let h(t) = e^(−βt)U(t), so that H(ω) = 1/(β + jω) and ρ(τ) = e^(−β|τ|)/2β. Then

   ηs = E[s(t)] = λd/β                       σs² = λd/2β
                                                                              (9-72)
   Rs(τ) = (λd/2β)e^(−β|τ|) + (λd/β)²       Ss(ω) = 2π(λd/β)²δ(ω) + λd/(β² + ω²).

First-Order Density Function for Shot Noise

In general, the first-order density function fs(x; t) that describes shot noise s(t) cannot be calculated easily. Before tackling the difficult general case, we first consider a simpler special case where it is assumed that h(t) is of finite duration T. That is, we assume initially that

   h(t) = 0,   t < 0 and t ≥ T.                                               (9-73)
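Campbell's theorem (9-71) and the example values (9-72) can be checked by simulating shot noise directly. The sketch below (illustrative λd, β, and time step) bins the Poisson impulses onto a uniform grid and realizes h(t) = e^(−βt)U(t) as a one-pole recursion; the small bin width introduces a discretization bias of order β·dt in the estimated moments.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
lam, beta = 50.0, 5.0            # lambda_d and beta (illustrative values)
T, dt = 400.0, 0.002             # simulated time span and bin width

# Poisson impulse process: independent bin counts, each Poisson(lam*dt)
impulses = rng.poisson(lam * dt, int(T / dt)).astype(float)

# h(t) = exp(-beta*t)U(t) as a recursion: s[k] = decay*s[k-1] + impulses[k]
decay = np.exp(-beta * dt)
s = lfilter([1.0], [1.0, -decay], impulses)

burn = int(5 / dt)               # discard the start-up transient
mean_est = s[burn:].mean()
var_est = s[burn:].var()
print(mean_est, var_est)         # compare with lam/beta = 10 and lam/(2*beta) = 5
```

The time-averaged mean and variance land near λd/β = 10 and λd/2β = 5, as (9-72) predicts.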
Because of (9-73), shot noise s at time t depends only on the Poisson impulses in the interval (t − T, t]. To see this, note that

   s(t) = ∫ from −∞ to ∞ of h(t − τ) Σi δ(τ − ti) dτ = Σ over t−T < ti ≤ t of h(t − ti),   (9-74)

so that only the impulses in (t − T, t] influence the output at time t. Let random variable nT denote the number of Poisson impulses during (t − T, t]. From Chapter 1, we know that

   P[nT = k] = e^(−λdT)(λdT)^k / k!.                                          (9-75)

Now, the Law of Total Probability (see Ch. 1 and Ch. 2 of these notes) can be applied to write the first-order density function of the shot noise process s(t) as

   fs(x) = Σ from k=0 to ∞ of fs(x|nT = k)P[nT = k] = Σ from k=0 to ∞ of fs(x|nT = k) e^(−λdT)(λdT)^k / k!   (9-76)

(note that fs(x) is independent of absolute time t). We must find fs(x|nT = k), the density of shot noise s(t) conditioned on there being exactly k Poisson impulses in the interval (t − T, t].

For each fixed value of k used on the right-hand side of (9-76), conditional density fs(x|nT = k) describes the filter output due to an input of exactly k impulses on (t − T, t]. That is, we have conditioned on there being exactly k impulses in (t − T, t]. As a result of the conditioning, the k impulse locations can be modeled as k independent, identically distributed (iid) random variables (all locations ti, 1 ≤ i ≤ k, are uniform on the interval).

For the case k = 1, at any fixed time t, fs(x|nT = 1) is equal to the density g1(x) of the random variable

   x1(t) ≡ h(t − t1),                                                         (9-77)

where random variable t1 is uniformly distributed on (t − T, t). That is, g1(x) ≡ fs(x|nT = 1) describes the result obtained by transforming a uniform density (used to describe t1) by the transformation h(t − t1). Convince yourself that density g1(x) = fs(x|nT = 1) does not depend on time.
Note that for any given time t, random variable t1 is uniform on (t − T, t), and x1(t) ≡ h(t − t1) takes values in the set {h(α) : 0 < α < T}, the assignment not depending on t. Hence, density g1(x) ≡ fs(x | nT = 1) does not depend on t.

The density fs(x | nT = 2) can be found in a similar manner. Let t1 and t2 denote independent random variables, each uniformly distributed on (t − T, t), and define

x2(t) ≡ h(t − t1) + h(t − t2).  (9-78)

At fixed time t, the random variable x2(t) is described by the density fs(x | nT = 2) = g1 ∗ g1 (i.e., the convolution of g1 with itself), since h(t − t1) and h(t − t2) are independent and identically distributed with density g1. The general case fs(x | nT = k) is similar. At fixed time t, the density that describes

xk(t) ≡ h(t − t1) + h(t − t2) + ⋯ + h(t − tk)  (9-79)

is

gk(x) ≡ fs(x | nT = k) = g1(x) ∗ g1(x) ∗ ⋯ ∗ g1(x)  (k factors),  (9-80)

the k-fold convolution product of g1 with itself. The desired density can be expressed in terms of the results given above. Simply substitute (9-80) into (9-76) and obtain

fs(x) = e^{-λdT} Σ_{k=0}^{∞} gk(x) (λdT)^k / k!.  (9-81)

When nT = 0, there are no Poisson points in (t − T, t], and we have

g0(x) ≡ fs(x | nT = 0) = δ(x)  (9-82)

since the output is zero. Convergence is fast, and (9-81) is useful for computing the density fs when λdT is small (the case of low-density shot noise), say on the order of 1, so that, on average, there are only a few Poisson impulses in the interval (t − T, t]. For the case of low-density shot noise, (9-81) cannot be approximated by a Gaussian density.

fs(x) For An Infinite Duration h(t)

The first-order density function fs(x) is much more difficult to calculate for the general case where h(t) is of infinite duration (not subject to the restriction (9-73)).
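Before tackling that general case, note that the finite-duration series (9-81) is easy to evaluate numerically. The sketch below uses an assumed triangular pulse h(t) = 1 − t/T on [0, T) (chosen for convenience, not taken from the notes): for that h, x1 = h(t − t1) is uniform on (0, 1), so g1 is the uniform density and gk is its k-fold convolution (the Irwin–Hall density):

```python
import numpy as np
from math import exp, factorial

lam_d, T = 1.0, 2.0          # assumed low-density example: lam_d*T = 2
dx = 2e-3
g1 = np.ones(int(1 / dx))    # g1: Uniform(0,1) density sampled on a grid

K = 20                       # truncate the Poisson sum; P[nT > 20] is negligible here
fs = np.zeros(K * len(g1))   # continuous part of fs(x) on the grid
gk = g1.copy()
for k in range(1, K + 1):
    w = exp(-lam_d * T) * (lam_d * T) ** k / factorial(k)   # P[nT = k]
    fs[:len(gk)] += w * gk
    gk = np.convolve(gk, g1) * dx                           # g_{k+1} = g_k convolved with g1

x = np.arange(len(fs)) * dx
mass = fs.sum() * dx + exp(-lam_d * T)   # continuous part plus the atom g0 = delta(x)
mean = (x * fs).sum() * dx               # the atom at x = 0 contributes nothing to the mean
print(round(mass, 3), round(mean, 3))    # mass ~ 1; mean ~ lam_d*H(0) = lam_d*T/2 = 1
```

The computed mean matches λd H(0) from (9-67), a useful consistency check on the series.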
We show that shot noise is approximately Gaussian distributed when λd is large compared to the time interval over which h(t) is significant (so that, on average, many Poisson impulses are filtered to form s(t)). To establish this fact, consider first a finite-duration interval (−T/2, T/2), and let random variable nT, described by (9-75), denote the number of Poisson impulses contained in the interval. Also, define the time-limited shot noise

sT(t) ≡ Σ_{k=1}^{nT} h(t − tk), −T/2 < t < T/2,  (9-83)

where the random variables ti denote the times at which the Poisson impulses occur in the interval. Shot noise s(t) is the limit of sT(t) as T approaches infinity. In our analysis of s(t), we first consider the characteristic function

Φs(ω) = E[e^{jωs}] = lim_{T→∞} E[e^{jω sT}].  (9-84)

Now, write the characteristic function of sT as

E[e^{jω sT}] = Σ_{k=0}^{∞} E[e^{jω sT} | nT = k] P[nT = k],  (9-85)

where P[nT = k] is given by (9-75). In the conditional expectation used in (9-85), output sT results from filtering exactly k impulses (this is different from the sT that appears on the left-hand side of the equation). Due to the conditioning, we can model the impulse locations as k independent, identically distributed (iid — they are uniform on (−T/2, T/2)) random variables. As a result, the terms h(t − ti) in sT(t) are independent, so that

E[e^{jω sT} | nT = k] = (E[e^{jω sT} | nT = 1])^k,  (9-86)

where

E[e^{jω sT} | nT = 1] = (1/T) ∫_{-T/2}^{T/2} e^{jωh(t−x)} dx, −T/2 < t < T/2,  (9-87)

since each ti is uniformly distributed on (−T/2, T/2).
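As a sanity check before proceeding, the factorization (9-86)–(9-87) can be verified by brute force. The sketch below assumes an example filter h(t) = e^{-t}U(t) and illustrative values of T, ω, k, and the observation time (none of these come from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
T, omega, k = 4.0, 1.3, 3                # assumed illustrative values
h = lambda t: np.exp(-t) * (t >= 0)      # assumed example filter
t_obs, n = 0.5, 400_000

# Left side of (9-86): Monte Carlo with exactly k iid uniform impulse locations
ti = rng.uniform(-T / 2, T / 2, (n, k))
lhs = np.mean(np.exp(1j * omega * h(t_obs - ti).sum(axis=1)))

# Right side: the k = 1 expectation (9-87) by numerical quadrature, raised to the k
x = np.linspace(-T / 2, T / 2, 200_001)
single = np.mean(np.exp(1j * omega * h(t_obs - x)))
rhs = single ** k

print(abs(lhs - rhs))   # small: the conditional expectation factors as claimed
```

The agreement reflects exactly the iid structure argued above: given nT = k, the sum sT is a sum of k iid terms, so its characteristic function is the k-th power of the single-impulse one.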
Finally, by using (9-84) through (9-87), we can write

Φs(ω) = lim_{T→∞} E[e^{jω sT}] = lim_{T→∞} Σ_{k=0}^{∞} E[e^{jω sT} | nT = k] P[nT = k]

      = lim_{T→∞} Σ_{k=0}^{∞} ((1/T) ∫_{-T/2}^{T/2} e^{jωh(t−x)} dx)^k e^{-λdT} (λdT)^k / k!

      = lim_{T→∞} e^{-λdT} Σ_{k=0}^{∞} (λd ∫_{-T/2}^{T/2} e^{jωh(t−x)} dx)^k / k!.  (9-88)

Recalling the Taylor series of the exponential function, we can write (9-88) as

Φs(ω) = lim_{T→∞} exp{−λdT} exp{λd ∫_{-T/2}^{T/2} e^{jωh(t−x)} dx} = exp[λd ∫_{-∞}^{∞} (e^{jωh(t−x)} − 1) dx],  (9-89)

a general formula for the characteristic function of the shot noise process. In general, Equation (9-89) is impossible to evaluate in closed form. However, this formula can be used to show that shot noise is approximately Gaussian distributed when λd is large compared to the time constants in h(t) (i.e., compared to the time duration where h(t) is significant). First, this task is made simpler if we standardize s(t) to

s̄(t) ≡ (s(t) − λd H(0)) / √λd,  (9-90)

so that

E[s̄] = 0
Rs̄(τ) = ρ(τ) = ∫_{-∞}^{∞} h(t) h(t + τ) dt  (9-91)

(see (9-67) and (9-69)). The characteristic functions of s and s̄ are related by

Φs̄(ω) = E[e^{jωs̄}] = E[exp{jω (s − λd H(0))/√λd}] = exp[−jω λd H(0)/√λd] Φs(ω/√λd).  (9-92)

Use (9-89) in (9-92) to write

Φs̄(ω) = exp[λd ∫_{-∞}^{∞} {exp[(jω/√λd) h(t−x)] − 1 − (jω/√λd) h(t−x)} dx].  (9-93)

Now, in the integrand, expand the exponential in a power series, and cancel the zero- and first-order terms to obtain

Φs̄(ω) = exp[λd ∫_{-∞}^{∞} Σ_{k=2}^{∞} ((jω)^k/k!) {h(t−x)/√λd}^k dx] = exp[λd Σ_{k=2}^{∞} ((jω)^k/k!) ∫_{-∞}^{∞} {h(x)/√λd}^k dx].  (9-94)

Finally, assume that λd is large compared to the time duration during which h(t) is significant.
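The structure of (9-94) already shows why this works: reading off the exponent, the k-th cumulant of s̄ is λd^{1−k/2} ∫ h^k(x) dx, so every cumulant beyond the variance carries a factor λd^{-(k−2)/2} and shrinks as λd grows. A quick numeric illustration, assuming the example filter h(t) = e^{-t}U(t):

```python
import numpy as np

t = np.linspace(0.0, 40.0, 400_001)
dt = t[1] - t[0]
h = np.exp(-t)                               # assumed example filter, beta = 1
Ik = lambda k: np.sum(h ** k) * dt           # Riemann sum for the integral of h^k

def skewness(lam_d):
    # cumulants of standardized shot noise, read off from (9-94):
    #   kappa_k = lam_d**(1 - k/2) * integral(h**k)
    k2 = Ik(2)                               # the variance: independent of lam_d
    k3 = Ik(3) / np.sqrt(lam_d)
    return k3 / k2 ** 1.5

for lam_d in (1.0, 100.0, 10_000.0):
    print(lam_d, skewness(lam_d))            # falls like 1/sqrt(lam_d)
```

The skewness (and every higher standardized cumulant) tends to zero as λd → ∞, which is exactly the Gaussian limit argued next.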
Equivalently, λd is large compared to all of the filter time constants. This ensures that, on average and at any given time, shot noise s(t) results from filtering a large number of random Poisson impulses. For this case, only the first term in the sum is significant; for large λd, Equation (9-94) can be approximated as

Φs̄(ω) ≈ exp[((jω)²/2) ∫_{-∞}^{∞} h²(x) dx] = exp[−σs̄² ω²/2],  (9-95)

where

σs̄² = Rs̄(0)  (9-96)

is the variance of standardized shot noise s̄(t) (see (9-91)). Note that Equation (9-95) is the characteristic function of a zero-mean Gaussian random variable with variance (9-96). Hence, shot noise is approximately Gaussian distributed when λd is large compared to the time interval over which h(t) is significant (so that, on average, a large number of Poisson impulses are filtered to form s(t)).

Example: Temperature-Limited Vacuum Diode

In classical communication system theory, a temperature-limited vacuum diode is the quintessential example of a shot noise generator (the phenomenon was first predicted and analyzed theoretically by Schottky in his 1918 paper: Theory of Shot Effect, Ann. Phys., Vol. 57, Dec. 1918, pp. 541-568). In fact, over the years, noise generators (used for testing/aligning communication receivers, low-noise preamplifiers, etc.) based on vacuum diodes (e.g., the Sylvania 5722 special-purpose noise generator diode) have been offered on a commercial basis. Vacuum noise-generating diodes are operated in a temperature-limited, or saturated, mode. Essentially all of the available electrons are collected by the plate (few return to the cathode), so increasing plate voltage does not increase plate current (i.e., the tube is saturated). The only way to increase plate current is to increase filament/cathode temperature.
Under this condition, space charge effects between electrons are negligible, so individual electrons are, more or less, independent of each other. The basic circuit is illustrated by Figure 9-12. In a random manner, electrons are emitted by the cathode, and they travel a distance d to the plate to form the current i(t). An independent electron emitted at t = 0 contributes a current h(t), and the aggregate plate current is given by

[Figure 9-12: Temperature-limited vacuum diode used as a shot noise generator.]

i(t) = Σ_k h(t − tk),  (9-97)

where the tk are the Poisson-distributed independent times at which electrons are emitted by the cathode (see Equation (9-66)). In what follows, we approximate h(t). As discussed above, space charge effects are negligible and the electrons are independent. Since there is no space charge between the cathode and plate, the potential distribution V in this region satisfies Laplace's equation

∂²V/∂x² = 0.  (9-98)

The potential must satisfy the boundary conditions V(0) = 0 and V(d) = Vp. Hence, simple integration yields

V = (Vp/d) x, 0 ≤ x ≤ d.  (9-99)

As an electron flows from the cathode to the plate, its velocity and energy increase. At point x between the cathode and plate, the energy increase is given by

En(x) = eV(x) = e (Vp/d) x,  (9-100)

where e is the basic electronic charge. Power is the rate at which energy changes. Hence, the instantaneous power flowing from the battery into the tube is

dEn/dt = (dEn/dx)(dx/dt) = e (Vp/d)(dx/dt) = Vp h,  (9-101)

where h(t) is the current due to the flow of a single electron (note that d⁻¹ dx/dt has units of sec⁻¹, so (e/d) dx/dt has units of charge/sec, or current).
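Anticipating the kinematics worked out next, the diode's numbers are easy to check: the field Vp/d is constant, so the electron undergoes constant acceleration, and both the transit time and the total charge delivered by one pulse follow at once. In the sketch below, d is the quoted Sylvania 5722 spacing, while the plate voltage Vp = 100 V is an assumed illustrative value (not given in the notes):

```python
import numpy as np

e = 1.602176634e-19        # electronic charge, C
m = 9.1093837015e-31       # electron mass, kg
d = 9.525e-4               # cathode-to-plate spacing, m (0.0375 in, Sylvania 5722)
Vp = 100.0                 # plate voltage, V -- assumed value for illustration

a = e * Vp / (m * d)       # constant acceleration from the uniform field
tT = np.sqrt(2.0 * d / a)  # transit time, from d = a*tT**2/2

# single-electron current pulse h(t) = (e/d)*v_x(t) = (e/d)*a*t on [0, tT]
t = np.linspace(0.0, tT, 100_001)
dt = t[1] - t[0]
hh = (e / d) * a * t
q = (hh[:-1] + hh[1:]).sum() * dt / 2.0   # trapezoid rule: total charge in the pulse
print(tT, q / e)                          # q/e ~ 1: each pulse carries one electron
```

With these numbers the transit time comes out near 3×10⁻¹⁰ s, consistent with the figure quoted for the 5722 at the end of this example, and the pulse integrates to exactly one electronic charge.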
Equation (9-101) can be solved for current to obtain

h = (e/d)(dx/dt) = (e/d) vx,  (9-102)

where vx is the instantaneous velocity of the electron. Electron velocity can be found by applying Newton's laws. The force on an electron is just e(Vp/d), the product of the electronic charge and the electric field strength. Since force is equal to the product of electron mass m and acceleration ax, we have

ax = eVp/(md).  (9-103)

As it is emitted by the cathode, an electron has an initial velocity that is Maxwellian distributed. However, to simplify this example, we will assume that the initial velocity is zero. With this assumption, electron velocity can be obtained by integrating (9-103) to obtain

vx = (eVp/md) t.  (9-104)

Over the transit time tT, the average velocity is

v̄x = (1/tT) ∫_0^{tT} vx dt = (eVp/2md) tT = d/tT.  (9-105)

Finally, combine these last two equations to obtain

vx = (2d/tT²) t, 0 ≤ t ≤ tT.  (9-106)

With the aid of this last relationship, we can determine current as a function of time. Simply combine (9-102) and (9-106) to obtain

h(t) = (2e/tT²) t, 0 ≤ t ≤ tT,  (9-107)

the current pulse generated by a single electron as it travels from the cathode to the plate. This current pulse is depicted by Figure 9-13.

[Figure 9-13: Current due to a single electron emitted by the cathode at t = 0; h(t) rises linearly to 2e/tT at the transit time tT.]

The bandwidth of shot noise s(t) is of interest. For example, we may use the noise generator to make relative measurements on a communication receiver, and we may require the
noise spectrum to be "flat" (or "white") over the receiver bandwidth (the noise spectrum amplitude is not important since we are making relative measurements). To assess this "flatness", we can compute and examine the power spectrum of standardized s̄(t) described by (9-90). As given by (9-91), the autocorrelation of s̄(t) is

Rs̄(τ) = (2e/tT²)² ∫_0^{tT−τ} t(t + τ) dt = (4e²/3tT)(1 − τ/tT)²(1 + τ/2tT), 0 ≤ τ ≤ tT
       = Rs̄(−τ), −tT ≤ τ ≤ 0
       = 0, otherwise.  (9-108)

The power spectrum of s̄(t) is the Fourier transform of (9-108), a result given by

S̄(ω) = 2 ∫_0^{∞} Rs̄(τ) cos(ωτ) dτ = e² (4/(ωtT)⁴)[(ωtT)² + 2(1 − cos ωtT − ωtT sin ωtT)].  (9-109)

Plots of the autocorrelation and relative power spectrum (plotted in dB relative to the peak power at ω = 0) are given by Figures 9-14 and 9-15, respectively.

[Figure 9-14: Autocorrelation function of the normalized shot noise process; peak 4e²/3tT at τ = 0, support |τ| ≤ tT. Figure 9-15: Relative power spectrum 10log{S̄(ω)/S̄(0)}, roughly flat out to ω ≈ π/tT.]

To within 3 dB, the power spectrum is "flat" from DC to a little over ω = π/tT. For the Sylvania 5722 noise generator diode, the cathode-to-plate spacing is 0.0375 inches and the transit time is about 3×10⁻¹⁰ seconds. For this diode, the 3 dB cutoff would be about 1/(2tT) ≈ 1600 MHz. In practical application, where electrode/circuit stray capacitance/inductance limits the frequency range, the Sylvania 5722 has been used in commercial noise generators operating at over 400 MHz.
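The quoted cutoff can be reproduced from (9-109). The sketch below bisects on the main lobe for the frequency where the relative spectrum S̄(ω)/S̄(0) has fallen to 1/2 (i.e., −3 dB), then converts to Hz using the quoted tT = 3×10⁻¹⁰ s:

```python
import math

def S_rel(u):
    """Relative spectrum S(w)/S(0) from (9-109), with u = w*tT; S_rel(0) = 1."""
    if u == 0.0:
        return 1.0
    return 4.0 / u ** 4 * (u * u + 2.0 * (1.0 - math.cos(u) - u * math.sin(u)))

# bisect for the -3 dB point S_rel(u) = 1/2 on the main lobe,
# where S_rel decreases from ~0.95 at u = 1 to ~0.12 at u = 6
lo, hi = 1.0, 6.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if S_rel(mid) > 0.5 else (lo, mid)

u3 = 0.5 * (lo + hi)
tT = 3e-10                              # transit time quoted for the Sylvania 5722
f3 = u3 / (2.0 * math.pi * tT)          # 3 dB cutoff in Hz
print(u3, f3 / 1e6)                     # u3 is a little over pi, as stated above
```

The bisection lands a little above u = π, i.e., near 1.8 GHz for this tT, which is the same order as the 1/(2tT) figure quoted above.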

This note was uploaded on 10/12/2009 for the course EE603, taught by Professor John Stensby during the Spring '09 term at the University of Alabama in Huntsville.
