Chapter 1

Mathematical Methods

In this chapter we study basic mathematical methods for characterizing noise processes. The two important analytical methods, probability distribution functions and Fourier analysis, are introduced here. These two methods will be used frequently throughout this text, not only for classical systems but also for quantum systems. We present the two mathematical methods as compactly and succinctly as possible; the reader may find more detailed discussions in excellent texts [1]-[6]. In particular, most of the discussion in this chapter follows the texts by M.J. Buckingham [1] and by A.W. Drake [2].

1.1 Time Average vs. Ensemble Average

Noise is a stochastic process, a randomly varying function of time and space, and thus can only be characterized statistically. One cannot discuss a single event at a particular time or position; one can only discuss the average of a single system over a certain time (or space) interval, or the average of many identical systems at a certain time instant (or spatial position). The former is called the time (or space) average and the latter the ensemble average. Let us consider $N$ systems which produce noisy waveforms $x^{(i)}(t)$, as shown in Fig. 1.1.

[Figure 1.1: Ensemble average vs. time average.]

One can define the following time-averaged quantities for the $i$-th member of the ensemble:

\[
\overline{x^{(i)}(t)} = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x^{(i)}(t)\,dt ,
\qquad \text{(mean = first-order time average)} \tag{1.1}
\]

\[
\overline{x^{(i)}(t)^{2}} = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} \left[x^{(i)}(t)\right]^{2} dt ,
\qquad \text{(mean square = second-order time average)} \tag{1.2}
\]

\[
\phi_{x}^{(i)}(\tau) \equiv \overline{x^{(i)}(t)\,x^{(i)}(t+\tau)}
= \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x^{(i)}(t)\,x^{(i)}(t+\tau)\,dt .
\qquad \text{(autocorrelation function)} \tag{1.3}
\]

One can also define the following ensemble-averaged quantities for all members of the ensemble at a certain time:

\[
\langle x(t_{1})\rangle = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} x^{(i)}(t_{1})
= \int_{-\infty}^{\infty} x_{1}\,p_{1}(x_{1},t_{1})\,dx_{1} ,
\qquad \text{(mean = first-order ensemble average)} \tag{1.4}
\]

\[
\langle x(t_{1})^{2}\rangle = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} \left[x^{(i)}(t_{1})\right]^{2}
= \int_{-\infty}^{\infty} x_{1}^{2}\,p_{1}(x_{1},t_{1})\,dx_{1} ,
\qquad \text{(mean square = second-order ensemble average)} \tag{1.5}
\]

\[
\langle x(t_{1})\,x(t_{2})\rangle = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} x^{(i)}(t_{1})\,x^{(i)}(t_{2})
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x_{1}x_{2}\,p_{2}(x_{1},x_{2};t_{1},t_{2})\,dx_{1}\,dx_{2} .
\qquad \text{(covariance)} \tag{1.6}
\]

Here, $x_{1} = x(t_{1})$, $x_{2} = x(t_{2})$, $p_{1}(x_{1},t_{1})$ is the first-order probability density function (PDF), and $p_{2}(x_{1},x_{2};t_{1},t_{2})$ is the second-order joint probability density function. The quantity $p_{1}(x_{1},t_{1})\,dx_{1}$ is the probability that $x$ is found between $x_{1}$ and $x_{1}+dx_{1}$ at time $t_{1}$, and $p_{2}(x_{1},x_{2};t_{1},t_{2})\,dx_{1}\,dx_{2}$ is the probability that $x$ is found between $x_{1}$ and $x_{1}+dx_{1}$ at time $t_{1}$ and also between $x_{2}$ and $x_{2}+dx_{2}$ at a different time $t_{2}$.
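To make the distinction between the two kinds of average concrete, the following short numerical sketch (not part of the original text) generates an ensemble of $N$ noisy waveforms and compares the time averages of a single member, Eqs. (1.1)-(1.3), with the ensemble averages taken across all members at a fixed instant, Eqs. (1.4)-(1.5). The waveform model used here, white Gaussian noise passed through a simple first-order low-pass filter, is an illustrative assumption and not a process specified in the chapter.

```python
# Minimal sketch (assumed model: first-order filtered white Gaussian noise).
# x[i, t] plays the role of x^(i)(t) on a discrete time grid.
import numpy as np

rng = np.random.default_rng(0)

N = 2000          # number of ensemble members
T = 10000         # number of time samples per member
alpha = 0.95      # low-pass filter coefficient (illustrative choice)

# Build the ensemble of waveforms.
w = rng.standard_normal((N, T))            # white Gaussian driving noise
x = np.empty((N, T))
x[:, 0] = w[:, 0]
for t in range(1, T):
    x[:, t] = alpha * x[:, t - 1] + np.sqrt(1 - alpha**2) * w[:, t]

# --- Time averages for a single member, Eqs. (1.1) and (1.2) ---
i = 0
time_mean = x[i].mean()                    # (1/T) * sum over time of x^(i)(t)
time_mean_sq = (x[i] ** 2).mean()          # second-order time average

# --- Ensemble averages at a fixed instant t1, Eqs. (1.4) and (1.5) ---
t1 = T // 2
ens_mean = x[:, t1].mean()                 # average over members at t1
ens_mean_sq = (x[:, t1] ** 2).mean()

# --- Autocorrelation function by time averaging, Eq. (1.3) ---
def autocorr_time(xi, max_lag):
    """Estimate phi_x(tau) by averaging x(t) x(t+tau) over time."""
    return np.array([np.mean(xi[: len(xi) - tau] * xi[tau:])
                     for tau in range(max_lag)])

phi = autocorr_time(x[i], max_lag=50)

print(f"time mean    = {time_mean:+.4f},  ensemble mean    = {ens_mean:+.4f}")
print(f"time mean sq = {time_mean_sq:.4f},  ensemble mean sq = {ens_mean_sq:.4f}")
print(f"phi_x(0) = {phi[0]:.4f},  phi_x(10) = {phi[10]:.4f}")
```

For this particular stationary model the time averages of one member and the ensemble averages at a single instant come out nearly equal; whether such an interchange of the two averages is permitted in general depends on the properties of the process, which is exactly what the statistical tools of this chapter are meant to characterize.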