Chapter 1 Mathematical Methods

In this chapter we study basic mathematical methods for characterizing noise processes. The two important analytical methods, probability distribution functions and Fourier analysis, are introduced here. These two methods will be used frequently throughout this text, not only for classical systems but also for quantum systems. We present the two mathematical methods as compactly and succinctly as possible; the reader may find more detailed discussions in excellent texts [1]-[6]. In particular, most of the discussion in this chapter follows the texts by M. J. Buckingham [1] and by A. W. Drake [2].

1.1 Time Average vs. Ensemble Average

Noise is a stochastic process consisting of a randomly varying function of time and space, and thus can only be characterized statistically. One cannot discuss a single event at a certain time or position; one can only discuss the average of a single system over a certain time (or space) interval, or the average of many identical systems at a certain time instant (or spatial position). The former is called the time (or space) average and the latter the ensemble average. Let us consider N systems that produce noisy waveforms x^{(i)}(t), as shown in Fig. 1.1.

[Figure 1.1: Ensemble average vs. time average.]

One can define the following time-averaged quantities for the i-th member of the ensemble:

\overline{x^{(i)}(t)} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^{(i)}(t)\, dt ,  (mean = first-order time average)  (1.1)

\overline{x^{(i)}(t)^2} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \left[ x^{(i)}(t) \right]^2 dt ,  (mean square = second-order time average)  (1.2)

\phi_x^{(i)}(\tau) \equiv \overline{x^{(i)}(t)\, x^{(i)}(t+\tau)} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^{(i)}(t)\, x^{(i)}(t+\tau)\, dt .  (autocorrelation function)  (1.3)
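The time averages above can be sketched numerically. The following is a minimal illustration, not from the text: it replaces the T → ∞ limit with a finite record of a hypothetical zero-mean Gaussian white-noise waveform and evaluates finite-record estimators of Eqs. (1.1)-(1.3). The waveform, record length, and lag values are all assumptions made for illustration.

```python
import random

random.seed(0)

# Hypothetical single ensemble member x^(i)(t): a finite record of
# zero-mean, unit-variance Gaussian white noise (an assumption; the
# text does not specify a waveform).
x = [random.gauss(0.0, 1.0) for _ in range(10000)]
T = len(x)

# Eq. (1.1): mean (first-order time average), finite-T estimator.
mean = sum(v for v in x) / T

# Eq. (1.2): mean square (second-order time average), finite-T estimator.
mean_square = sum(v * v for v in x) / T

def autocorr(x, tau):
    """Eq. (1.3): time-averaged autocorrelation at lag tau (finite T)."""
    n = len(x) - tau
    return sum(x[t] * x[t + tau] for t in range(n)) / n

# For zero-mean white noise, phi_x(0) equals the mean square, while
# phi_x(tau) for tau != 0 should be near zero.
print(mean, mean_square, autocorr(x, 0), autocorr(x, 5))
```

Note that `autocorr(x, 0)` reproduces the mean square exactly, mirroring the fact that Eq. (1.3) at τ = 0 reduces to Eq. (1.2).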
One can also define the following ensemble-averaged quantities over all members of the ensemble at a certain time:

\langle x(t_1) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x^{(i)}(t_1) = \int_{-\infty}^{\infty} x_1\, p_1(x_1, t_1)\, dx_1 ,  (mean = first-order ensemble average)  (1.4)

\langle x(t_1)^2 \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \left[ x^{(i)}(t_1) \right]^2 = \int_{-\infty}^{\infty} x_1^2\, p_1(x_1, t_1)\, dx_1 ,  (mean square = second-order ensemble average)  (1.5)

\langle x(t_1)\, x(t_2) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x^{(i)}(t_1)\, x^{(i)}(t_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2\, p_2(x_1, x_2; t_1, t_2)\, dx_1\, dx_2 .  (covariance)  (1.6)

Here x_1 = x(t_1) and x_2 = x(t_2); p_1(x_1, t_1) is the first-order probability density function (PDF), and p_2(x_1, x_2; t_1, t_2) is the second-order joint probability density function. p_1(x_1, t_1) dx_1 is the probability that x is found in the range between x_1 and x_1 + dx_1 at a time t_1, and p_2(x_1, x_2; t_1, t_2) dx_1 dx_2 is the probability that x is found in the range between x_1 and x_1 + dx_1 at a time t_1 and also in the range between x_2 and x_2 + dx_2 at a different time t_2. ...
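In the same spirit, the ensemble averages of Eqs. (1.4)-(1.6) can be sketched by averaging across many realizations at fixed time instants. This is a minimal illustration under assumptions of my own (an ensemble of independent Gaussian white-noise records, and arbitrarily chosen sample times t1 and t2), replacing the N → ∞ limit with a finite ensemble.

```python
import random

random.seed(1)

# Finite-N estimators of Eqs. (1.4)-(1.6): average across an ensemble of
# hypothetical realizations at fixed times t1 and t2. The realization
# model and time indices are assumptions for illustration only.
N = 5000
t1, t2 = 10, 20

def realization(length=32):
    """One hypothetical member x^(i)(t): zero-mean Gaussian white noise."""
    return [random.gauss(0.0, 1.0) for _ in range(length)]

ensemble = [realization() for _ in range(N)]

# Eq. (1.4): mean (first-order ensemble average) at t1.
mean_t1 = sum(x[t1] for x in ensemble) / N

# Eq. (1.5): mean square (second-order ensemble average) at t1.
mean_sq_t1 = sum(x[t1] ** 2 for x in ensemble) / N

# Eq. (1.6): covariance between t1 and t2.
corr_t1_t2 = sum(x[t1] * x[t2] for x in ensemble) / N

# For independent samples, <x(t1) x(t2)> should be near zero when t1 != t2.
print(mean_t1, mean_sq_t1, corr_t1_t2)
```

For this memoryless model the ensemble averages agree with the time averages of the previous sketch; whether that holds in general is exactly the ergodicity question a stochastic process may or may not satisfy.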
This note was uploaded on 12/29/2011 for the course PHYSICS 731 taught by Professor Appelbaum during the Fall '11 term at Maryland.