1.2 Expectation is discussed in the third chapter of the basic probability facts pdf mentioned in the preface. For continuous-valued, finite variance processes, the mean is
$$\mu_t = E(x_t) = \int_{-\infty}^{\infty} x\, f_t(x)\, dx$$
and the variance is
$$\sigma_t^2 = E(x_t - \mu_t)^2 = \int_{-\infty}^{\infty} (x - \mu_t)^2 f_t(x)\, dx,$$
where $f_t$ is the density of $x_t$. If $x_t$ is Gaussian with mean $\mu_t$ and variance $\sigma_t^2$, abbreviated as $x_t \sim N(\mu_t, \sigma_t^2)$, the marginal density is given by
$$f_t(x) = \frac{1}{\sigma_t \sqrt{2\pi}} \exp\left\{ -\frac{1}{2\sigma_t^2} (x - \mu_t)^2 \right\}, \qquad x \in \mathbb{R}.$$
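As a quick illustration (not part of the text), the following minimal NumPy sketch evaluates this Gaussian marginal density and numerically checks that it integrates to one with the stated mean and variance; the function name and the particular values $\mu_t = 1$, $\sigma_t^2 = 4$ are arbitrary choices for the example.

```python
import numpy as np

# Sketch (not from the text): the Gaussian marginal density of footnote 1.2,
# evaluated for an arbitrary mean mu_t and variance sig2_t.
def gaussian_density(x, mu_t, sig2_t):
    """f_t(x) = exp(-(x - mu_t)^2 / (2*sig2_t)) / (sigma_t * sqrt(2*pi))."""
    return np.exp(-(x - mu_t) ** 2 / (2.0 * sig2_t)) / np.sqrt(2.0 * np.pi * sig2_t)

# Crude numerical check that the density integrates to one and has the
# stated mean and variance (mu_t = 1, sig2_t = 4 are arbitrary choices).
dx = 1e-3
x = np.arange(-20.0, 20.0, dx)
f = gaussian_density(x, mu_t=1.0, sig2_t=4.0)
print(np.sum(f) * dx)                    # ~ 1.0  (total probability)
print(np.sum(x * f) * dx)                # ~ 1.0  (the mean mu_t)
print(np.sum((x - 1.0) ** 2 * f) * dx)   # ~ 4.0  (the variance sig2_t)
```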
Example 1.12 Mean Function of a Random Walk with Drift
Consider the random walk with drift model given in (1.4),
$$x_t = \delta t + \sum_{j=1}^{t} w_j, \qquad t = 1, 2, \ldots.$$
Because $E(w_t) = 0$ for all $t$, and $\delta$ is a constant, we have
$$\mu_{xt} = E(x_t) = \delta t + \sum_{j=1}^{t} E(w_j) = \delta t,$$
which is a straight line with slope $\delta$. A realization of a random walk with drift can be compared to its mean function in Figure 1.9.

Example 1.13 Mean Function of Signal Plus Noise
A great many practical applications depend on assuming the observed data have been generated by a fixed signal waveform superimposed on a zero-mean noise process, leading to an additive signal model of the form (1.5). Because the signal in (1.5) is a fixed function of time, it is clear that we will have
$$\mu_{xt} = E\left[2\cos\left(2\pi \tfrac{t+15}{50}\right) + w_t\right] = 2\cos\left(2\pi \tfrac{t+15}{50}\right) + E(w_t) = 2\cos\left(2\pi \tfrac{t+15}{50}\right),$$
and the mean function is just the cosine wave.

The mean function describes only the marginal behavior of a time series. The lack of independence between two adjacent values $x_s$ and $x_t$ can be assessed numerically, as in classical statistics, using the notions of covariance and correlation. Assuming the variance of $x_t$ is finite, we have the following definition.

Definition 1.2 The autocovariance function is defined as the second moment product
$$\gamma_x(s,t) = \operatorname{cov}(x_s, x_t) = E[(x_s - \mu_s)(x_t - \mu_t)], \qquad (1.8)$$
for all $s$ and $t$. When no possible confusion exists about which time series we are referring to, we will drop the subscript and write $\gamma_x(s,t)$ as $\gamma(s,t)$.

Note that $\gamma_x(s,t) = \gamma_x(t,s)$ for all time points $s$ and $t$. The autocovariance measures the linear dependence between two points on the same series observed at different times. Recall from classical statistics that if $\gamma_x(s,t) = 0$, then $x_s$ and $x_t$ are not linearly related, but there still may be some dependence structure between them. If, however, $x_s$ and $x_t$ are bivariate normal, $\gamma_x(s,t) = 0$ ensures their independence. It is clear that, for $s = t$, the autocovariance reduces to the (assumed finite) variance, because
$$\gamma_x(t,t) = E[(x_t - \mu_t)^2] = \operatorname{var}(x_t). \qquad (1.9)$$
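To make the mean functions of Example 1.12 and Example 1.13 concrete, here is a minimal NumPy sketch (not part of the text) that averages many simulated realizations of each model and compares the empirical mean at every time point with $\delta t$ and with the cosine signal, respectively. The choices of $\delta$, $\sigma_w$, the series length, and the number of replications are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch (not from the text): Monte Carlo check of the mean functions in
# Examples 1.12 and 1.13.  All parameter values below are arbitrary choices.
n, reps = 100, 5000
delta, sigma_w = 0.2, 1.0
t = np.arange(1, n + 1)

# Example 1.12: random walk with drift, x_t = delta*t + sum_{j<=t} w_j.
w = rng.normal(0.0, sigma_w, size=(reps, n))
x_rw = delta * t + np.cumsum(w, axis=1)
print(np.max(np.abs(x_rw.mean(axis=0) - delta * t)))  # small: mean ~ delta*t

# Example 1.13: signal plus noise, x_t = 2*cos(2*pi*(t+15)/50) + w_t.
signal = 2 * np.cos(2 * np.pi * (t + 15) / 50)
x_sn = signal + rng.normal(0.0, sigma_w, size=(reps, n))
print(np.max(np.abs(x_sn.mean(axis=0) - signal)))     # small: mean ~ signal
```

The averages approach the theoretical mean functions as the number of replications grows, since the Monte Carlo error at each time point shrinks at the usual $1/\sqrt{\text{reps}}$ rate.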
Example 1.14 Autocovariance of White Noise
The white noise series $w_t$ has $E(w_t) = 0$ and
$$\gamma_w(s,t) = \operatorname{cov}(w_s, w_t) = \begin{cases} \sigma_w^2 & s = t, \\ 0 & s \neq t. \end{cases} \qquad (1.10)$$
A realization of white noise with $\sigma_w^2 = 1$ is shown in the top panel of Figure 1.7.

We often have to calculate the autocovariance between filtered series. A useful result is given in the following proposition.
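As a brief illustration (not part of the text), the following NumPy sketch estimates $\gamma_w(s,t)$ in (1.10) by averaging the second-moment product of Definition 1.2 over many independent white noise realizations; $\sigma_w$, the series length, and the number of realizations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch (not from the text): Monte Carlo estimate of gamma_w(s, t) in (1.10).
# sigma_w and the simulation sizes below are arbitrary choices.
n, reps, sigma_w = 20, 100_000, 1.0
w = rng.normal(0.0, sigma_w, size=(reps, n))  # each row is one realization

def gamma_hat(s, t):
    """Estimate E[(w_s - 0)(w_t - 0)] by averaging across realizations."""
    return np.mean(w[:, s] * w[:, t])

print(gamma_hat(5, 5))   # ~ sigma_w**2 = 1 when s = t
print(gamma_hat(5, 9))   # ~ 0 when s != t
```

Averaging across independent realizations at fixed $(s,t)$ mirrors the expectation in Definition 1.2 directly; it is not the same as the sample autocovariance computed from a single realization, which relies on stationarity and is introduced later.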