Notes for ECE 534
An Exploration of Random Processes for Engineers

Bruce Hajek

December 31, 2005

© 2005 by Bruce Hajek
All rights reserved. Permission is hereby given to freely print and circulate copies of these notes so long as the notes are left intact and not reproduced for commercial purposes. Email to b-hajek@uiuc.edu, pointing out errors or hard to understand passages or providing comments, is welcome.

Contents

1 Getting Started
  1.1 The axioms of probability theory
  1.2 Independence and conditional probability
  1.3 Random variables and their distribution
  1.4 Functions of a random variable
  1.5 Expectation of a random variable
  1.6 Frequently used distributions
  1.7 Jointly distributed random variables
  1.8 Cross moments of random variables
  1.9 Conditional densities
  1.10 Transformation of random vectors
  1.11 Problems

2 Convergence of a Sequence of Random Variables
  2.1 Four definitions of convergence of random variables
  2.2 Cauchy criteria for convergence of random variables
  2.3 Limit theorems for sequences of independent random variables
  2.4 Convex functions and Jensen's Inequality
  2.5 Chernoff bound and large deviations theory
  2.6 Problems

3 Random Vectors and Minimum Mean Squared Error Estimation
  3.1 Basic definitions and properties
  3.2 The orthogonality principle for minimum mean square error estimation
  3.3 Gaussian random vectors
  3.4 Linear Innovations Sequences
  3.5 Discrete-time Kalman filtering
  3.6 Problems

4 Random Processes
  4.1 Definition of a random process
  4.2 Random walks and gambler's ruin
  4.3 Processes with independent increments and martingales
  4.4 Brownian motion
  4.5 Counting processes and the Poisson process
  4.6 Stationarity
  4.7 Joint properties of random processes
  4.8 Conditional independence and Markov processes
  4.9 Discrete state Markov processes
  4.10 Problems

5 Basic Calculus of Random Processes
  5.1 Continuity of random processes
  5.2 Differentiation of random processes
  5.3 Integration of random processes
  5.4 Ergodicity
  5.5 Complexification, Part I
  5.6 The Karhunen-Loève expansion
  5.7 Problems

6 Random processes in linear systems and spectral analysis
  6.1 Basic definitions
  6.2 Fourier transforms, transfer functions and power spectral densities
  6.3 Discrete-time processes in linear systems
  6.4 Baseband random processes
  6.5 Narrowband random processes
  6.6 Complexification, Part II
  6.7 Problems

7 Wiener filtering
  7.1 Return of the orthogonality principle
  7.2 The causal Wiener filtering problem
  7.3 Causal functions and spectral factorization
  7.4 Solution of the causal Wiener filtering problem for rational power spectral densities
  7.5 Discrete time Wiener filtering
  7.6 Problems

8 Appendix
  8.1 Some notation
  8.2 Convergence of sequences of numbers
  8.3 Continuity of functions
  8.4 Derivatives of functions
  8.5 Integration
  8.6 Matrices

9 Solutions to even numbered problems

Preface

From an applications viewpoint, the main reason to study the subject of these notes is to help deal with the complexity of describing random, time-varying functions. A random variable can be interpreted as the result of a single measurement. The distribution of a single random variable is fairly simple to describe. It is completely specified by the cumulative distribution function F(x), a function of one variable. It is relatively easy to approximately represent a cumulative distribution function on a computer.
The joint distribution of several random variables is much more complex, for in general it is described by a joint cumulative probability distribution function, F(x1, x2, ..., xn), which is much more complicated than n functions of one variable. A random process, for example a model of time-varying fading in a communication channel, involves many, possibly infinitely many (one for each time instant t within an observation interval) random variables. Woe the complexity!

These notes help prepare the reader to understand and use the following methods for dealing with the complexity of random processes:

• Work with moments, such as means and covariances.

• Use extensively processes with special properties. Most notably, Gaussian processes are characterized entirely by means and covariances; Markov processes are characterized by one-step transition probabilities or transition rates, together with initial distributions; and independent increment processes are characterized by the distributions of single increments.

• Appeal to models or approximations based on limit theorems for reduced complexity descriptions, especially in connection with averages of independent, identically distributed random variables. The law of large numbers tells us that, in a certain context, a probability distribution can be characterized by its mean alone. The central limit theorem similarly tells us that a probability distribution can be characterized by its mean and variance. (A short simulation sketch illustrating these two limit theorems follows this list.) These limit theorems are analogous to, and in fact examples of, perhaps the most powerful tool ever discovered for dealing with the complexity of functions: Taylor's theorem, in which a function in a small interval can be approximated using its value and a small number of derivatives at a single point.

• Diagonalize. A change of coordinates reduces an arbitrary n-dimensional Gaussian vector into a Gaussian vector with n independent coordinates. In the new coordinates the joint probability distribution is the product of n one-dimensional distributions, representing a great reduction of complexity. Similarly, a random process on an interval of time is diagonalized by the Karhunen-Loève representation. A periodic random process is diagonalized by a Fourier series representation. Stationary random processes are diagonalized by Fourier transforms.

• Sample. A narrowband continuous time random process can be exactly represented by its samples taken with sampling rate twice the highest frequency of the random process. The samples offer a reduced complexity representation of the original process.

• Work with baseband equivalent. The range of frequencies in a typical radio transmission is much smaller than the center frequency, or carrier frequency, of the transmission. The signal could be represented directly by sampling at twice the largest frequency component. However, the sampling frequency, and hence the complexity, can be dramatically reduced by sampling a baseband equivalent random process.
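To make the law of large numbers and central limit theorem bullets concrete, here is a minimal simulation sketch. It is an addition to these notes, not part of the original text, and it assumes only that Python with the numpy package is available. It checks that sample means of independent Uniform(0,1) random variables concentrate near the mean 1/2, and that the standardized sum is approximately standard Gaussian.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 500, 10_000
    mu, sigma2 = 0.5, 1.0 / 12.0      # mean and variance of a Uniform(0,1) random variable

    sums = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)

    # Law of large numbers: the sample mean concentrates near mu = 0.5.
    print("average of the sample means:", (sums / n).mean())
    print("std of the sample means:", (sums / n).std(), "vs", (sigma2 / n) ** 0.5)

    # Central limit theorem: the standardized sum is approximately N(0, 1).
    z = (sums - n * mu) / (n * sigma2) ** 0.5
    print("estimate of P(Z <= 1):", (z <= 1.0).mean(), "vs Phi(1) ~ 0.8413")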
These notes were written for the first semester graduate course on random processes, offered by the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Students in the class are assumed to have had a previous course in probability, which is briefly reviewed in the first chapter of these notes. Students are also expected to have some familiarity with real analysis and elementary linear algebra, such as the notions of limits, definitions of derivatives, Riemann integration, and diagonalization of symmetric matrices. These topics are reviewed in the appendix. Finally, students are expected to have some familiarity with transform methods and complex analysis, though the concepts used are reviewed in the relevant chapters.

Each chapter represents roughly two weeks of lectures and includes homework problems. Solutions to the even numbered problems without stars can be found at the end of the notes. Students are encouraged to first read a chapter, then try doing the even numbered problems before looking at the solutions. Problems with stars, for the most part, investigate additional theoretical issues, and solutions are not provided.

Hopefully some students reading these notes will find them useful for understanding the diverse technical literature on systems engineering, ranging from control systems and image processing to communication theory and communication network performance analysis. Hopefully some students will go on to design systems and to define and analyze stochastic models. Hopefully others will be motivated to continue study in probability theory, going on to learn measure theory and its applications to probability and analysis in general.

A brief comment is in order on the level of rigor and generality at which these notes are written. Engineers and scientists have great intuition and ingenuity, and routinely use methods that are not typically taught in undergraduate mathematics courses. For example, engineers generally have good experience and intuition about transforms, such as Fourier transforms, Fourier series, and z-transforms, and some associated methods of complex analysis. In addition, they routinely use generalized functions; in particular, the delta function is frequently used. The use of these concepts in these notes leverages this knowledge, and it is consistent with mathematical definitions, but full mathematical justification is not given in every instance. The mathematical background required for a fully rigorous treatment of the material in these notes is roughly at the level of a second year graduate course in measure theoretic probability, pursued after a course on measure theory.

The author gratefully acknowledges the students and faculty (Andrew Singer and Christoforos Hadjicostis) of the past six semesters for their comments and corrections, and thanks secretaries Terri Hovde, Francie Bridges, and Deanna Zachary for their expert typing.

Bruce Hajek
December 2005

Chapter 1 Getting Started

This chapter reviews many of the main concepts in a first level course on probability theory, with more emphasis on axioms and the definition of expectation than is typical of a first course.

1.1 The axioms of probability theory

Random processes are widely used to model systems in engineering and scientific applications. These notes adopt the most widely used framework of probability and random processes, namely the one based on Kolmogorov's axioms of probability. The idea is to assume a mathematically solid definition of the model. This structure encourages a modeler to have a consistent, if not accurate, model.

A probability space is a triplet (Ω, F, P). The first component, Ω, is a nonempty set. Each element ω of Ω is called an outcome and Ω is called the sample space. The second component, F, is a set of subsets of Ω called events.
The set of events F is assumed to be a σ -algebra, meaning it satisfies the following axioms: A.1 Ω ∈ F A.2 If A ∈ F then Ac ∈ F A.3 If A, B ∈ F then A ∪ B ∈ F . Also, if A1 , A2 , . . . is a sequence of elements in F then ∞ i=1 Ai ∈ F Events Ai , i ∈ I , indexed by a set I are called mutually exclusive if the intersection Ai Aj = ∅ for all i, j ∈ I with i = j . The final component, P , of the triplet (Ω, F , P ) is a probability measure on F satisfying the following axioms: P.1 P [A] ≥ 0 for all A ∈ F P.2 If A, B ∈ F and if A and B are mutually exclusive, then P [A ∪ B ] = P [A] + P [B ]. Also, if A1 , A2 , . . . is a sequence of mutually exclusive events in F then P ( ∞ Ai ) = ∞ P [Ai ]. i=1 i=1 P.3 P [Ω] = 1. The axioms imply a host of properties including the following. For any subsets A, B , C of F : 1 • AB ∈ F and P [A ∪ B ] = P [A] + P [B ] − P [AB ] • P [A ∪ B ∪ C ] = P [A] + P [B ] + P [C ] − P [AB ] − P [AC ] − P [B C ] + P [AB C ] • P [A] + P [Ac ] = 1 • P [∅] = 0. Example 1.1 (Toss of a fair coin) Using “H ” for “heads” and “T ” for “tails,” the toss of a fair coin is modelled by Ω = {H , T } F = {{H }, {T }, {H, T }, ∅} 1 P {H } = P {T } = , 2 P {H, T } = 1, P [∅] = 0 Note that for brevity, we omitted the square brackets and wrote P {H } instead of P [{H }]. Example1.2 (Uniform phase) Take Ω = {θ : 0 ≤ θ ≤ 2π }. It turns out to not be obvious what the set of events F should be. Certainly we want any interval [a, b], with 0 ≤ a ≤ b < 2π , to be in F , and we want the probability assigned to such an interval to be given by P [ [a, b] ] = b−a 2π (1.1) The single point sets {a} and {b} will also be in F so that F contains all the open intervals (a, b) in Ω also. Any open subset of Ω is the union of a finite or countably infinite set of open intervals, so that F should contain all open and all closed subsets of [0, 2π ]. But then F must contain the intersection of any set that is the intersection of countably many open sets, and so on. The specification of the probability function P must be extended from intervals to all of F . It is tempting to take F to be the set of all subsets of Ω. However, that idea doesn’t work, because it is mathematically impossible to extend the definition of P to all subsets of [0, 2π ] in such a way that the axioms P 1 − P 3 hold. The problem is resolved by taking F to be the smallest σ -algebra containing all the subintervals of [0, 2π ], or equivalently, containing all the open subsets of [0, 2π ]. This σ -algebra is called the Borel σ -algebra for [0, 2π ] and the sets in it are called Borel sets. While not every subset of Ω is a Borel subset, any set we are likely to encounter in applications is a Borel set. The existence of the Borel σ -algebra is discussed in an extra credit problem. Furthermore, extension theorems of measure theory imply that P can be extended from (1.1) for interval sets to all Borel sets. Similarly, the Borel σ -algebra B n of subsets of Rn is the smallest σ -algebra containing all sets of the form [a1 , b1 ] × [a2 , b2 ] × · · · × [an , bn ]. Sets in B n are called Borel subsets of Rn . The class of Borel sets includes not only rectangle sets and countable unions of rectangle sets, but all open sets and all closed sets. Virtually any subset of Rn arising in applications is a Borel set. Lemma 1.1.1 (Continuity of Probability) Suppose B1 , B2 , . . . is a sequence of events. (a) If B1 ⊂ B2 ⊂ · · · then limj →∞ P [Bj ] = P [ ∞ Bi ] i=1 (b) If B1 ⊃ B2 ⊃ · · · then limj →∞ P [Bj ] = P [ ∞ Bi ] i=1 2 B1=D 1 D2 D3 ... 
Figure 1.1: A sequence of nested sets. Pro of Suppose B1 ⊂ B2 ⊂ · · · . Let D1 = B1 , D2 = B2 − B1 , and, in general, let Di = Bi − Bi−1 for i ≥ 2, as shown in Figure 1.1. Then P [Bj ] = j=1 P [Di ] for each j ≥ 1, so i j lim P [Bj ] j →∞ = (a ) = lim j →∞ ∞ P [Di ] i=1 P [Di ] i=1 (b) = P ∞ Di =P i=1 ∞ Bi i=1 where (a) is true by the definition of the sum of an infinite0. series, and (b) is true by axiom P.2. This proves Lemma 1.1.1(a). Lemma 1.1.1(b) is proved similarly. Example 1.3 (Selection of a point in a square) Take Ω to be the square region in the plane, Ω = {(x, y ) : 0 ≤ x, y ≤ 1}. Let F be the Borel σ -algebra for Ω, which is the smallest σ -algebra containing all the rectangular subsets of Ω that are aligned with the axes. Take P so that for any rectangle R, P [R] = area of R. (It can be shown that F and P exist.) Let T be the triangular region T = {(x, y ) : 0 ≤ y ≤ x ≤ 1}. Since T is not rectangular, it is not immediately clear that T ∈ F , nor is it clear what P [T ] is. That is where the axioms come in. For n ≥ 1, let Tn denote the region shown in Figure 1.2. Since Tn can be written as a union of finitely many mutually exclusive rectangles, it follows that Tn ∈ F ··· +1 and it is easily seen that P [Tn ] = 1+2+2 +n = n2n . Since T1 ⊃ T2 ⊃ T4 ⊃ T8 · · · and ∩j T2j = T , it n follows that T ∈ F and P [T ] = limn→∞ P [Tn ] = 1 . 2 The reader is encouraged to show that if C is the diameter one disk inscribed within Ω then P [C ] = (area of C) = π . 4 1.2 Indep endence and conditional probability Events A1 and A2 are defined to be independent if P [A1 A2 ] = P [A1 ]P [A2 ]. More generally, events A1 , A2 , . . . , Ak are defined to be independent if P [Ai1 Ai2 · · · Aij ] = P [Ai1 ]P [Ai2 ] · · · P [Aij ] 3 Tn 0 1 n 2 n 1 Figure 1.2: Approximation of a triangular region. whenever j and i1 , i2 , . . . , ij are integers with j ≥ 1 and 1 ≤ i1 < i2 < · · · < ij ≤ k . For example, events A1 , A2 , A3 are independent if the following four conditions hold: P [A1 A2 ] = P [A1 ]P [A2 ] P [A1 A3 ] = P [A1 ]P [A3 ] P [A2 A3 ] = P [A2 ]P [A3 ] P [A1 A2 A3 ] = P [A1 ]P [A2 ]P [A3 ] A weaker condition is sometimes useful: Events A1 , . . . , Ak are defined to be pairwise independent if Ai is independent of Aj whenever 1 ≤ i < j ≤ k . Independence of k events requires that 2k − k − 1 equations hold: one for each subset of {1, 2, . . . , k } of size at least two. Pairwise − independence only requires that k = k(k2 1) equations hold. 2 If A and B are events and P [B ] = 0, then the conditional probability of A given B is defined by P [A | B ] = P [AB ] . P [B ] It is not defined if P [B ] = 0, which has the following meaning. If you were to write a computer routine to compute P [A | B ] and the inputs are P [AB ] = 0 and P [B ] = 0, your routine shouldn’t simply return the value 0. Rather, your routine should generate an error message such as “input error–conditioning on event of probability zero.” Such an error message would help you or others find errors in larger computer programs which use the routine. As a function of A for B fixed with P [B ] = 0, the conditional probability of A given B is itself a probability measure for Ω and F . More explicitly, fix B with P [B ] = 0. For each event A define P [A] = P [A | B ]. Then (Ω, F , P ) is a probability space, because P satisfies the axioms P 1 − P 3. (Try showing that). If A and B are independent then Ac and B are independent. Indeed, if A and B are independent then P [Ac B ] = P [B ] − P [AB ] = (1 − P [A])P [B ] = P [Ac ]P [B ]. 
Similarly, if A, B , and C are independent events then AB is independent of C . More generally, suppose E1 , E2 , . . . , En are independent events, suppose n = n1 + · · · + nk with ni > 1 for each i, and suppose F1 is defined by Boolean operations (intersections, complements, and unions) of the first n1 4 events E1 , . . . , En1 , F2 is defined by Boolean operations on the next n2 events, En1 +1 , . . . , En1 +n2 , and so on, then F1 , . . . , Fk are independent. Events E1 , . . . , Ek are said to form a partition of Ω if the events are mutually exclusive and Ω = E1 ∪ · · · ∪ Ek . Of course for a partition, P [E1 ] + · · · + P [Ek ] = 1. More generally, for any event A, the law of total probability holds because A is the union of the mutually exclusive sets AE1 , AE2 , . . . , AEk : P [A] = P [AE1 ] + · · · + P [AEk ]. If P [Ei ] = 0 for each i, this can be written as P [A] = P [A | E1 ]P [E1 ] + · · · + P [A | Ek ]P [Ek ]. Figure 1.3 illustrates the condition of the law of total probability. E1 E2 E3 A E 4 ! Figure 1.3: Partitioning a set A using a partition of Ω Judicious use of the definition of conditional probability and the law of total probability leads to Bayes formula for P [Ei | A] (if P [A] = 0) in simple form P [Ei | A] = P [AEi ] P [A] = P [A | Ei ]P [Ei ] , P [A] or in expanded form: P [Ei | A] = 1.3 P [A | Ei ]P [Ei ] . P [A | E1 ]P [E1 ] + · · · + P [A | Ek ]P [Ek ] Random variables and their distribution Let a probability space (Ω, F , P ) be given. By definition, a random variable is a function X from Ω to the real line R that is F measurable, meaning that for any number c, {ω : X (ω ) ≤ c} ∈ F . (1.2) If Ω is finite or countably infinite, then F can be the set of all subsets of Ω, in which case any real-valued function on Ω is a random variable. If (Ω, F , P ) is given as in the uniform phase example with F equal to the Borel subsets of [0, 2π ], then the random variables on (Ω, F , P ) are called the Borel measurable functions on Ω. Since the Borel σ -algebra contains all subsets of [0, 2π ] that come up in applications, for practical purposes we can think of any function on [0, 2π ] as being a random variable. For example, any piecewise 5 continuous or piecewise monotone function on [0, 2π ] is a random variable for the uniform phase example. The cumulative distribution function (CDF) of a random variable X is denoted by FX . It is the function, with domain the real line R, defined by FX (c) = P {ω : X (ω ) ≤ c} (1.3) = P {X ≤ c} (for short) (1.4) If X denotes the outcome of the roll of a fair die (“die” is singular of “dice”) and if Y is uniformly distributed on the interval [0, 1], then FX and FY are shown in Figure 1.4 FX 1 0 1 F Y 1 2 3 4 5 6 0 1 Figure 1.4: Examples of CDFs. The CDF of a random variable X determines P {X ∈ A} for Borel sets A. For example, by definition FX (c) = P {X ≤ c} for any real number c. But what about P {X < c} and P {X = c}? Let c1 , c2 , . . . be a monotone nondecreasing sequence that converges to c from the left. This means ci ≤ cj < c for i < j and limj →∞ cj = c. Then the events {X ≤ cj } are nested: {X ≤ ci } ⊂ {X ≤ cj } for i < j , and the union of all such events is the event {X < c}. Thus, by Lemma 1.1.1 P {X < c} = lim P {X ≤ ci } = i→∞ lim FX (ci ) = FX (c−). i→∞ Therefore, P {X = c} = FX (c) − FX (c−) = FX (c), where FX (c) is defined to be the size of the jump of F at c. 1 For example, if X has the CDF shown in Figure 1.5 then P {X = 0} = 2 . 
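In the same spirit, the jump formula P{X = c} = FX(c) − FX(c−) can be checked by simulation. The sketch below is an added illustration, not part of the original notes; it uses the mixed-type random variable X = max(U − 0.5, 0), with U uniform on [0, 1], which has a jump of size 1/2 at c = 0. Python with numpy is assumed.

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.uniform(0.0, 1.0, size=200_000)
    x = np.maximum(u - 0.5, 0.0)         # a mixed-type random variable with P{X = 0} = 1/2

    def F(c):                            # empirical CDF, P{X <= c}
        return (x <= c).mean()

    def F_left(c, eps=1e-9):             # empirical left limit P{X < c}, approximated by P{X <= c - eps}
        return (x <= c - eps).mean()

    c = 0.0
    print("F(c)  ~", F(c))               # about 0.5
    print("F(c-) ~", F_left(c))          # about 0.0
    print("jump  ~", F(c) - F_left(c))   # about 0.5, matching P{X = c}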
The requirement that FX be right continuous implies that for any number c (such as c = 0 for this example), if the value FX (c) is changed to any other value, the resulting function would no longer be a valid CDF. 1 0.5 !1 0 Figure 1.5: An example of a CDF. Prop osition 1.3.1 A function F is the CDF of some random variable if and only if it has the fol lowing three properties: F.1 F is nondecreasing 6 F.2 limx→+∞ F (x) = 1 and limx→−∞ F (x) = 0 F.3 F is right continuous Pro of The “only if” part is proved first. Suppose that F is the CDF of some random variable X . Then if x < y , F (y ) = P {X ≤ y } = P {X ≤ x} + P {x < X ≤ y } ≥ P {X ≤ x} = F (x) so that F.1 is true. Consider the events Bn = {X ≤ n}. Then Bn ⊂ Bm for n ≤ m. Thus, by Lemma 1.1.1, lim F (n) = lim P [Bn ] = P n→∞ n→∞ ∞ Bn = P [Ω] = 1. n=1 This and the fact F is nondecreasing imply the following. Given any > 0, there exists N so large that F (x) ≥ 1 − for all x ≥ N . That is, F (x) → 1 as x → +∞. Similarly, lim F (n) = n→−∞ lim P [B−n ] = P n→∞ ∞ n=1 B−n = P [∅] = 0. so that F (x) → 0 as x → −∞. Property F.2 is proved. The proof of F.3 is similar. Fix an arbitrary real number x. Define the sequence of events An 1 for n ≥ 1 by An = {X ≤ x + n }. Then An ⊂ Am for n ≥ m so lim F (x + n→∞ 1 ) = lim P [An ] = P n→∞ n ∞ k=1 Ak = P {X ≤ x} = FX (x). 1 Convergence along the sequence x + n , together with the fact that F is nondecreasing, implies that F (x+) = F (x). Property F.3 is thus proved. The proof of the “only if” portion of Proposition 1.3.1 is complete To prove the “if” part of Proposition 1.3.1, let F be a function satisfying properties F.1-F.3. It must be shown that there exists a random variable with CDF F . Let Ω = R and let F be the set ˜ ˜ B of Borel subsets of R. Define P on intervals of the form (a, b] by P [(a, b]] = F (b) − F (a). It can ˜ be shown by an extension theorem of measure theory that P can be extended to all of F so that ˜ (ω ) = ω for all ω ∈ Ω. the axioms of probability are satisfied. Finally, let X Then ˜˜ ˜ P [X ∈ (a, b]] = P [(a, b]] = F (b) − F (a). ˜ Therefore, X has CDF F . The vast ma jority of random variables described in applications are one of two types, to be described next. A random variable X is a discrete random variable if there is a finite or countably infinite set of values {xi : i ∈ I } such that P {X ∈ {xi : i ∈ I }} = 1. The probability mass function (pmf ) of a discrete random variable X , denoted pX (x), is defined by pX (x) = P {X = x}. Typically the pmf of a discrete random variable is much more useful than the CDF. However, the pmf and CDF of a discrete random variable are related by pX (x) = FX (x) and conversely, FX (x) = pX (y ), y :y ≤ x 7 (1.5) where the sum in (1.5) is taken only over y such that pX (y ) = 0. If X is a discrete random variable with only finitely many mass points in any finite interval, than FX is a piecewise constant function. A random variable X is a continuous random variable if the CDF is the integral of a function: x FX (x) = fX (y )dy −∞ The function fX is called the probability density function (pdf ). If the pdf fX is continuous at a point x, then the value fX (x) has the following nice interpretation: 1 x+ ε fX (y )dy ε→0 ε x 1 = lim P {x ≤ X ≤ x + ε}. ε→0 ε fX (x) = lim If A is any Borel subset of R, then P { X ∈ A} = fX (x)dx. (1.6) A The integral in (1.6) can be understood as a Riemann integral if A is a finite union of intervals and f is piecewise continuous or monotone. 
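For the Riemann-integrable case just described, the relation P{a < X ≤ b} = FX(b) − FX(a) = integral of fX over (a, b] is easy to check numerically. The following sketch is an added illustration using the toy density fX(x) = 2x on [0, 1], so that FX(x) = x^2 there; it compares a midpoint Riemann sum of the pdf with the difference of CDF values. Python with numpy is assumed.

    import numpy as np

    f = lambda x: 2.0 * x                # pdf of X on [0, 1] (zero elsewhere)
    F = lambda x: x ** 2                 # the corresponding CDF on [0, 1]

    a, b, m = 0.2, 0.7, 10_000
    dx = (b - a) / m
    mid = a + (np.arange(m) + 0.5) * dx  # midpoints of the m subintervals of (a, b]
    riemann_sum = np.sum(f(mid)) * dx    # midpoint Riemann sum of the pdf over (a, b]

    print("Riemann sum of the pdf:", riemann_sum)   # approximately 0.45
    print("F(b) - F(a):           ", F(b) - F(a))   # 0.49 - 0.04 = 0.45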
In general, fX is required to be Borel measurable and the integral is defined by Lebesgue integration. Any random variable X on an arbitrary probability space has a CDF FX . As noted in the proof ˜ of Proposition 1.3.1 there exists a probability measure PX (called P in the proof ) on the Borel subsets of R such that for any interval (a, b], PX [(a, b]] = P {X ∈ (a, b]}. We define the probability distribution of X to be the probability measure PX . The distribution PX is determined uniquely by the CDF FX . The distribution is also determined by the pdf fX if X is continuous type, or the pmf pX if X is discrete type. In common usage, the response to the question “What is the distribution of X ?” is answered by giving one or more of FX , fX , or pX , or possibly a transform of one of these, whichever is most convenient. 1.4 Functions of a random variable Recall that a random variable X on a probability space (Ω, F , P ) is a function mapping Ω to the real line R , satisfying the condition {ω : X (ω ) ≤ a} ∈ F for all a ∈ R. Suppose g is a function mapping R to R that is not too bizarre. Specifically, suppose for any constant c that {x : g (x) ≤ c} is a Borel subset of R. Let Y (ω ) = g (X (ω )). Then Y maps Ω to R and Y is a random variable. See Figure 1.6. We write Y = g (X ). Often we’d like to compute the distribution of Y from knowledge of g and the distribution of X . In case X is a continuous random variable with known distribution, the following three step procedure works well: (1) Examine the ranges of possible values of X and Y . Sketch the function g . (2) Find the CDF of Y , using FY (c) = P {Y ≤ c} = P {g (X ) ≤ c}. The idea is to express the event {g (X ) ≤ c} as {X ∈ A} for some set A depending on c. 8 X g ! X(") g(X(")) Figure 1.6: A function of a random variable as a composition of mappings. (3) If FY has a piecewise continuous derivative, and if the pmf fY is desired, differentiate FY . If instead X is a discrete random variable then step 1 should be followed. After that the pmf of Y can be found from the pmf of X using pY (y ) = P {g (X ) = y } = pX (x) x:g (x)=y Example 1.4 Suppose X is a N (µ = 2, σ 2 = 3) random variable (see Section 1.6 for the definition) and Y = X 2 . Let us describe the density of Y . Note that Y = g (X ) where g (x) = x2 . The support of the distribution of X is the whole real line, and the range of g over this support is R+ . Next we find the CDF, FY . Since P {Y ≥ 0} = 1, FY (c) = 0 for c < 0. For c ≥ 0, √ √ FY (c) = P {X 2 ≤ c} = P {− c ≤ X ≤ c} √ √ − c−2 X −2 c−2 = P{ √ ≤√ ≤√} 3 3 3 √ √ c−2 − c−2 = Φ( √ ) − Φ( √ ) 3 3 Differentiate with respect to c, using the chain rule and the fact, Φ (s) = fY (c) = √ c √ 1 {exp(−[ √−2 ]2 ) 24π c 6 0 √ − + exp(−[ − √c6 2 ]2 )} √1 2π 2 exp(− s2 ) to obtain if y ≥ 0 if y < 0 (1.7) Example 1.5 Suppose a vehicle is traveling in a straight line at speed a, and that a random direction is selected, subtending an angle Θ from the direction of travel which is uniformly distributed over the interval [0, π ]. See Figure 1.7. Then the effective speed of the vehicle in the random direction is B = a cos(Θ). Let us find the pdf of B . The range of a cos(Θ) as θ ranges over [0, π ] is the interval [−a, a]. Therefore, FB (c) = 0 for c ≤ −a and FB (c) = 1 for c ≥ a. Let now −a < c < a. Then, because cos is monotone nonincreasing on the interval [0, π ], c } a c = P {Θ ≥ cos−1 ( )} a c cos−1 ( a ) = 1− π FB (c) = P {a cos(Θ) ≤ c} = P {cos(Θ) ≤ 9 B ! a Figure 1.7: Direction of travel and a random direction. 
1 Therefore, because cos−1 (y ) has derivative, −(1 − y 2 )− 2 , fB (c) = √1 π a 2 − c2 0 | c |< a | c |> a A sketch of the density is given in Figure 1.8. fB !a 0 a Figure 1.8: The pdf of the effective speed in a uniformly distributed direction. Example 1.6 Suppose Y = tan(Θ), as illustrated in Figure 1.9, where Θ is uniformly distributed over the interval (− π , π ) . Let us find the pdf of Y . The function tan(θ) increases from −∞ to ∞ 22 ! 0 Y Figure 1.9: A horizontal line, a fixed point at unit distance, and a line through the point with random direction. 10 as θ ranges over the interval (− π , π ). For any real c, 22 FY (c) = P {Y ≤ c} = P {tan(Θ) ≤ c} = P {Θ ≤ tan−1 (c)} = tan−1 (c) + π π 2 Differentiating the CDF with respect to c yields that Y has the Cauchy pdf: fY (c) = 1 π (1 + c2 ) −∞<c<∞ Example 1.7 Given an angle θ expressed in radians, let (θ mod 2π ) denote the equivalent angle in the interval [0, 2π ]. Thus, (θ mod 2π ) is equal to θ + 2π n, where the integer n is such that 0 ≤ θ + 2π n < 2π . Let Θ be uniformly distributed over [0, 2π ], let h be a constant, and let ˜ Θ = (Θ + h mod 2π ) ˜ Let us find the distribution of Θ. ˜ takes values in the interval [0, 2π ], so fix c with 0 ≤ c < 2π and seek to find Clearly Θ ˜ P {Θ ≤ c}. Let A denote the interval [h, h + 2π ]. Thus, Θ + h is uniformly distributed over A. Let ˜ B = n [2π n, 2π n + c]. Thus Θ ≤ c if and only if Θ + h ∈ B . Therefore, ˜ P {Θ ≤ c} = A T B 1 dθ 2π By sketching the set B , it is easy to see that A B is either a single interval of length c, or the ˜ ˜ union of two intervals with lengths adding to c. Therefore, P {Θ ≤ c} = 2c , so that Θ is itself π uniformly distributed over [0, 2π ] Example 1.8 Let X be an exponentially distributed random variable with parameter λ. Let Y = X , which is the integer part of X , and let R = Y − X , which is the remainder. We shall describe the distributions of Y and R. Clearly Y is a discrete random variable with possible values 0, 1, 2, . . . , so it is sufficient to find the pmf of Y . For integers k ≥ 0, pY (k ) = P {k ≤ X < k + 1} = k+1 k λe−λx dx = e−λk (1 − e−λ ) and pY (k ) = 0 for other k . Turn next to the distribution of R. Clearly R takes values in the interval [0, 1]. So let 0 < c < 1 and find FR (c): FR (c) = P {X − x ≤ c} = P {X ∈ = ∞ k=0 P {k ≤ X ≤ k + c} = 11 ∞ k=0 ∞ k=0 [k , k + c]} e−λk (1 − e−λc ) = 1 − e−λc 1 − e−λ where we used the fact 1 + α + α2 + · · · = 1 1− α for | α |< 1. Differentiating FR yields the pmf: λe−λc 1−e−λ fR (c) = 0 0≤c≤1 otherwise What happens to the density of R as λ → 0 or as λ → ∞? By l’Hospital’s rule, 1 0≤c≤1 0 otherwise lim fR (c) = λ→0 That is, in the limit as λ → 0, the density of X becomes more and more “evenly spread out,” and R becomes uniformly distributed over the interval [0, 1]. If λ is very large then the factor e−λ is nearly zero, and the density of R is nearly the same as the exponential density with parameter λ. Example 1.9 (Generating a random variable with specified CDF) The following problem is rather important for computer simulations. Let F be a function satisfying the three properties required of a CDF, and let U be uniformly distributed over the interval [0, 1]. Find a function g so that F is the CDF of g (U ). An appropriate function g is given by the inverse function of F . 
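Example 1.9 is easy to try on a computer. The sketch below is an added illustration, not part of the original notes, using the exponential CDF F(x) = 1 − exp(−λx) for x ≥ 0, whose inverse is x = −ln(1 − u)/λ; it checks that g(U) has CDF F. The generalized inverse needed when F is not strictly increasing is made precise in the next paragraph. Python with numpy is assumed.

    import numpy as np

    rng = np.random.default_rng(2)
    lam = 2.0                                   # rate parameter of the target exponential CDF
    u = rng.uniform(0.0, 1.0, size=100_000)
    x = -np.log(1.0 - u) / lam                  # g(U), with g the inverse of F

    for c in [0.25, 0.5, 1.0]:
        # empirical CDF of g(U) at c versus the target F(c)
        print(c, (x <= c).mean(), 1.0 - np.exp(-lam * c))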
Although F may not be strictly increasing, a suitable version of F −1 always exists, defined for 0 < u < 1 by F −1 (u) = min{x : F (x) ≥ u} If the graphs of F and F −1 are closed up by adding vertical lines at jump points, then the graphs are reflections of each other about the x = y line, as illustrated in Figure 1.10. !1 F (p) F(c) 1 p c 1 Figure 1.10: A CDF and its inverse. It is not hard to check that for any real xo and uo with 0 < uo < 1, F (xo ) ≥ uo if and only if xo ≥ F −1 (uo ) Thus, if X = F −1 (U ) then P {F −1 (U ) ≤ x} = P {U ≤ F (x)} = F (x) so that indeed F is the CDF of X 12 1.5 Exp ectation of a random variable The expected value, alternatively called the mean, of a random variable X can be defined in several different ways. Before giving a general definition, we shall consider a straight forward case. A random variable X is called simple if there is a finite set {x1 , . . . , xm } such that X (ω ) ∈ {x1 , . . . , xm } for all ω . The expectation of such a random variable is defined by m E [X ] = i=1 xi P {X = xi } (1.8) The definition (1.8) clearly shows that E [X ] for a simple random variable X depends only on the pmf of X . Like all random variables, X is a function on a probability space (Ω, F , P ). Figure 1.11 illustrates that the sum defining E [X ] in (1.8) can be viewed as an integral over Ω. This suggests writing E [X ] = X (ω )P [dω ] (1.9) Ω X(" )=x 1 X(" )=x 2 ! X(" )=x 3 Figure 1.11: A simple random variable with three possible values. Let Y be another simple random variable on the same probability space as X , with Y (ω ) ∈ n {y1 , . . . , yn } for all ω . Of course E [Y ] = i=1 yi P {Y = yi }. One learns in any elementary probability class that E [X + Y ] = E [X ] + E [Y ]. Note that X + Y is again a simple random variable, so that E [X + Y ] can be defined in the same way as E [X ] was defined. How would you prove E [X + Y ] = E [X ] + E [Y ]? Is (1.8) helpful? We shall give a proof that E [X + Y ] = E [X ] + E [Y ] motivated by (1.9). The sets {X = x1 }, . . . , {X = xm } form a partition of Ω. A refinement of this partition consists of another partition C1 , . . . , Cm such that X is constant over each Cj . If we let xj denote the value of X on Cj , then clearly E [X ] = xj P [Cj ] j 13 Now, it is possible to select the partition C1 , . . . , Cm so that both X and Y are constant over each Cj . For example, each Cj could have the form {X = xi } ∩ {Y = yk } for some i, k . Let yj denote the value of Y on Cj . Then xj + yj is the value of X + Y on Cj . Therefore, E [X + Y ] = (xj + yj )P [Cj ] = j xj P [Cj ] + j yj P [Cj ] = E [X ] + E [Y ] j While the expression (1.9) is rather suggestive, it would be overly restrictive to interpret it as a Riemann integral over Ω. For example, let (Ω, F , P ) be a probability space with Ω = [0, 1], F the Borel σ -algebra of [0, 1], and P such that P [ [a, b] ] = b − a for 0 ≤ a ≤ b ≤ 1. A random variable X is a function on [0, 1]. It is tempting to define E [X ] by Riemann integration (see the appendix): 1 E [X ] = X (ω )dω (1.10) 0 However, suppose X is the simple random variable such that X (w) = 1 for rational values of ω and X (ω ) = 0 otherwise. Since the set of rational numbers in Ω is countably infinite, such X satisfies P {X = 0} = 1. Clearly we’d like E [X ] = 0, but the Riemann integral (1.10) is not convergent for this choice of X . 
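Returning for a moment to simple random variables, the defining sum (1.8) and the additivity E[X + Y] = E[X] + E[Y] proved above are easy to check directly on a small finite probability space. The sketch below is an added illustration using exact rational arithmetic; the four-outcome space and its probabilities are illustrative choices, not taken from the notes.

    from fractions import Fraction as Fr

    # A four-outcome sample space with the probabilities shown; they sum to one.
    P = {"a": Fr(1, 4), "b": Fr(1, 4), "c": Fr(1, 3), "d": Fr(1, 6)}
    X = {"a": 1, "b": 1, "c": 2, "d": 5}        # a simple random variable
    Y = {"a": 0, "b": 3, "c": 3, "d": 1}        # another simple random variable on the same space

    def E(Z):
        # Summing Z(omega) P[{omega}] over outcomes and grouping by value recovers (1.8).
        return sum(Z[w] * P[w] for w in P)

    X_plus_Y = {w: X[w] + Y[w] for w in P}
    print(E(X), E(Y), E(X_plus_Y), E(X) + E(Y))   # the last two values agree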
The expression (1.9) can be used to define E [X ] in great generality if it is interpreted as a Lebesgue integral, defined as follows: Suppose X is an arbitrary nonnegative random variable. Then there exists a sequence of simple random variables X1 , X2 , . . . such that for every ω ∈ Ω, X1 (ω ) ≤ X2 (ω ) ≤ · · · and Xn (ω ) → X (ω ) as n → ∞. Then E Xn is well defined for each n and is nondecreasing in n, so the limit of E Xn as n → ∞ exists with values in [0, +∞]. Furthermore it can be shown that the value of the limit depends only on (Ω, F , P ) and X , not on the particular choice of the approximating simple sequence. We thus define E [X ] = limn→∞ E [Xn ]. Thus, E [X ] is always well defined in this way, with possible value +∞, if X is a nonnegative random variable. Suppose X is an arbitrary random variable. Define the positive part of X to be the random variable X+ defined by X+ (ω ) = max{0, X (ω )} for each value of ω . Similarly define the negative part of X to be the random variable X− (ω ) = max{0, −X (ω )}. Then X (ω ) = X+ (ω ) − X− (ω ) for all ω , and X+ and X− are both nonnegative random variables. As long as at least one of E [X+ ] or E [X− ] is finite, define E [X ] = E [X+ ] − E [X− ]. The expectation E [X ] is undefined if E [X+ ] = E [X− ] = +∞. This completes the definition of E [X ] using (1.9) interpreted as a Lebesgue integral. We will prove that E [X ] defined by the Lebesgue integral (1.9) depends only on the CDF of X . It suffices to show this for a nonnegative random variable X . For such a random variable, and 0 = tn < t1 < · · · < tn , a simple random variable Xn can be defined by n 0 if tn−1 k 0 Xn (ω ) = tn−1 ≤ X (ω ) < tn k k else Then n E [Xn ] = k=1 tn−1 (F (tn ) − F (tn−1 )) k k k so that E [Xn ] is determined only by and the CDF FX . Selecting the t’s appropriately as n → ∞ results in the Xn ’s being nondecreasing and converging to X . Thus, the limit E [X ] = limn→∞ E [Xn ] depends only on FX . tn , . . . , t n n 0 14 In Section 1.3 we defined the probability distribution PX of a random variable such that the ˜ canonical random variable X (ω ) = ω on (R, B , PX ) has the same CDF as X . Therefore E [X ] = ˜ ], or E [X ∞ E [X ] = xPX (dx) (Lebesgue) (1.11) −∞ By definition, the integral (1.11) is the Lebesgue-Stieltjes integral of x with respect to FX , so that E [X ] = ∞ xdFX (x) (Lebesgue-Stieltjes) (1.12) −∞ Expectation has the following properties. Let X, Y be random variables and c a constant. E.1 (Linearity) E [cX ] = cE [X ]. If E [X ], E [Y ] and E [X ] + E [Y ] are well defined, then E [X + Y ] is well defined and E [X + Y ] = E [X ] + E [Y ]. E.2 (Preservation of order) If P {X ≥ Y } = 1 and E [Y ] is well defined then E [X ] is well defined and E [X ] ≥ E [Y ]. E.3 If X has pdf fX then ∞ E [X ] = xfX (x)dx (Lebesgue) −∞ E.4 If X has pmf pX then xpX (x). xpX (x) + E [X ] = x>0 x<0 E.5 (Law of the unconscious statistician) If g is Borel measurable, E [g (X )] = = g (X (ω ))P [dω ] Ω ∞ g (x)dFX (x) (Lebesgue) (Lebesgue-Stieltjes) −∞ and in case X is a continuous type random variable E [g (X )] = ∞ g (x)fX (x)dx (Lebesgue) −∞ Properties E.1 and E.2 are true for simple random variables and they carry over to general random variables in the limit defining the Lebesgue integral (1.9). Properties E.3 and E.4 follow from the equivalent definition (1.11) and properties of Lebesgue-Stieltjes integrals. Property E.5 can be proved by approximating g by piecewise constant functions. 
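The approximation just described uses nothing but the CDF, so it can be carried out numerically. The sketch below is an added illustration: it takes X exponentially distributed with F(t) = 1 − exp(−t), so that E[X] = 1, truncates the grid at t = 20 (which contributes negligible error), and shows E[Xn] increasing toward E[X] as the grid is refined. Python with numpy is assumed.

    import numpy as np

    F = lambda t: 1.0 - np.exp(-t)           # CDF of the illustrative X; E[X] = 1

    for n in [10, 100, 1_000, 10_000]:
        t = np.linspace(0.0, 20.0, n + 1)    # grid 0 = t_0 < t_1 < ... < t_n
        EXn = np.sum(t[:-1] * np.diff(F(t))) # E[X_n] = sum_k t_{k-1} (F(t_k) - F(t_{k-1}))
        print(n, EXn)                        # increases toward E[X] = 1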
The variance of a random variable X with E X finite is defined by Var(X ) = E [(X − E X )2 ]. By the linearity of expectation, if E X is finite, the variance of X satisfies the useful relation: Var(X ) = E [X 2 − 2X (E X ) + (E X )2 ] = E [X 2 ] − (E X )2 . 15 The characteristic function ΦX of a random variable X is defined by ΦX (u) = E [ej uX ] for real values of u, where j = √ −1. For example, if X has pdf f , then ΦX (u) = ∞ exp(j ux)fX (x)dx, −∞ which is 2π times the inverse Fourier transform of fX . Two random variables have the same probability distribution if and only if they have the same characteristic function. If E [X k ] exists and is finite for an integer k ≥ 1, then the derivatives of ΦX up to order k exist and are continuous, and (k ) ΦX (0) = j k E [X k ] For a nonnegative integer-valued random variable X it is often more convenient to work with the z transform of the pmf, defined by ΨX (z ) = E [z X ] = ∞ z k pX (k ) k=0 for real or complex z with | z |≤ 1. Two such random variables have the same probability distribution if and only if their z transforms are equal. If E [X k ] is finite it can be found from the derivatives of ΨX up to the k th order at z = 1, (k ) ΨX (1) = E [X (X − 1) · · · (X − k + 1)] 1.6 Frequently used distributions The following is a list of the most basic and frequently used probability distributions. For each distribution an abbreviation, if any, and valid parameter values are given, followed by either the CDF, pdf or pmf, then the mean, variance, a typical example and significance of the distribution. The constants p, λ, µ, σ , a, b, and α are real-valued, and n and i are integer-valued, except n can be noninteger-valued in the case of the gamma distribution. Bernoulli: B e(p), 0 ≤ p ≤ 1 pmf: p(i) = mean: p variance: p(1 − p) p i=1 1−p i=0 0 else z -transform: 1 − p + pz Example: Number of heads appearing in one flip of a coin. The coin is called fair if p = biased otherwise. 16 1 2 and B i(n, p), n ≥ 1, 0 ≤ p ≤ 1 Binomial: ni p (1 − p)n−i i z -transform: (1 − p + pz )n pmf: : p(i) = mean: np 0≤i≤n variance: np(1 − p) Example: Number of heads appearing in n independent flips of a coin. Poisson: P oi(λ), λ ≥ 0 λi e−λ i≥0 i! z -transform: exp(λ(z − 1)) pmf: p(i) = mean: λ variance: λ Example: Number of phone calls placed during a ten second interval in a large city. Significance: The Poisson pmf is the limit of the binomial pmf as n → +∞ and p → 0 in such a way that np → λ. Geometric: Geo(p), 0 < p ≤ 1 pmf: p(i) = (1 − p)i−1 p i≥1 pz z -transform: 1 − z + pz 1 1−p mean: variance: p p2 Example : Number of independent flips of a coin until heads first appears. Significant property: If X has the geometric distribution, P {X > i} = (1 − p)i for integers i ≥ 1. So X has the memoryless property: P {X > i + j | X > i} = P {X > j } for i, j ≥ 1. Any positive integer-valued random variable with this property has a geometric distribution. Gaussian (also called Normal): N (µ, σ 2 ), µ ∈ R, σ ≥ 0 pdf (if σ 2 > 0): f (x) = √ pmf (if σ 2 = 0): p(x) = 1 exp − 2π σ 2 1 x=µ 0 else characteristic function: exp(j uµ − mean: µ variance: σ 2 17 (x − µ)2 2σ 2 u2 σ 2 ) 2 Example: Instantaneous voltage difference (due to thermal noise) measured across a resister held at a fixed temperature. Notation: The notation Φ is often used for the CDF of a N (0, 1) random variable, and Q is often used for the complementary CDF: ∞ Q(c) = 1 − Φ(c) = c x2 1 √ e− 2 dx 2π Significant property (Central limit theorem): If X1 , X2 , . . . 
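As a small cross-check of the Bernoulli and Binomial entries above (an added sketch, not part of the original notes), the following code computes the Binomial(n, p) pmf directly and confirms that the probability masses sum to one, the mean is np, and the variance is np(1 − p). Plain Python is assumed.

    from math import comb

    n, p = 12, 0.3
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]

    mean = sum(i * pmf[i] for i in range(n + 1))
    var = sum(i**2 * pmf[i] for i in range(n + 1)) - mean**2

    print(sum(pmf))                  # 1.0: the probability masses sum to one
    print(mean, n * p)               # both 3.6
    print(var, n * p * (1 - p))      # both 2.52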
are independent and identically distributed with mean µ and nonzero variance σ 2 , then for any constant c, lim P n→∞ Exp onential: X1 + · · · + Xn − nµ √ ≤c nσ 2 = Φ(c) Exp (λ) λ > 0 pdf: f (x) = λe−λx x≥0 λ characteristic function: λ − ju 1 1 mean: variance: 2 λ λ Example: Time elapsed between noon sharp and the first telephone call placed in a large city, on a given day. Significance: If X has the Exp(λ) distribution, P {X ≥ t} = e−λt for t ≥ 0. So X has the memoriless property: P {X ≥ s + t | X ≥ s} = P {X ≥ t} s, t ≥ 0 Any nonnegative random variable with this property is exponentially distributed. Uniform: U (a, b) − ∞ < a < b < ∞ pdf: f (X )= 1 b−a 0 a≤x≤b else ej ub − ej ua j u(b − a) (b − a)2 variance: 12 characteristic function: mean: a+b 2 Example: The phase difference between two independent oscillators operating at the same frequency may be modelled as uniformly distributed over [0, 2π ] Significance: Uniform is uniform. 18 Gamma(n, α): n, α > 0 (n real valued) pdf: f (x) = αn xn−1 e−αx Γ(n) ∞ where Γ(n) = x≥0 sn−1 e−s ds 0 characteristic function: mean: n α variance: n α2 α α − ju n Significance: If n is a positive integer then Γ(n) = (n − 1)! and a Gamma (n, α) random variable has the same distribution as the sum of n independent, Exp(α) distributed random variables. Rayleigh (σ 2 ): σ2 > 0 r r2 exp − 2 r>0 σ2 2σ r2 CDF : 1 − exp − 2 2σ π π mean: σ variance: σ 2 2 − 2 2 pdf: f (r) = Example: Instantaneous value of the envelope of a mean zero, narrow band noise signal. 1 Significance: If X and Y are independent, N (0, σ 2 ) random variables, then (X 2 + Y 2 ) 2 has the Rayleigh(σ 2 ) distribution. Also notable is the simple form of the CDF. 1.7 Jointly distributed random variables Let X1 , X2 , . . . , Xm be random variables on a single probability space (Ω, F , P ). The joint cumulative distribution function CDF is the function on Rm defined by FX1 X2 ···Xm (x1 , . . . , xm ) = P {X1 ≤ x1 , X2 ≤ x2 , . . . , Xm ≤ xm } The CDF determines the probabilities of all events concerning X1 , . . . , Xm . For example, if R is the rectangular region (a, b] × (a , b ] in the plane, then P {(X1 , X2 ) ∈ R} = FX1 X2 (b, b ) − FX1 X2 (a, b ) − FX1 X2 (b, a ) + FX1 X2 (a, a ) We write +∞ as an argument of FX in place of xi to denote the limit as xi → +∞. By the countable additivity axiom of probability, lim FX1 X2 (x1 , x2 ) = FX1 (x1 ) FX1 X2 (x1 , +∞) = x2 →∞ The random variables are jointly continuous if there exists a function fX1 X2 ···Xm , called the joint probability density function (pdf ), such that FX1 X2 ···Xm (x1 , . . . , xm ) = x1 −∞ xm ··· −∞ 19 fX1 X2 ···Xm (u1 , . . . , um )dum · · · du. Note that if X1 and X2 are jointly continuous, then FX1 (x1 ) = FX1 X2 (x1 , +∞) so that X1 has pdf given by x1 ∞ −∞ = −∞ fX1 (u1 ) = ∞ −∞ fX1 X2 (u1 , u2 )du2 du1 . fX1 X2 (u1 , u2 )du2 . If X1 , X2 , . . . , Xm are each discrete random variables, then they have a joint pmf pX1 X2 ···Xm defined by pX1 X2 ···Xm (u1 , u2 , . . . , um ) = P [{X1 = u1 } ∩ {X2 = u2 } ∩ · · · ∩ {Xm = um }] The sum of the probability masses is one, and for any subset A of Rm P {(X1 , . . . , Xm ) ∈ A} = pX (u1 , u2 , . . . , um ) (u1 ,...,um )∈A The joint pmf of subsets of X1 , . . . Xm can be obtained by summing out the other coordinates of the joint pmf. For example, pX1 X2 (u1 , u2 ) pX1 (u1 ) = u2 The joint characteristic function of X1 , . . . , Xm is the function on Rm defined by ΦX1 X2 ···Xm (u1 , u2 , . . . , um ) = E [ej (X1 u1 +X2 ux +···+Xm um ) ] Random variables X1 , . . . 
, Xn are defined to be independent if for any Borel subsets A1 , . . . , Am of R, the events {X1 ∈ A1 }, . . . , {Xm ∈ Am } are independent. The random variables are independent if and only if the joint CDF factors. FX1 X2 ···Xm (x1 , . . . , xm ) = FX1 (x1 ) · · · FXm (xm ) If the random variables are jointly continuous, independence is equivalent to the condition that the joint pdf factors. If the random variables are discrete, independence is equivalent to the condition that the joint pmf factors. Similarly, the random variables are independent if and only if the joint characteristic function factors. 1.8 Cross moments of random variables Let X and Y be random variables on the same probability space with finite second moments. Three important related quantities are: the correlation: E [X Y ] the covariance: Cov(X, Y ) = E [(X − E [X ])(Y − E [Y ])] Cov(X, Y ) the correlation coefficient: ρX Y = Var(X )Var(Y ) 20 A fundamental inequality is Schwarz’s inequality: E [X 2 ]E [Y 2 ] | E [X Y ] | ≤ (1.13) Furthermore, if E Y 2 = 0, equality holds if and only if P [X = cY ] = 1 for some constant c. Schwarz’s inequality can be proved as follows. If P {Y = 0} = 1 the inequality is trivial, so suppose E [Y 2 ] > 0. By the inequality (a + b)2 ≤ 2a2 + 2b2 it follows that E [(X − λY )2 ] < ∞ for any constant λ. Take λ = E [X Y ]/E Y 2 and note that 0 ≤ E [(X − λY )2 ] = E [X 2 ] − 2λE [X Y ] + λ2 E [Y 2 ] E [X Y ]2 = E [X 2 ] − , E [Y 2 ] which is clearly equivalent to the Schwarz inequality. If P [X = cY ] = 1 for some c then equality holds in (1.13), and conversely, if equality holds in (1.13) then P [X = cY ] = 1 for c = λ. Application of Schwarz’s inequality to X − E [X ] and Y − E [Y ] in place of X and Y yields that | Cov(X, Y ) | ≤ Var(X )Var(Y ) Furthermore, if Var(Y ) = 0 then equality holds if and only if X = aY + b for some constants a and b. Consequently, if Var(X ) and Var(Y ) are not zero, so that the correlation coefficient ρX Y is well defined, then | ρX Y |≤ 1 with equality if and only if X = aY + b for some constants a, b. The following alternative expressions for Cov(X, Y ) are often useful in calculations: Cov(X, Y ) = E [X (Y − E [Y ])] = E [(X − E [X ])Y ] = E [X Y ] − E [X ]E [Y ] In particular, if either X or Y has mean zero then E [X Y ] = Cov(X, Y ). Random variables X and Y are called orthogonal if E [X Y ] = 0 and are called uncorrelated if Cov(X, Y ) = 0. If X and Y are independent then they are uncorrelated. The converse is far from true. Independence requires a large number of equations to be true, namely FX Y (x, y ) = FX (x)FY (y ) for every real value of x and y . The condition of being uncorrelated involves only a single equation to hold. Covariance generalizes variance, in that Var(X ) = Cov(X, X ). Covariance is linear in each of its two arguments: Cov(X + Y , U + V ) = Cov(X, U ) + Cov(X, V ) + Cov(Y , U ) + Cov(Y , V ) Cov(aX + b, cY + d) = acCov(X, Y ) for constants a, b, c, d. For example, consider the sum Sm = X1 + · · · + Xm , such that X1 , · · · , Xm are (pairwise) uncorrelated with E [Xi ] = µ and Var(Xi ) = σ 2 for 1 ≤ i ≤ n. Then E [Sm ] = mµ and Var(Sm ) = Cov(Sm , Sm ) = Var(Xi ) + i i,j :i=j = mσ . 2 Therefore, S√ −mµ m mσ 2 Cov(Xi , Xj ) has mean zero and variance one. 21 1.9 Conditional densities Suppose that X and Y have a joint pdf fX Y . Recall that the pdf fY , the second marginal density of fX Y , is given by fY (y ) = ∞ fX Y (x, y )dx −∞ The conditional pdf of X given Y , denoted by fX |Y (x | y ), is undefined if fY (y ) = 0. 
It is defined for y such that fY (y ) > 0 by fX |Y (x | y ) = fX Y (x, y ) fY (y ) − ∞ < x < +∞ If y is fixed and fY (y ) > 0, then as a function of x, fX |Y (x | y ) is itself a pdf. The mean of the conditional pdf is called the conditional mean (or conditional expectation) of X given Y = y , written as E [X | Y = y ] = ∞ −∞ xfX |Y (x | y )dx If the deterministic function E [X | Y = y ] is applied to the random variable Y , the result is a random variable denoted by E [X | Y ]. Note that conditional pdf and conditional expectation were so far defined in case X and Y have a joint pdf. If instead, X and Y are both discrete random variables, the conditional pmf pX |Y and the conditional expectation E [X | Y = y ] can be defined in a similar way. More general notions of conditional expectation are considered in a later chapter. 1.10 Transformation of random vectors A random vector X of dimension m has the form X= X1 X2 . . . Xm where X1 , . . . , Xm are random variables. The joint distribution of X1 , . . . , Xm can be considered to be the distribution of the vector X . For example, if X1 , . . . , Xm are jointly continuous, the joint pdf fX1 X2 ···Xm (x1 , . . . , xn ) can as well be written as fX (x), and be thought of as the pdf of the random vector X . Let X be a continuous type random vector on Rn . Let g be a one-to-one mapping from Rn to Rn . Think of g as mapping x-space (here x is lower case, representing a coordinate value) into y -space. As x varies over Rn , y varies over the range of g . All the while, y = g (x) or, equivalently, x = g −1 (y ). ∂y Suppose that the Jacobian matrix of derivatives ∂ x (x) is continuous in x and nonsingular for all x. By the inverse function theorem of vector calculus, it follows that the Jacobian matrix of the ∂y inverse mapping (from y to x) exists and satisfies ∂ x (y ) = ( ∂ x (x))−1 . Use | K | for a square matrix ∂y K to denote | det(K ) |. 22 Prop osition 1.10.1 Under the above assumptions, Y is a continuous type random vector and for y in the range of g : fY (y ) = Example 1.10 fX (x) | ∂y ∂ x (x) = fX (x) | ∂x (y ) ∂y Let U , V have the joint pdf: u + v 0 ≤ u, v ≤ 1 0 else fU V (u, v ) = and let X = U 2 and Y = U (1 + V ). Let’s find the pdf fX Y . The vector (U, V ) in the u − v plane is transformed into the vector (X, Y ) in the x − y plane under a mapping g that maps u, v to x = u2 and y = u(1 + v ). The image in the x − y plane of the square [0, 1]2 in the u − v plane is the set A given by A = {(x, y ) : 0 ≤ x ≤ 1, and √ √ x ≤ y ≤ 2 x} See Figure 1.12 The mapping from the square is one to one, for if (x, y ) ∈ A then (u, v ) can be y 2 v 1 1 u x 1 Figure 1.12: Transformation from the u − v plane to the x − y plane. recovered by u = √ x and v = y √ x − 1. The Jacobian determinant is ∂x ∂u ∂y ∂u ∂x ∂v ∂y ∂v 2u 0 1+v u = = 2u 2 Therefore, using the transformation formula and expressing u and V in terms of x and y yields fX Y (x, y ) = √ y x+( √x −1) 2x 0 if (x, y ) ∈ A else Example 1.11 Let U and V be independent continuous type random variables. Let X = U + V and Y = V . Let us find the joint density of X, Y and the marginal density of X . The mapping g: u → v x y = 23 u+v v is invertible, with inverse given by u = x − y and v = y . The absolute value of the Jacobian determinant is given by ∂x ∂u ∂y ∂u ∂x ∂v ∂y ∂u 11 01 = =1 Therefore fX Y (x, y ) = fU V (u, v ) = fU (x − y )fV (y ) The marginal density of X is given by fX (x) = ∞ fX Y (x, y )dy = ∞ −∞ −∞ fU (x − y )fV (y )dy That is fX = fU ∗ fV . 
Example 1.12 Let X1 and X2 be independent N (0, σ 2 ) random variables, and let X = (X1 , X2 )T denote the two-dimensional random vector with coordinates X1 and X2 . Any point of x ∈ R2 can 1 be represented in polar coordinates by the vector (r, θ)T such that r = x = (x2 + x2 ) 2 and 1 2 θ = tan−1 ( x1 ) with values r ≥ 0 and 0 ≤ θ < 2π . The inverse of this mapping is given by x2 x1 = r cos(θ) x2 = r sin(θ) We endeavor to find the pdf of the random vector (R, Θ)T , the polar coordinates of X . The pdf of X is given by fX (x) = fX1 (x1 )fX2 (x2 ) = 1 − r22 e 2σ 2π σ 2 The range of the mapping is the set r > 0 and 0 < θ ≤ 2π . On the range, ∂x r ∂( θ ) = ∂ x1 ∂r ∂ x2 ∂r ∂ x1 ∂θ ∂ x2 ∂θ = cos(θ) −r sin(θ) sin(θ) r cos(θ) =r Therefore for (r, θ)T in the range of the mapping, fR,Θ (r, θ) = fX (x) ∂x r − r22 e 2σ = r ∂( θ ) 2π σ 2 Of course fR,Θ (r, θ) = 0 off the range of the mapping. The joint density factors into a function of r and a function of θ, so R and Θ are independent. Moreover, R has the Rayleigh density with parameter σ 2 , and Θ is uniformly distributed on [0, 2π ]. 24 1.11 Problems 1.1. Simple events A register contains 8 random binary digits which are mutually independent. Each digit is a zero or a one with equal probability. (a) Describe an appropriate probability space (Ω, F , P ) corresponding to looking at the contents of the register. (b) Express each of the following four events explicitly as subsets of Ω, and find their probabilities: E1=“No two neighboring digits are the same” E2=“Some cyclic shift of the register contents is equal to 01100110” E3=“The register contains exactly four zeros” E4=“There is a run of at least six consecutive ones” (c) Find P [E1 |E3 ] and P [E2 |E3 ]. 1.2. Indep endent vs. mutually exclusive (a) Suppose that an event E is independent of itself. Show that either P [E ] = 0 or P [E ] = 1. (b) Events A and B have probabilities P [A] = 0.3 and P [B ] = 0.4. What is P [A ∪ B ] if A and B are independent? What is P [A ∪ B ] if A and B are mutually exclusive? (c) Now suppose that P [A] = 0.6 and P [B ] = 0.8. In this case, could the events A and B be independent? Could they be mutually exclusive? 1.3. Congestion at output p orts Consider a packet switch with some number of input ports and eight output ports. Suppose four packets simultaneously arrive on different input ports, and each is routed toward an output port. Assume the choices of output ports are mutually independent, and for each packet, each output port has equal probability. (a) Specify a probability space (Ω, F , P ) to describe this situation. (b) Let Xi denote the number of packets routed to output port i for 1 ≤ i ≤ 8. Describe the joint pmf of X1 , . . . , X8 . (c) Find Cov(X1 , X2 ). (d) Find P {Xi ≤ 1 for all i}. (e) Find P {Xi ≤ 2 for all i}. 1.4. Frantic search At the end of each day Professor Plum puts her glasses in her drawer with probability .90, leaves them on the table with probability .06, leaves them in her briefcase with probability 0.03, and she actually leaves them at the office with probability 0.01. The next morning she has no recollection of where she left the glasses. She looks for them, but each time she looks in a place the glasses are actually located, she misses finding them with probability 0.1, whether or not she already looked in the same place. (After all, she doesn’t have her glasses on and she is in a hurry.) 
(a) Given that Professor Plum didn’t find the glasses in her drawer after looking one time, what is the conditional probability the glasses are on the table? (b) Given that she didn’t find the glasses after looking for them in the drawer and on the table once each, what is the conditional probability they are in the briefcase? (c) Given that she failed to find the glasses after looking in the drawer twice, on the table twice, and in the briefcase once, what is the conditional probability she left the glasses at the office? 25 1.5. Conditional probability of failed device given failed attempts A particular webserver may be working or not working. If the webserver is not working, any attempt to access it fails. Even if the webserver is working, an attempt to access it can fail due to network congestion beyond the control of the webserver. Suppose that the a priori probability that the server is working is 0.8. Suppose that if the server is working, then each access attempt is successful with probability 0.9, independently of other access attempts. Find the following quantities. (a) P [ first access attempt fails] (b) P [server is working | first access attempt fails ] (c) P [second access attempt fails | first access attempt fails ] (d) P [server is working | first and second access attempts fail ]. 1.6. Conditional probabilities–basic computations of iterative deco ding Suppose B1 , . . . , Bn , Y1 , . . . , Yn are discrete random variables with joint pmf p(b1 , . . . , bn , y1 , . . . , yn ) = n i=1 qi (yi |bi ) 2−n 0 if bi ∈ {0, 1} for 1 ≤ i ≤ n else where qi (yi |bi ) as a function of yi is a pmf for bi ∈ {0, 1}. Finally, let B = B1 ⊕ · · · ⊕ Bn represent the modulo two sum of B1 , · · · , Bn . Thus, the ordinary sum of the n + 1 random variables B1 , . . . , Bn , B is even. Express P [B = 1|Y1 = y1 , · · · .Yn = yn ] in terms of the yi and the functions qi . Simplify your answer. (b) Suppose B and Z1 , . . . , Zk are discrete random variables with joint pmf p(b, z1 , . . . , zk ) = 1 2 k j =1 rj (zj |b) 0 if b ∈ {0, 1} else where rj (zj |b) as a function of zj is a pmf for b ∈ {0, 1} fixed. Express P [B = 1|Z1 = z1 , . . . , Zk = zk ] in terms of the zj and the functions rj . 1.7. Conditional lifetimes and the memoryless prop erty of the geometric distribution (a) Let X represent the lifetime, rounded up to an integer number of years, of a certain car battery. Suppose that the pmf of X is given by pX (k ) = 0.2 if 3 ≤ k ≤ 7 and pX (k ) = 0 otherwise. (i) Find the probability, P [X > 3], that a three year old battery is still working. (ii) Given that the battery is still working after five years, what is the conditional probability that the battery will still be working three years later? (i.e. what is P [X > 8|X > 5]?) (b) A certain Illini basketball player shoots the ball repeatedly from half court during practice. Each shot is a success with probability p and a miss with probability 1 − p, independently of the outcomes of previous shots. Let Y denote the number of shots required for the first success. (i) Express the probability that she needs more than three shots for a success, P [Y > 3], in terms of p. (ii) Given that she already missed the first five shots, what is the conditional probability that she will need more than three additional shots for a success? (i.e. what is P [Y > 8|Y > 5])? (iii) What type of probability distribution does Y have? 1.8. Blue corners Suppose each corner of a cube is colored blue, independently of the other corners, with some probability p. 
Let B denote the event that at least one face of the cube has all four corners colored blue. (a) Find the conditional probability of B given that exactly five corners of the cube are colored 26 blue. (b) Find P (B ), the unconditional probability of B . 1.9. Distribution of the flow capacity of a network A communication network is shown. The link capacities in megabits per second (Mbps) are given by C1 = C3 = 5, C2 = C5 = 10 and C4 =8, and are the same in each direction. Information flow 2 1 Source Destination 4 5 3 from the source to the destination can be split among multiple paths. For example, if all links are working, then the maximum communication rate is 10 Mbps: 5 Mbps can be routed over links 1 and 2, and 5 Mbps can be routed over links 3 and 5. Let Fi be the event that link i fails. Suppose that F1 , F2 , F3 , F4 and F5 are independent and P [Fi ] = 0.2 for each i. Let X be defined as the maximum rate (in Mbits per second) at which data can be sent from the source node to the destination node. Find the pmf pX . 1.10. Recognizing cumulative distribution functions Which of the following are valid CDF’s? For each that is not valid, state at least one reason why. For each that is valid, find P {X 2 > 5}. F1 (x) = ( 2 1 e−x 4 −x2 −e4 x<0 x≥0 8 < 0 0.5 + e−x F2 (x) = : 1 x<0 0≤x<3 x≥3 F3 (x) = 8 < 0 0.5 + : 1 x 20 x≤0 0 < x ≤ 10 x ≥ 10 1.11. A CDF of mixed typ e Let X have the CDF shown. F X 1.0 0.5 0 1 2 (a) Find P {X ≤ 0.8}. (b) Find E [X ]. (c) Find Var(X ). 1.12. CDF and characteristic function of a mixed typ e random variable Let X = (U − 0.5)+ , where U is uniformly distributed over the interval [0, 1]. That is, X = U − 0.5 if U − 0.5 ≥ 0, and X = 0 if U − 0.5 < 0. 27 (a) Find and carefully sketch the CDF FX . In particular, what is FX (0)? (b) Find the characteristic function ΦX (u) for real values of u. 1.13. Poisson and geometric random variables with conditioning Let Y be a Poisson random variable with mean µ > 0 and let Z be a geometrically distributed random variable with parameter p with 0 < p < 1. Assume Y and Z are independent. (a) Find P [Y < Z ]. Express your answer as a simple function of µ and p. (b) Find P [Y < Z |Z = i] for i ≥ 1. (Hint: This is a conditional probability for events.) (c) Find P [Y = i|Y < Z ] for i ≥ 0. Express your answer as a simple function of p, µ and i. (Hint: This is a conditional probability for events.) (d) Find E [Y |Y < Z ], which is the expected value computed according to the conditional distribution found in part (c). Express your answer as a simple function of µ and p. 1.14. Conditional exp ectation for uniform density over a triangular region Let (X, Y ) be uniformly distributed over the triangle with coordinates (0, 0), (1, 0), and (2, 1). (a) What is the value of the joint pdf inside the triangle? (b) Find the marginal density of X , fX (x). Be sure to specify your answer for all real values of x. (c) Find the conditional density function fY |X (y |x). Be sure to specify which values of x the conditional density is well defined for, and for such x specify the conditional density for all y . Also, for such x briefly describe the conditional density of y in words. (d) Find the conditional expectation E [Y |X = x]. Be sure to specify which values of x this conditional expectation is well defined for. 1.15. Transformation of a random variable Let X be exponentially distributed with mean λ−1 . Find and carefully sketch the distribution functions for the random variables Y = exp(X ) and Z = min(X, 3). 1.16. 
Density of a function of a random variable Suppose X is a random variable with probability density function fX (x) = 2x 0 ≤ x ≤ 1 0 else (a) Find P [X ≥ 0.4|X ≤ 0.8]. (b) Find the density function of Y defined by Y = − log(X ). 1.17. Moments and densities of functions of a random variable Suppose the length L and width W of a rectangle are independent and each uniformly distributed over the interval [0, 1]. Let C = 2L + 2W (the length of the perimeter) and A = LW (the area). Find the means, variances, and probability densities of C and A. 1.18. Functions of indep endent exp onential random variables Let X1 and X2 be independent random varibles, with Xi being exponentially distributed with parameter λi . (a) Find the pdf of Z = min{X1 , X2 }. (b) Find the pdf of R = X1 . X2 1.19. Using the Gaussian Q function Express each of the given probabilities in terms of the standard Gaussian complementary CDF Q. 28 (a) P [X ≥ 16], where X has the N (10, 9) distribution. (b) P [X 2 ≥ 16], where X has the N (10, 9) distribution. (c) P [|X − 2Y | > 1], where X and Y are independent, N (0, 1) random variables. (Hint: Linear combinations of independent Gaussian random variables are Gaussian.) 1.20. Gaussians and the Q function Let X and Y be independent, N (0, 1) random variables. (a) Find Cov(3X + 2Y , X + 5Y + 10). (b) Express P {X + 4Y ≥ 2} in terms of the Q function. (c) Express P {(X − Y )2 > 9} in terms of the Q function. 1.21. Correlation of histogram values Suppose that n fair dice are independently rolled. Let Xi = 1 if a 1 shows on the ith roll 0 else 1 if a 2 shows on the ith roll 0 else Yi = Let X denote the sum of the Xi ’s, which is simply the number of 1’s rolled. Let Y denote the sum of the Yi ’s, which is simply the number of 2’s rolled. Note that if a histogram is made recording the number of occurrences of each of the six numbers, then X and Y are the heights of the first two entries in the histogram. (a) Find E [X1 ] and Var(X1 ). (b) Find E [X ] and Var(X ). (c) Find Cov(Xi , Yj ) if 1 ≤ i, j ≤ n (Hint: Does it make a difference if i = j ?) (d) Find Cov(X, Y ) and the correlation coefficient ρ(X, Y ) = Cov(X, Y )/ Var(X )Var(Y ). (e) Find E [Y |X = x] for any integer x with 0 ≤ x ≤ n. Note that your answer should depend on x and n, but otherwise your answer is deterministic. 1.22. Working with a joint density Suppose X and Y have joint density function fX,Y (x, y ) = c(1 + xy ) if 2 ≤ x ≤ 3 and 1 ≤ y ≤ 2, and fX,Y (x, y ) = 0 otherwise. (a) Find c. (b) Find fX and fY . (c) Find fX |Y . 1.23. A function of jointly distributed random variables Suppose (U, V ) is uniformly distributed over the square with corners (0,0), (1,0), (1,1), and (0,1), and let X = U V . Find the CDF and pdf of X . 1.24. Density of a difference Let X and Y be independent, exponentially distributed random variables with parameter λ, such that λ > 0. Find the pdf of Z = |X − Y |. 1.25. Working with a two dimensional density Let the random variables X and Y be jointly uniformly distributed over the region shown. 1 0 0 2 1 29 3 (a) Determine the value of fX,Y on the region shown. (b) Find fX , the marginal pdf of X. (c) Find the mean and variance of X. (d) Find the conditional pdf of Y given that X = x, for 0 ≤ x ≤ 1. (e) Find the conditional pdf of Y given that X = x, for 1 ≤ x ≤ 2. (f ) Find and sketch E [Y |X = x] as a function of x. Be sure to specify which range of x this conditional expectation is well defined for. 1.26. 
Some characteristic functions Find the mean and variance of random variables with the following characteristic functions: (a) Φ(u) = exp(−5u2 + 2j u) (b) Φ(u) = (ej u − 1)/j u, and (c) Φ(u) = exp(λ(ej u − 1)). 1.27. Uniform density over a union of two square regions Let the random variables X and Y be jointly uniformly distributed on the region {0 ≤ u ≤ 1, 0 ≤ v ≤ 1} ∪ {−1 ≤ u < 0, −1 ≤ v < 0}. (a) Determine the value of fX Y on the region shown. (b) Find fX , the marginal pdf of X . (c) Find the conditional pdf of Y given that X = a, for 0 < a ≤ 1. (d) Find the conditional pdf of Y given that X = a, for −1 ≤ a < 0. (e) Find E [Y |X = a] for |a| ≤ 1. (f ) What is the correlation coefficient of X and Y ? (g) Are X and Y independent? (h) What is the pdf of Z = X + Y ? 1.28. A transformation of jointly continuous random variables Suppose (U, V ) has joint pdf fU,V (u, v ) = 9u2 v 2 if 0 ≤ u ≤ 1 & 0 ≤ v ≤ 1 0 else Let X = 3U and Y = U V . Find the joint pdf of X and Y , being sure to specify where the joint pdf is zero. (b) Using the joint pdf of X and Y , find the conditional pdf, fY |X (y |x), o f Y given X . (Be sure to indicate which values of x the conditional pdf is well defined for, a nd for each such x specify the conditional pdf for all real values of y .) 1.29. Transformation of densities Let U and V have the joint pdf: fU V (u, v ) = c(u − v )2 0 ≤ u, v ≤ 1 0 else for some constant c. (a) Find the constant c. (b) Suppose X = U 2 and Y = U 2 V 2 . Describe the joint pdf fX,Y (x, y ) of X and Y . Be sure to indicate where the joint pdf is zero. 1.30. Jointly distributed variables Let U and V be independent random variables, such that U is uniformly distributed over the interval [0, 1], and V has the exponential probability density function 30 2 V (a) Calculate E [ 1+U ]. (b) Calculate P {U ≤ V }. (c) Find the joint probability density function of Y and Z, where Y = U 2 and Z = U V . 1.31*. On σ -algebras, random variables and measurable functions Prove the seven statements lettered (a)-(g) in what follows. Definition. Let Ω be an arbitrary set. A nonempty collection F of subsets of Ω is defined to be an algebra if: (i) Ac ∈ F whenever A ∈ F and (ii) A ∪ B ∈ F whenever A, B ∈ F . (a) If F is an algebra then ∅ ∈ F , Ω ∈ F , and the union or intersection of any finite collection of sets in F is in F . Definition. F is called a σ -algebra if F is an algebra such that whenever A1 , A2 , ... are each in F , so is the union, ∪Ai . (b) If F is a σ -algebra and B1 , B2 , . . . are in F , then so is the intersection, ∩Bi . (c) Let U be an arbitrary nonempty set, and suppose that Fu is a σ -algebra of subsets of Ω for each u ∈ U . Then the intersection ∩u∈U Fu is also a σ -algebra. (d) The collection of all subsets of Ω is a σ -algebra. (e) If Fo is any collection of subsets of Ω then there is a smallest σ -algebra containing Fo (Hint: use (c) and (d).) Definitions. B (R) is the smallest σ -algebra of subsets of R which contains all sets of the form (−∞, a]. Sets in B (R) are called Borel sets. A real-valued random variable on a probability space (Ω, F , P ) is a real-valued function X on Ω such that {ω : X (ω ) ≤ a} ∈ F for any a ∈ R. (f ) If X is a random variable on (Ω, F , P ) and A ∈ B(R) then {ω : X (ω ) ∈ A} ∈ F . (Hint: Fix a random variable X . Let D be the collection of all subsets A of B (R) for which the conclusion is true. It is enough (why?) to show that D contains all sets of the form (−∞, a] and that D is a σ -algebra of subsets of R. 
You must use the fact that F is a σ -algebra.) Remark. By (f ), P {ω : X (ω ) ∈ A}, or P {X ∈ A} for short, is well defined for A ∈ B (R). Definition. A function g mapping R to R is called Borel measurable if {x : g (x) ∈ A} ∈ B (R) whenever A ∈ B (R). (g) If X is a real-valued random variable on (Ω, F , P ) and g is a Borel measurable function, then Y defined by Y = g (X ) is also a random variable on (Ω, F , P ). 31 32 Chapter 2 Convergence of a Sequence of Random Variables Convergence to limits is central to calculus. Limits are used to define derivatives and integrals. We wish to consider derivatives and integrals of random functions, so it is natural to begin by examining what it means for a sequence of random variables to converge. See the Appendix for a review of the definition of convergence for a sequence of numbers. 2.1 Four definitions of convergence of random variables Recall that a random variable X is a function on Ω for some probability space (Ω, F , P ). A sequence of random variables (Xn (ω ) : n ≥ 1) is hence a sequence of functions. There are many possible definitions for convergence of a sequence of random variables. One idea is to require Xn (ω ) to converge for each fixed ω . However, at least intuitively, what happens on an event of probability zero is not important. Thus, we use the following definition. Let (X1 , X2 , · · · ) be a sequence of random variables and let X be a random variable, all on the same probability space. Then Xn converges almost surely to X if P {limn→∞ Xn = X } = 1. Conceptually, to check almost sure convergence, one can first find the set {ω : limn→∞ Xn (ω ) = X (ω )} and then see if it has probability one. Each of the following three notations is used to denote that Xn converges to X almost surely lim Xn = X a.s. n→∞ Xn → X a.s. lim a.s. Xn = X. n→∞ 2 We say Xn converges to X in the mean square sense if E [Xn ] < +∞ for all n and limn→∞ E [(Xn − X )2 ] = 0. We denote mean square convergence by each of the following three notations: lim Xn = X m.s. n→∞ Xn → X m.s. lim m.s.Xn = X. n→∞ We say (Xn ) converges to X in probability if for any > 0, limn→∞ P {| X − Xn |≥ } = 0. Convergence in probability is denoted by each of the three notations. lim = X p. n→∞ Xn → X p. 33 lim p. Xn = X. n→∞ A fourth definition of convergence is given later in the section. Two examples illustrate the three definitions of convergence given. Example 2.1 Suppose X0 is a random variable with P {X0 ≥ 0} = 1. Suppose Xn = 6 + for n ≥ 1. For example, if for some ω it happens that X0 (ω ) = 12, then √ X1 (ω ) = 6 + 12 = 9.465 . . . √ X2 (ω ) = 6 + 9.46 = 9.076 . . . √ X3 (ω ) = 6 + 9.076 = 9.0127 . . . √ Xn−1 Examining Figure 2.1, it is clear that for any ω with X0 (ω ) > 0, the sequence of numbers Xn (ω ) converges to 9. Therefore, Xn → 9 a.s. The rate of convergence can be bounded as follows. Note 6+ x 3 y 6+ x 9 6 x=y 0 x 0 9 Figure 2.1: Graph of the functions 6 + that for each x ≥ 0, | 6 + √ x−9| ≤ |6+ | Xn (ω ) − 9 | ≤ | 6 + x 3 √ x and 6 + x . 3 − 9 |. Therefore, Xn−1 (ω ) −9| = 3 1 | Xn−1 (ω ) − 9 | 3 so that by induction on n, | Xn (ω ) − 9 | ≤ 3−n | X0 (ω ) − 9 | (2.1) 2 Next, we investigate m.s. convergence under the assumption that E [X0 ] < +∞. By the in2 ≤ 2a2 + 2b2 , it follows that equality (a + b) 2 E [(X0 − 9)2 ] ≤ 2(E [X0 ] + 81) Squaring and taking expectations on each side of (2.1) and using (2.2) thus yields 2 E [| Xn − 9 |2 ] ≤ 2 · 3−2n {E [X0 ] + 81} Therefore, Xn → 9 m.s. Finally, we investigate convergence in probability. 
Given > 0, {ω :| Xn (ω ) − 9 |≥ } ⊂ {ω :| X0 (ω ) − 9 |≥ 3n } so limn→∞ p. Xn = 9. 34 (2.2) Example 2.2 Let Ω be the unit interval [0, 1], let F be the Borel σ -algebra of Ω, and let P be a probability measure on F such that P {ω : a ≤ ω ≤ b} = b − a for 0 ≤ a < b ≤ 1. This probability space corresponds to the experiment of selecting a point from the interval [0, 1] with the uniform distribution. Using this particular choice of (Ω, F , P ) is excellent for generating examples, because random variables, being functions on Ω, can be simply specified by their graphs. For example, consider the random variable X pictured in Figure 2.2. The probability mass function for such X(! ) 3 2 1 ! 1 4 0 3 4 1 2 1 Figure 2.2: A random variable on (Ω, F , P ) 1 1 X is given by P {X = 1} = P {X = 2} = 4 and P {X = 3} = 2 . Now define a sequence of random variables (Xn ) on (Ω, F , P ) as shown in Figure 2.3. The variable X1 is identically one. X1( ! ) 1 ! 0 0 1 X2( ! ) X3( ! ) 1 1 ! 0 0 ! 0 1 0 X4( ! ) 1 X5( ! ) 1 X6( ! ) 1 ! 0 0 1 X7( ! ) 1 ! 0 0 1 ! 0 1 0 1 ! 0 0 1 Figure 2.3: A sequence of random variables on (Ω, F , P ) 1 The variables X2 and X3 are one on intervals of length 2 . The variables X4 , X5 , X6 , and X7 are one on intervals of length 1 . In general, each n ≥ 1 can be written as n = 2k + j where k = log2 n 4 and 0 ≤ j < 2k . The variable Xn is one on the length 2−k interval (j 2−k , (j + 1)2−k ]. In what sense(s) does (Xn ) converge? To investigate a.s. convergence, fix an arbitrary value for ω . Then for each k ≥ 1, there is one value of n with 2k ≤ n < 2k+1 such that Xn (ω ) = 1, and Xn (ω ) = 0 for all other n. Therefore, 35 limn→∞ Xn (ω ) does not exist. That is P { lim Xn exists} = 0 n→∞ so that Xn does not converge a.s. However, for large n, P {Xn = 0} is close to one. This suggests that Xn converges to the zero random variable, written 0, in some sense. Testing for mean square convergence we compute E [| Xn − 0 |2 ] = 2− so that Xn → 0 m.s. Similarly, for any log2 n with 0 < < 1, P {| Xn − 0 |≥ } = 2− log2 n Therefore, limn→∞ p. Xn = 0. Three propositions will be given concerning convergence definitions. Prop osition 2.1.1 If Xn → X m.s. then Xn → X p. Pro of Suppose Xn → X m.s. and let > 0. By Chebychev’s inequality, P {| X − Xn |≥ } ≤ E [| X − Xn |2 ] (2.3) 2 The right side of (2.3), and hence the left side of (2.3), converges to zero as n goes to infinity. Therefore Xn → X p. as n → ∞. Prop osition 2.1.2 If Xn → X a.s. then Xn → X p. Pro of Suppose Xn → X a.s. and let > 0. Define a sequence of events An by An = {ω :| Xn (ω ) − X (ω ) |< } We only need to show that P [An ] → 1. Define Bn by Bn = {ω :| Xk (ω ) − X (ω ) |< for all k ≥ n} Note that Bn ⊂ An and B1 ⊂ B2 ⊂ · · · so limn→∞ P [Bn ] = P [B ] where B = ∞ n=1 Bn . Clearly B ⊃ {ω : lim Xn (ω ) = X (ω )} n→∞ so 1 = P [B ] = limn→∞ P [Bn ]. Since P [An ] is squeezed between P [Bn ] and 1, limn→∞ P [An ] = 1, so Xn → X p. Prop osition 2.1.3 If for some finite L, P {| Xn |≤ L} = 1 for al l n, and if Xn → X p., then Xn → X m.s. 36 Pro of Suppose Xn → X p. Then for any > 0, P {| X |≥ L + } ≤ P {| X − Xn |≥ } → 0 so that P {| X |≥ L + } = 0 for every > 0. Thus, P {| X |≤ L} = 1, so that P {| X − Xn |2 ≤ 4L2 } = 1. Therefore, with probability one, for any > 0, | X − Xn |2 ≤ 4L2 I{|X −Xn |≥ } + 2 so E [| X − Xn |2 ] ≤ 4L2 P {| X − Xn |≥ } + 2 Thus, for n large enough, E [| X − Xn |2 ] ≤ 2 2 . Since was arbitrary, Xn → X m.s. 
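Example 2.2 can also be explored numerically. The sketch below (Python with numpy assumed; the function name and the choice ω = 0.3 are illustrative only) builds the sequence of Figure 2.3 and confirms that Xn (ω ) equals one for exactly one n in each dyadic block [2k , 2k+1 ), so the sequence cannot converge almost surely, even though P {Xn = 1} = 2−⌊log2 n⌋ tends to zero.

import numpy as np

# X_n is the indicator of the dyadic interval (j 2^{-k}, (j+1) 2^{-k}],
# where n = 2^k + j with k = floor(log2 n) and 0 <= j < 2^k.
def X(n, omega):
    k = int(np.floor(np.log2(n)))
    j = n - 2 ** k
    return 1.0 if j * 2.0 ** (-k) < omega <= (j + 1) * 2.0 ** (-k) else 0.0

omega = 0.3                                  # a fixed sample point
values = [X(n, omega) for n in range(1, 2 ** 12)]
print(sum(values))                           # exactly 12: one hit for each block k = 0, ..., 11

# P{X_n = 1} is the length of the dyadic interval, which tends to zero,
# which is why X_n -> 0 in probability and in the mean square sense.
print([2.0 ** (-int(np.floor(np.log2(n)))) for n in (4, 64, 1024)])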
The following two theorems, stated without proof, are useful for determining convergence of expectations for random variables converging in the a.s. or p. sense. Theorem 2.1.4 (Monotone convergence theorem) Let X1 , X2 , . . . be a sequence of random variables such that E [X1 ] > −∞ and such that X1 (ω ) ≤ X2 (ω ) ≤ · · · . Then the limit X∞ given by X∞ (ω ) = limn→∞ Xn (ω ) for al l ω is an extended random variable (with possible value ∞) and E [Xn ] → E [X∞ ] as n → ∞. Theorem 2.1.5 (Dominated convergence theorem) If X1 , X2 , . . . is a sequence of random variables and X∞ and Y are random variables such that the fol lowing three conditions hold: (i) Xn → X∞ p. as n → ∞ (ii) P {| Xn |≤ Y } = 1 for al l n (iii) E [Y ] < +∞ then E [Xn ] → E [X∞ ]. Example 2.3 Let W0 , W1 , . . . be independent, normal random variables with mean 0 and variance 1. Let X−1 = 0 and Xn = (.9)Xn−1 + Wn n≥0 In what sense does Xn converge as n goes to infinity? For fixed ω , the sequence of numbers X0 (ω ), X1 (ω ), . . . might appear as in Figure 2.4. Intuitively speaking, Xn persistently moves. We claim that Xn does not converge in p. (so also not a.s. and not m.s.). Here is a proof of the claim. Examination of a table for the normal distribution yields that P {Wn ≥ 2} = P {Wn ≤ −2} ≥ 0.02. Then P {| Xn − Xn−1 |≥ 2} ≥ P {Xn−1 ≥ 0, Wn ≤ −2} + P {Xn−1 < 0, Wn ≥ 2} = P {Xn−1 ≥ 0}P {Wn ≤ −2} + P {Xn−1 < 0}P {Wn ≥ 2} = P {Wn ≥ 2} ≥ 0.02 37 Xk k Figure 2.4: A typical sample sequence of X . Therefore, for any random variable X , P {| Xn − X |≥ 1} + P {| Xn−1 − X |≥ 1} ≥ P {| Xn − X |≥ 1 or | Xn−1 − X |≥ 1} ≥ P {| Xn − Xn−1 |≥ 2} ≥ 0.02 so P {| Xn − X |≥ 1} does not converge to zero as n → ∞. So Xn does not converge in probability to any random variable X . The claim is proved. Although Xn does not converge in p. (or a.s. or m.s.) it nevertheless seems to asymptotically settle into an equilibrium. To probe this point further, let’s find the distribution of Xn for each n. X0 = W0 is N (0, 1) X1 = (.9)X0 + W1 is N (0, 1.81) X2 = (.9)X1 + W2 is N (0, (.81)(1.81 + 1)) 2 2 2 2 2 In general, Xn is N (0, σn ) where the variances satisfy the recursion σn = (0.81)σn−1 + 1 so σn → σ∞ 2 = 1 = 5.263. Therefore, the distribution of X converges. We’ll define convergence in where σ∞ 0.19 n distribution after one more example. Example 2.4 Let U be uniformly distributed on the interval [0, 1], and for n ≥ 1 let Xn = (−1)n U . Let X denote the random variable such that X = 0 for all ω . It is easy to verify that n Xn → X a.s.,and p. Does the CDF of Xn converge to the CDF of X ? The CDF of Xn is graphed in Figure 2.5. The CDF FXn (x) converges to 0 for x < 0 and to one for x > 0. However, FXn (0) FX n F X n even n odd n !1 0 01 n n Figure 2.5: CDF of Xn = −1 n n alternates between 0 and 1 and hence does not converge to anything. In particular, it doesn’t converge to FX (0). We now state a fourth definition of convergence, namely convergence in distribution. Convergence in distribution is fundamentally different from the other three types of convergence already 38 defined because it does not depend on the joint distribution of the Xn ’s. In fact, the Xn ’s don’t even need to be on the same probability space. Let (X1 , X2 , . . .) be a sequence of random variables and let X be a random variable. Then (Xn ) converges to X in distribution if lim FXn (x) = FX (x) at all continuity points x of FX n→∞ We denote convergence in distribution by each of the following three notations: lim Xn = X d. Xn → X d. n→∞ lim d. 
Xn = X n→∞ While the example showed that convergence in probability (or stronger convergence) does not imply convergence of the CDF’s everywhere, it does imply convergence at continuity points, as shown by the following proposition. Prop osition 2.1.6 If Xn → X p. then Xn → X d. Pro of Assume Xn → X p. Select any continuity point x of FX . It must be proved that limn→∞ FXn (x) = FX (x). Let > 0. Then there exists δ > 0 so that FX (x − δ ) ≥ FX (x) − 2 . (See Figure 2.6) F (x) X F (x)! " X 2 x!! x Figure 2.6: A CDF at a continuity point. Now {Xn ≤ x} ∪ {| Xn − X |≥ δ } ⊃ {X ≤ x − δ } so P {Xn ≤ x} + P {| Xn − X |≥ δ ]} ≥ P {X ≤ x − δ } or FXn (x) ≥ FX (x − δ ) − P {| Xn − X |≥ δ }. For all n sufficiently large, P {| Xn − X |≥ δ } ≤ 2 . So for all n sufficiently large, FXn (x) ≥ FX (x) − Similarly, for all n sufficiently large, FXn (x) ≤ FX (x) + . So for all n sufficiently large, | FXn (x) − FX (x) |≤ . Convergence is proved. Another way to investigate convergence in distribution is through the use of characteristic functions. 39 Prop osition 2.1.7 Let (Xn ) be a sequence of random variables and let X be a random variable. Then the fol lowing are equivalent: (i) Xn → X d. (ii) E [f (Xn )] → E [f (X )] for any bounded continuous function f . (iii) ΦXn (u) → ΦX (u) for each u ∈ R (i.e. pointwise convergence of characteristic functions) 2.2 Cauchy criteria for convergence of random variables It is important to be able to show that limits exist even if the limit value is not known. For example, it is useful to determine if the sum of an infinite series of numbers is convergent without needing to know the value of the sum. The Cauchy criteria gives a simple yet general condition that implies convergence of a deterministic sequence (see the appendix). The following proposition, stated without proof, gives a similar criteria for random variables. Prop osition 2.2.1 (Cauchy criteria for random variables) Let (Xn ) be a sequence of random variables on a probability space (Ω, F P ). (a) Xn converges a.s. to some random variable if and only if P {ω : lim m,n→∞ | Xm (ω ) − Xn (ω ) |= 0} = 1 (b) Xn converges in probability to some random variable if and only if for every > 0, lim P {| Xm − Xn |≥ } = 0 m,n→∞ 2 (c) Xn converges in m.s. sense to some random variable if and only if E [Xn < +∞ for al l n and lim E [(Xm − Xn )2 ] = 0 m,n→∞ Proposition 2.2.1(c), a Cauchy criteria for mean square convergence, is used extensively in these notes. In the next proposition a more convenient form of the Cauchy criteria for m.s. convergence is derived. 2 Prop osition 2.2.2 Let (Xn ) be a sequence of random variables with E [Xn ] < +∞ for each n. Then there exists a random variable X such that limn→∞ m.s.Xn = X if and only if the limit limm,n→∞ E [Xn Xm ] exists and is finite. Furthermore, if limn→∞ m.s. Xn = X , then lim E [Xn Xm ] = E [X 2 ] < +∞ m,n→∞ 40 Pro of Then The “if” part is proved first. Suppose limm,n→∞ E [Xn Xm ] = c for a finite constant c. E (Xn − Xm )2 = 2 2 E [Xn ] − 2E [Xn Xm ] + E [Xm ] → c − 2c + c = 0 as m, n → ∞ Thus, Xn is Cauchy in the m.s. sense, so limm,n→∞ m.s. Xn = X for some random variable X . To prove the “only if” part, suppose lim m.s.n→∞ Xn = X . Observe next that E [Xm Xn ] = E [(X + (Xm − X ))(X + (Xn − X ))] = E [X 2 + (Xm − X )X + X (Xn − X ) + (Xm − X )(Xn − X )] By the Cauchy-Schwarz inequality, 1 1 E [| (Xm − X )X |] ≤ E [(Xm − X )2 ] 2 E [X 2 ] 2 → 0 1 1 E [| (Xm − X )(Xn − X ) |] ≤ E [(Xm − X )2 ] 2 E [(Xn − X )2 ] 2 → 0 and similarly E [| X (Xn − X ) |] → 0. 
Thus E Xm Xn → E X 2 . This establishes both the “only if” part of the proposition and the last statement of the proposition. The proof of the proposition is complete. 2 Corollary 2.2.3 Suppose Xn → X m.s. and Yn → Y m.s. as n → ∞. Then E [Xn Yn ] → E [X Y ] and E [Xn ] → E [X ] as n → ∞. Pro of By the inequality (a + b)2 ≤ 2a2 + 2b2 , it follows that Xn + Yn → X + Y m.s. as n → ∞. 2 Proposition 2.2.2 therefore implies that E [(Xn + Yn )2 ] → E [(X + Y )2 ], E [Xn ] → E [X 2 ], and 2 ] → E [Y 2 ]. Since X Y = ((X + Y )2 − X 2 − Y 2 )/2, the first part of the corollary follows. E [Yn nn n n n n The second part follows from the first part by taking Yn = 1 for all n. Example 2.5 This example illustrates the use of Proposition 2.2.2. Let X1 , X2 , . . . be mean zero random variables such that 1 if i = j E [Xi Xj ] = 0 else Does the series ∞ Xk converge in the mean square sense to a random variable with a finite second k=1 k moment? Let Yn = n=1 Xk . The question is whether Yn converges in the mean square sense to k k a random variable with finite second moment. The answer is yes if and only if limm,n→∞ E [Ym Yn ] exists and is finite. Observe that min(m,n) E [Ym Yn ] = → k=1 ∞ k=1 ∞1 1 x2 d x 1 as m, n → ∞ k2 This sum is smaller than 1 + = 2 < ∞. ∞ Xk k=1 k indeed converges in the m.s. sense. 1 In fact, the sum is equal to is the main point here. π2 , 6 1 k2 1 Therefore, by Proposition 2.2.2, the series but the technique of comparing the sum to an integral to show the sum is finite 41 2.3 Limit theorems for sequences of indep endent random variables Sums of many independent random variables often have distributions that can be characterized by a small number of parameters. For engineering applications, this represents a low complexity method for describing the random variables. An analogous tool is the Taylor series approximation. A continuously differentiable function f can be approximated near zero by the first order Taylor’s approximation f (x) ≈ f (0) + xf (0) A second order approximation, in case f is twice continuously differentiable, is f (x) ≈ f (0) + xf (0) + x2 f (0) 2 Bounds on the approximation error are given by Taylor’s theorem, found in Section 8.2. In essence, Taylor’s approximation lets us represent the function by the numbers f (0), f (0) and f (0). We shall see that the law of large numbers and central limit theorem can be viewed not just as analogies of the first and second order Taylor’s approximations, but actually as consequences of them. Lemma 2.3.1 If xn → x as n → ∞ then (1 + xn n n) → ex as n → ∞. Pro of The basic idea is to note that (1 + s)n = exp(n log(1 + s)), and apply Taylor’s theorem to log(1+s) about the point s = 0. The details are given next. Since ln(1+s)|s=0 = 0, ln(1+s) |s=0 = 1, 1 and ln(1 + s) = − (1+s)2 , the mean value form of Taylor’s Theorem (see the appendix) yields that 2 if s > −1, then ln(1 + s) = s − 2(1s y)2 , where y lies in the closed interval with endpoints 0 and s. + Thus, if s ≥ 0, then y ≥ 0, so that s− s2 ≤ ln(1 + s) ≤ s 2 if s ≥ 0. 1 More to the point, if it is only known that s ≥ − 2 , then y ≥ − 1 , so that 2 s − 2s2 ≤ ln(1 + s) ≤ s Letting s = xn n, if s ≥ − 1 2 multiplying through by n, and applying the exponential function, yields that exp(xn − 2x2 xn n n ) ≤ (1 + ) ≤ exp(xn ) n n if xn ≥ − n 2 If xn → x as n → ∞ then the condition xn > − n holds for all large enough n, and xn − 2 yielding the desired result. 
2 x2 n n → x, 2 A sequence of random variables (Xn ) is said to be independent and identically distributed (iid) if the Xi ’s are mutually independent and identically distributed. Prop osition 2.3.2 (Law of Large Numbers) Suppose that X1 , X2 , . . . is a sequence of random variables such that each Xi has finite mean m. Let Sn = X1 + · · · + Xn . Then 42 (a) → m m.s. (hence also p., d.) if for some constant c, Var(Xi ) ≤ c for al l i, and Cov(Xi , Xj ) = 0 for i = j (i.e. if the variances are bounded and the Xi ’s are uncorrelated). (b) Sn n → m p. if X1 , X2 , . . . are iid. (This version is the Weak Law of Large Numbers.) (c) Sn n → m a.s. if X1 , X2 , . . . are iid. (This version is the Strong Law of Large Numbers.) Sn n We give a proof of (a) and (b), but prove (c) only under an extra condition. Suppose the conditions of (a) are true. Then E Sn −m n 2 Sn n = Var 1 n2 = = 1 Var(Sn ) n2 Cov(Xi , Xj ) = i j 1 n2 i Var(Xi ) ≤ c n Therefore Sn → m m.s. as n → ∞. n Turn next to part (b). If in addition to the conditions of (b) it is assumed that Var(X1 ) < +∞, then the conditions of part (a) are true. Since mean square convergence implies convergence in probability, the conclusion of part (b) follows. An extra credit problem shows how to use the same approach to verify (b) even if Var(X1 ) = +∞. Here a second approach to proving (b) is given. The characteristic function of Xi is given by n E exp j uXi n u = E exp j ( )Xi n u = ΦX ( ) n where ΦX denotes the characteristic function of X1 . Since the characteristic function of the sum of independent random variables is the product of the characteristic functions, Φ Sn (u) = ΦX n u n n . Since E (X1 ) = m it follows that ΦX is differentiable with ΦX (0) = 1, ΦX (0) = j m and Φ is continuous. By Taylor’s theorem, for any u fixed, ΦX u n = 1+ uΦX (un ) n u for some un between 0 and n for all n. Since Φ (un ) → j m as n → ∞, Lemma 2.3.1 yields u ΦX ( n )n → exp(j um) as n → ∞. Note that exp(j um) is the characteristic function of a random variable equal to m with probability one. Since pointwise convergence of characteristic functions implies convergence in distribution, it follows that limn→∞ d. Sn = m. However, convergence in n distribution to a constant implies convergence in probability, so (b) is proved. 4 Part (c) is proved under the additional assumption that E [X1 ] < +∞. Without loss of generality 4 we assume that E X1 = 0. Consider expanding Sn . There are n terms of the form Xi4 and 3n(n − 1) 2 X 2 with 1 ≤ i, j ≤ n and i = j . The other terms have the form X 3 X , X 2 X X terms of the form Xi j ij ijk or Xi Xj Xk Xl for distinct i, j, k , l, and these terms have mean zero. Thus, 4 4 2 E [Sn ] = nE [X1 ] + 3n(n − 1)E [X1 ]2 43 Let Y = ∞ ( Sn )4 . The value of Y is well defined but it is a priori possible that Y (ω ) = +∞ for n=1 n some ω . However, by the monotone convergence theorem, the expectation of the sum of nonnegative random variables is the sum of the expectations, so that ∞ E [Y ] = E n=1 ∞ 4 Sn n = n=1 4 2 nE [X1 ] + 3n(n − 1)E [X1 ]2 < +∞ n4 Therefore, P {Y < +∞} = 1. However, {Y < +∞} is a subset of the event of convergence ( {w : Snnw) → 0 as n → ∞}, so the event of convergence also has probability one. Thus, part (c) under the extra fourth moment condition is proved. Prop osition 2.3.3 (Central Limit Theorem) Suppose that X1 , X2 , . . . are i.i.d., each with mean µ and variance σ 2 . Let Sn = X1 + · · · + Xn . Then the normalized sum Sn − nµ √ n converges in distribution to the N (0, σ 2 ) distribution as n → ∞. 
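Before the proof, here is how the two limit theorems look in a quick simulation (an illustrative sketch only, Python with numpy assumed; the Exp(1) summands and the sample sizes are arbitrary choices). The sample mean Sn /n concentrates around µ, while the normalized sum settles into an approximately Gaussian shape.

import numpy as np

rng = np.random.default_rng(0)

# Exp(1) summands: mean mu = 1 and variance sigma^2 = 1.
n, trials = 10_000, 5_000
S = rng.exponential(1.0, size=(trials, n)).sum(axis=1)

# Law of large numbers: S_n / n is close to mu in most trials.
# 0.02 is two standard deviations of S_n / n, so roughly 5% of trials exceed it.
print(np.mean(np.abs(S / n - 1.0) > 0.02))

# Central limit theorem: (S_n - n mu) / sqrt(n) is approximately N(0, sigma^2).
Z = (S - n) / np.sqrt(n)
print(Z.mean(), Z.var())          # close to 0 and 1
print(np.mean(Z <= 1.0))          # close to Phi(1), about 0.841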
Pro of Without loss of generality, assume that µ = 0. Then the characteristic function of the S u normalized sum √n is given by ΦX ( √n )n , where ΦX denotes the characteristic function of X1 . n Since X1 has mean 0 and finite second moment σ 2 , it follows that ΦX is twice differentiable with ΦX (0) = 1, ΦX (0) = 0, ΦX (0) = −σ 2 , and ΦX is continuous. By Taylor’s theorem, for any u fixed, ΦX for some un between 0 and 22 u √ n u √ n = 1+ u2 Φ (un ) 2n for all n. Since Φ (un ) → −σ 2 as n → ∞, Lemma 2.3.1 yields σ u ΦX ( √n )n → exp(− u 2 ) as n → ∞. Since pointwise convergence of characteristic functions implies convergence in distribution, the proposition is proved. 2.4 Convex functions and Jensen’s Inequality Let ϕ be an extended function on R which possibly has ϕ(x) = +∞ for some x but suppose ϕ(x) > −∞ for all x. Then ϕ is said to be convex if for any a, b and λ with a < b and 0 ≤ λ ≤ 1 ϕ(aλ + b(1 − λ)) ≤ λϕ(a) + (1 − λ)ϕ(b) This means that the graph of ϕ on any interval [a, b] lies below the line segment equal to ϕ at the endpoints of the interval. A twice differentiable function ϕ is convex if and only if ϕ ≥ 0. Examples of convex functions include: ax2 + bx + c λx e for constants for λ constant, ϕ(x) = a, b, c with a ≥ 0, − ln x x > 0 +∞ x ≤ 0, 44 x ln x x > 0 0 x=0 ϕ(x) = +∞ x < 0. Theorem 2.4.1 (Jensen’s Inequality) Let ϕ be a convex function and let X be a random variable such that E [X ] is finite. Then E [ϕ(X )] ≥ ϕ(E [X ]). For example, Jensen’s inequality implies that E [X 2 ] ≥ E [X ]2 , which also follows from the fact Var(X ) = E [X 2 ] − E [X ]2 . Pro of. Since ϕ is convex, there is a tangent to the graph of ϕ at E [X ], meaning there is a function L of the form L(x) = a + bx such that ϕ(x) ≥ L(x) for all x and ϕ(E [X ]) = L(E [X ]). See the illustration in Figure 2.7. Therefore E [ϕ(X )] ≥ E [L(X )] = L(E [X ]) = ϕ(E [X ]), which establishes the theorem. !(x) L(x) x E[X] Figure 2.7: A convex function and a tangent linear function A function ϕ is called concave if −ϕ is convex. If ϕ is concave then E [ϕ(X )] ≤ ϕ(E [X ]). 2.5 Chernoff b ound and large deviations theory Let X1 , X2 , . . . be an iid sequence of random variables with finite mean µ, and let Sn = X1 +· · ·+Xn . The weak law of large numbers implies that for fixed a with a > µ, P { Sn ≥ a} → 0 as n → ∞. In n case the Xi ’s have finite variance, the central limit theorem offers a refinement of the law of large numbers, by identifying the limit of P { Sn ≥ an }, where (an ) is a sequence that converges to µ in n c the particular manner: an = µ + √n . For fixed c, the limit is not zero. One can think of the central limit theorem, therefore, to concern “normal” deviations of Sn from its mean. Large deviations theory, by contrast, addresses P { Sn ≥ a} for a fixed, and in particular it identifies how quickly n P { Sn ≥ a} converges to zero as n → ∞. We shall first describe the Chernoff bound, which is a n e simple upper bound on P { Sn ≥ a}. Then Cram´r’s theorem, to the effect that the Chernoff bound n is in a certain sense tight, is stated. The log moment generating function of X1 is defined by M (θ) = ln E [eθX1 ]. Since eθX1 is a positive random variable, the expectation, and hence M (θ) itself, is well-defined for all real values of θ, with possible value +∞. The Chernoff bound is simply given as P Sn ≥a n ≤ exp(−n[θa − M (θ)]) 45 for θ ≥ 0 (2.4) The bound (2.4), like the Chebychev inequality, is a consequence of Markov’s inequality applied to an appropriate function. 
For θ > 0: P Sn ≥a n = P {eθ(X1 +···+Xn −na) ≥ 1} ≤ E [eθ(X1 +···+Xn −na) ] = E [eθX1 ]n e−nθa = exp(−n[θa − M (θ)]) To make the best use of the Chernoff bound we can optimize the bound by selecting the best θ. Thus, we wish to select θ ≥ 0 to maximize aθ − M (θ). In general the moment generating function M is convex. Note that M (0) = 0. Let us suppose that M (θ) is finite for some θ > 0. Then E [X1 eθX1 ] E [eθX1 ] M (0) = = E [X1 ] θ=0 The sketch of a typical case is shown in Figure 2.8. Figure 2.8 also shows the line of slope a. M( !) l(a) ! a! Figure 2.8: A moment generating function and a line of slope a. Because of the assumption that a > E [X1 ], the line lies strictly above M (θ) for small enough θ and below M (θ) for all θ < 0. Therefore, the maximum value of θa − M (θ) over θ ≥ 0 is equal to l(a), defined by l(a) = max −∞<θ <∞ θa − M (θ) (2.5) Thus, the Chernoff bound in its optimized form, is P Sn ≥a n ≤ exp(−nl(a)) a > E [X1 ] There does not exist such a clean lower bound on the large deviation probability P { Sn ≥ a}, n but by the celebrated theorem of Cram´r stated next without proof, the Chernoff bound gives the e right exponent. Theorem 2.5.1 (Cram´r’s Theorem) Suppose E [X1 ] is finite, and that E [X1 ] < a. Then for > 0 e there exists a number n such that P Sn ≥a n ≥ exp(−n(l(a) + )) 46 for al l n ≥ n . Combining this bound with the Chernoff inequality yields 1 ln P n→∞ n Sn ≥a n lim = −l(a) In particular, if l(a) is finite (equivalently if P {X1 ≥ a} > 0) then Sn ≥a n P where ( n ) is a sequence with = exp(−n(l(a) + ≥ 0 and limn→∞ n n n )) = 0. Similarly, if a < E [X1 ] and l(a) is finite, then 1 P n where n is a sequence with n Sn ≥a n = exp(−n(l(a) + ≥ 0 and limn→∞ Sn ∈ da n P n n )) = 0. Informally, we can write for n large: ≈ e−nl(a) da (2.6) Example 2.6 let X1 , X2 , . . . be independent and exponentially distributed with parameter λ = 1. Then ∞ M (θ) = ln − ln(1 − θ) θ < 1 +∞ θ≥1 eθx e−x dx = 0 See Figure 2.9 + !! + !! l(a) M(" ) a " 0 0 1 1 Figure 2.9: M (θ) and l(a) for an Exp(1) random variable Therefore, for any a ∈ R, l(a) = max{aθ − M (θ)} θ = max{aθ + ln(1 − θ)} θ <1 If a ≤ 0 then l(a) = +∞. On the other hand, if a > 0 then setting the derivative of aθ + ln(1 − θ) 1 to 0 yields the maximizing value θ = 1 − a , and therefore l(a) = a − 1 − ln(a) a > 0 +∞ a≤0 The function l is shown in Figure 2.9. 47 Example 2.7 Let X1 , X2 , . . . be independent Bernoulli random variables with parameter p satisfying 0 < p < 1. Thus Sn has the binomial distribution. Then M (θ) = ln(peθ + (1 − p)), which has asymptotic slope 1 as θ → +∞ and converges to a constant as θ → −∞. Therefore, l(a) = +∞ if a(1−p) a > 1 or if a < 0. For 0 ≤ a ≤ 1, we find aθ − M (θ) is maximized by θ = ln( p(1−a) ), leading to l(a) = a ln( a ) + (1 − a) ln( 1−a ) 0 ≤ a ≤ 1 p 1− p +∞ else See Figure 2.10. + !! + !! M(" ) l(a) a " 0 0 p 1 Figure 2.10: M (θ) and l(a) for a Bernoulli distribution 48 2.6 Problems 2.1. Limits and infinite sums for deterministic sequences (a) Using the definition of a limit, show that limθ→0 θ(1 + cos(θ)) = 0. (b) Using the definition of a limit, show that limθ→0,θ>0 1+cos(θ) = +∞. θ (c) Determine whether the following sum is finite, and justify your answer: √ ∞ 1+ n n=1 1+n2 . 2.2. The limit of the pro duct is the pro duct of the limits Consider two (deterministic) sequences with finite limits: limn→∞ xn = x and limn→∞ yn = y . (a) Prove that the sequence (yn ) is bounded. (b) Prove that limn→∞ xn yn = xy . 
(Hint: Note that xn yn − xy = (xn − x)yn + x(yn − y ) and use part (a)). 2.3. On convergence of deterministic sequences and functions 2 (a) Let xn = 8n n+n for n ≥ 1. Prove that limn→∞ xn = 8 . 3 32 (b) Suppose fn is a function on some set D for each n ≥ 1 and f , and suppose f is also a function on D. Then fn is defined to converge to f uniformly if for any > 0, there exists an n such that |fn (x) − f (x)| ≤ for all x ∈ D whenever n ≥ n . A key point is that n does not depend on x. Show that the functions fn (x) = xn on the semi-open interval [0, 1) do not converge uniformly to the zero function. (c) The supremum of a function f on D, written supD f , is the least upper bound of f . Equivalently, supD f satisfies supD f ≥ f (x) for all x ∈ D, and given any c < supD f , there is an x ∈ D such that f (x) ≥ c. Show that | supD f − supD g | ≤ supD |f − g |. Conclude that if fn converges to f uniformly on D, then supD fn converges to supD f . 2.4. Convergence of sequences of random variables Let Θ be uniformly distributed on the interval [0, 2π ]. In which of the four senses (a.s., m.s., p., d.) do each of the following two sequences converge. Identify the limits, if they exist, and justify your answers. (a) (Xn : n ≥ 1) defined by Xn = cos(nΘ). (b) (Yn : n ≥ 1) defined by Yn = |1 − Θ |n . π 2.5. Convergence of random variables on (0,1] Let Ω = (0, 1], let F be the Borel σ algebra of subsets of (0, 1], and let P be the probability measure on F such that P ([a, b]) = b − a for 0 < a ≤ b ≤ 1. For the following two sequences of random variables on (Ω, F , P ), find and sketch the distribution function of Xn for typical n, and decide in which sense(s) (if any) each of the two sequences converges. (a) Xn (ω ) = nω − nω , where x is the largest integer less than or equal to x. (b) Xn (ω ) = n2 ω if 0 < ω < 1/n, and Xn (ω ) = 0 otherwise. 2.6. Convergence of a sequence of discrete random variables Let Xn = X + (1/n) where P [X = i] = 1/6 for i = 1, 2, 3, 4, 5 or 6, and let Fn denote the distribution function of Xn . (a) For what values of x does Fn (x) converge to F (x) as n tends to infinity? (b) At what values of x is FX (x) continuous? 49 (c) Does the sequence (Xn ) converge in distribution to X ? 2.7. Convergence in distribution to a nonrandom limit Let (Xn , n ≥ 1) be a sequence of random variables and let X be a random variable such that P [X = c] = 1 for some constant c. Prove that if limn→∞ Xn = X d., then limn→∞ Xn = X p. That is, prove that convergence in distribution to a constant implies convergence in probability to the same constant. 2.8. Convergence of a minimum Let U1 , U2 , . . . be a sequence of independent random variables, with each variable being uniformly distributed over the interval [0, 1], and let Xn = min{U1 , . . . , Un } for n ≥ 1. (a) Determine in which of the senses (a.s., m.s., p., d.) the sequence (Xn ) converges as n → ∞, and identify the limit, if any. Justify your answers. (b) Determine the value of the constant θ so that the sequence (Yn ) defined by Yn = nθ Xn converges in distribution as n → ∞ to a nonzero limit, and identify the limit distribution. 2.9. Convergence of a pro duct Let U1 , U2 , . . . be a sequence of independent random variables, with each variable being uniformly distributed over the interval [0, 2], and let Xn = U1 U2 · · · Un for n ≥ 1. (a) Determine in which of the senses (a.s., m.s., p., d.) the sequence (Xn ) converges as n → ∞, and identify the limit, if any. Justify your answers. 
(b) Determine the value of the constant θ so that the sequence (Yn ) defined by Yn = nθ log(Xn ) converges in distribution as n → ∞ to a nonzero limit. 2.10. Limits of functions of random variables Let g and h be functions defined as follows: −1 if x ≤ −1 x if − 1 ≤ x ≤ 1 g (x) = 1 if x ≥ 1 h(x) = −1 if x ≤ 0 1 if x > 0. Thus, g represents a clipper and h represents a hard limiter. Suppose that (Xn : n ≥ 0) is a sequence of random variables, and that X is also a random variable, all on the same underlying probability space. Give a yes or no answer to each of the four questions below. For each yes answer, identify the limit and give a justification. For each no answer, give a counterexample. (a) If limn→∞ Xn = X a.s., then does limn→∞ g (Xn ) a.s. necessarily exist? (b) If limn→∞ Xn = X m.s., then does limn→∞ g (Xn ) m.s. necessarily exist? (c) If limn→∞ Xn = X a.s., then does limn→∞ h(Xn ) a.s. necessarily exist? (d) If limn→∞ Xn = X m.s., then does limn→∞ h(Xn ) m.s. necessarily exist? 2.11. Sums of i.i.d. random variables, I A gambler repeatedly plays the following game: She bets one dollar and then there are three possible outcomes: she wins two dollars back with probability 0.4, she gets just the one dollar back with probability 0.1, and otherwise she gets nothing back. Roughly what is the probability that she is ahead after playing the game one hundred times? 50 2.12. Sums of i.i.d. random variables, I I Let X1 , X2 , . . . be independent random variable with P [Xi = 1] = P [Xi = −1] = 0.5. (a) Compute the characteristic function of the following random variables: X1 , Sn = X1 + · · · + Xn , √ and Vn = Sn / n. (b) Find the pointwise limits of the characteristic functions of Sn and Vn as n → ∞. (c) In what sense(s), if any, do the sequences (Sn ) and (Vn ) converge? 2.13. Sums of i.i.d. random variables, I I I Fix λ > 0. For each integer n > λ, let X1,n , X2,n , . . . , Xn,n be independent random variables such that P [Xi,n = 1] = λ/n and P [Xi,n = 0] = 1 − (λ/n). Let Yn = X1,n + X2,n + · · · + Xn,n . (a) Compute the characteristic function of Yn for each n. (b) Find the pointwise limit of the characteristic functions as n → ∞ tends. The limit is the characteristic function of what probability distribution? (c) In what sense(s), if any, does the sequence (Yn ) converge? 2.14. Limit b ehavior of a sto chastic dynamical system Let W1 , W2 , . . . be a sequence of independent, N (0, 0.5) random variables. Let X0 = 0, and define 2 X1 , X2 , . . . recursively by Xk+1 = Xk + Wk . Determine in which of the senses (a.s., m.s., p., d.) the sequence (Xn ) converges as n → ∞, and identify the limit, if any. Justify your answer. 2.15. Applications of Jensen’s inequality Explain how each of the inequalties below follows from Jensen’s inequality. Specifically, identify the convex function and random variable used. 1 (a) E [ X ] ≥ E [1 ] , for a positive random variable X with finite mean. X (b) E [X 4 ] ≥ E [X 2 ]2 , for a random variable X with finite second moment. (c) D(f |g ) ≥ 0, where f and g are positive probability densities on a set A, and D is the divergence ( x) distance defined by D(f |g ) = A f (x) log f (x) dx. (The base used in the logarithm is not relevant.) g 2.16. Convergence analysis of successive averaging Let U1 , U2 , ... be independent random variables, each uniformly distributed on the interval [0,1]. Let X0 = 0 and X1 = 1, and for n ≥ 1 let Xn+1 = (1 − Un )Xn + Un Xn−1 . 
Note that given Xn−1 and Xn , the variable Xn+1 is uniformly distributed on the interval with endpoints Xn−1 and Xn . (a) Sketch a typical sample realization of the first few variables in the sequence. (b) Find E [Xn ] for all n. (c) Show that Xn converges in the a.s. sense as n goes to infinity. Explain your reasoning. (Hint: Let Dn = |Xn − Xn−1 |. Then Dn+1 = Un Dn , and if m > n then |Xm − Xn | ≤ Dn .) 2.17. Understanding the Markov inequality Suppose X is a random variable with E [X 4 ] = 30. (a) Derive an upper bound on P [|X | ≥ 10]. Show your work. (b) (Your bound in (a) must be the best possible in order to get both parts (a) and (b) correct). Find a distribution for X such that the bound you found in part (a) holds with equality. 2.18. Mean square convergence of a random series The sum of infinitely many random variables, X1 + X2 + · · · is defined as the limit as n tends to 51 infinity of the partial sums X1 + X2 + · · · + Xn . The limit can be taken in the usual senses (in probability, in distribution, etc.). Suppose that the Xi are mutually independent with mean zero. Show that X1 + X2 + · · · exists in the mean square sense if and only if the sum of the variances, Var(X1 ) + Var(X2 ) + · · · , is finite. (Hint: Apply the Cauchy criteria for mean square convergence.) 2.19. Portfolio allo cation Suppose that you are given one unit of money (for example, a million dollars). Each day you bet a fraction α of it on a coin toss. If you win, you get double your money back, whereas if you lose, you get half of your money back. Let Wn denote the wealth you have accumulated (or have left) after n days. Identify in what sense(s) the limit limn→∞ Wn exists, and when it does, identify the value of the limit (a) for α = 0 (pure banking), (b) for α = 1 (pure betting), (c) for general α. (d) What value of α maximizes the expected wealth, E [Wn ]? Would you recommend using that value of α? (e) What value of α maximizes the long term growth rate of Wn (Hint: Consider log(Wn ) and apply the LLN.) 2.20. A large deviation Let X1 , X2 , ... be independent, N(0,1) random variables. Find the constant b such that 2 2 2 P {X1 + X2 + . . . + Xn ≥ 2n} = exp(−n(b + where n n )) → 0 as n → ∞. What is the numerical value of the approximation exp(−nb) if n = 100. 2.21. Sums of indep endent Cauchy random variables Let X1 , X2 , . . . be independent, each with the standard Cauchy density function. The standard 1 Cauchy density and its characteristic function are given by f (x) = π(1+x2 ) and Φ(u) = exp(−|u|). Let Sn = X1 + X2 + · · · + Xn . (a) Find the characteristic function of Sn for a constant θ. nθ (b) Does Sn converge in distribution as n → ∞? Justify your answer, and if the answer is yes, n identify the limiting distribution. (c) Does Sn converge in distribution as n → ∞? Justify your answer, and if the answer is yes, n2 identify the limiting distribution. S (d) Does √n converge in distribution as n → ∞? Justify your answer, and if the answer is yes, n identify the limiting distribution. 2.22. A rappro chement b etween the central limit theorem and large deviations Let X1 , X2 , . . . be independent, identically distributed random variables with mean zero, variance σ 2 , and probability density function f . Suppose the moment generating function M (θ) is finite for θ in an open interval I containing zero. (a) Show that for θ ∈ I , M (θ) is the variance for the “tilted” density function fθ defined by fθ (x) = f (x) exp(θx − M (θ)). In particular, since M is nonnegative, M is a convex function. 
(The interchange of expectation and differentiation with respect to θ can be justified for θ ∈ I . You needn’t give details.) 52 Let b > 0 and let Sn = X1 + · · · + Xn for n any positive integer. By the central limit the√ orem, P [Sn ≥ b n] → Q(b/σ ) as n → ∞. An upper bound on the Q function is given by 2 2 2 ∞ ∞ s 1 Q(u) = u √1 π e−s /2 ds ≤ u u√2π e−s /2 ds = u√2π e−u /2 . This bound is a good approximation 2 2 2 σ if u is moderately large. Thus, Q(b/σ ) ≈ b√2π e−b /2σ . if b/σ is moderately large. √ √ (b) The large deviations upper bound yields P [Sn ≥ b n] ≤ exp(−n (b/ n)). Identify the limit of the large deviations upper bound as n → ∞, and compare with the approximation given by the central limit theorem. (Hint: Approximate M near zero by its second order Taylor’s approximation. ) 2.23. Chernoff b ound for Gaussian and Poisson random variables (a) Let X have the N (µ, σ 2 ) distribution. Find the optimized Chernoff bound on P {X ≥ E [X ] + c} for c ≥ 0. (b) Let Y have the P oi(λ) distribution. Find the optimized Chernoff bound on P {Y ≥ E [Y ] + c} for c ≥ 0. (c) (The purpose of this problem is to highlight the similarity of the answers to parts (a) and (b).) c c2 Show that your answer to part (b) can be expressed as P {Y ≥ E [Y ] + c} ≤ exp(− 2λ ψ ( λ )) for c ≥ 0, 2 , with g (s) = s(log s − 1) + 1. (Note: Y has variance λ, so the essential where ψ (u) = 2g (1 + u)/u difference between the normal and Poisson bounds is the ψ term. The function ψ is strictly positive and strictly decreasing on the interval [−1, +∞), with ψ (−1) = 2 and ψ (0) = 1. Also, uψ (u) is strictly increasing in u over the interval [−1, +∞). ) 2.24. Large deviations of a mixed sum Let X1 , X2 , . . . have the E xp(1) distribution, and Y1 , Y2 , . . . have the P oi(1) distribution. Suppose all these random variables are mutually independent. Let 0 ≤ f ≤ 1, and suppose Sn = 1 e X1 + · · · + Xnf + Y1 + · · · + Y(1−f )n . Define l(f , a) = limn→∞ n ln P { Sn ≥ a} for a > 1. Cram´rs then orem can be extended to show that l(f , a) can be computed by replacing the probability P { Sn ≥ a} n by its optimized Chernoff bound. (For example, if f = 1/2, we simply view Sn as the sum of the 12 n n n 2 i.i.d. random variables, X1 + Y1 , . . . , X 2 + Y 2 .) Compute l(f , a) for f ∈ {0, 3 , 3 , 1} and a = 4. 2.25*. Distance measures (metrics) for random variables . For random variables X and Y , define d1 (X, Y ) = E [| X − Y | /(1+ | X − Y |)] d2 (X, Y ) = min{ ≥ 0 : FX (x + ) + ≥ FY (x) and FY (x + ) + ≥ FX (x) for all x} d3 (X, Y ) = (E [(X − Y )2 ])1/2 , where in defining d3 (X, Y ) it is assumed that E [X 2 ] and E [Y 2 ] are finite. (a) Show that di is a metric for i = 1, 2 or 3. Clearly di (X, X ) = 0 and di (X, Y ) = di (Y , X ). Verify in addition the triangle inequality. (The only other requirement of a metric is that di (X, Y ) = 0 only if X = Y . For this to be true we must think of the metric as being defined on equivalence classes of random variables.) (b) Let X1 , X2 , . . . be a sequence of random variables and let Y be a random variable. Show that Xn converges to Y (i) in probability if and only if d1 (X, Y ) converges to zero, 53 (ii) in distribution if and only if d2 (X, Y ) converges to zero, (iii) in the mean square sense if and only if d3 (X, Y ) converges to zero (assume E [Y 2 ] < ∞). (Hint for (i): It helps to establish that d1 (X, Y ) − /(1 + ) ≤ P {| X − Y |≥ } ≤ d1 (X, Y )(1 + )/ . The “only if ” part of (ii) is a little tricky. The metric d2 is called the Levy metric. 2.26*. 
Weak Law of Large Numb ers Let X1 , X2 , . . . be a sequence of random variables which are independent and identically distributed. Assume that E [Xi ] exists and is equal to zero for all i. If Var(Xi ) is finite, then Chebychev’s inequality easily establishes that (X1 + · · · + Xn )/n converges in probability to zero. Taking that result as a starting point, show that the convergence still holds even if Var(Xi ) is infinite. (Hint: Use “truncation” by defining Uk = Xk I {| Xk |≥ c} and Vk = Xk I {| Xk |< c} for some constant c. E [| Uk |] and E [Vk ] don’t depend on k and converge to zero as c tends to infinity. You might also find the previous problem helpful. 54 Chapter 3 Random Vectors and Minimum Mean Squared Error Estimation The reader is encouraged to review the section on matrices in the appendix before reading this chapter. 3.1 Basic definitions and prop erties A random vector X of dimension m has the form X= X1 X2 . . . Xm where the Xi ’s are random variables all on the same probability space. The mean of X (also called the expected value of X ) is the vector E X defined by E X1 E X2 EX = . . . E Xm The correlation matrix of X , denoted either by Cov(X ) or Cov(X, X ), is the m × m matrix defined by E [X X T ], which has ij th entry E [Xi Xj ]. The covariance matrix of X is the m × m matrix with ij th entry Cov(Xi , Xj ). That is, the correlation matrix is the matrix of correlations, and the covariance matrix is the matrix of covariances. Suppose Y is another random vector on the same probability space as X , with dimension n. The cross correlation matrix of X and Y is the m × n matrix E [X Y T ], which has ij th entry E [Xi Yj ]. The cross covariance matrix of X and Y , denoted by Cov(X, Y ), is the matrix with ij th entry Cov(Xi , Yj ). Note that Cov(X, X ) is the covariance matrix of X . Elementary properties of expectation, correlation and covariance for vectors follow immediately from similar properties for ordinary scalar random variables. These properties include the following (here A and C are nonrandom matrices and b and d are nonrandom vectors). 55 1. E [AX + b] = AE [X ] + b 2. Cov(X, Y ) = E [X (Y − E Y )T ] = E [(X − E X )Y T ] = E [X Y T ] − (E X )(E Y )T 3. E [(AX )(C Y )T ] = AE [X Y T ]C T 4. Cov(AX + b, C Y + d) = ACov(X, Y )C T Prop osition 3.1.1 Correlation matrices and covariance matrices are positive semidefinite. Conversely, if K is a positive semidefinite matrix, then K is the covariance matrix and correlation matrix for some mean zero random vector X . Pro of: If K is a correlation matrix, then K = E [X X T ] for some random vector X . Given any vector α, αT X is a scaler random variable, so αT K α = E [αT X X T α] = E [(αT X )(X T α)] = E [(αT X )2 ] ≥ 0. Similarly, if K = Cov(X, X ) then for any vector α, αT K α = αT Cov(X, X )α = Cov(αT X, αT X ) = Var(αT X ) ≥ 0. The first part of the proposition is proved. For the converse part, suppose that K is an arbitrary symmetric positive semidefinite matrix. Let λ1 , . . . , λn and U be the corresponding set of eigenvalues and orthonormal matrix formed by the eigenvectors. (See the section on matrices in the appendix.) Let Y1 , . . . , Yn be independent, mean 0 random variables with Var(Yi ) = λi , and let Y be the random vector Y = (Y1 , . . . , Yn )T . Then Cov(Y , Y ) = Λ, where Λ is the diagonal matrix with the λi ’s on the diagonal. Let X = U Y . Then E X = 0 and Cov(X, X ) = Cov(U Y , U Y ) = U ΛU T = K. Therefore, K is both the covariance matrix and the correlation matrix of X . 
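The construction in the converse part of the proof is easy to carry out numerically. The following is an illustrative sketch (Python with numpy assumed; the particular matrix K is an arbitrary example, and Gaussian Yi are used only for convenience since the proof requires just independence and the right variances): given a symmetric positive semidefinite K , it builds X = U Y from independent Yi with Var(Yi ) = λi and checks that the sample covariance of X is close to K .

import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric positive semidefinite matrix.
K = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, U = np.linalg.eigh(K)        # eigenvalues lam_i and orthonormal eigenvector matrix U
n = 200_000
Y = rng.standard_normal((n, 3)) * np.sqrt(lam)   # independent, mean 0, Var(Y_i) = lam_i
X = Y @ U.T                       # each row is a sample of X = U Y

print(np.cov(X, rowvar=False))    # close to K, as guaranteed by Proposition 3.1.1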
The characteristic function ΦX of X is the function on Rm defined by ΦX (u) = E [exp(j uT X )]. 3.2 The orthogonality principle for minimum mean square error estimation Example 3.1. Let X be a random variable with some known distribution. Suppose X is not observed but that we wish to estimate X . If we use a constant b to estimate X , the estimation error will be X − b. The mean square error (MSE) is E [(X − b)2 ]. Since E [X − E X ] = 0 and E X − b is constant, E [(X − b)2 ] = E [((X − E X ) + (E X − b))2 ] = E [(X − E X )2 + 2(X − E X )(E X − b) + (E X − b)2 ] = Var(X ) + (E X − b)2 . From this expression it is easy to see that the mean square error is minimized with respect to b if and only if b = E X . The minimum possible value is Var(X ). 56 Random variables X and Y are called orthogonal if E [X Y ] = 0. Orthogonality is denoted by “X ⊥ Y .” The essential fact E [X − E X ] = 0 is equivalent to the following condition: X − E X is orthogonal to constants: (X − E X ) ⊥ c for any constant c. From Example 3.1 we can conclude the following: the choice of constant b yielding the minimum mean square error is the one that makes the error X − b orthogonal to all constants. This result is generalized by the orthogonality principle, stated next. Let X be a random variable with E [X 2 ] < +∞ and let V be a collection of random variables on the same probability space as X such that V.1 E [Z 2 ] < +∞ for Z ∈ V . V.2 V is a linear class: If Z1 ∈ V and Z2 ∈ V and a1 , a2 are constants, then a1 Z1 + a2 Z2 ∈ V . V.3 V is closed in the mean square sense: If Z1 , Z2 , . . . is a sequence of elements of V and if Zn → Z∞ m.s. for some random variable Z∞ , then Z∞ ∈ V . Example 3.1 corresponds to the case that V is the set of constant random variables. The orthogonality principle stated next is illustrated in Figure 3.1. X Z e Z# 0 V Figure 3.1: Illustration of the orthogonality principle Theorem 3.2.1 (The orthogonality principle) (a) (Existence and uniqueness) There exists a unique element Z ∗ in V so that E [(X − Z ∗ )2 ] ≤ E [(X − Z )2 ] for al l Z ∈ V . (Here, we consider two elements Z and Z of V to be the same if P {Z = Z } = 1). (b) (Characterization) Let W be a random variable. Then W = Z ∗ if and only if the fol lowing two conditions hold: (i) W ∈ V . (ii) (X − W ) ⊥ Z for al l Z in V . (c)(Error expression) The minimum mean square error (MMSE) is given by E [(X − Z ∗ )2 ] = E [X 2 ] − E [(Z ∗ )2 ]. 57 Pro of: The proof of part (a) is given in an extra credit homework problem. The technical condition V.3 on V is essential for the proof of existence. Here parts (b) and (c) are proved. To establish the “if” half of part (b), suppose W satisfies (i) and (ii) and let Z be an arbitrary element of V . Then W − Z ∈ V because V is a linear class. Therefore, (X − W ) ⊥ (W − Z ), which implies that E [(X − Z )2 ] = E [(X − W + W − Z )2 ] = E [(X − W )2 + 2(X − W )(W − Z ) + (W − Z )2 ] = E [(X − W )2 ] + E [(W − Z )2 ]. Thus E [(X − W )2 ] ≤ E [(X − Z )2 ]. Since Z is an arbitrary element of V , it follows that W = Z ∗ , and the ”if” half of (b) is proved. To establish the “only if” half of part (b), note that Z ∗ ∈ V by the definition of Z ∗ . Let Z ∈ V and let c ∈ R. Then Z ∗ + cZ ∈ V , so that E [(X − (Z ∗ + cZ ))2 ] ≥ E [(X − Z ∗ )2 ]. But E [(X − (Z ∗ + cZ ))2 ] = E [(X − Z ∗ ) − cZ )2 ] = E [(X − Z ∗ )2 ] − 2cE [(X − Z ∗ )Z ] + c2 E [Z 2 ], so that −2cE [(X − Z ∗ )Z ] + c2 E [Z 2 ] ≥ 0. (3.1) As a function of c the left side of (3.1) is a parabola with value zero at c = 0. 
Hence its derivative with respect to c at 0 must be zero, which yields that (X − Z ∗ ) ⊥ Z . The “only if” half of (b) is proved. The expression of part (c) is proved as follows. Since X − Z ∗ is orthogonal to all elements of V , including Z ∗ itself, E [X 2 ] = E [((X − Z ∗ ) + Z ∗ )2 ] = E [(X − Z ∗ )2 ] + E [(Z ∗ )2 ]. This proves part (c). Example 3.2a Suppose a random variable X is to be estimated using an observed random vector Y of dimension m. Suppose E [X 2 ] < +∞. For this example we consider the most general class of estimators based on Y , by setting V = {g (Y ) : g : Rm → R, E [g (Y )2 ] < +∞}. (There is also the implicit condition that g is Borel measurable so that g (Y ) is a random variable.) Let us first proceed to identify the optimal estimator by conditioning on the value of Y , thereby reducing this example to Example 3.1. For technical reasons we assume for now that X and Y have a joint pdf. Then, conditioning on Y , E [(X − g (Y ))2 ] = Rm E [(X − g (Y ))2 |Y = y ]fY (y )dy where E [(X − g (Y ))2 |Y = y ] = ∞ −∞ 58 (x − g (y ))2 fX |Y (x | y )dx Example 3.1 tells us that for each fixed y , the minimizing choice for g (y ) is g ∗ (y ) = E [X |Y = y ] = ∞ −∞ xfX |Y (x | y )dx Therefore, the optimal estimator in V is g ∗ (Y ) which, by definition, is equal to the random variable E [X | Y ]. What does the orthogonality principle imply for this example? It implies that there exists an optimal estimator g ∗ (Y ) which is the unique element of V such that (X − g ∗ (Y )) ⊥ g (Y ) for all g (Y ) ∈ V . If X, Y have a joint pdf then we can check that E [X | Y ] satisfies the required condition. Indeed, E [(X − E [X | Y ])g (Y )] = = (x − E [X | Y = y ])g (y )fX |Y (x | y )fY (y )dxdy (x − E [X | Y = y ])fX |Y (x | y )dx g (y )fY (y )dy = 0, since the expression within the braces is zero. In summary, if X and Y have a joint pdf (and similarly if they have a joint pmf ) then the MMSE estimator of X given Y is E [X | Y ]. Even if X and Y don’t have a joint pdf or joint pmf, we define E [X | Y ] to be the MMSE estimator of X given Y. By the orthogonality principle E [X | Y ] exists as long as E [X 2 ] < ∞, and it is the unique function of Y such that E [(X − E [X | Y ])g (Y )] = 0 for all g (Y ) in V . Estimation of a random variable has been discussed, but often we wish to estimate a random vector. A beauty of the MSE criteria is that the MSE for estimation of a random vector is the sum of the MSEs of the coordinates: m E [ X − g (Y ) 2 = i=1 E [(Xi − gi (Y ))2 ] Therefore, for most sets of estimators V typically encountered, finding the MMSE estimator of a random vector X decomposes into finding the MMSE estimators of the coordinates of X separately. Example 3.2b Suppose a random vector X is to be estimated using estimators of the form g(Y), where here g maps Rn into Rm . Assume E [ X 2 ] < +∞ and seek an estimator to minimize the MSE. Then by Example 3.2a, applied to each coordinate of X separately, the optimal estimator g ∗ (Y ) is given by E [X1 | Y ] E [X2 | Y ] g ∗ (Y ) = E [X | Y ] = . . . E [Xm | Y ] 59 Let the estimation error be denoted by e, e = X − E [X | Y ]. (Even though e is a random vector we use lower case for it for an obvious reason.) The mean of the error is given by E e = 0. As for the covariance of the error, note that E [Xj | Y ] is in V (the V defined in Example 3.2a) for each j , so ei ⊥ E [Xj | Y ] for each i, j . Since E ei = 0, it follows that Cov(ei , E [Xj | Y ]) = 0 for all i, j . Equivalently, Cov(e, E [X | Y ]) = 0. 
Using this and the fact X = E [X | Y ] + e yields Cov(X, X ) = Cov(E [X | Y ] + e, E [X | Y ] + e) = Cov(E [X | Y ], E [X | Y ]) + Cov(e, e) Thus, Cov(e, e) = Cov(X, X ) − Cov(E [X | Y ]). In practice, computation of E [X | Y ] may be too complex or may require more information about the joint distribution of X and Y than is available. For both of these reasons, it is worthwhile to consider classes of estimators that are smaller than those considered in Examples 3.2a-3.2b. Example 3.3 Let X and Y be random vectors with E [ X 2 ] < +∞ and E [ Y 2 ] < +∞. Seek estimators of the form AY + b to minimize the MSE. Such estimators are called linear estimators because each coordinate of AY + b is a linear combination of Y1 , Y2 , . . . , Ym and 1. Here “1” stands for the random variable that is always equal to 1. To identify the optimal linear estimator we shall apply the orthogonality principle for each coordinate of X with V = {c0 + c1 Y1 + c2 Y2 + . . . + cn Yn : c0 , c1 , . . . , cn ∈ R} Let e denote the estimation error e = X − (AY + b). We must select A and b so that ei ⊥ Z for all Z ∈ V . Equivalently, we must select A and b so that ei ⊥ 1 ei ⊥ Yj all i all i, j. The condition ei ⊥ 1, which means E ei = 0, implies that E [ei Yj ] = Cov(ei , Yj ). Thus, the required orthogonality conditions on A and b become E e = 0 and Cov(e, Y ) = 0. The condition E e = 0 requires that b = E X − AE Y , so we can restrict our attention to estimators of the form E X + A(Y − E Y ), so that e = X − E X − A(Y − E Y ). The condition Cov(e, Y ) = 0 becomes Cov(X, Y ) − ACov(Y , Y ) = 0. If Cov(Y , Y ) is not singular, then A must be given by A = Cov(X, Y )Cov(Y , Y )−1 . In this case the optimal linear estimator, denoted by E [X | Y ], is given by E [X | Y ] = E [X ] + Cov(X, Y )Cov(Y , Y )−1 (Y − E Y ) Proceeding as in Example 3.2b, we find that the covariance of the error vector satisfies Cov(e, e) = Cov(X, X ) − Cov(E [X | Y ], E [X | Y ]) which by (3.2) yields Cov(e, e) = Cov(X, X ) − Cov(X, Y )Cov(Y , Y )−1 Cov(Y , X ). 60 (3.2) The minimum of a function over a set V1 is less than or equal to the minimum of the same function over a subset V2 of V1 . Therefore, if V1 and V2 are two families of estimators for a random variable X and V2 ⊂ V1 , then min{E [(X − Z )2 ] : Z ∈ V1 } ≤ min{E [(X − Z )2 ] : Z ∈ V2 }. For example, E [(X − E [X | Y ])2 ] ≤ E [(X − E [X | Y ])2 ] ≤ Var(X ). Example 3.4 Let X, Y be jointly continuous random variables with the pdf x + y 0 ≤ x, y ≤ 1 0 else fX Y (x, y ) = Let us find E [X | Y ] and E [X | Y ]. To find E [X | Y ] we first identify fY (y ) and fX |Y (x|y ). fY (y ) = ∞ 1 2 fX Y (x, y )dx = −∞ +y 0≤y ≤1 0 else Therefore, fX |Y (x | y ) is defined only for 0 < y < 1, and for such y it is given by x+ y 1 +y 2 fX |Y (x | y ) = 0 So for 0 ≤ y ≤ 1, E [X | Y = y ] = 1 0 0≤x≤1 else xfX |Y (x | y )dx = 2 + 3y . 3 + 6y 2+3Y Therefore, E [X | Y ] = 3+6Y . To find E [X | Y ] we compute E X = E Y = 7 1 7 1 Cov(X, Y ) = − 144 so E [X | Y ] = 12 − 11 (Y − 12 ). 7 12 , Var(Y ) = 11 144 and Example 3.5 Suppose that Y = X U , where X and U are independent random variables, X has the Rayleigh density x − x2 / 2 σ 2 e σ2 fX (x) = 0 x≥0 else and U is uniformly distributed on the interval [0, 1]. We find E [X | Y ] and E [X | Y ]. 
To compute E [X | Y ] we find ∞ EX = 0 EY x2 −x2 /2σ2 1 e dx = σ2 σ = EX EU = σ 2 π 2 π 2 ∞ −∞ √ x2 2π σ 2 e−x 2 /2σ 2 dx = σ π 2 E [X 2 ] = 2σ 2 Var(Y ) = E [Y 2 ] − E [Y ]2 = E [X 2 ]E [U 2 ] − E [X ]2 E [U ]2 = σ 2 1 π Cov(X, Y ) = E [U ]E [X 2 ] − E [U ]E [X ]2 = Var(X ) = σ 2 1 − 2 4 61 2π − 38 Thus (1 − π ) π +2 4 2 (3 − π) 8 E [X | Y ] = σ Y− σ 2 π 2 To find E [X | Y ] we first find the joint density and then the conditional density. Now fX Y (x, y ) = fX (x)fY |X (y | x) = fY (y ) = 1 − x2 / 2 σ 2 e σ2 0 ∞ 0≤y≤x else ∞ 1 − x2 / 2 σ 2 dx y σ2 e fX Y (x, y )dx = 0 −∞ = √ 2π σQ y σ y≥0 y<0 where Q is the complementary CDF for the standard normal distribution. So for y ≥ 0 E [X | Y = y ] = = ∞ xfX Y (x, y )dx/fY (y ) −∞ ∞ x − x2 / 2 σ 2 dx σ exp(−y 2 /2σ 2 ) y σ2 e √ √ = y y 2π 2π Q( σ ) σ Q( σ ) Thus, E [X | Y ] = σ exp(−Y 2 /2σ 2 ) √ 2π Q( Y ) σ Two more examples are given to illustrate the definitions of this section. Example 3.6 Suppose that Y is a random variable and f is a Borel measurable function such that E [f (Y )2 ] < ∞. Let us show that E [f (Y )|Y ] = f (Y ). By definition, E [f (Y )|Y ] is the random variable of the form g (Y ) which is closest to f (Y ) in the mean square sense. If we take g (Y ) = f (Y ), then the mean square error is zero. No other estimator can have a smaller mean square error. Thus, E [f (Y )|Y ] = f (Y ). Similarly, if Y is a random vector with E [||Y ||2 ] < ∞, and if A is a matrix and b a vector, then E [AY + b|Y ] = AY + b. Example 3.7 Suppose X1 , X2 and Y are random vectors such that E [||X1 ||2 ] < ∞ and E [||X2 ||2 ] < ∞. Let us use the definitions and the orthogonality principle to prove that E [X1 + X2 |Y ] = E [X1 |Y ] + E [X2 |Y ]. It is sufficient to prove this claim in the special case that X1 and X2 are random variables, rather than random vectors, because the conditional expectation of a random vector is the vector of conditional expectations of the coordinates of the vector. By the definition and the orthogonality principle, the conditional expectation E [X1 + X2 |Y ] is the unique (up to a set of probability zero) random vector Z satisfying two properties: (i) Z can be expressed as f (Y ), where f is a Borel measurable function such that E [f (Y )2 ], and (ii) (X1 + X2 − Z ) ⊥ g (Y ), for every Borel measurable function g such that E [g (Y )2 ] < ∞. So it must be shown that E [X1 |Y ] + E [X2 |Y ] possesses these two properties. The key is to start with the fact that E [X1 |Y ] and E [X2 |Y ] satisfy the analogous properties. By the definitions of E [X1 |Y ] and E [X2 |Y ] and the orthogonality principle: there exist Borel 62 measurable functions f1 and f2 such that E [X1 |Y ] = f1 (Y ), E [X2 |Y ] = f2 (Y ), E [f1 (Y )2 ] < ∞, E [f2 (Y )2 ] < ∞, X1 − E [X1 |Y ] ⊥ g (Y ) and X2 − E [X2 |Y ] ⊥ g (Y ). Let f (y ) = f1 (y ) + f2 (y ). Then f is a Borel measurable function and, trivially, E [X1 |Y ] + E [X2 |Y ] = f (Y ). Also, E [(f1 (Y ) + f2 (Y ))2 ] ≤ 2E [f1 (Y )2 ] + 2E [f2 (Y )2 ] < ∞. Thus, E [X1 |Y ] + E [X2 |Y ] satisfies property (i). Since X1 − E [X1 |Y ] and X2 − E [X2 |Y ] are both perpendicular to Y , so is their sum, so that E [X1 |Y ] + E [X2 |Y ] also satisfies Property (ii). The proof that E [X1 + X2 |Y ] = E [X1 |Y ] + E [X2 |Y ] is complete. A similar proof shows that if also E [||Y ||2 ] < ∞, then E [X1 + X2 |Y ] = E [X1 |Y ] + E [X2 |Y ]. 3.3 Gaussian random vectors Recall that a random variable X is Gaussian (or normal) with mean µ and variance σ 2 > 0 if X has pdf fX (x) = √ 1 2π σ 2 e− (x−µ)2 2σ 2 . 
As a degenerate case, we say X is Gaussian with mean µ and variance 0 if P {X = µ} = 1. Equivalently, X is Gaussian with mean µ and variance σ 2 if its characteristic function is given by ΦX (u) = exp − u2 σ 2 + j µu . 2 Let (Xi : i ∈ I ) be random variables indexed by some set I , which possibly has infinite cardinality. A finite linear combination of (Xi : i ∈ I ) is a random variable of the form c1 Xi1 + c2 Xi2 + · · · + cn Xin where n is finite, ik ∈ I for each k and ck ∈ R for each k . The random variables (Xi : i ∈ I ) are said to be jointly Gaussian if every finite linear combination of them is a Gaussian random variable. Some elementary consequences of the definition of jointly Gaussian are in order. First, since each Xj itself is a linear combination of (Xi : i ∈ I ), it follows that if (Xi : i ∈ I ) are jointly Gaussian random variables, then each of the random variables itself is Gaussian. The property of being jointly Gaussian is much stronger than the condition that the individual variables be Gaussian. However, if the random variables (Xi : i ∈ I ) are each Gaussian and if they are independent (which means that Xi1 , Xi2 , . . . , Xin are independent for any finite number of indices i1 , i2 , . . . , in ) then the variables are jointly Gaussian. Suppose that (Xi : i ∈ I ) are jointly Gaussian, and suppose (Yj : j ∈ J ) are random variables indexed by some set J such that each Yj is a finite linear combination of (Xi : i ∈ I ). Then the random variables (Yj : j ∈ J ) are jointly Gaussian too. The proof goes as follows. If Z is a finite linear combination of (Yj : j ∈ J ) then Z = b1 Yj1 + b2 Yj2 + · · · + bn Yjn . But each Yj is a finite linear combination of (Xi : i ∈ I ), so Z can be written as a finite linear combination of (Xi : i ∈ I ): Z = b1 (c11 Xi11 + c12 Xi12 + · · · + c1k1 Xi1k1 ) + · · · + bn (cn1 Xin1 + · · · + cnkn Xinkn ). Therefore Z is a Gaussian random variable, as was to be proved. A random vector X of dimension m is called Gaussian if the coordinates X1 , . . . , Xm are jointly Gaussian. We write that X is a N (µ, K ) random vector if X is a Gaussian random vector with 63 mean vector µ and covariance matrix K . Let X be a N (µ, K ) random vector. Then for any vector u, the random variable uT X is Gaussian with mean uT µ and variance given by Var(uT X ) = Cov(uT X, uT X ) = uT K u. Thus, we already know the characteristic function of uT X . But the characteristic function of the vector X evaluated at u is the characteristic function of uT X evaluated at 1: ΦX (u) = E [ej u TX = E [ej (u T X) = ΦuT X (1) = ej u T µ− 1 uT K u 2 Note that if the coordinates of X are uncorrelated, or equivalently if K is a diagonal matrix, then m ΦX (u) = i=1 exp(j ui µi − kii u2 i )= 2 Φi (ui ) i where kii denotes the ith diagonal element of K , and Φi is the characteristic function of a N (µi , kii ) random variable. By uniqueness of joint characteristic functions, it follows that X1 , . . . , Xm are independent random variables. As we see next, even if K is not diagonal, X can be expressed as a linear transformation of a vector of independent Gaussian random variables. Let X be a N (µ, K ) random vector. Since K is positive semidefinite it can be written as K = U ΛU T where U is orthonormal (so U U T = U T U = I ) and Λ is a diagonal matrix with the eigenvalues λ1 , λ2 , . . . , λm of K along the diagonal. (See the Appendix.) Let Y = U T (X − µ). Then Y is a Gaussian vector with mean 0 and covariance matrix given by Cov(Y , Y ) = Cov(U T X, U T X ) = U T K U = Λ. 
In summary, we have X = U Y + µ, and Y is a vector of independent Gaussian random variables, the ith one being N (0, λi ). Suppose further that K is nonsingular, meaning det(K ) = 0. Since det(K ) = λ1 λ2 · · · λm this implies that λi > 0 for each i, so that Y has the joint pdf m fY (y ) = i=1 √ y2 1 exp − i 2λi 2π λ i = 1 (2π ) m 2 det(K ) exp − y T Λ −1 y 2 . Since | det(U )| = 1 and U Λ−1 U T = K −1 , the joint pdf for the N (µ, K ) random vector X is given by fX (x) = fY (U T (x − µ)) = 1 m − 1 exp (2π ) 2 |K | 2 (x − µ)T K −1 (x − µ) 2 . Two random vectors X and Y are called jointly Gaussian if all the coordinates X1 , . . . , Xm , Y1 , Y2 , . . . , Yn are jointly Gaussian. If X and Y are jointly Gaussian vectors and uncorrelated (so Cov(X, Y ) = 0) then X and Y are independent. To prove this, let Z denote the dimension m + n vector with coordinates X1 , . . . , Xm , Y1 , . . . , Yn . Since Cov(X, Y ) = 0, the covariance matrix of Z is block diagonal: Cov(Z ) = Cov(X ) 0 0 Cov(Y ) 64 . Therefore, for u ∈ Rm and v ∈ Rn , 1u 2v = ΦX (u)ΦY (v ). u v ΦZ = exp − T Cov(Z ) u u +j v v T EZ Such factorization implies that X and Y are indeed independent. Consider next MMSE estimation for jointly Gaussian vectors. Let X and Y be jointly Gaussian vectors. Recall that E [X |Y ] is the MMSE linear estimator of X given Y , and by the orthogonality principle, E e = 0 and Cov(e, Y ) = 0, where e = X − E [X |Y ]. Since Y and e are obtained from X and Y by linear transformations, they are jointly Gaussian. Since Cov(e, Y ) = 0, the random vectors e and Y are also independent. Now focus on the following rearrangement of the definition of e: X = e + E [X |Y ]. Since E [X |Y ] is a function of Y and since e is independent of Y with distribution N (0, Cov(e)), the following key observation can be made. Given Y = y , the conditional distribution of X is N (E [X |Y = y ], Cov(e)). In particular, the conditional mean E [X |Y = y ] is equal to E [X |Y = y ]. That is, if X and Y are jointly Gaussian, then E [X |Y ] = E [X |Y ]. The above can be written in more detail using the explicit formulas for E [X |Y ] and Cov(e) found earlier: E [X |Y = y ] = E [X |Y = y ] = E X + Cov(X, Y )Cov(Y , Y )−1 (y − E [Y ]) Cov(e) = Cov(X ) − Cov(X, Y )Cov(Y )−1 Cov(Y , X ). If Cov(e) is not singular, then fX |Y (x|y ) = 1 m 2 (2π ) |Cov(e)| 1 2 exp − 1 x − E [X |Y = y ] 2 T Cov(e)−1 (x − E [X |Y = y ]) . Example 3.8 Suppose X and Y are jointly Gaussian mean zero random variables such that the X 43 has covariance matrix . Let us find simple expressions for the two random vector Y 39 variables E [X 2 |Y ] and P [X ≥ c|Y ]. Note that if W is a random variable with the N (µ, σ 2 ) distribution, then E [W 2 ] = µ2 + σ 2 and P {W ≥ c} = Q( c−µ ), where Q is the standard Gaussian σ complementary CDF. The idea is to apply these facts to the conditional distribution of X given Y . v( ,Y 2 Given Y = y , the conditional distribution of X is N ( CoarX,Y ) y , Cov(X ) − Cov(XY )) ), or N ( y , 3). 3 V (Y ) Var( (y / Therefore, E [X 2 |Y = y ] = ( y )2 + 3 and P [X ≥ c|Y = y ] = Q( c−√3 3) ). Applying these two 3 (Y functions to the random variable Y yields E [X 2 |Y ] = ( Y )2 + 3 and P [X ≥ c|Y ] = Q( c−√3/3) ). 3 3.4 Linear Innovations Sequences Let X , Y1 , . . . , Yn be mean zero random vectors. In general, computation of the joint pro jection E [X |Y1 , . . . , Yn ] is considerably more complicated than computation of the individual pro jections 65 E [X |Yi ], because it requires inversion of the covariance matrix of all the Y ’s. 
However, if E [Yi YjT ] = 0 for i = j (i.e., all coordinates of Yi are orthogonal to all coordinates of Yj for i = j ), then n E [X |Y1 , . . . , Yn ] = i=1 E [X |Yi ]. (3.3) The orthogonality principle can be used to prove (3.3) as follows. It suffices to prove that the sum of individual pro jections on the right side of (3.3) satisfies the two properties that together characterize the joint pro jection on the left side of (3.3). First, the sum of individual pro jections is linear in Y1 , . . . , Yn . Secondly, let n e = X− i=1 E [X |Yi ]. T It must be shown that E [e(Y1T c1 + Y2T c2 + · · · + Yn cn )] = 0 for any constant vectors c1 , . . . , cn . It T ] = 0 for all i. But E [X |Y ] has the form B Y , since X and Y have is enough to show that E [eYi j jj j mean zero, so E [eYiT ] = E X − E [X |Yi ] YiT − E [Bj Yj YiT ]. j :j = i Each term on the right side of this equation is zero, so E [eYiT ] = 0 and (3.3) is proved. If Y1 , Y2 , . . . , Yn are mean zero random vectors but E [Yi YjT ] = 0 for some i, j then (3.3) doesn’t directly apply. However, by orthogonalizing the Y ’s we can obtain a sequence Y1 , Y2 , . . . , Yn that can be used instead. Let Y1 = Y1 , and for k ≥ 2 let Yk = Yk − E [Yk |Y1 , . . . , Yk−1 ]. Then E [Yi YjT ] = 0 for i = j . In addition, by induction on k , we can prove that the set of all random variables obtained by linear transformation of Y1 , . . . , Yk is equal to the set of all random variables obtained by linear transformation of Y1 , . . . , Yk . Thus, for any mean zero random variable X , n E [X | Y1 , . . . , Yn ] = E [X |Y1 , . . . , Yn ] = i=1 E [X |Yi ]. Moreover, this same result can be used to compute Y2 , . . . , Yn recursively: Yk = Yk − k −1 i=1 E [Yk |Yi ] k ≥ 2. The sequence Y1 , Y2 , . . . , Yn is called the linear innovations sequence for Y1 , Y2 , . . . , Yn . Although we defined innovations sequences only for mean zero random vectors in this section, the idea can be used for random vectors with nonzero mean, for example by first subtracting the means. 66 3.5 Discrete-time Kalman filtering Kalman filtering is a state-space approach to the problem of estimating one random sequence from another. Recursive equations are found that are useful in many real-time applications. For notational convenience, in this section, lower case letters are used for random vectors. All the random variables involved are assumed to have finite second moments. The state sequence x0 , x1 , . . ., is to be estimated from an observed sequence y0 , y1 , . . .. These sequences of random vectors are assumed to satisfy the following state and observation equations. State: Observation: xk+1 = Fk xk + wk yk = T Hk xk + vk k≥0 k ≥ 0. It is assumed that • x0 , v0 , v1 , . . . , w0 , w1 , . . . are pairwise uncorrelated. • E x0 = x0 , Cov(x0 ) = P0 , E wk = 0, Cov(wk ) = Qk , E vk = 0, Cov(vk ) = Rk . • Fk , Hk , Qk , Rk for k ≥ 0; P0 are known matrices. • x0 is a known vector. See Figure 3.2 for a block diagram of the state and observation equations. The evolution of the vk wk + xk+1 xk Delay T Hk + yk F k Figure 3.2: Block diagram of the state and observations equations. state sequence x0 , x1 , . . . is driven by the random vectors w0 , w1 , . . ., while the random vectors v0 , v1 , . . . , represent observation noise. Let xk = E [xk ] and Pk = Cov(xk ). These quantities are recursively determined for k ≥ 1 by T xk+1 = Fk xk and Pk+1 = Fk Pk Fk + Qk , (3.4) where the initial conditions x0 and P0 are given as part of the state model. 
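The recursion (3.4) is simple to implement. The following sketch is an illustration only (the matrices $F$ and $Q$, the initial mean and covariance, and the time-invariance of the model are assumptions chosen for brevity); it propagates the unconditional mean and covariance of the state forward in time, which, as noted below, is what the Kalman filter reduces to when no observations are available.

```python
import numpy as np

# Time-invariant state model x_{k+1} = F x_k + w_k with Cov(w_k) = Q (illustrative values).
F = np.array([[1.0, 1.0],
              [0.0, 0.9]])
Q = 0.1 * np.eye(2)

x_bar = np.array([1.0, 0.0])   # E[x_0]
P = np.eye(2)                  # Cov(x_0)

for k in range(5):
    # Recursion (3.4): mean and covariance of the state, ignoring observations.
    x_bar = F @ x_bar
    P = F @ P @ F.T + Q
    print(f"k={k+1}  mean={x_bar}  trace(P)={P.trace():.3f}")
```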
The idea of the Kalman filter equations is to recursively compute conditional expectations in a similar way. Let y k = (y0 , y1 , . . . , yk ) represent the observations up to time k . Define for nonnegative integers i, j xi|j = E [xi |y j ] and the associated covariance of error matrices Σi|j = Cov(xi − xi|j ). 67 The goal is to compute xk+1|k for k ≥ 0. The Kalman filter equations will first be stated, then briefly discussed, and then derived. The Kalman filter equations are given by xk+1|k = T Fk − Kk Hk xk|k−1 + Kk yk (3.5) T = Fk xk|k−1 + Kk yk − Hk xk|k−1 with the initial condition x0|−1 = x0 , where the gain matrix Kk is given by T K k = F k Σ k |k − 1 H k H k Σ k | k − 1 H k + R k −1 (3.6) T T Hk Σk|k−1 Fk + Qk (3.7) and the covariance of error matrices are recursively computed by T Σk+1|k = Fk Σk|k−1 − Σk|k−1 Hk Hk Σk|k−1 Hk + Rk −1 with the initial condition Σ0|−1 = P0 . See Figure 3.3 for the block diagram. yk Kk + xk+1 k Delay xk k!1 F !Kk HT k k Figure 3.3: Block diagram of the Kalman filter. We comment briefly on the Kalman filter equations, before deriving them. First, observe what happens if Hk is the zero matrix, Hk = 0, for all k . Then the Kalman filter equations reduce to (3.4) with xk|k−1 = xk , Σk|k−1 = Pk and Kk = 0. Taking Hk = 0 for all k is equivalent to having no observations available. In many applications, the sequence of gain matrices can be computed ahead of time according to (3.6) and (3.7). Then as the observations become available, the estimates can be computed using only (3.5). In some applications the matrices involved in the state and observation models, including the covariance matrices of the vk ’s and wk ’s, do not depend on k . The gain matrices Kk could still depend on k due to the initial conditions, but if the model is stable in some sense, then the gains converge to a constant matrix K , so that in steady state the filter equation (3.5) becomes time invariant: xk+1|k = (F − K H T )xk|k−1 + K yk . In other applications, particularly those involving feedback control, the matrices in the state and/or observation equations might not be known until just before they are needed. The Kalman filter equations are now derived. Roughly speaking, there are two considerations for computing xk+1|k once xk|k−1 is computed: (1) the change in state from xk to xk+1 , and (2) the availability of the new observation yk . It is useful to treat the two considerations separately. To predict xk+1 without the benefit of the new observation we only need to use the state update equation and the fact wk ⊥ y k−1 , to find E xk+1 | y k−1 68 = Fk xk|k−1 . (3.8) Thus, if it weren’t for the new observation, the filter update equation would simply consist of multiplication by Fk . Furthermore, the covariance of error matrix would be T Σk+1|k−1 = Cov(xk+1 − Fk xk|k−1 ) = Fk Σk|k−1 Fk + Qk . (3.9) Consider next the new observation yk . The observation yk is not totally new—for it can be predicted in part from the previous observations. Specifically, we can consider yk = yk − E [yk | y k−1 ] ˜ to be the new part of the observation yk . The variable yk is the linear innovation at time k . Since ˜ the linear span of the random variables in (y k−1 , yk ) is the same as the linear span of the random variables in (y k−1 , yk ), for the purposes of incorporating the new observation we can pretend that ˜ yk is the new observation rather than yk . 
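The derivation of these equations is completed in the remainder of this section. As a concrete companion to (3.5)-(3.7), the following sketch runs the filter on a simulated scalar model in the spirit of Problem 3.18; the numerical values of $f$, $h$, $q$, $r$, the horizon, and the initial variance are assumptions made only for illustration, and $h x$ plays the role of $H_k^T x_k$ in the scalar case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar model: x_{k+1} = f x_k + w_k,  y_k = h x_k + v_k  (illustrative values).
f, h, q, r = 0.95, 1.0, 0.5, 1.0
n = 50

# Simulate the state and observation sequences.
x = np.zeros(n)
y = np.zeros(n)
x[0] = rng.normal()                       # x_0 ~ N(0, P_0) with P_0 = 1
for k in range(n):
    y[k] = h * x[k] + rng.normal(scale=np.sqrt(r))
    if k + 1 < n:
        x[k + 1] = f * x[k] + rng.normal(scale=np.sqrt(q))

# Kalman filter: x_hat[k] is the estimate of x_k given y_0, ..., y_{k-1}.
x_hat = np.zeros(n + 1)          # x_hat[0] = E[x_0] = 0
sigma = 1.0                      # Sigma_{0|-1} = Var(x_0)
for k in range(n):
    s = h * sigma * h + r                                         # innovation variance
    gain = f * sigma * h / s                                      # equation (3.6)
    x_hat[k + 1] = f * x_hat[k] + gain * (y[k] - h * x_hat[k])    # equation (3.5)
    sigma = f * (sigma - sigma * h / s * h * sigma) * f + q       # equation (3.7)

print("final error variance Sigma_{n|n-1}:", sigma)
print("rms one-step prediction error:", np.sqrt(np.mean((x - x_hat[:n]) ** 2)))
```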
By the observation equation and the facts E [vk ] = 0 and ˜ T T T E [y k−1 vk ] = 0, it follows that E [yk | y k−1 ] = Hk xk|k−1 , so yk = yk − Hk xk|k−1 . ˜ Since xk|k−1 can be expressed as a linear transformation of (1, y k−1 , yk ), or equivalently as a linear transformation of (1, y k−1 , yk ), ˜ xk+1|k = Fk xk|k−1 + E xk+1 − Fk xk|k−1 | y k−1 , yk . ˜ (3.10) Since E [yk ] = 0 and E [y k−1 yk ] = 0, ˜ ˜T E [xk+1 − Fk xk|k−1 | y k−1 , yk ] = E [xk+1 − Fk xk|k−1 | y k−1 ] + E [xk+1 − Fk xk|k−1 | yk ] ˜ ˜ 0 (3.11) where the first term on the right side of (3.11) is zero by (3.8). Since xk+1 − Fk xk|k−1 and yk are ˜ both mean zero, E [xk+1 − Fk xk|k−1 | yk ] = Kk yk ˜ ˜ (3.12) Kk = Cov(xk+1 − Fk xk|k−1 , yk )Cov(yk )−1 . ˜ ˜ (3.13) where Combining (3.10), (3.11), and (3.12) yields the main Kalman filter equation xk+1|k = Fk xk|k−1 + Kk yk . ˜ (3.14) Taking into account the new observation yk , which is orthogonal to the previous observations, yields ˜ a reduction in the covariance of error: Σk+1|k = Σk+1|k−1 − Cov(Kk yk ). ˜ (3.15) The Kalman filter equations (3.5), (3.6), and (3.7) follow easily from (3.14), (3.13), and (3.15), respectively. Some of the details follow. To convert (3.13) into (3.6), use T Cov(xk+1 − Fk xk|k−1 , yk ) = Cov(Fk (xk − xk|k−1 ) + wk , Hk (xk − xk|k−1 ) + vk ) ˜ = Cov(Fk (xk − = F k Σ k |k − 1 H k 69 T xk|k−1 ), Hk (xk − xk|k−1 )) (3.16) and T Cov(yk ) = Cov(Hk (xk − xk|k−1 ) + vk ) ˜ T = Cov(Hk (xk − xk|k−1 )) + Cov(vk ) T = H k Σ k |k − 1 H k + R k To convert (3.15) into (3.7) use (3.9) and T Cov(Kk yk ) = Kk Cov(yk )Kk ˜ ˜ = Cov(xk+1 − Fk xk|k−1 )Cov(yk )−1 Cov(xk+1 − Fk xk|k−1 ) ˜ This completes the derivation of the Kalman filtering equations. 70 3.6 Problems 3.1. Rotation of a joint normal distribution yielding indep endence Let X be a Gaussian vector with E [X ] = 10 5 C ov (X ) = 21 11 . (a) Write an expression for the pdf of X that does not use matrix notation. (b) Find a vector b and orthonormal matrix U such that the vector Y define by Y = U T (X − b) is a mean zero Gaussian vector such at Y1 and Y2 are independent. 3.2. Linear approximation of the cosine function over [0, π ] Let Θ be uniformly distributed on the interval [0, π ] (yes, [0, π ], not [0, 2π ]). Suppose Y = cos(Θ) is to be estimated by an estimator of the form a + bΘ. What numerical values of a and b minimize the mean square error? 3.3. Calculation of some minimum mean square error estimators Let Y = X + N , where X has the exponential distribution with parameter λ, and N is Gaussian with mean 0 and variance σ 2 . The variables X and N are independent, and the parameters λ and 1 1 σ 2 are strictly positive. (Recall that E [X ] = λ and Var(X ) = λ2 .) (a) Find E [X |Y ] and also find the mean square error for estimating X by E [X |Y ]. (b) Does E [X |Y ] = E [X |Y ]? Justify your answer. (Hint: Answer is yes if and only if there is no estimator for X of the form g (Y ) with a smaller MSE than E [X |Y ].) 3.4. Valid covariance matrix For what real values of a and b is the following matrix the covariance matrix of some real-valued random vector? 21b K = a 1 0 . b01 Hint: An symmetric n × n matrix is positive semidefinite if and only if the determinant of every matrix obtained by deleting a set of rows and the corresponding set of columns, is nonnegative. 3.5. Conditional probabilities with joint Gaussians I X 1ρ Let be a mean zero Gaussian vector with correlation matrix Y ρ1 (a) Express P [X ≤ 1|Y ] in terms of ρ, Y , and the standard normal CDF, Φ. 
(b) Find E [(X − Y )2 |Y = y ] for real values of y . , where |ρ| < 1. 3.6. Conditional probabilities with joint Gaussians I I Let X, Y be jointly Gaussian random variables with mean zero and covariance matrix Cov X Y = 46 6 18 . You may express your answers in terms of the Φ function defined by Φ(u) = (a) Find P [|X − 1| ≥ 2]. 71 u −s2 /2 ds. √1 −∞ 2π e (b) What is the conditional density of X given that Y = 3? You can either write out the density in full, or describe it as a well known density with specified parameter values. (c) Find P [|X − E [X |Y ]| ≥ 1]. 3.7. An estimation error b ound Suppose the random vector X Y has mean vector 2 −2 and covariance matrix 83 32 . Let e = X − E [X | Y ]. (a) If possible, compute E [e2 ]. If not, give an upper bound. (b) For what joint distribution of X and Y (consistent with the given information) is E [e2 ] maximized? Is your answer unique? 3.8. An MMSE estimation problem (a) Let X and Y be jointly uniformly distributed over the triangular region in the x − y plane with corners (0,0), (0,1), and (1,2). Find both the linear minimum mean square error (LMMSE) estimator estimator of X given Y and the (possibly nonlinear) MMSE estimator X given Y . Compute the mean square error for each estimator. What percentage reduction in MSE does the MMSE estimator provide over the LMMSE? (b) Repeat part (a) assuming Y is a N (0, 1) random variable and X = |Y |. 3.9. Diagonalizing a two-dimensional Gaussian distribution 1ρ X1 be a mean zero Gaussian random vector with correlation matrix , Let X = X2 ρ1 where |ρ| < 1. Find an orthonormal 2 by 2 matrix U such that X = U Y for a Gaussian vector Y1 such that Y1 is independent of Y2 . Also, find the variances of Y1 and Y2 . Y= Y2 Note: The following identity might be useful for some of the problems that follow. If A, B , C, and D are jointly Gaussian and mean zero, then E [AB C D] = E [AB ]E [C D] + E [AC ]E [B D] + E [AD]E [B C ]. This implies that E [A4 ] = 3E [A2 ]2 , Var(A2 ) = 2E [A2 ], and Cov(A2 , B 2 ) = 2Cov(A, B )2 . Also, E [A2 B ] = 0. 3.10. An estimator of an estimator Let X and Y be square integrable random variables and let Z = E [X | Y ], so Z is the MMSE estimator of X given Y . Show that the LMMSE estimator of X given Y is also the LMMSE estimator of Z given Y . (Can you generalize this result?). 3.11. Pro jections onto nested linear subspaces (a) Use the Orthogonality Principle to prove the following statement: Suppose V0 and V1 are two closed linear spaces of second order random variables, such that V0 ⊃ V1 , and suppose X is a random variable with finite second moment. Let Zi∗ be the random variable in Vi with the minimum ∗ mean square distance from X . Then Z1 is the variable in V1 with the minimum mean square dis∗ tance from Z0 . (b) Suppose that X, Y1 , and Y2 are random variables with finite second moments. For each of the following three statements, identify the choice of subspace V0 and V1 such that the statement follows from part (a): (i) E [X |Y1 ] = E [ E [X |Y1 , Y2 ] |Y1 ]. 72 (ii) E [X |Y1 ] = E [ E [X |Y1 , Y2 ] |Y1 ]. (Sometimes called the “tower property.”) (iii) E [X ] = E [E [X |Y1 ]]. (Think of the expectation of a random variable as the constant closest to the random variable, in the m.s. sense. ) 3.12. Some identities for estimators Let X and Y be random variables with E [X 2 ] < ∞. For each of the following statements, determine if the statement is true. If yes, give a justification using the orthogonality principle. If no, give a counter example. 
(a) E [X cos(Y )|Y ] = E [X |Y ] cos(Y ) (b) E [X |Y ] = E [X |Y 3 ] (c) E [X 3 |Y ] = E [X |Y ]3 (d) E [X |Y ] = E [X |Y 2 ] (e) E [X |Y ] = E [X |Y 3 ] 3.13. The square ro ot of a p ositive-semidefinite matrix (a) True or false? It B is a square matrix over the reals, then B B T is positive semidefinite. (b) True or false? If K is a symmetric positive semidefinite matrix over the reals, then there exists a symmetric positive semidefinite matrix S over the reals such that K = S 2 . (Hint: What if K is also diagonal?) 3.14. Estimating a quadratic X 1ρ Let be a mean zero Gaussian vector with correlation matrix , where |ρ| < 1. Y ρ1 (a) Find E [X 2 |Y ], the best estimator of X 2 given Y . (b) Compute the mean square error for the estimator E [X 2 |Y ]. (c) Find E [X 2 |Y ], the best linear (actually, affine) estimator of X 2 given Y , and compute the mean square error. 3.15. A quadratic estimator Suppose Y has the N (0, 1) distribution and that X = |Y |. Find the estimator for X of the form X = a + bY + cY 2 which minimizes the mean square error. (You can use the following numerical values: E [|Y |] = 0.8, E [Y 4 ] = 3, E [|Y |Y 2 ] = 1.6.) (a) Use the orthogonality principle to derive equations for a, b, and c. (b) Find the estimator X . (c) Find the resulting minimum mean square error. 3.16. An vations sequence and its application inno 1 0.5 Y1 0.5 Y2 1 Let Y3 be a mean zero random vector with correlation matrix 0.5 0.5 0 0.25 X Y1 (a) Let Y1 , Y2 , Y3 denote the innovations sequence. Find the matrix A so that Y2 Y3 73 0.5 0 0.5 .25 . 1 0.25 0.25 1 Y1 = A Y2 . Y3 Y1 Y1 (b) Find the correlation matrix of Y2 and cross covariance matrix Cov(X, Y2 ). Y3 Y3 (c) Find the constants a, b, and c to minimize E [(X − aY1 − bY2 − cY3 )2 ]. 3.17. Estimation for an additive Gaussian noise mo del Assume x and n are independent Gaussian vectors with means x, n and covariance matrices Σx ¯¯ and Σn . Let y = x + n. Then x and y are jointly Gaussian. (a) Show that E [x|y ] is given by either x + Σx (Σx + Σn )−1 (y − (x + n)) ¯ ¯¯ or Σn (Σx + Σn )−1 x + Σx (Σx + Σn )−1 (y − n). ¯ ¯ (b). Show that the conditional covariance matrix of x given y is given by any of the three expressions: Σx − Σx (Σx + Σn )−1 Σx = Σx (Σx + Σn )−1 Σn = (Σ−1 + Σ−1 )−1 . x n (Assume that the various inverses exist.) 3.18. A Kalman filtering example (a) Let σ 2 > 0 and let f be a real constant. Let x0 denote a N (0, σ 2 ) random variable and let f be a real-valued constant. Consider the state and observation sequences defined by: (state) xk+1 = f xk + wk (observation) yk = xk + vk where w1 , w2 , . . . ; v1 , v2 , . . . are mutually independent N (0, 1) random variables. Write down the Kalman filter equations for recursively computing the estimates xk|k−1 , the (scaler) gains Kk , and ˆ 2 the sequence of the variances of the errors (for brevity write σk for the covariance or error instead of Σk|k−1 ). (b) For what values of f is the sequence of error variances bounded? 3.19. Steady state gains for one-dimensional Kalman filter This is a continuation of the previous problem. 2 (a) Show that limk→∞ σk exists. 2 , in terms of f . (b) Express the limit, σ∞ 2 (c) Explain why σ∞ = 1 if f = 0. 3.20. A variation of Kalman filtering (a) Let σ 2 > 0 and let f be a real constant. Let x0 denote a N (0, σ 2 ) random variable and let f be a real-valued constant. Consider the state and observation sequences defined by: (state) xk+1 = f xk + wk (observation) yk = xk + wk where w1 , w2 , are mutually independent N (0, 1) random variables. 
Note that the state and observation equations are driven by the same sequence, so that some of the Kalman filtering equations derived in the notes do not apply. Derive recursive equations needed to compute xk|k−1 , including ˆ recursive equations for any needed gains or variances of error. (Hints: What modifications need to 74 be made to the derivation for the standard model? Check that your answer is correct for f = 1.) 3.21. The Kalman filter for xk|k Suppose in a given application a Kalman filter has been implemented to recursively produce xk+1|k for k ≥ 0, as in class. Thus by time k , xk+1|k , Σk+1|k , xk|k−1 , and Σk|k−1 are already computed. Suppose that it is desired to also compute xk|k at time k . Give additional equations that can be used to compute xk|k . (You can assume as given the equations in the class notes, and don’t need to write them all out. Only the additional equations are asked for here. Be as explicit as you can, expressing any matrices you use in terms of the matrices already given in the class notes.) 3.22. An innovations problem Let U1 , U2 , . . . be a sequence of independent random variables, each uniformly distributed on the interval [0, 1]. Let Y0 = 1, and Yn = U1 U2 · · · Un for n ≥ 1. (a) Find the variance of Yn for each n ≥ 1. (b) Find E [Yn |Y0 , . . . , Yn−1 ] for n ≥ 1. (c) Find E [Yn |Y0 , . . . , Yn−1 ] for n ≥ 1. (d) Find the linear innovations sequence Y = (Y0 , Y1 , . . .). (e) Fix a positive integer M and let XM = U1 + . . . + UM . Using the answer to part (d), find E [XM |Y0 , . . . , YM ], the best linear estimator of XM given (Y0 , . . . , YM ). 3.23. Linear innovations and orthogonal p olynomials (a) Let X be a N (0, 1) random variable. Show that for integers n ≥ 0, E [X n ] = n! (n/2)!2n/2 0 n even n odd Hint: One approach is to apply the power series expansion for ex on each side of the identity 2 E [euX ] = eu /2 , and identify the coefficients of un . (b) Let X be a N (0, 1) random variable, and let Yn = X n for integers n ≥ 0. Note that Y0 ≡ 1. Express the first five terms of the linear innovations sequence Yn in terms of X . 3.24*. Pro of of the orthogonality principle Prove the seven statements lettered (a)-(g) in what follows. Let X be a random variable and let V be a collection of random variables on the same probability space such that (i) E [Z 2 ] < +∞ for each Z ∈ V (ii) V is a linear class, i.e., if Z, Z ∈ V then so is aZ + bZ for any real numbers a and b. (iii) V is closed in the sense that if Zn ∈ V for each n and Zn converges to a random variable Z in the mean square sense, then Z ∈ V . The Orthogonality Principle is that there exists a unique element Z ∗ ∈ V so that E [(X − Z ∗ )2 ] ≤ E [(X − Z )2 ] for all Z ∈ V . Furthermore, a random variable W ∈ V is equal to Z ∗ if and only if (X − W ) ⊥ Z for all Z ∈ V . ((X − W ) ⊥ Z means E [(X − W )Z ] = 0.) The remainder of this problem is aimed at a proof. Let d = inf {E [(X − Z )2 ] : Z ∈ V }. By definition of infimum there exists a sequence Zn ∈ V so that E [(X − Zn )2 ] → d as n → +∞. (a) The sequence Zn is Cauchy in the mean square sense. (Hint: Use the “parallelogram law”: E [(U − V )2 ] + E [(U + V )2 ] = 2(E [U 2 ] + E [V 2 ]). Thus, by the 75 Cauchy criteria, there is a random variable Z ∗ such that Zn converges to Z ∗ in the mean square sense. (b) Z ∗ satisfies the conditions advertised in the first sentence of the principle. (c) The element Z ∗ satisfying the condition in the first sentence of the principle is unique. 
(Consider two random variables that are equal to each other with probability one to be the same.) This completes the proof of the first sentence. (d) (“if ” part of second sentence). If W ∈ V and (X − W ) ⊥ Z for all Z ∈ V , then W = Z ∗ . (The “only if ” part of second sentence is divided into three parts:) (e) E [(X − Z ∗ − cZ )2 ] ≥ E [(X − Z ∗ )2 ] for any real constant c. (f ) −2cE [(X − Z ∗ )Z ] + c2 E [Z 2 ] ≥ 0 for any real constant c. (g) (X − Z ∗ ) ⊥ Z , and the principle is proved. 76 Chapter 4 Random Pro cesses 4.1 Definition of a random pro cess A random process X is an indexed collection X = (Xt : t ∈ T) of random variables, all on the same probability space (Ω, F , P ). In many applications the index set T is a set of times. If T = Z, or more generally, if T is a set of consecutive integers, then X is called a discrete time random process. If T = R or if T is an interval of R, then X is called a continuous time random process. Three ways to view a random process X = (Xt : t ∈ T) are as follows: • For each t fixed, Xt is a function on Ω. • X is a function on T × Ω with value Xt (ω ) for given t ∈ T and ω ∈ Ω. • For each ω fixed with ω ∈ Ω, Xt (ω ) is a function of t, called the sample path corresponding to ω . Example 4.1 Suppose W1 , W2 , . . . are independent random variables with 1 P {Wk = 1} = P {Wk = −1} = 2 for each k , and suppose X0 = 0 and Xn = W1 + · · · + Wn for positive integers n. Let W = (Wk : k ≥ 1) and X = (Xn : n ≥ 0). Then W and X are both discrete time random processes. The index set T for X is Z+ . A sample path of W and a corresponding sample path of X are shown in Figure 4.1. X (!) W (!) k k k Figure 4.1: Typical sample paths 77 k The following notation is used: µX (t) = E [Xt ] RX (s, t) = E [Xs Xt ] CX (s, t) = Cov(Xs , Xt ) FX,n (x1 , t1 ; . . . ; xn , tn ) = P {Xt1 ≤ x1 , . . . , Xtn ≤ xn } and µX is called the mean function, RX is called the correlation function, CX is called the covariance function, and FX,n is called the nth order CDF. Sometimes the prefix “auto,” meaning “self,” is added to the words correlation and covariance, to emphasize that only one random process is 2 involved. A second order random process is a random process (Xt : t ∈ T) such that E [Xt ] < +∞ for all t ∈ T. The mean, correlation, and covariance functions of a second order random process are all well-defined and finite. If Xt is a discrete random variable for each t, then the nth order pmf of X is defined by pX,n (x1 , t1 ; . . . ; xn , tn ) = P {Xt1 = x1 , . . . , Xtn = xn }. Similarly, if Xt1 , . . . , Xtn are jointly continuous random variables for any distinct t1 , . . . , tn in T, then X has an nth order pdf fX,n , such that for t1 , . . . , tn fixed, fX,n (x1 , t1 ; . . . ; xn , tn ) is the joint pdf of Xt1 , . . . , Xtn . Example 4.2 Let A and B be independent, N (0, 1) random variables. Suppose Xt = A + B t + t2 for all t ∈ R. Let us describe the sample functions, the mean, correlation, and covariance functions, and the first and second order pdf ’s of X . Each sample function corresponds to some fixed ω in Ω. For ω fixed, A(ω ) and B (ω ) are numbers. The sample paths all have the same shape–they are parabolas with constant second derivative equal to 2. The sample path for ω fixed has t = 0 intercept A(ω ), and minimum value ω2 A(ω ) − B (4 ) achieved at t = − B (w) . Three typical sample paths are shown in Figure 4.2. The 2 A(!) !B(!) 2 t 2 A(!)! B(!) 
4 Figure 4.2: Typical sample paths various moment functions are given by µX (t) = E [A + B t + t2 ] = t2 RX (s, t) = E [(A + B s + s2 )(A + B t + t2 )] = 1 + st + s2 t2 CX (s, t) = RX (s, t) − µX (s)µX (t) = 1 + st. 78 As for the densities, for each t fixed, Xt is a linear combination of two independent Gaussian random variables, and Xt has mean µX (t) = t2 and variance Var(Xt ) = CX (t, t) = 1 + t2 . Thus, Xt is a N (t2 , 1 + t2 ) random variable. That specifies the first order pdf fX,1 well enough, but if one insists on writing it out in all detail it is given by fX,1 (x, t) = 1 2π (1 + t2 ) exp − (x − t2 )2 2(1 + t2 ) . For s and t fixed distinct numbers, Xs and Xt are jointly Gaussian and their covariance matrix is given by Cov Xs Xt = 1 + s2 1 + st 1 + st 1 + t2 . The determinant of this matrix is (s − t)2 , which is nonzero. Thus X has a second order pdf fX,2 . For most purposes, we have already written enough about fX,2 for this example, but in full detail it is given by fX,2 (x, s; y , t) = 1 1 exp − 2π |s − t| 2 x − s2 y − t2 T 1 + s2 1 + st 1 + st 1 + t2 −1 x − s2 y − t2 . The nth order distributions of X for this example are joint Gaussian distributions, but densities don’t exist for n ≥ 3 since Xt1 , Xt2 , and Xt3 are linearly dependent for any t1 , t2 , t3 . A random process (Xt : t ∈ T) is said to be Gaussian if the random variables Xt : t ∈ T comprising the process are jointly Gaussian. The process X in the example just discussed is Gaussian. All the finite order distributions of a Gaussian random process X are determined by the mean function µX and autocorrelation function RX . Indeed, for any finite subset {t1 , t2 , . . . , tn } of T, (Xt1 , . . . , Xtn )T is a Gaussian vector with mean (µX (t1 ), . . . , µX (tn ))T and covariance matrix with ij th element CX (ti , tj ) = RX (ti , tj ) − µX (ti )µX (tj ). Two or more random processes are said to be jointly Gaussian if all the random variables comprising the processes are jointly Gaussian. Example 4.3 Let U = (Uk : k ∈ Z) be a random process such that the random variables Uk : k ∈ Z are independent, and P [Uk = 1] = P [Uk = −1] = 1 for all k . Let X = (Xt : t ∈ R) 2 be the random process obtained by letting Xt = Un for n ≤ t < n + 1 for any n. Equivalently, Xt = U t . A sample path of U and a corresponding sample path of X are shown in Figure 4.3. Both random processes have zero mean, so their covariance functions are equal to their correlation Uk Xt k t Figure 4.3: Typical sample paths function and are given by RU (k , l) = 1 if k = l 0 else RX (s, t) = 79 1 if s = t 0 else . The random variables of U are discrete, so the nth order pmf of U exists for all n. It is given by pU,n (x1 , k1 ; . . . ; xn , kn ) = 2−n if (x1 , . . . , xn ) ∈ {−1, 1}n 0 else for distinct integers k1 , . . . , kn . The nth order pmf of X exists for the same reason, but it is a bit more difficult to write down. In particular, the joint pmf of Xs and Xt depends on whether s = t . If s = t then Xs = Xt and if s = t then Xs and Xt are independent. Therefore, the second order pmf of X is given as follows: 1 2 if t1 = t2 and either x1 = x2 = 1 or x1 = x2 = −1 1 if t1 = t2 and x1 , x2 ∈ {−1, 1} fX,2 (x1 , t1 ; x2 , t2 ) = 4 0 else. 4.2 Random walks and gambler’s ruin Suppose p is given with 0 < p < 1. Let W1 , W2 , . . . be independent random variables with P {Wi = 1} = p and P {Wi = −1} = 1 − p for i ≥ 1. Suppose X0 is an integer valued random variable independent of (W1 , W2 , . . .), and for n ≥ 1, define Xn by Xn = X0 + W1 + · · · + Wn . 
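A sample path of such a walk is easy to simulate. The sketch below (with $p$, $X_0$, and the horizon chosen arbitrarily for illustration) generates one path and compares the endpoint with its mean $k + n(2p-1)$, the first property listed below.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 0.6        # P{W_i = 1}; illustrative value
k0 = 5         # initial value X_0 = k0
n = 100

# W_1, ..., W_n are i.i.d. +/-1 valued with P{W_i = 1} = p.
W = rng.choice([1, -1], size=n, p=[p, 1 - p])

# X_n = X_0 + W_1 + ... + W_n; prepend X_0 so X[j] is the walk at time j.
X = k0 + np.concatenate(([0], np.cumsum(W)))

print("first few values of the walk:", X[:10])
print("X_n at n=100:", X[-1], " (E_k[X_n] = k0 + n(2p-1) =", k0 + n * (2 * p - 1), ")")
```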
A sample path of $X = (X_n : n \geq 0)$ is shown in Figure 4.4. The random process $X$ is called a random walk.

Figure 4.4: A typical sample path

Write $P_k$ and $E_k$ for conditional probabilities and conditional expectations given that $X_0 = k$. For example, $P_k[A] = P[A \mid X_0 = k]$ for any event $A$. Let us summarize some of the basic properties of $X$.

• $E_k[X_n] = k + n(2p-1)$.
• $\mathrm{Var}_k(X_n) = \mathrm{Var}(k + W_1 + \cdots + W_n) = 4np(1-p)$.
• $\lim_{n\to\infty} \frac{X_n}{n} = 2p-1$ (a.s. and m.s. under $P_k$, $k$ fixed).
• $\lim_{n\to\infty} P_k\left[ \frac{X_n - n(2p-1)}{\sqrt{4np(1-p)}} \leq c \right] = \Phi(c)$.
• $P_k\{X_n = k + j - (n-j)\} = \binom{n}{j} p^j (1-p)^{n-j}$ for $0 \leq j \leq n$.

Almost all the properties listed are properties of the one-dimensional distributions of $X$. In fact, only the strong law of large numbers, giving the a.s. convergence in the third property listed, depends on the joint distribution of the $X_n$'s.

The so-called Gambler's Ruin problem is a nice example of the calculation of a probability involving the joint distributions of the random walk $X$. Interpret $X_n$ as the number of units of money a gambler has at time $n$. Assume that the initial wealth $k$ satisfies $k \geq 0$, and suppose the gambler has a goal of accumulating $b$ units of money for some positive integer $b \geq k$. While the random walk $(X_n : n \geq 0)$ continues on forever, we are only interested in it until it hits either 0 or $b$. Let $R_b$ denote the event that the gambler is eventually ruined, meaning the random walk reaches zero without first reaching $b$. The gambler's ruin probability is $P_k[R_b]$. A simple idea allows us to compute the ruin probability. The idea is to condition on the value of the first step $W_1$, and then to recognize that after the first step is taken, the conditional probability of ruin is the same as the unconditional probability of ruin for initial wealth $k + W_1$.

Let $r_k = P_k[R_b]$ for $0 \leq k \leq b$, so $r_k$ is the ruin probability for the gambler with initial wealth $k$ and target wealth $b$. Clearly $r_0 = 1$ and $r_b = 0$. For $1 \leq k \leq b-1$, condition on $W_1$ to yield
$$ r_k = P_k\{W_1 = 1\} P_k[R_b \mid W_1 = 1] + P_k\{W_1 = -1\} P_k[R_b \mid W_1 = -1] $$
or $r_k = p\, r_{k+1} + (1-p)\, r_{k-1}$. This yields $b-1$ linear equations for the $b-1$ unknowns $r_1, \ldots, r_{b-1}$.

If $p = \frac{1}{2}$ the equations become $r_k = \frac{1}{2}\{r_{k-1} + r_{k+1}\}$, so that $r_k = A + Bk$ for some constants $A$ and $B$. Using the boundary conditions $r_0 = 1$ and $r_b = 0$, we find that $r_k = 1 - \frac{k}{b}$ in case $p = \frac{1}{2}$. Note that, interestingly enough, after the gambler stops playing, he'll have $b$ units with probability $\frac{k}{b}$ and zero units otherwise. Thus, his expected wealth after completing the game is equal to his initial capital, $k$.

If $p \neq \frac{1}{2}$, we seek a solution of the form $r_k = A\theta_1^k + B\theta_2^k$, where $\theta_1$ and $\theta_2$ are the two roots of the quadratic equation $\theta = p\theta^2 + (1-p)$, and $A$, $B$ are selected to meet the two boundary conditions. The roots are $1$ and $\frac{1-p}{p}$, and finding $A$ and $B$ yields that, if $p \neq \frac{1}{2}$,
$$ r_k = \frac{\left(\frac{1-p}{p}\right)^k - \left(\frac{1-p}{p}\right)^b}{1 - \left(\frac{1-p}{p}\right)^b} \qquad 0 \leq k \leq b. $$

Focus, now, on the case that $p > \frac{1}{2}$. By the law of large numbers, $\frac{X_n}{n} \to 2p-1$ a.s. as $n \to \infty$. This implies, in particular, that $X_n \to +\infty$ a.s. as $n \to \infty$. Thus, unless the gambler is ruined in finite time, his capital converges to infinity. Let $R$ be the event that the gambler is eventually ruined. The events $R_b$ increase with $b$, because if $b$ is larger the gambler has more possibilities to be ruined before accumulating $b$ units of money: $R_b \subset R_{b+1} \subset \cdots$ and $R = \cup_{b=k}^{\infty} R_b$. Therefore, by the countable additivity of probability,
$$ P_k[R] = \lim_{b\to\infty} P_k[R_b] = \lim_{b\to\infty} r_k = \left(\frac{1-p}{p}\right)^k. $$
Thus, the probability of eventual ruin decreases geometrically with the initial wealth $k$.
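A quick Monte Carlo check of the finite-horizon ruin formula: the sketch below (the parameter values are arbitrary choices for illustration) simulates the walk until it hits $0$ or $b$ and compares the empirical ruin frequency with the closed-form expression for $r_k$ when $p \neq \frac{1}{2}$.

```python
import numpy as np

rng = np.random.default_rng(3)

p, k0, b = 0.6, 3, 10          # illustrative values: win probability, initial and target wealth
n_trials = 20_000

ruined = 0
for _ in range(n_trials):
    x = k0
    while 0 < x < b:
        x += 1 if rng.random() < p else -1   # one +/-1 step of the walk
    ruined += (x == 0)

theta = (1 - p) / p
r_k = (theta ** k0 - theta ** b) / (1 - theta ** b)   # closed-form ruin probability, p != 1/2
print("empirical ruin frequency:", ruined / n_trials)
print("formula r_k:             ", r_k)
```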
4.3 Processes with independent increments and martingales

The increment of a random process $X = (X_t : t \in T)$ over an interval $[a, b]$ is the random variable $X_b - X_a$. A random process is said to have independent increments if for any positive integer $n$ and any $t_0 < t_1 < \cdots < t_n$ in $T$, the increments $X_{t_1} - X_{t_0}, \ldots, X_{t_n} - X_{t_{n-1}}$ are mutually independent.

A random process $(X_t : t \in T)$ is called a martingale if $E[X_t]$ is finite for all $t$ and for any positive integer $n$ and $t_1 < t_2 < \cdots < t_n < t_{n+1}$,
$$ E[X_{t_{n+1}} \mid X_{t_1}, \ldots, X_{t_n}] = X_{t_n} $$
or, equivalently,
$$ E[X_{t_{n+1}} - X_{t_n} \mid X_{t_1}, \ldots, X_{t_n}] = 0. $$
If $t_n$ is interpreted as the present time, then $t_{n+1}$ is a future time and the value of $(X_{t_1}, \ldots, X_{t_n})$ represents information about the past and present values of $X$. With this interpretation, the martingale property is that the future increments of $X$ have conditional mean zero, given the past and present values of the process.

An example of a martingale is the following. Suppose a gambler has initial wealth $X_0$. Suppose the gambler makes bets with various odds, such that, as far as the past history of $X$ can determine, the bets made are all for fair games in which the expected net gains are zero. Then the wealth of the gambler at time $t$, $X_t$, is a martingale.

Suppose $(X_t)$ is an independent increment process with index set $T = \mathbf{R}_+$ or $T = \mathbf{Z}_+$, with $X_0$ equal to a constant and with mean zero increments. Then $X$ is a martingale, as we now show. Let $t_1 < \cdots < t_{n+1}$ be in $T$. Then $(X_{t_1}, \ldots, X_{t_n})$ is a function of the increments $X_{t_1} - X_0, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}$, and hence it is independent of the increment $X_{t_{n+1}} - X_{t_n}$. Thus
$$ E[X_{t_{n+1}} - X_{t_n} \mid X_{t_1}, \ldots, X_{t_n}] = E[X_{t_{n+1}} - X_{t_n}] = 0. $$
The random walk $(X_n : n \geq 0)$ arising in the gambler's ruin problem is an independent increment process, and if $p = \frac{1}{2}$ it is also a martingale.

The following proposition is stated, without proof, to give an indication of some of the useful deductions that follow from the martingale property.

Proposition 4.3.1 (a) Let $X_0, X_1, X_2, \ldots$ be nonnegative random variables such that $E[X_{k+1} \mid X_0, \ldots, X_k] \leq X_k$ for $k \geq 0$ (such an $X$ is called a nonnegative supermartingale). Then
$$ P\left\{ \max_{0 \leq k \leq n} X_k \geq \gamma \right\} \leq \frac{E[X_0]}{\gamma}. $$
(b) (Doob's $L^2$ Inequality) Let $X_0, X_1, \ldots$ be a martingale sequence with $E[X_n^2] < +\infty$ for some $n$. Then
$$ E\left[ \max_{0 \leq k \leq n} X_k^2 \right] \leq 4 E[X_n^2]. $$
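Proposition 4.3.1 can be probed numerically using the symmetric random walk of Section 4.2, which was just noted to be a martingale. The sketch below (the horizon and the number of simulated paths are arbitrary choices) estimates both sides of Doob's $L^2$ inequality; the right side equals $4n$ exactly for this walk, since $E[X_n^2] = n$ when $X_0 = 0$.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 200            # time horizon
n_trials = 20_000  # number of simulated paths

# Symmetric (p = 1/2) random walk started at 0: a martingale.
steps = rng.choice([1, -1], size=(n_trials, n))
paths = np.cumsum(steps, axis=1)

max_sq = np.max(paths ** 2, axis=1)     # max_{1<=k<=n} X_k^2 on each path (X_0^2 = 0)
lhs = max_sq.mean()                     # estimate of E[ max_k X_k^2 ]
rhs = 4 * np.mean(paths[:, -1] ** 2)    # estimate of 4 E[ X_n^2 ]  (equals 4n exactly)

print("E[max X_k^2] estimate:", lhs)
print("4 E[X_n^2] estimate:  ", rhs)
```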
Thus, properties B.0–B.2 imply that W is a Gaussian random process with µW = 0 and RW (s, t) = σ 2 (s ∧ t). In fact, the converse is also true. If W = (Wt : t ≥ 0) is a Gaussian random process with mean zero and RW (s, t) = σ 2 (s ∧ t), then B.0–B.2 are true. Property B.3 does not come automatically. For example, if W is a Brownian motion and if U is a Unif(0,1) distributed random variable, let W be defined by Wt = Wt + I{U =t} . 83 Then P {Wt = Wt } = 1 for each t ≥ 0 and W also satisfies B.0–B.2, but W fails to satisfy B.3. Thus, W is not a Brownian motion. The difference between W and W is significant if events involving uncountably many values of t are investigated. For example, P {Wt ≤ 1 for 0 ≤ t ≤ 1} = P {Wt ≤ 1 for 0 ≤ t ≤ 1}. 4.5 Counting pro cesses and the Poisson pro cess A function f on R+ is called a counting function if f (0) = 0, f is nondecreasing, f is right continuous, and f is integer valued. The interpretation is that f (t) is the number of “counts” observed during the interval (0, t]. An increment f (b) − f (a) is the number of counts in the interval (a, b]. If ti denotes the time of the ith count for i ≥ 1, then f can be described by the sequence (ti ). Or, if u1 = t1 and ui = ti − ti−1 for i ≥ 2, then f can be described by the sequence (ui ). See Figure 4.6. The numbers t1 , t2 , . . . are called the count times and the numbers u1 , u2 , . . . are called f(t) 3 2 1 0 u1 u2 u3 t1 0 t2 t t3 Figure 4.6: A counting function. the intercount times. The following equations clearly hold: f (t) = ∞ n=1 I{t≥tn } tn = min{t : f (t) ≥ n} tn = u 1 + · · · + u n . A random process is called a counting process if with probability one its sample path is a counting function. A counting process has two corresponding random sequences, the sequence of count times and the sequence of intercount times. The most widely used example of a counting process is a Poisson process, defined next. Let λ > 0. By definition, a Poisson process with rate λ is a random process N = (Nt : t ≥ 0) such that N.1 N is a counting process, N.2 N has independent increments, N.3 N (t) − N (s) has the P oi(λ(t − s)) distribution for t ≥ s. 84 The joint pdf of the first n count times T1 , . . . , Tn can be found as follows. Let 0 < t1 < t2 < · · · < tn . Select > 0 so small that (t1 − , t1 ], (t2 − , t2 ], . . . , (tn − , tn ] are disjoint intervals of R+ . Then the probability that (T1 , . . . , Tn ) is in the n-dimensional cube with upper corner t1 , . . . , tn and sides of length is given by P {Ti ∈ (ti − , ti ] for 1 ≤ i ≤ n} = P {Nt1 − = 0, Nt1 − Nt1 − = 1, Nt2 − − Nt1 = 0, . . . , Ntn − Ntn − = 1} = (e−λ(t1 − ) )(λ e−λ )(e−λ(t2 − −t1 ) = (λ )n e−λtn . The volume of the cube is n. ) · · · (λ e−λ ) Therefore (T1 , . . . , Tn ) has the pdf λn e−λtn 0 fT1 ···Tn (t1 , . . . , tn ) = if 0 < t1 < · · · < tn else. The marginal density of Tn is given for t > 0 by (replacing tn by t): fTn (t) = = t tn−1 t2 ··· 0 0 n tn−1 e−λt λ 0 λn e−λtn dt1 · · · dtn−1 (n − 1)! so that Tn has the Gamma(λ, n) density. The joint pdf of the intercount times U1 , . . . , Un for a rate λ Poisson process is found next. The vector (U1 , . . . , Un ) is the image of (T1 , . . . , Tn ) under the mapping (t1 , . . . , tn ) → (u1 , . . . , un ) defined by u1 = t1 , uk = tk − tk−1 for k ≥ 2. The mapping is invertible, since tk = u1 + · · · + uk for 1 ≤ k ≤ n, it has range Rn , and the Jacobian + 1 −1 1 ∂u −1 1 = ∂t .. .. . . −1 1 has unit determinant. Therefore λn e−λ(u1 +···+un ) u ∈ Rn +. 0 else fU1 ...Un (u1 , . . . 
, un ) = Thus the intercount times U1 , U2 , . . . are independent and each is exponentially distributed with parameter λ. Fix τ > 0 and an integer n ≥ 1. Let us find the conditional density of (T1 , . . . , Tn ) given that Nτ = n. Arguing as before, we find that for 0 < t1 < · · · < tn < τ and all > 0 small enough P {{Ti ∈ (ti − , ti ] : 1 ≤ i ≤ n} ∩ {Nτ = n}} = (λ )n e−λτ . Dividing by n P {Nτ = n} and letting event {Nτ = n}: → 0 yields the conditional density of T1 , . . . , Tn given the fT1 ···Tn (t1 , . . . , tn ) = n! τn 0 85 if 0 < t1 < · · · < tn ≤ τ . else Equivalently, given that n counts occur during [0, τ ], the times of the counts are as if n times were independently selected, each uniformly distributed on [0, τ ]. We have the following proposition. Prop osition 4.5.1 Let N be a counting process and let λ > 0. The fol lowing are equivalent: (a) N is a Poisson process with rate λ. (b) The intercount times U1 , U2 , . . . are mutual ly independent, Exp(λ) random variables. (c) For each τ > 0, Nτ is a Poisson random variable with parameter λτ , and given Nτ = n, the times of the n counts during [0, τ ] are the same as n independent, Unif[0, τ ] random variables, reordered to be nondecreasing. A Poisson process is not a martingale. However, if N is defined by Nt = Nt − λt, then N is an independent increment process with mean 0 and N0 = 0. Thus, N is a martingale. Note that N has the same mean and covariance function as a Brownian motion with σ 2 = λ, which shows how little one really knows about a process from its mean function and correlation function alone. 4.6 Stationarity Consider a random process X = (Xt : t ∈ T) such that either T = Z or T = R. Then X is said to be stationary if for any t1 , . . . , tn and s in T, the random vectors (Xt1 , . . . , Xtn ) and (Xt1 +s , . . . , Xtn +s ) have the same distribution. In other words, the joint statistics of X of all orders are unaffected by a shift in time. The condition of stationarity of X can also be expressed in terms of the CDF’s of X : X is stationary if for any n ≥ 1, s, t1 , . . . , tn ∈ T, and x1 , . . . , xn ∈ R, FX,n (x1 , t1 ; . . . ; xn , tn ) = FX,n (x1 , t1 + s; . . . ; xn ; tn + s). Suppose X is a stationary second order random process. Then by the n = 1 part of the definition 2 of stationarity, Xt has the same distribution for all t. In particular, µX (t) and E [Xt ] do not depend on t. Moreover, by the n = 2 part of the definition E [Xt1 Xt2 ] = E [Xt1 +s Xt2 +s ] for any s ∈ T. If 2 E [Xt ] < +∞ for all t, then E [Xt+s ] and RX (t1 + s, t2 + s) are finite and both do not depend on s. A second order random process (Xt : t ∈ T) with T = Z or T = R is called wide sense stationary (WSS) if µX (t) = µX (s + t) and RX (t1 , t2 ) = RX (t1 + s, t2 + s) for all t, s, t1 , t2 ∈ T. As shown above, a stationary second order random process is WSS. Wide sense stationarity means that µX (t) is a finite number, not depending on t, and RX (t1 , t2 ) depends on t1 , t2 only through the difference t1 − t2 . By a convenient and widely accepted abuse of notation, if X is WSS, we use µX to be the constant and RX to be the function of one real variable such that E [Xt ] = µX E [Xt1 Xt2 ] = RX (t1 − t2 ) t∈T t1 , t2 ∈ T. The dual use of the notation RX if X is WSS leads to the identity RX (t1 , t2 ) = RX (t1 − t2 ). As a practical matter, this means replacing a comma by a minus sign. 
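As an aside, the two-argument and one-argument versions of $R_X$ can be compared numerically. The following minimal sketch assumes NumPy and uses the moving-average sequence $X_k = Z_k + 0.5 Z_{k-1}$ with i.i.d. $N(0,1)$ inputs purely as an illustrative WSS example. It estimates $E[X_{t_1} X_{t_2}]$ over many independent realizations and shows that the estimates depend on $(t_1, t_2)$ only through $t_1 - t_2$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_ma1(n_samples=50000, length=8):
    """Independent realizations of X_k = Z_k + 0.5 Z_{k-1}, with Z i.i.d. N(0,1)."""
    Z = rng.normal(size=(n_samples, length + 1))
    return Z[:, 1:] + 0.5 * Z[:, :-1]

X = sample_ma1()

def R_hat(t1, t2):
    """Ensemble estimate of R_X(t1, t2) = E[X_{t1} X_{t2}]."""
    return np.mean(X[:, t1] * X[:, t2])

# Pairs with the same time difference give nearly equal estimates, as WSS predicts.
print(R_hat(1, 3), R_hat(4, 6))    # both near R_X(2) = 0
print(R_hat(2, 3), R_hat(5, 6))    # both near R_X(1) = 0.5
print(R_hat(3, 3), R_hat(6, 6))    # both near R_X(0) = 1.25
```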
Since one interpretation of RX 86 requires it to have two arguments, and the other interpretation requires only one argument, the interpretation is clear from the number of arguments. Some brave authors even skip mentioning that X is WSS when they write: “Suppose (Xt : t ∈ R) has mean µX and correlation function RX (τ ),” because it is implicit in this statement that X is WSS. Since the covariance function CX of a random process X satisfies CX (t1 , t2 ) = RX (t1 , t2 ) − µX (t1 )µX (t2 ), if X is WSS then CX (t1 , t2 ) is a function of t1 − t2 . The notation CX is also used to denote the function of one variable such that CX (t1 − t2 ) = Cov(Xt1 , Xt2 ). Therefore, if X is WSS then CX (t1 − t2 ) = CX (t1 , t2 ). Also, CX (τ ) = RX (τ ) − µ2 , where in this equation τ should be thought X of as the difference of two times, t1 − t2 . In general, there is much more to know about a random vector or a random process than the first and second moments. Therefore, one can mathematically define WSS processes that are spectacularly different in appearance from any stationary random process. For example, any random process (Xk : k ∈ Z) such that the Xk are independent with E [Xk ] = 0 and Var(Xk ) = 1 for all k is WSS. To be specific, we could take the Xk to be independent, with Xk being N (0, 1) for k ≤ 0 and with Xk having pmf pX,1 (x, k ) = P {Xk = x} = 0 else 1 2k 2 1− 1 k2 x ∈ {k , −k } if x = 0 for k ≥ 1. A typical sample path of this WSS random process is shown in Figure 4.7. Xk k Figure 4.7: A typical sample path The situation is much different if X is a Gaussian process. Indeed, suppose X is Gaussian and WSS. Then for any t1 , t2 , . . . , tn , s ∈ T, the random vector (Xt1 +s , Xt2 +s , . . . , Xtn +s )T is Gaussian with mean (µ, µ, . . . , µ)T and covariance matrix with ij th entry CX ((ti + s) − (tj + s)) = CX (ti − tj ). This mean and covariance matrix do not depend on s. Thus, the distribution of the vector does not depend on s. Therefore, X is stationary. In summary, if X is stationary then X is WSS, and if X is both Gaussian and WSS, then X is stationary. 87 Example 4.4 Let Xt = A cos(ωc t + Θ), where ωc is a nonzero constant, A and Θ are independent random variables with P {A > 0} = 1 and E [A2 ] < +∞. Each sample path of the random process (Xt : t ∈ R) is a pure sinusoidal function at frequency ωc radians per unit time, with amplitude A and phase Θ. We address two questions. First, what additional assumptions, if any, are needed on the distributions of A and Θ to imply that X is WSS? Second, we consider two distributions for Θ which each make X WSS, and see if they make X stationary. To address whether X is WSS, the mean and correlation functions can be computed as follows. Since A and Θ are independent and since cos(ωc t + Θ) = cos(ωc t) cos(Θ) − sin(ωc t) sin(Θ), µX (t) = E [A] (E [cos(Θ)] cos(ωc t) − E [sin(Θ)] sin(ωc t)) . Thus, the function µX (t) is a linear combination of cos(ωc t) and sin(ωc t). The only way such a linear combination can be independent of t is if the coefficients of both cos(ωc t) and sin(ωc t) are zero (in fact, it is enough to equate the values of µX (t) at ωc t = 0, π , and π ). Therefore, µX (t) 2 does not depend on t if and only if E [cos(Θ)] = E [sin(Θ)] = 0. Turning next to RX , using the trigonometric identity cos(a) cos(b) = (cos(a − b) + cos(a + b))/2 yields RX (s, t) = E [A2 ]E [cos(ωc s + Θ) cos(ωc t + Θ)] E [A2 ] {cos(ωc (s − t)) + E [cos(ωc (s + t) + 2Θ)]} . 
= 2 Since s + t can be arbitrary for s − t fixed, in order that RX (s, t) be a function of s − t alone it is necessary that E [cos(ωc (s + t) + 2Θ)] be a constant, independent of the value of s + t. Arguing just as in the case of µX , with Θ replaced by 2Θ, yields that RX (s, t) is a function of s − t if and only if E [cos(2Θ)] = E [sin(2Θ)] = 0. Combining the findings for µX and RX , yields that X is WSS, if and only if, E [cos(Θ)] = E [sin(Θ)] = E [cos(2Θ)] = E [sin(2Θ)] = 0. There are many distributions for Θ in [0, 2π ] such that the four moments specified are zero. Two possibilities are (a) Θ is uniformly distributed on the interval [0, 2π ], or, (b) Θ is a discrete random π variable, taking the four values 0, π , π , 32 with equal probability. Is X stationary for either 2 possibility? We shall show that X is stationary if Θ is uniformly distributed over [0, 2π ]. Stationarity means that for any fixed constant s, the random processes (Xt : t ∈ R) and (Xt+s : t ∈ R) have the same finite order distributions. For this example, ˜ Xt+s = A cos(ωc (t + s) + Θ) = A cos(ωc t + Θ) ˜ ˜ where Θ = ((ωc s + Θ) mod 2π ). By an example discussed in Section 1.4, Θ is again uniformly ˜ ) have the same joint distribution, so distributed on the interval [0, 2π ]. Thus (A, Θ) and (A, Θ ˜ A cos(ωc t + Θ) and A cos(ωc t + Θ) have the same finite order distributions. Hence, X is indeed stationary if Θ is uniformly distributed over [0, 2π ]. π Assume now that Θ takes on each of the values of 0, π , π , and 32 with equal probability. Is X 2 stationary? If X were stationary then, in particular, Xt would have the same distribution for all t. π On one hand, P {X0 = 0} = P {Θ = π or Θ = 32 } = 1 . On the other hand, if ωc t is not an integer 2 2 88 multiple of π , then ωc t + Θ cannot be an integer multiple of π , so P {Xt = 0} = 0. Hence X is not 2 2 stationary. (With more work it can be shown that X is stationary, if and only if, (Θ mod 2π ) is uniformly distributed over the interval [0, 2π ].) 4.7 Joint prop erties of random pro cesses Two random processes X and Y are said to be jointly stationary if their parameter set T is either Z or R, and if for any t1 , . . . , tn , s ∈ T, the distribution of the random vector (Xt1 +s , Xt2 +s , . . . , Xtn +s , Yt1 +s , Yt2 +s , . . . , Ytn +s ) does not depend on s. The random processes X and Y are said to be jointly Gaussian if all the random variables comprising X and Y are jointly Gaussian. If X and Y are second order random processes on the same probability space, the cross correlation function, RX Y , is defined by RX Y (s, t) = E [Xs Yt ], and the cross covariance function, CX Y , is defined by CX Y (s, t) = Cov(Xs , Yt ). The random processes X and Y are said to be jointly WSS, if X and Y are each WSS, and if RX Y (s, t) is a function of s − t. If X and Y are jointly WSS, we use RX Y (τ ) for RX Y (s, t) where τ = s − t, and similarly CX Y (s − t) = CX Y (s, t). Note that CX Y (s, t) = CY X (t, s), so CX Y (τ ) = CY X (−τ ). 4.8 Conditional indep endence and Markov pro cesses Markov processes are naturally associated with the state space approach for modeling a system. The idea of a state space model for a given system is to define the state of the system at any given time t. The state of the system at time t should summarize everything about the system up to and including time t that is relevant to the future of the system. For example, the state of an aircraft at time t could consist of the position, velocity, and remaining fuel at time t. 
Think of t as the present time. The state at time t determines the possible future part of the aircraft tra jectory. For example, it determines how much longer the aircraft can fly and where it could possibly land. The state at time t does not completely determine the entire past tra jectory of the aircraft. Rather, the state summarizes enough about the system up to the present so that if the state is known, no more information about the past is relevant to the future possibilities. The concept of state is inherent in the Kalman filtering model discussed in Chapter 3. The notion of state is captured for random processes using the notion of conditional independence and the Markov property, which are discussed next. Let X, Y , Z be random vectors. We shall define the condition that X and Z are conditionally independent given Y . Such condition is denoted by X − Y − Z . If X, Y , Z are discrete, then X − Y − Z is defined to hold if P [X = i, Z = k | Y = j ] = P [X = i | Y = j ]P [Z = k | Y = j ] (4.1) for all i, j, k with P {Y = j } > 0. Equivalently, X − Y − Z if P [X = i, Y = j, Z = k ]P {Y = j } = P {X = i, Y = j }P {Z = k , Y = j } 89 (4.2) for all i, j, k . Equivalently again, X - Y - Z if P [Z = k | X = i, Y = j ] = P [Z = k | Y = j ] (4.3) for all i, j, k with P {X = i, Y = j } > 0. The forms (4.1) and (4.2) make it clear that the condition X − Y − Z is symmetric in X and Z : thus X − Y − Z is the same condition as Z − Y − X . The form (4.2) does not involve conditional probabilities, so no requirement about conditioning events having positive probability is needed. The form (4.3) shows that X − Y − Z means that knowing Y alone is as informative as knowing both X and Y , for the purpose of determining conditional probabilies of Z . Intuitively, the condition X − Y − Z means that the random variable Y serves as a state. If X, Y , and Z have a joint pdf, then the condition X − Y − Z can be defined using the pdfs and conditional pdfs in a similar way. For example, the conditional independence condition X − Y − Z holds by definition if fX Z |Y (x, z |y ) = fX |Y (x|y )fZ |Y (z |y ) whenever fY (y ) > 0 An equivalent condition is fZ |X Y (z |x, y ) = fZ |Y (z |y ) whenever fX Y (x, y ) > 0. (4.4) Example 4.5 Suppose X, Y , Z are jointly Gaussian vectors. Let us see what the condition X − Y − Z means in terms of the covariance matrices. Assume without loss of generality that the vectors have mean zero. Because X, Y , and Z are jointly Gaussian, the condition (4.4) is equivalent to the condition that E [Z |X, Y ] = E [Z |Y ] (because given X, Y , or just given Y , the conditional distribution of Z is Gaussian, and in the two cases the mean and covariance of the conditional distribution of Z is the same.) The idea of linear innovations applied to the length two sequence ˜ ˜ (Y , X ) yields E [Z |X, Y ] = E [Z |Y ] + E [Z |X ] where X = X − E [X |Y ]. Thus X − Y − Z if and only if ˜ ˜ ˜ E [Z |X ] = 0, or equivalently, if and only if Cov(X , Z ) = 0. Since X = X − Cov(X, Y )Cov(Y )−1 Y , if follows that ˜ Cov(X , Z ) = Cov(X, Z ) − Cov(X, Y )Cov(Y )−1 Cov(Y , Z ). Therefore, X − Y − Z if and only if Cov(X, Z ) = Cov(X, Y )Cov(Y )−1 Cov(Y , Z ). (4.5) In particular, if X, Y , and Z are jointly Gaussian random variables with nonzero variances, the condition X − Y − Z holds if and only if the correlation coefficients satisfy ρX Z = ρX Y ρY Z . 
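Condition (4.5) lends itself to a direct numerical check. The following minimal sketch assumes NumPy; the construction of the triple and the noise levels are illustrative choices, not taken from the notes. It builds jointly Gaussian $X, Y, Z$ with $Y = X + \text{noise}$ and $Z = Y + \text{noise}$, so that $X - Y - Z$ is expected to hold, and compares the two sides of (4.5) using sample covariances.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 200000

# A jointly Gaussian triple in which Z depends on X only through Y,
# so the condition X - Y - Z is expected to hold.
X = rng.normal(size=n)
Y = X + 0.5 * rng.normal(size=n)
Z = Y + 0.7 * rng.normal(size=n)

def cov(a, b):
    return np.mean(a * b) - np.mean(a) * np.mean(b)

def corr(a, b):
    return cov(a, b) / np.sqrt(cov(a, a) * cov(b, b))

lhs = cov(X, Z)
rhs = cov(X, Y) * cov(Y, Z) / cov(Y, Y)   # Cov(X,Y) Cov(Y)^{-1} Cov(Y,Z), scalar case
print(lhs, rhs)                           # nearly equal, consistent with (4.5)

# Equivalent statement in terms of correlation coefficients: rho_XZ = rho_XY * rho_YZ.
print(corr(X, Z), corr(X, Y) * corr(Y, Z))
```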
A general definition of conditional probabilities and conditional independence, based on the general definition of conditional expectation given in Chapter 3, is given next. Recall that $P[F] = E[I_F]$ for any event $F$, where $I_F$ denotes the indicator function of $F$. If $Y$ is a random vector, we define $P[F|Y]$ to equal $E[I_F|Y]$. This means that $P[F|Y]$ is the unique (in the sense that any two versions are equal with probability one) random variable such that (1) $P[F|Y]$ is a function of $Y$ and it has a finite second moment, and (2) $E[g(Y)P[F|Y]] = E[g(Y)I_F]$ for any $g(Y)$ with finite second moment. Given arbitrary random vectors, we define $X$ and $Z$ to be conditionally independent given $Y$ (written $X - Y - Z$) if for any Borel sets $A$ and $B$,
$$P[\{X \in A\} \cap \{Z \in B\} \mid Y] = P[X \in A \mid Y]\, P[Z \in B \mid Y].$$
Equivalently, $X - Y - Z$ holds if for any Borel set $B$,
$$P[Z \in B \mid X, Y] = P[Z \in B \mid Y].$$

A random process $X = (X_t : t \in T)$ is said to be a Markov process if for any $t_1, \ldots, t_{n+1}$ in $T$ with $t_1 < \cdots < t_{n+1}$, the following conditional independence condition holds:
$$(X_{t_1}, \cdots, X_{t_n}) - X_{t_n} - X_{t_{n+1}} \qquad (4.6)$$
It turns out that the Markov property is equivalent to the following conditional independence property: for any $t_1, \ldots, t_{n+m}$ in $T$ with $t_1 < \cdots < t_{n+m}$,
$$(X_{t_1}, \cdots, X_{t_n}) - X_{t_n} - (X_{t_n}, \cdots, X_{t_{n+m}}) \qquad (4.7)$$
The definition (4.6) is easier to check than condition (4.7), but (4.7) is appealing because it is symmetric in time. In words, thinking of $t_n$ as the present time, the Markov property means that the past and future of $X$ are conditionally independent given the present state $X_{t_n}$.

Example 4.6 (Markov property of independent increment processes) Let $(X_t : t \geq 0)$ be an independent increment process such that $X_0$ is a constant. Then for any $t_1, \ldots, t_{n+1}$ with $0 \leq t_1 \leq \cdots \leq t_{n+1}$, the vector $(X_{t_1}, \ldots, X_{t_n})$ is a function of the $n$ increments $X_{t_1} - X_0, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}$, and is thus independent of the increment $V = X_{t_{n+1}} - X_{t_n}$. But $X_{t_{n+1}}$ is determined by $V$ and $X_{t_n}$. Thus, $X$ is a Markov process. In particular, random walks, Brownian motions, and Poisson processes are Markov processes.

Example 4.7 (Gaussian Markov processes) Suppose $X = (X_t : t \in T)$ is a Gaussian random process with $\mathrm{Var}(X_t) > 0$ for all $t$. By the characterization of conditional independence for jointly Gaussian vectors (4.5), the Markov property (4.6) is equivalent to
$$\mathrm{Cov}\left( \begin{pmatrix} X_{t_1} \\ X_{t_2} \\ \vdots \\ X_{t_n} \end{pmatrix}, X_{t_{n+1}} \right) = \mathrm{Cov}\left( \begin{pmatrix} X_{t_1} \\ X_{t_2} \\ \vdots \\ X_{t_n} \end{pmatrix}, X_{t_n} \right) \mathrm{Var}(X_{t_n})^{-1}\, \mathrm{Cov}(X_{t_n}, X_{t_{n+1}})$$
which, letting $\rho(s,t)$ denote the correlation coefficient between $X_s$ and $X_t$, is equivalent to the requirement
$$\begin{pmatrix} \rho(t_1, t_{n+1}) \\ \rho(t_2, t_{n+1}) \\ \vdots \\ \rho(t_n, t_{n+1}) \end{pmatrix} = \begin{pmatrix} \rho(t_1, t_n) \\ \rho(t_2, t_n) \\ \vdots \\ \rho(t_n, t_n) \end{pmatrix} \rho(t_n, t_{n+1}).$$
Therefore a Gaussian process $X$ is Markovian if and only if
$$\rho(r, t) = \rho(r, s)\rho(s, t) \quad \text{whenever } r, s, t \in T \text{ with } r < s < t. \qquad (4.8)$$
If $X = (X_k : k \in \mathbb{Z})$ is a discrete-time stationary Gaussian process, then $\rho(s, t)$ may be written as $\rho(k)$, where $k = s - t$. Note that $\rho(k) = \rho(-k)$. Such a process is Markovian if and only if $\rho(k_1 + k_2) = \rho(k_1)\rho(k_2)$ for all positive integers $k_1$ and $k_2$. Therefore, $X$ is Markovian if and only if $\rho(k) = b^{|k|}$ for all $k$, for some constant $b$ with $|b| \leq 1$. Equivalently, a stationary Gaussian process $X = (X_k : k \in \mathbb{Z})$ with $\mathrm{Var}(X_k) > 0$ for all $k$ is Markovian if and only if the covariance function has the form $C_X(k) = A b^{|k|}$ for some constants $A$ and $b$ with $A \geq 0$ and $|b| \leq 1$.
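A stationary Gaussian sequence with $C_X(k) = A b^{|k|}$ can be produced by a first-order autoregression, which gives a quick numerical check of the factorization criterion. The following minimal sketch assumes NumPy; the value $b = 0.8$ and the sample sizes are arbitrary illustrative choices. It simulates $X_{k+1} = b X_k + \sqrt{1 - b^2}\, W_{k+1}$ started in its stationary distribution and compares $\rho(r,t)$ with $\rho(r,s)\rho(s,t)$.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

b = 0.8
n_paths, length = 100000, 6

# Stationary Gaussian AR(1): X_0 ~ N(0,1), X_{k+1} = b X_k + sqrt(1-b^2) W_{k+1}.
# Then Var(X_k) = 1 for all k and rho(k) = b**|k|, so the sequence is Markov.
X = np.empty((n_paths, length))
X[:, 0] = rng.normal(size=n_paths)
for k in range(length - 1):
    X[:, k + 1] = b * X[:, k] + np.sqrt(1 - b**2) * rng.normal(size=n_paths)

def rho(i, j):
    return np.corrcoef(X[:, i], X[:, j])[0, 1]

# Markov criterion for Gaussian processes: rho(r, t) = rho(r, s) * rho(s, t).
print(rho(0, 3), rho(0, 1) * rho(1, 3))   # both near b**3 = 0.512
print(rho(1, 5), rho(1, 2) * rho(2, 5))   # both near b**4 = 0.4096
```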
91 Similarly, if (Xt : t ∈ R) is a continuous time stationary Gaussian process with V ar(Xt ) > 0 for all t, X is Markovian if and only if ρ(s + t) = ρ(s)ρ(t) for all s, t ≥ 0. The only bounded realvalued functions satisfying such a multiplicative condition are exponential functions. Therefore, a stationary Gaussian process X with V ar(Xt ) > 0 for all t is Markovian if and only if ρ has the form ρ(τ ) = exp(−α|τ |), for some constant α ≥ 0, or equivalently, if and only if CX has the form CX (τ ) = A exp(−α|τ |) for some constants A > 0 and α ≥ 0. 4.9 Discrete state Markov pro cesses This section delves further into the theory of Markov processes in the technically simplest case of a discrete state space. Let S be a finite or countably infinite set, called the state space. Given a probability space (Ω, F , P ), an S valued random variable is defined to be a function Y mapping Ω to S such that {ω : Y (ω ) = s} ∈ F for each s ∈ S . Assume that the elements of S are ordered so that S = {a1 , a2 , . . . , an } in case S has finite cardinality, or S = {a1 , a2 , a3 , . . .} in case S has infinite cardinality. Given the ordering, an S valued random variable is equivalent to a positive integer valued random variable, so it is nothing exotic. Think of the probability distribution of an S valued random variable Y as a row vector of possibly infinite dimension, called a probability vector: pY = (P {Y = a1 }, P {Y = a2 }, . . .). Similarly think of a deterministic function g on S as a column vector, g = (g (a1 ), g (a2 ), . . .)T . Since the elements of S may not even be numbers, it might not make sense to speak of the expected value of an S valued random variable. However, if g is a function mapping S to the reals, then g (Y ) is a real-valued random variable and its expectation is given by the inner product of the probability vector pY and the column vector g : E [g (Y )] = i∈S pY (i)g (i) = pY g . A random process X = (Xt : t ∈ T) is said to have state space S if Xt is an S valued random variable for each t ∈ T, and the Markov property of such a random process is defined just as it is for a real valued random process. Let (Xt : t ∈ T) be a be a Markov process with state space S . For brevity we denote the first order pmf of X at time t as π (t) = (πi (t) : i ∈ S ). That is, πi (t) = pX (i, t) = P {X (t) = i}. The following notation is used to denote conditional probabilities: P [Xt1 = j1 , . . . , Xtn = jn |Xs1 = i1 , . . . , Xsm = im ] = pX (j1 , t1 ; . . . ; jn , tn |i1 , s1 ; . . . ; im , sm ) For brevity, conditional probabilities of the form P [Xt = j |Xs = i] are written as pij (s, t), and are called the transition probabilities of X . The first order pmfs π (t) and the transition probabilities pij (s, t) determine all the finite order distributions of the Markov process as follows. Given one writes t1 < t2 < . . . < tn in T, ii , i2 , ..., in ∈ S (4.9) pX (i1 , t1 ; · · · ; in , tn ) = pX (i1 , t1 ; · · · ; in−1 , tn−1 )pX (in , tn |i1 , t1 ; · · · ; in−1 , tn−1 ) = pX (i1 , t1 ; · · · ; in−1 , tn−1 )pin−1 in (tn−1 , tn ) Application of this operation n − 2 more times yields that pX (i1 , t1 ; · · · ; in−1 , tn−1 ) = πi1 (t1 )pi1 i2 (t1 , t2 ) · · · pin−1 in (tn−1 , tn ) 92 (4.10) which shows that the finite order distributions of X are indeed determined by the first order pmfs and the transition probabilities. Equation (4.10) can be used to easily verify that the form (4.7) of the Markov property holds. 
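Formula (4.10) is easy to exercise numerically for a time-homogeneous chain: the probability of a finite path is the initial probability times a product of one-step transition probabilities. The following minimal sketch assumes NumPy; the two-state transition matrix and initial distribution are arbitrary illustrative choices, not taken from the notes.

```python
import numpy as np

# One-step transition probability matrix and initial distribution for a
# two-state, time-homogeneous Markov chain (illustrative numbers).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
pi0 = np.array([0.5, 0.5])

def path_probability(states, pi0, P):
    """P{X_0 = i_0, ..., X_n = i_n} = pi(0)[i_0] * prod_k P[i_k, i_{k+1}],
    which is formula (4.10) specialized to a time-homogeneous chain."""
    prob = pi0[states[0]]
    for i, j in zip(states[:-1], states[1:]):
        prob *= P[i, j]
    return prob

print(path_probability([0, 0, 1, 1], pi0, P))   # 0.5 * 0.9 * 0.1 * 0.7 = 0.0315

# Marginal distribution at time n: pi(n) = pi(0) P^n, which is what summing
# (4.10) over all paths ending in each state produces.
print(pi0 @ np.linalg.matrix_power(P, 3))
```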
Given s < t, the collection H (s, t) defined by H (s, t) = (pij (s, t) : i, j ∈ S ) should be thought of as a matrix, and it is called the transition probability matrix for the interval [s, t]. Let e denote the column vector with all ones, indexed by S . Since π (t) and the rows of H (s, t) are probability vectors, it follows that π (t)e = 1 and H (s, t)e = e. Computing the distribution of Xt by summing over all possible values of Xs yields that πj (t) = i P [Xs = i, Xt = j ] = i πi (s)pij (s, t), which in matrix form yields that π (t) = π (s)H (s, t) for s, t ∈ T, s ≤ t. Similarly, given s < τ < t, computing the conditional distribution of Xt given Xs by summing over all possible values of Xτ yields H (s, t) = H (s, τ )H (τ , t) s, τ , t ∈ T, s < τ < t. (4.11) The relations (4.11) are known as the Chapman-Kolmogorov equations. A Markov process is time-homogeneous if the transition probabilities pij (s, t) depend on s and t only through t − s. In that case we write pij (t − s) instead of pij (s, t), and Hij (t − s) instead of Hij (s, t). If the Markov process is time-homogeneous, then π (s + τ ) = π (s)H (τ ) for s, s + τ ∈ T and τ ≥ 0. A probability distribution π is called an equilibrium (or invariant) distribution if π H (τ ) = π for all τ ≥ 0. Recall that a random process is stationary if its finite order distributions are invariant with respect to translation in time. On one hand, referring to (4.10), we see that a time-homogeneous Markov process is stationary if and only if π (t) = π for all t for some equilibrium distribution π . On the other hand, a Markov random process that is stationary is time homogeneous. Repeated application of the Chapman-Kolmogorov equations yields that pij (s, t) can be expressed in terms of transition probabilities for s and t close together. For example, consider Markov processes with index set the integers. Then H (n, k + 1) = H (n, k )P (k ) for n ≤ k , where P (k ) = H (k , k + 1) is the one-step transition probability matrix. Fixing n and using forward recursion starting with H (n, n) = I , H (n, n + 1) = P (n), H (n, n + 2) = P (n)P (n + 1), and so forth yields H (n, l) = P (n)P (n + 1) · · · P (l − 1) In particular, if the chain is time-homogeneous then H (k ) = P k for all k , where P is the time independent one-step transition probability matrix, and π (l) = π (k )P l−k for l ≥ k . In this case a probability distribution π is an equilibrium distribution if and only if π P = π . Example 4.8 Consider a two-stage pipeline through which packets flow, as pictured in Figure 4.8. Some assumptions about the pipeline will be made in order to model it as a simple discrete time Markov process. Each stage has a single buffer. Normalize time so that in one unit of time a packet can make a single transition. Call the time interval between k and k + 1 the k th “time slot,” and assume that the pipeline evolves in the following way during a given slot. a d1 d2 Figure 4.8: A two-stage pipeline 93 a ad 2 ad 2 01 00 d1 a ad 2 ad2 d1 10 11 d2 d 2 Figure 4.9: One-step transition probability diagram. If at the beginning of the slot, there are no packets in stage one, then a new packet arrives to stage one with probability a, independently of the past history of the pipeline and of the outcome at stage two. If at the beginning of the slot, there is a packet in stage one and no packet in stage two, then the packet is transfered to stage two with probability d1 . 
If at the beginning of the slot, there is a packet in stage two, then the packet departs from the stage and leaves the system with probability d2 , independently of the state or outcome of stage one. These assumptions lead us to model the pipeline as a discrete-time Markov process with the state space S = {00, 01, 10, 11}, transition probability diagram shown in Figure 4.9 (using the notation x = 1 − x) and one-step transition probability matrix P given by ¯ a ¯ 0 a 0 ¯ ad2 ad2 ad2 ad2 ¯ ¯¯ P = ¯ 0 0 d1 d1 ¯ 0 0 d2 d2 The rows of P are probability vectors. For example, the first row is the probability distribution of the state at the end of a slot, given that the state is 00 at the beginning of a slot. Now that the model is specified, let us determine the throughput rate of the pipeline. The equilibrium probability distribution π = (π00 , π01 , π10 , π11 ) is the probability vector satisfying the linear equation π = π P . Once π is found, the throughput rate η can be computed as follows. It is defined to be the rate (averaged over a long time) that packets transit the pipeline. Since at most two packets can be in the pipeline at a time, the following three quantities are all clearly the same, and can be taken to be the throughput rate. The rate of arrivals to stage one The rate of departures from stage one (or rate of arrivals to stage two) The rate of departures from stage two 94 Focus on the first of these three quantities to obtain η = P [an arrival at stage 1] = P [an arrival at stage 1|stage 1 empty at slot beginning]P [stage 1 empty at slot beginning] = a(π00 + π01 ). Similarly, by focusing on departures from stage 1, obtain η = d1 π10 . Finally, by focusing on departures from stage 2, obtain η = d2 (π01 + π11 ). These three expressions for η must agree. Consider the numerical example a = d1 = d2 = 0.5. The equation π = π P yields that π is proportional to the vector (1, 2, 3, 1). Applying the fact that π is a probability distribution yields that π = (1/7, 2/7, 3/7, 1/7). Therefore η = 3/14 = 0.214 . . .. In the remainder of this section we assume that X is a continuous-time, finite-state Markov process. The transition probabilities for arbitrary time intervals can be described in terms of the transition probabilites over arbitrarily short time intervals. By saving only a linearization of the transition probabilities, the concept of generator matrix arises naturally, as we describe next. Let S be a finite set. A pure-jump function for state space S is a function x : R+ → S such that there is a sequence of times, 0 = τ0 < τ1 < · · · with limi→∞ τi = ∞, and a sequence of states with si = si+1 , i ≥ 0, such that that x(t) = si for τi ≤ t < τi+1 . A pure-jump Markov process is an S valued Markov process such that, with probability one, the sample functions are pure jump functions. Let Q = (qij : i, j ∈ S ) be such that qij ≥ 0 qii = − i, j ∈ S , i = j i ∈ S. j ∈S ,j =i qij (4.12) An example for state space S = {1, 2, 3} is −1 0.5 0.5 Q = 1 −2 1 , 0 1 −1 and this matrix Q can be represented by the transition rate diagram shown in Figure 4.10. A pure0.5 2 1 1 1 0.5 1 3 Figure 4.10: Transition rate diagram for a continuous time Markov process jump, time-homogeneous Markov process X has generator matrix Q if the transition probabilities (pij (τ )) satisfy lim (pij (h) − I{i=j } )/h = qij i, j ∈ S (4.13) h 0 95 or equivalently pij (h) = I{i=j } + hqij + o(h) (4.14) i, j ∈ S where o(h) represents a quantity such that limh→0 o(h)/h = 0. 
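The relation (4.14) can be illustrated numerically for the three-state generator matrix $Q$ displayed above. For a finite-state, time-homogeneous pure-jump Markov process the transition matrix over an interval of length $h$ is the matrix exponential $H(h) = e^{hQ}$ (a standard fact, used here without proof). The following minimal sketch assumes NumPy and computes $e^{hQ}$ by a truncated Taylor series, then compares it with $I + hQ$ for a small $h$.

```python
import numpy as np

# Generator matrix from the three-state example above.
Q = np.array([[-1.0,  0.5,  0.5],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])

def expm_taylor(A, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small ||A||)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

h = 0.01
H_h = expm_taylor(h * Q)        # transition probability matrix for duration h
approx = np.eye(3) + h * Q      # first-order approximation from (4.14)

print(np.max(np.abs(H_h - approx)))   # on the order of h**2, i.e. o(h)
print(H_h.sum(axis=1))                # each row sums to one
```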
For the example this means that the transition probability matrix for a time interval of duration h is given by 1 − h 0.5h 0.5h o(h) o(h) o(h) h 1 − 2h h + o(h) o(h) o(h) 0 h 1−h o(h) o(h) o(h) For small enough h, the rows of the first matrix are probability distributions, owing to the assumptions on the generator matrix Q. Prop osition 4.9.1 Given a matrix Q satisfying (4.12), and a probability distribution π (0) = (πi (0) : i ∈ S ), there is a pure-jump, time-homogeneous Markov process with generator matrix Q and initial distribution π (0). The finite order distributions of the process are uniquely determined by π (0) and Q. The first order distributions and the transition probabilities can be derived from Q and an initial distribution π (0) by solving differential equations, derived as follows. Fix t > 0 and let h be a small positive number. The Chapman-Kolmogorov equations imply that πj (t + h) − πj (t) = h pij (h) − I{i=j } h πi (t) i∈S . (4.15) Letting h converge to zero yields the differential equation: ∂ πj (t) = ∂t or, in matrix notation, can be rewritten as ∂ π (t) ∂t πi (t)qij (4.16) i∈S = π (t)Q. This equation, known as the Kolmogorov forward equation, ∂ πj (t) = ∂t i∈S ,i=j πi (t)qij − πj (t)qj i , (4.17) i∈S ,i=j which states that the rate change of the probability of being at state j is the rate of “probability flow” into state j minus the rate of probability flow out of state j . Example 4.9 Consider the two-state, continuous time Markov process with the transition rate diagram shown in Figure 4.11 for some positive constants α and β . The generator matrix is given ! 1 2 " Figure 4.11: Transition rate diagram for a two-state continuous time Markov process by Q= −α α β −β 96 Let us solve the forward Kolmogorov equation for a given initial distribution π (0). The equation for π1 (t) is ∂ π1 (t) = −απ1 (t) + β π2 (t); π1 (0) given ∂t But π1 (t) = 1 − π2 (t), so ∂ π1 (t) = −(α + β )π1 (t) + β ; ∂t π1 (0) given By differentiation we check that this equation has the solution t π1 (t) = π1 (0)e−(α+β )t + e−(α+β )(t−s) β ds 0 −(α +β )t = π1 (0)e so that π (t) = π (0)e−(α+β )t + + β (1 − e−(α+β )t ). α+β α β , α+β α+β (1 − e−(α+β )t ) For any initial distribution π (0), lim π (t) = t→∞ α β , α+β α+β . The rate of convergence is exponential, with rate parameter α + β , and the limiting distribution is the unique probability distribution satisfying π Q = 0. 97 4.10 Problems 4.1. Event probabilities for a simple random pro cess Define the random process X by Xt = 2A + B t where A and B are independent random variables with P [A = 1] = P [A = −1] = P [B = 1] = P [B = −1] = 0.5. (a) Sketch the possible sample functions. (b) Find P [Xt ≥ 0] for all t. (c) Find P [Xt ≥ 0 for all t]. 4.2. Correlation function of a pro duct Let Y and Z be independent random processes with RY (s, t) = 2 exp(−|s − t|) cos(2π f (s − t)) and RZ (s, t) = 9 + exp(−3|s − t|4 ). Find the autocorrelation function RX (s, t) where Xt = Yt Zt . 4.3. A sinusoidal random pro cess Let Xt = A cos(2π V t + Θ) where the amplitude A has mean 2 and variance 4, the frequency V in Hertz is uniform on [0, 5], and the phase Θ is uniform on [0, 2π ]. Furthermore, suppose A, V and Θ are independent. Find the mean function µX (t) and autocorrelation function RX (s, t). Is X WSS? 4.4. Another sinusoidal random pro cess Suppose that X1 and X2 are random variables such that E X1 = E X2 = E X1 X2 = 0 and Var(X1 ) = Var(X2 ) = σ 2 . Define Yt = X1 cos(2π t) − X2 sin(2π t). (a) Is the random process Y necessarily wide-sense stationary? 
(b) Give an example of random variables X1 and X2 satisfying the given conditions such that Y is stationary. (c) Give an example of random variables X1 and X2 satisfying the given conditions such that Y is not (strict sense) stationary. 4.5. A random pro cess corresp onding to a random parab ola Define a random process X by Xt = A + B t + t2 , where A and B are independent, N (0, 1) random ˆ variables. (a) Find E [X5 |X1 ], the linear minimum mean square error (LMMSE) estimator of X5 given X1 , and compute the mean square error. (b) Find the MMSE (possibly nonlinear) estimator ˆ of X5 given X1 , and compute the mean square error. (c) Find E [X5 |X0 , X1 ] and compute the mean square error. (Hint: Can do by inspection.) 4.6. MMSE prediction for a Gaussian pro cess based on two observations Let X be a stationary Gaussian process with mean zero and RX (τ ) = 5 cos( π2τ )3−|τ | . (a) Find the covariance matrix of the random vector (X (2), X (3), X (4))T . (b) Find E [X (4)|X (2)]. (c) Find E [X (4)|X (2), X (3)]. 4.7. A simple discrete-time random pro cess Let U = (Un : n ∈ Z) consist of independent random variables, each uniformly distributed on the interval [0, 1]. Let X = (Xk : k ∈ Z} be defined by Xk = max{Uk−1 , Uk }. (a) Sketch a typical sample path of the process X . (b) Is X stationary? (c) Is X Markov? (d) Describe the first order distributions of X . (e) Describe the second order distributions of X . 4.8. Poisson pro cess probabilities Consider a Poisson process with rate λ > 0. (a) Find the probability that there is (exactly) one count in each of the three intervals [0,1], [1,2], and [2,3]. (b) Find the probability that there are two counts in the interval [0, 2] and two counts in the interval 98 [1, 3]. (Note: your answer to part (b) should be larger than your answer to part (a)). (c) Find the probability that there are two counts in the interval [1,2], given that there are two counts in the interval [0,2] and two counts in the the interval [1,3]. 4.9. Sliding function of an i.i.d. Poisson sequence Let X = (Xk : k ∈ Z ) be a random process such that the Xi are independent, Poisson random variables with mean λ, for some λ > 0. Let Y = (Yk : k ∈ Z ) be the random process defined by Yk = Xk + Xk+1 . (a) Show that Yk is a Poisson random variable with parameter 2λ for each k . (b) Show that X is a stationary random process. (c) Is Y a stationary random process? Justify your answer. 4.10. Adding jointly stationary Gaussian pro cesses Let X and Y be jointly stationary, jointly Gaussian random processes with mean zero, autocorrelation functions RX (t) = RY (t) = exp(−|t|), and cross-correlation function RX Y (t) = (0.5) exp(−|t − 3|). (a) Let Z (t) = (X (t) + Y (t))/2 for all t. Find the autocorrelation function of Z . (b) Is Z a stationary random process? Explain. (c) Find P [X (1) ≤ 5Y (2) + 1]. You may express your answer in terms of the standard normal cumulative distribution function Φ. 4.11. Invariance of prop erties under transformations Let X = (Xn : n ∈ Z), Y = (Yn : n ∈ Z), and Z = (Zn : n ∈ Z) be random processes such that 2 3 Yn = Xn for all n and Zn = Xn for all n. Determine whether each of the following statements is always true. If true, give a justification. If not, give a simple counter example. (a) If X is Markov then Y is Markov. (b) If X is Markov then Z is Markov. (c) If Y is Markov then X is Markov. (d) If X is stationary then Y is stationary. (e) If Y is stationary then X is stationary. (f ) If X is wide sense stationary then Y is wide sense stationary. 
(g) If X has independent increments then Y has independent increments. (h) If X is a martingale then Z is a martingale. 4.12. A linear evolution equation with random co efficients 2 Let the variables Ak , Bk , k ≥ 0 be mutually independent with mean zero. Let Ak have variance σA 2 for all k . Define a discrete-time random pro cess Y by and let Bk have variance σB Y = (Yk : k ≥ 0), such that Y0 = 0 and Yk+1 = Ak Yk + Bk for k ≥ 0. (a) Find a recursive method for computing Pk = E [(Yk )2 ] for k ≥ 0. (b) Is Y a Markov process? Explain. (c) Does Y have independent increments? Explain. (d) Find the autocorrelation function of Y . ( You can use the second moments (Pk ) in expressing your answer.) (e) Find the corresponding linear innovations sequence (Yk :≥ 1). 99 4.13. On an M /D/∞ queue Suppose packets enter a buffer according to a Poisson point process on R at rate λ, meaning that the number of arrivals in an interval of length τ has the Poisson distribution with mean λτ , and the numbers of arrivals in disjoint intervals are independent. Suppose each packet stays in the buffer for one unit of time, independently of other packets. Because the arrival process is memoryless, because the service times are deterministic, and because the packets are served simultaneously, corresponding to infinitely many servers, this queueing system is called an M /D/∞ queueing system. The number of packets in the system at time t is given by Xt = N (t − 1, t], where N (a, b] is the number of customers that arrive during the interval (a, b]. (a) Find the mean and autocovariance function of X . (b) Is X stationary? Is X wide sense stationary? (c) Is X a Markov process? (d) Find a simple expression for P {Xt = 0 for t ∈ [0, 1]} in terms of λ. (e) Find a simple expression for P {Xt > 0 for t ∈ [0, 1]} in terms of λ. 4.14. A fly on a cub e Consider a cube with vertices 000, 001, 010, 100, 110, 101. 011, 111. Suppose a fly walks along edges of the cube from vertex to vertex, and for any integer t ≥ 0, let Xt denote which vertex the fly is at at time t. Assume X = (Xt : t ≥ 0) is a discrete time Markov process, such that given Xt , the next state Xt+1 is equally likely to be any one of the three vertices neighboring Xt . (a) Sketch the one step transition probability diagram for X . (b) Let Yt denote the distance of Xt , measured in number of hops, between vertex 000 and Xt . For example, if Xt = 101, then Yt = 2. The process Y is a Markov process with states 0,1,2, and 3. Sketch the one-step transition probability diagram for Y . (c) Suppose the fly begins at vertex 000 at time zero. Let τ be the first time that X returns to vertex 000 after time 0, or eqivalently, the first time that Y returns to 0 after time 0. Find E [τ ]. 4.15. A space-time transformation of Brownian motion Suppose X = (Xt : t ≥ 0) is a real-valued, mean zero, independent increment process, and let 2 E [Xt ] = ρt for t ≥ 0. Assume ρt < ∞ for all t. (a) Show that ρ must be nonnegative and nondecreasing over [0, ∞). (b) Express the autocorrelation function RX (s, t) in terms of the function ρ for all s ≥ 0 and t ≥ 0. (c) Conversely, suppose a nonnegative, nondecreasing function ρ on [0, ∞) is given. Let Yt = W (ρt ) for t ≥ 0, where W is a standard Brownian motion with RW (s, t) = min{s, t}. Explain why Y is an independent increment process with E [Yt2 ] = ρt for all t ≥ 0. (d) Define a process Z in terms of a standard Brownian motion W by Z0 = 0 and Zt = tW ( 1 ) for t t > 0. Does Z have independent increments? Justify your answer. 4.16. 
An M/M/1/B queueing system Suppose X is a continuous-time Markov process with the transition rate diagram shown, for a positive integer B and positive constant λ. ! $ ! ! # # % # ! ! !"!"! # (a) Find the generator matrix, Q, of X for B = 4. 100 & !# # & # (b) Find the equilibrium probability distribution. (Note: The process X models the number of customers in a queueing system with a Poisson arrival process, exponential service times, one server, and a finite buffer.) 4.17. Identification of sp ecial prop erties of two discrete time pro cesses Determine which of the properties: (i) Markov property (ii) martingale property (iii) independent increment property are possessed by the following two random processes. Justify your answers. (a) X = (Xk : k ≥ 0) defined recursively by X0 = 1 and Xk+1 = (1 + Xk )Uk for k ≥ 0, where U0 , U1 , . . . are independent random variables, each uniformly distributed on the interval [0, 1]. (b) Y = (Yk : k ≥ 0) defined by Y0 = V0 , Y1 = V0 + V1 , and Yk = Vk−2 + Vk−1 + Vk for k ≥ 2, where Vk : k ∈ Z are independent Gaussian random variables with mean zero and variance one. 4.18. Identification of sp ecial prop erties of two discrete time pro cesses (version 2) Determine which of the properties: (i) Markov property (ii) martingale property (iii) independent increment property are possessed by the following two random processes. Justify your answers. (a) (Xk : k ≥ 0), where Xk is the number of cells alive at time k in a colony that evolves as follows. Initially, there is one cell, so X0 = 1. During each discrete time step, each cell either dies or splits into two new cells, each possibility having probability one half. Suppose cells die or split independently. Let Xk denote the number of cells alive at time k . (b) (Yk : k ≥ 0), such that Y0 = 1 and, for k ≥ 1, Yk = U1 U2 . . . Uk , where U1 , U2 , . . . are independent random variables, each uniformly distributed over the interval [0, 2]. 4.19. Identification of sp ecial prop erties of two continuous time pro cesses Answer as in the previous problem, for the following two random processes: 2 (a) Z = (Zt : t ≥ 0), defined by Zt = exp(Wt − σ2 t ), where W is a Brownian motion with parameter σ 2 . (Hint: Observe that E [Zt ] = 1 for all t.) (b) R = (Rt : t ≥ 0) defined by Rt = D1 + D2 + · · · + DNt , where N is a Poisson process with rate λ > 0 and Di : i ≥ 1 is an iid sequence of random variables, each having mean 0 and variance σ 2 . 4.20. Identification of sp ecial prop erties of two continuous time pro cesses (version 2) Answer as in the previous problem, for the following two random processes: (a) Z = (Zt : t ≥ 0), defined by Zt = Wt3 , where W is a Brownian motion with parameter σ 2 . (b) R = (Rt : t ≥ 0), defined by Rt = cos(2π t + Θ), where Θ is uniformly distributed on the interval [0, 2π ]. 4.21. A branching pro cess Let p = (pi : i ≥ 0) be a probability distribution on the nonnegative integers with mean m. Consider a population beginning with a single individual, comprising generation zero. The offspring of the initial individual comprise the first generation, and, in general, the offspring of the k th gener101 ation comprise the k + 1st generation. Suppose the number of offspring of any individual has the probability distribution p, independently of how many offspring other individuals have. Let Y0 = 1, and for k ≥ 1 let Yk denote the number of individuals in the k th generation. (a) Is Y = (Yk : k ≥ 0) a Markov process? Briefly explain your answer. k (b) Find constants ck so that Yk is a martingale. 
c (c) Let am = P [Ym = 0], the probability of extinction by the mth generation. Express am+1 in terms of the distribution p and am (Hint: condition on the value of Y1 , and note that the Y1 subpopulations beginning with the Y1 individuals in generation one are independent and statistically identical to the whole population.) (d) Express the probability of eventual extinction, a∞ = limm→∞ am , in terms of the distribution p. Under what condition is a∞ = 1? (e) Find a∞ in terms of θ in case pk = θk (1 − θ) for k ≥ 0 and 0 ≤ θ < 1. (This distribution is θ similar to the geometric distribution, and it has mean m = 1−θ .) 4.22. Moving balls Consider the motion of three indistinguishable balls on a linear array of positions, indexed by the positive integers, such that one or more balls can occupy the same position. Suppose that at time t = 0 there is one ball at position one, one ball at position two, and one ball at position three. Given the positions of the balls at some integer time t, the positions at time t + 1 are determined as follows. One of the balls in the left most occupied position is picked up, and one of the other two balls is selected at random (but not moved), with each choice having probability one half. The ball that was picked up is then placed one position to the right of the selected ball. (a) Define a finite-state Markov process that tracks the relative positions of the balls. Try to use a small number of states. (Hint: Take the balls to be indistinguishable, and don’t include the position numbers.) Describe the significance of each state, and give the one-step transition probability matrix for your process. (b) Find the equilibrium distribution of your process. (c) As time progresses, the balls all move to the right, and the average speed has a limiting value, with probability one. Find that limiting value. (You can use the fact that for a finite-state Markov process in which any state can eventually be reached from any other, the fraction of time the process is in a state i up to time t converges a.s. to the equilibrium probability for state i as t → ∞. (d) Consider the following continuous time version of the problem. Given the current state at time t, a move as described above happens in the interval [t, t + h] with probability h + o(h). Give the generator matrix Q, find its equilibrium distribution, and identify the long term average speed of the balls. 4.23. Mean hitting time for a discrete time, discrete space Markov pro cess Let (Xk : k ≥ 0) be a time-homogeneous Markov process with the one-step transition probability diagram shown. 0.4 0.6 1 0.2 2 0.8 0.4 3 (a) Write down the one step transition probability matrix P . (b) Find the equilibrium probability distribution π . 102 0.6 (c) Let τ = min{k ≥ 0 : Xk = 3} and let ai = E [τ |X0 = i] for 1 ≤ i ≤ 3. Clearly a3 = 0. Derive equations for a1 and a2 by considering the possible values of X1 , in a way similar to the analysis of the gambler’s ruin problem. Solve the equations to find a1 and a2 . 4.24. Mean hitting time for a continuous time, discrete space Markov pro cess Let (Xt : t ≥ 0) be a time-homogeneous Markov process with the transition rate diagram shown. 1 1 10 1 2 5 3 (a) Write down the rate matrix Q. (b) Find the equilibrium probability distribution π . (c) Let τ = min{t ≥ 0 : Xt = 3} and let ai = E [τ |X0 = i] for 1 ≤ i ≤ 3. Clearly a3 = 0. Derive equations for a1 and a2 by considering the possible values of Xt (h) for small values of h > 0 and taking the limit as h → 0. 
Solve the equations to find a1 and a2 . 4.25. Distribution of holding time for discrete state Markov pro cesses (a) Let (Xk : k ≥ 0) be a time-homogeneous Markov process with one-step transition probability matrix P . Fix a given state i, suppose that P [X0 = i] = 1, and let τ = min{k ≥ 0 : Xk = i}. Find the probability distribution of τ . What well known type of distribution does τ have, and why should that be expected? (Hint: The distribution is completely determined by pii . For example, if pii = 0 then P [τ = 1] = 1.) (b) Let (Xt : t ≥ 0) be a time-homogeneous Markov process with transition rate matrix Q. Fix a given state i, suppose that P [X0 = i] = 1, and let τ = min{t ≥ 0 : Xt = i}. This problem will lead to your finding the probability distribution of τ . For h > 0, let hZ+ = {nh : n ≥ 0}. Note that (Xt : t ∈ hZ+ ) is a discrete-time Markov process with one step transition probability matrix H (h) = (pkj (h)), although the time between steps is h, rather than the usual one time unit. Let τ h = min{t ∈ hZ+ : Xt = i}. (i) Describe the probability distribution of τ h . (Hint: the parameter will be the transition probability for an interval of length h: pii (h).) (ii) Show that limh→0 τ h = τ a.s. Since convergence a.s. implies convergence in d., it follows that limh→0 τ h = τ d., so the distribution of τ , which we seek to find, is the limit of the distributions of τ h . (iii) The distribution of τ h converges to what distribution as h → 0? This is the distribution of τ . What well known type of distribution does τ have, and why should that be expected? (Hint: The limit can be identified by taking either the limit of the cumulative distribution functions, or the limit of the characteristic functions.) 4.26. Some orthogonal martingales based on Brownian motion (This problem is related to the problem on linear innovations and orthogonal polynomials in the previous problem set.) Let W = (Wt : t ≥ 0) be a Brownian motion with σ 2 = 1 (called a standard 2 Brownian motion), and let Mt = exp(θWt − θ2 t ) for an arbitrary constant θ. (a) Show that (Mt : t ≥ 0) is a martingale. (Hint for parts (a) and (b): For notational brevity, let Ws represent (Wu : 0 ≤ u ≤ s) for the purposes of conditioning. If Zt is a function of Wt for each t, then a sufficient condition for Z to be a martingale is that E [Zt |Ws ] = Zs whenever 0 < s < t, 103 because then E [Zt |Zu , 0 ≤ u ≤ s] = E [E [Zt |Ws ]|Zu , 0 ≤ u ≤ s] = E [Zs |Zu , 0 ≤ u ≤ s] = Zs ). (b) By the power series expansion of the exponential function, exp(θWt − θ2 t θ2 θ3 ) = 1 + θWt + (Wt2 − t) + (Wt3 − 3tWt ) + · · · 2 2 3! ∞ n θ Mn (t) = n! n=0 √t where Mn (t) = tn/2 Hn ( Wt ), and Hn is the nth Hermite polynomial. The fact that M is a martingale for any value of θ can be used to show that Mn is a martingale for each n (you don’t need to supply details). Verify directly that Wt2 − t and Wt3 − 3tWt are martingales. (c) For fixed t, (Mn (t) : n ≥ 0) is a sequence of orthogonal random variables, because it is the linear innovations sequence for the variables 1, Wt , Wt2 , . . .. Use this fact and the martingale property of the Mn processes to show that if n = m and s, t ≥ 0, then Mn (s) ⊥ Mm (t). 4.27*. Auto correlation function of a stationary Markov pro cess Let X = (Xk : k ∈ Z ) be a Markov process such that the state-space, {ρ1 , ρ2 , ..., ρn }, is a finite subset of the real numbers. Let P = (pij ) denote the matrix of one-step transition probabilities. 
Let e be the column vector of all ones, and let π (k ) be the row vector π (k ) = (P [X k = ρ1 ], ..., P [Xk = ρn ]). (a) Show that P e = e and π (k + 1) = π (k )P . (b) Show that if the Markov chain X is a stationary random process then π (k ) = π for all k , where π is a vector such that π = π P . (c) Prove the converse of part (b). (m) (m) (d) Show that P [Xk+m = ρj |Xk = ρi , Xk−1 = s1 , ..., Xk−m = sm ] = pij , where pij is the i, j th element of the mth power of P , P m , and s1 , . . . , sm are arbitrary states. (e) Assume that X is stationary. Express RX (k ) in terms of P , (ρi ), and the vector π of parts (b) and (c). 104 Chapter 5 Basic Calculus of Random Pro cesses The calculus of deterministic functions revolves around continuous functions, derivatives, and integrals. These concepts all involve the notion of limits. (See the appendix for a review). In this chapter the same concepts are treated for random processes. Of the four types of limits for random variables, we will use the mean square sense of convergence the most. As an application of integration of random processes, ergodicity and the Karhunen-Lo´ve expansion are discussed. In e addition, notation for complex-valued random processes is introduced. 5.1 Continuity of random pro cesses Let X = (Xt : t ∈ T) be a second order random process such that T = R or T is an interval in R. For t fixed, X is defined to be mean square continuous (m.s. continuous) at t if lim Xs = Xt m.s. s→t Equivalently, if E [(Xs − Xt )2 ] → 0 as s → t. By definition, the process is m.s. continuous if it is m.s. continuous at each t. The notions of continuity in probability and continuity in distribution of a random process, either at a fixed time or everywhere, are defined similarly: The m.s. limits are simply replaced by limits in p. or d. sense. Almost sure continuity (a.s. continuity) of X is defined in a different way. Recall that for a fixed ω ∈ Ω, the random process X gives a sample path, which is a function on T. Continuity of a sample path is thus defined as it is for any deterministic function. The subset of Ω, {X is continuous}, is the set of ω such that the sample path for ω is continuous. The random process X is said to be a.s. continuous if the set {X is continuous} is an event which has probability one. Just as for convergence of sequences, if X is m.s. continuous or a.s. continuous, it is continuous in p. sense. And if X is continuous in p. sense, then it is continuous in d. sense. Whether X is m.s. continuous depends only on the correlation function RX . In fact, we shall prove that the following three conditions are equivalent: (a) RX is continuous at all points of the form (t, t), (b) X is m.s. continuous, (c) RX is continuous over T × T. 105 To begin the proof, fix t ∈ T and suppose that RX is continous at the point (t, t). Then RX (s, s), RX (s, t), and RX (t, s) all converge to RX (t, t) as s → t. Therefore, lims→t E [|Xs − Xt |2 ] = lims→t (RX (s, s) − RX (s, t) − RX (t, s) + RX (t, t)) = 0. So X is m.s. continous at t. Therefore if RX is continuous at all points of the form (t, t) ∈ T × T, then X is m.s. continous at all t ∈ T. Therefore (a) implies (b). Suppose condition (b) is true. If sn → s and tn → t as n → ∞, then Corollary 2.2.3 implies that RX (sn , tn ) → RX (s, t) as n → ∞. Thus, RX is continous at all points (s, t) of T × T. Therefore (b) implies (c). Since (c) obviously implies (a), the proof of the equivalence of (a) through (c) is complete. 
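The identity $E[|X_s - X_t|^2] = R_X(s,s) - R_X(s,t) - R_X(t,s) + R_X(t,t)$, which drives the argument above, is easy to evaluate for a given correlation function. The following minimal sketch assumes NumPy and uses $R_X(s,t) = \sigma^2(s \wedge t)$, the Brownian motion case, as an illustrative choice; the quantity tends to zero as $s \to t$, consistent with m.s. continuity.

```python
import numpy as np

sigma2 = 1.0

def R(s, t):
    """Correlation function of Brownian motion: R(s, t) = sigma^2 * min(s, t)."""
    return sigma2 * np.minimum(s, t)

t = 2.0
for s in t + np.array([0.5, 0.1, 0.01, 0.001]):
    msd = R(s, s) - R(s, t) - R(t, s) + R(t, t)   # equals E[|X_s - X_t|^2]
    print(s, msd)   # equals sigma^2 * |s - t|, tending to 0 as s -> t
```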
If X is WSS, then RX (s, t) = RX (τ ) where τ = s − t, and the three equivalent conditions become (a) RX (τ ) is continuous at τ = 0, (b) X is m.s. continuous, (c) RX (τ ) is continuous over all of R. The last part of Corollary 2.2.3 implies that if X is m.s. continuous, then the mean function µX (t) = E [X (t)] is continuous. Example 5.1 Let W = (Wt : t ≥ 0) be a Brownian motion with parameter σ 2 . Then E [(Wt − Ws )2 ] = σ 2 |t − s| → 0 as s → t. Therefore W is m.s. continuous. Another way to show W is m.s. continuous is to observe that the autocorrelation function, RW (s, t) = σ 2 (s ∧ t), is continuous. Since W is m.s. continuous, it is also continuous in the p. and d. senses. As we stated in defining W , it is a.s. continuous as well. Example 5.2 Let N = (Nt : t ≥ 0) be a Poisson process with rate λ > 0. Then for fixed t, E [(Nt − Ns )2 ] = λ(t − s) + (λ(t − s))2 → 0 as s → t. Therefore N is m.s. continuous. As required, RN , given by RN (s, t) = λ(s ∧ t) + λ2 st, is continuous. Since N is m.s. continuous, it is also continuous in the p. and d. senses. However, N is not a.s. continuous. In fact, P [N is continuous on [0, a]] = e−λa and P [N is continuous on R+ ] = 0. 5.2 Differentiation of random pro cesses Let X = (Xt : t ∈ T) be a second order random process such that T = R or T is an interval in R. For t fixed, X is defined to be m.s. differentiable at t if the following limit exists: lim s→t Xs −Xt s−t m.s. The limit, if it exists, is the m.s. derivative of X at t, denoted by Xt . By definition, X is m.s. differentiable if it is m.s. differentiable at each t. If X is m.s. differentiable at t, then by the last part of Corollary 2.2.3, the following limit exists: lim E s→t Xs −Xt s−t or lim s→t 106 µX (s) − µX (t) s−t and is equal to E [X (t)] = µX (t). Therefore, the derivative of the mean function of X , is equal to the mean function of the derivative of X : µX = µX . Whether X is m.s. differentiable depends only on the autocorrelation function RX of X . By Proposition 2.2.2, X is m.s. differentiable at t if and only if the following limit exists and is finite: lim s,s →t X (s ) − X (t) s −t X (s) − X (t) s−t E . Equivalently, X is m.s. differentiable at t if and only if the following limit exists and is finite: lim s,s →t RX (s, s ) − RX (s, t) − RX (t, s ) + RX (t, t) . (s − t)(s − t) (5.1) The numerator in (5.1) involves RX evaluated at the four courners of the rectangle [t, s] × [t, s ]. Let ∂i denote the operation of taking the partial derivative with respect to the ith argument. For example, if f (x, y ) = x2 y 3 then ∂2 f (x, y ) = 3x2 y 2 and ∂1 ∂2 f (x, y ) = 6xy 2 . Suppose RX , ∂2 RX and ∂1 ∂2 RX exist and are continuous functions. Then by the fundamental theorem of calculus, (R(s, s ) − RX (s, t)) − (RX (t, s ) − RX (t, t)) = s t ∂2 RX (s, v )dv − s = t s s ∂2 Rx (t, v )dv t ∂1 ∂2 RX (u, v )dudv . t Therefore, the ratio in (5.1) is the average value of ∂1 ∂2 RX over the rectangle [t, s] × [t, s ]. Since ∂1 ∂2 RX is assumed to be continuous, the limit in (5.1) exists and it is equal to ∂1 ∂2 RX (t, t). Furthermore, if t, t ∈ R, then by Corollary 2.2.3, E [X (t)X (t )] = lim s →t s →t E [(X (s) − X (t))(X (s ) − X (t ))] (s − t)(s − t ) = ∂1 ∂2 RX (t, t ). That is, RX = ∂1 ∂2 RX . To summarize, if RX , ∂2 RX and ∂1 ∂2 RX exist and are continuous, then X is m.s. differentiable, and the correlation function of the derivative, X , is given by RX = ∂2 ∂1 RX . These conditions also imply that ∂1 RX and ∂2 ∂1 RX exist, and ∂1 ∂2 RX = ∂2 ∂1 RX . 
If X is WSS, then RX (s − t) = RX (τ ) where τ = s − t. Suppose RX (τ ), RX (τ ) and RX (τ ) exist and are continuous functions of τ . Then ∂1 RX (s, t) = RX (τ ) ∂2 ∂1 RX (s, t) = −RX (τ ). and Thus, X exists in the m.s. sense. It is also WSS and has mean 0 and correlation function RX (τ ) = −RX (τ ). We have given a sufficient condition for a WSS process to be m.s. differentiable. A simple necessary condition is obtained as follows. If X is WSS then E X (t) − X (0) t 2 =− 107 2(RX (t) − RX (0)) t2 (5.2) Therefore, if X is m.s. differentiable then the right side of (5.2) must converge to a finite limit as t → 0, so in particular it is necessary that RX (0) exist and RX (0) = 0. Let us consider some examples. A Wiener process W = (Wt : t ≥ 0) is not m.s. differentiable because lim E t→0 W (t) − W (0) t 2 σ2 t→0 t = lim = +∞. Similarly, a Poisson process is not m.s. differentiable. A WSS process X with RX (τ ) = e−α|τ | is 1 not m.s. differentiable because RX (0) does not exist. A WSS process X with RX (τ ) = 1+τ 2 is m.s. differentiable, and its derivative process X is WSS with mean 0 and covariance function RX (τ ) = − 1 1 + τ2 = 2 − 6τ 2 . (1 + τ 2 )3 Prop osition 5.2.1 Suppose X is a m.s. differentiable random process and f is a differentiable function. Then the product X f = (X (t)f (t) : t ∈ R) is mean square differentiable and (X f ) = X f + Xf . Pro of: Fix t. Then for each s = t, X (s)f (s) − X (t)f (t) s−t = (X (s) − X (t))f (s) X (t)(f (s) − f (t)) + s−t s−t → X (t)f (t) + X (t)f (t) m.s. as s → t. Theoretical Exercise Suppose X is m.s. differentiable. Show that RX 5.3 X = ∂1 RX . Integration of random pro cess Let X = (Xt : a ≤ t ≤ b) be a random process and let h be a function on a finite interval [α, b]. How shall we define the following integral? b a Xt h(t)dt. (5.3) One approach is to note that for each fixed ω , Xt (ω ) is a deterministic function of time, and so the integral can be defined as the integral of a deterministic function for each ω . We shall focus on another approach, namely mean square (m.s.) integration. An advantage of m.s. integration is that it relies much less on properties of sample paths of random processes. We suppose that X is a second order random process and define the integral as the m.s. limit of Riemann sums, using meshes of [a, b] of the form a = t0 ≤ v1 ≤ t1 ≤ v2 ≤ · · · ≤ tn−1 ≤ vn ≤ tn = b. The integral exists in the m.s. (Riemann) sense and is equal to limmax |tk −tk−1 |→0 m.s. n k=1 Xvk h(vk )(tk − tk − 1 ) (5.4) whenever the limit exists. Whether the m.s. integral (5.3) exists is determined by RX and h. A simple necessary and sufficient condition can be obtained using Proposition 2.2.2 as follows. Consider two meshes i i i a = ti ≤ v1 ≤ ti ≤ v2 ≤ · · · ≤ ti −1 ≤ vn = ti i = b, 0 1 n n 108 for i = 1, 2. For each mesh there is a corresponding Riemann sum, and the expected value of the product of the two Riemann sums is n1 j =1 n2 12 1 2 1 k=1 RX (vj , vk )h(vj )h(vk )(tj − t1−1 )(t2 − t2 −1 ). j k k This double sum is just the Riemann sum for the two dimensional integral bb a a RX (s, t)h(s)h(t)dsdt. (5.5) Thus, the integral (5.3) exists in the m.s. (Riemann) sense, if and only if the integral in (5.5) exists as a two-dimensional Riemann integral. For example, if RX and h are both continuous functions, then the integral is well defined. If the integral exists over all finite intervals [a, b], then we define ∞ Xt h(t)dt = −∞ b lim m.s. a,b→∞ Xt h(t)dt −a which is well defined whenever the m.s. limit exists. 
Note that the mean of the Riemann sum in (5.4) is equal to n k=1 µX (vk )h(vk )(tk b − tk − 1 ) which is just the Reimann sum for the integral a µX (t)h(t)dt. Thus, since m.s. convergence implies convergence of the means (see Corollary 2.2.3), if the m.s. integral (5.3) exists then b E Xt h(t)dt b = a µX (t)h(t)dt. (5.6) a This is not too surprising, as both the expectation and integration with respect to t are integrations, so (5.6) means the order of integration can be switched. Suppose g (t) is another function and Yt another random process such that both Xt h(t) and Yt g (t) are m.s. integrable over [a, b]. Then with probability one: b b Xt h(t) + Yt g (t)dt = a b Xt h(t)dt + a Yt g (t)dt a because the corresponding Riemann sums satisfy the same additivity property. Corollary 2.2.3 implies that b E b Xs h(s)ds a Yt g (t)dt b = a b a h(s)g (t)RX Y (s, t)dsdt. (5.7) a Equation (5.7) is similar to (5.6), in that both imply that the order of expectation and integration over time can be interchanged. Combining (5.6) and (5.7) yields: b Cov a b Xs h(s)ds, Yt g (t)dt b = a a b CX Y (s, t)h(s)g (t)dsdt. a If a sequence of Gaussian random variables converges in distribution (or in m.s., p. or a.s. sense) then the limit random variable is again Gaussian. Consequently, if some jointly Gaussian random variables are given, and more random variables are defined using linear operations and limits in the m.s., a.s., or p. sense, then all the random variables are jointly Gaussian. So, for example, if X is a Gaussian random process, then X and its m.s. derivative process X (if it exists) are jointly Gaussian random processes. Also, if X is Gaussian then X and all the integrals of the form b I (a, b) = a Xt h(t)dt are jointly Gaussian. 109 Theoretical Exercise Suppose X = (Xt : t ≥ 0) is a random process such that RX is continuous. t Let Yt = 0 Xs ds. Show that Y is m.s. differentiable, and P [Yt = Xt ] = 1 for t ≥ 0. Example 5.3 Let (Wt : t ≥ 0) be a Brownian motion with σ 2 = 1, and let Xt = t ≥ 0. Let us find RX and P [|Xt | ≥ t] for t > 0. Since RW (u, v ) = u ∧ v , s RX (s, t) = E 0 s Wv dv 0 t 0 for t Wu du = t 0 Ws d s 0 (u ∧ v )dv du. To proceed, consider first the case s ≥ t and partition the region of integration into three parts as shown in Figure 5.1. The contributions from the two triangular subregions is the same, so v t u<v u>v u t s Figure 5.1: Partition of region of integration t RX (s, t) = 2 0 = t3 + 3 u s v dv du + 0 t2 (s v dv du t − t) 2 t = 0 t2 s t 3 −. 2 6 Still assuming that s ≥ t, this expression can be rewritten as RX (s, t) = st(s ∧ t) (s ∧ t)3 − . 2 6 (5.8) Although we have found (5.8) only for s ≥ t, both sides are symmetric in s and t. Thus (5.8) holds for all s, t. Since W is a Gaussian process, X is a Gaussian process. Also, E [Xt ] = 0 (since W is mean 3 2 zero) and E [Xt ] = RX (t, t) = t3 . Thus, Xt P [|Xt | ≥ t] = 2P ≥ t3 3 Note that P [|Xt | ≥ t] → 1 as t → +∞. 110 t = 2Q t3 3 3 t . Example 5.4 Let N = (Nt : t ≥ 0) be a second order process with a continuous autocorrelation function RN and let x0 be a constant. Consider the problem of finding a m.s. differentiable random process X = (Xt : t ≥ 0) satisfying the linear differential equation Xt = −Xt + Nt , X0 = x0 . (5.9) Guided by the case that Nt is a smooth nonrandom function, we write t Xt = x0 e−t + e−(t−v) Nv dv (5.10) 0 or t Xt = x0 e−t + e−t (5.11) ev Nv dv . 0 Using Proposition 5.2.1, it is not difficult to check that (5.11) indeed gives the solution to (5.9). 
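Here is a small numerical sanity check (not part of the notes) that the integral formula (5.10) satisfies (5.9). The sketch fixes one smooth, randomly generated driving signal N (an arbitrary choice made only for illustration, with an arbitrary seed and initial condition), evaluates (5.10) by quadrature on a grid, and verifies that the residual X' + X − N is small there.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 5.0, 5001)
    dt = t[1] - t[0]
    x0 = 2.0

    # One smooth random driving signal (an arbitrary illustrative choice)
    B = rng.normal(size=3)
    N = B[0] * np.cos(t) + B[1] * np.sin(2 * t) + B[2]

    # Candidate solution (5.10):  X_t = x0 e^{-t} + int_0^t e^{-(t-v)} N_v dv
    X = np.empty_like(t)
    for k, tk in enumerate(t):
        X[k] = x0 * np.exp(-tk) + np.trapz(np.exp(-(tk - t[:k+1])) * N[:k+1], t[:k+1])

    # Check the differential equation (5.9):  X' = -X + N, X_0 = x0
    Xdot = np.gradient(X, dt)
    print("X(0) =", X[0], " (should equal x0)")
    print("max |X' + X - N| away from the endpoints:",
          np.abs(Xdot[1:-1] + X[1:-1] - N[1:-1]).max())   # small, up to discretization error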
Next, let us find the mean and autocovariance functions of X in terms of those of N . Taking the expectation on each side of (5.10) yields t µX (t) = x0 e−t + e−(t−v) µN (v )dv . (5.12) 0 A different way to derive (5.12) is to take expectations in (5.9) to yield the deterministic linear differential equation: µX (t) = −µX (t) + µN (t); µX (0) = x0 which can be solved to yield (5.12). To summarize, we found two methods to start with the stochastic differential equation (5.10) to derive (5.12), thereby expressing the mean function of the solution X in terms of the mean function of the driving process N . The first is to solve (5.9) to obtain (5.10) and then take expectations, the second is to take expectations first and then solve the deterministic differential equation for µX . The same two methods can be used to express the covariance function of X in terms of the covariance function of N . For the first method, we use (5.10) to obtain s x0 e−s + CX (s, t) = Cov 0 s = 0 t t e−(s−u) Nu du, x0 e−t + e−(t−v) Nv dv 0 e−(s−u) e−(t−v) CN (u, v )dv du. (5.13) 0 The second method is to derive deterministic differential equations. To begin, note that ∂1 CX (s, t) = Cov (Xs , Xt ) = Cov (−Xs + Ns , Xt ) so ∂1 CX (s, t) = −CX (s, t) + CN X (s, t). 111 (5.14) For t fixed, this is a differential equation in s. Also, CX (0, t) = 0. If somehow the cross covariance function CN X is found, (5.14) and the boundary condition CX (0, t) = 0 can be used to find CX . So we turn next to finding a differential equation for CN X . ∂2 CN X (s, t) = Cov(Ns , Xt ) = Cov(Ns , −Xt + Nt ) so ∂2 CN X (s, t) = −CN X (s, t) + CN (s, t). (5.15) For s fixed, this is a differential equation in t with initial condition CN X (s, 0) = 0. Solving (5.15) yields t CN X (s, t) = e−(t−v) CN (s, v )dv . (5.16) 0 Using (5.16) to replace CN X in (5.14) and solving (5.14) yields (5.13). 5.4 Ergo dicity Let X be a stationary or WSS random process. Ergodicity generally means that certain time averages are asymptotically equal to certain statistical averages. For example, suppose X = (Xt : t ∈ R) is WSS and m.s. continuous. The mean µX is defined as a statistical average: µX = E [Xt ] for any t ∈ R. The time average of X over the interval [0, t] is given by 1 t t 0 Xu du. Of course, for t fixed, the time average is a random variable, and is typically not equal to the statistical average µX . The random process X is called mean ergodic (in the m.s. sense) if lim m.s. t→∞ 1 t t Xu du = µX . 0 A discrete time WSS random process X is similarly called mean ergodic (in the m.s. sense) if lim m.s. n→∞ 1 n n Xi = µX (5.17) i=1 For example, we know by the law of large numbers that if X = (Xn : n ∈ Z) is WSS with CX (n) = I{n=0} (so that the Xi ’s are uncorrelated) then (5.17) is true. For another example, if CX (n) = 1 for all n, it means that X0 has variance one and P [Xk = X0 ] = 1 for all k (because equality holds in the Schwarz inequality: CX (n) ≤ CX (0)). Then for all n ≥ 1, 1 n n Xk = X0 . k=0 Since X0 has variance one, the process X is not ergodic if CX (n) = 1 for all n. 112 Let us proceed in continuous time. Let X = (Xt : t ∈ R) be a WSS and m.s. continuous random process. Then, by the definition of m.s. convergence, X is mean ergodic if and only if 1 t lim E t→∞ t 2 t 0 = 0. Xu du − µX t Since E 1 0 Xu du = 1 0 µX du = µX , (5.18) is the same as Var t t the properties of m.s. integrals, Var 1 t t = Xu d u 1 t Cov 0 t Xu du, 0 1 t (5.18) t 0 Xu du 1 t → 0 as t → ∞. 
By t Xv dv 0 1tt CX (u − v )dudv t2 0 0 = E [CX (U − V )] = where U and V are independent random variables, each uniformly distributed over the interval [0, t]. The pdf of U − V is the symmetric triangle shaped pdf t − |τ | t2 fU −V (τ ) = + so Var 1 t t Xu du t−τ t t − |τ | t2 −t 0 = Note that t = 2 t t 0 CX (τ )dt t−τ t CX (τ )dτ . ≤ 1 for 0 ≤ τ ≤ t. We state the result as a proposition. Prop osition 5.4.1 Let X be a WSS and m.s. continuous random process. Then X is mean ergodic (in the m.s. sense) if and only if 2 t→∞ t t lim 0 t−τ t CX (τ )dτ =0 Sufficient conditions are (a) limτ →∞ CX (τ ) = 0. (This condition is also necessary if limτ →∞ CX (τ ) exists.) (b) ∞ −∞ |CX (τ )|dτ < +∞. (c) limτ →∞ RX (τ ) = 0. (d) ∞ −∞ |RX (τ )|dτ < +∞. 113 (5.19) Pro of: The first part has already been shown, so it remains to prove the assertions regarding (a)-(d). Suppose CX (τ ) → c as τ → ∞. We claim the left side of (5.19) is equal to c. Indeed, given ε > 0 there exists L > 0 so that |CX (τ ) − c| ≤ ε whenever τ ≥ L. For 0 ≤ τ ≤ L we can use the Schwarz inequality to bound CX (τ ), namely |CX (τ )| ≤ CX (0). Therefore for t ≥ L, 2 t t 0 t−τ t CX (τ )dτ − c ≤ 2 t t 0 t−τ |CX (τ ) − c| dτ t 2ε t t − τ 2L (CX (0) + |c|) dτ + dτ t0 tLt 2L (CX (0) + |c|) 2L (CX (0) + |c|) 2ε t t − τ + dτ = +ε ≤ t L0 t t ≤ 2ε for t large enough ≤ Thus the left side of (5.19) is equal to c, as claimed. Hence if limτ →∞ CX (τ ) = c, (5.19) holds if and only if c = 0. It remains to prove that (b), (c) and (d) each imply (5.19). Suppose condition (b) holds. Then 2 t t 0 t−τ t CX (τ )dτ ≤ ≤ 2 t 1 t t 0 |CX (τ )|dτ ∞ −∞ |CX (τ )|dτ → 0 as t → ∞ so that (5.19) holds. Suppose either condition (c) or condition (d) holds. Since the integral in (5.19) is the variance of a random variable and since CX (τ ) = RX (τ ) − µ2 , it follows that X 0≤ 2 t t 0 t−τ t CX (τ )dt = −µ2 + X 2 t t 0 t−τ t RX (τ )dτ → −µ2 as t → ∞. X Thus, µX = 0 and RX (τ ) = CX (τ ) for all τ . Therefore either condition (a) holds or condition (b) holds, so (5.19) holds as well. Example 5.5 Let fc be a nonzero constant, let Θ be a random variable such that cos(Θ), sin(Θ), cos(2Θ), and sin(2Θ) have mean zero, and let A be a random variable independent of Θ such that E [A2 ] < +∞. Let X = (Xt : t ∈ R) be defined by Xt = A cos(2π fc t + Θ). Then X is WSS with 2 µX = 0 and RX (τ ) = CX (τ ) = E [A ] cos(2πfc τ ) . Condition (5.19) is satisfied, so X is mean ergodic. 2 Mean ergodicity can also be directly verified: 1 t t Xu du = 0 = ≤ At cos(2π fc u + Θ)du t0 A(sin(2π fc t + Θ) − sin(Θ)) 2π fc t |A| → 0 m.s. as t → ∞. π fc t 114 Example 5.6 (Comp osite binary source) A student has two biased coins, each with a zero on one side and a one on the other. Whenever the first coin is flipped the outcome is a one 1 3 with probability 4 . Whenever the second coin is flipped the outcome is a one with probability 4 . Consider a random process (Wk : k ∈ Z) formed as follows. First, the student selects one of the coins, each coin being selected with equal probability. Then the selected coin is used to generate the Wk ’s — the other coin is not used at all. This scenario can be modelled as in Figure 5.2, using the following random variables: Uk Wk S=0 S=1 Vk Figure 5.2: A composite binary source. • (Uk : k ∈ Z) are independent B e 3 4 random variables • (Vk : k ∈ Z) are independent B e 1 4 random variables • S is a B e 1 2 random variable • The above random variables are all independent • Wk = (1 − S )Uk + S Vk . The variable S can be thought of as a switch state. 
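A quick simulation sketch (not part of the notes, and run ahead of the analysis that follows) shows what the switch does to long-run time averages of W. The sample size, number of runs, and seed below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    def one_run(n):
        S = rng.integers(0, 2)                   # fair switch: S = 0 or S = 1
        p = 3/4 if S == 0 else 1/4               # probability of a one for the selected coin
        W = rng.binomial(1, p, size=n)           # W_k = (1 - S) U_k + S V_k
        return S, W.mean()

    for _ in range(5):
        S, avg = one_run(n)
        print(f"S = {S}:  time average of W over {n} samples = {avg:.4f}")

    # Each long-run time average lands near 3/4 or 1/4 depending on the switch S,
    # not near the overall mean 1/2 -- previewing the conclusion reached below.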
Value S = 0 corresponds to using the coin with probability of heads equal to 3 for each flip. 4 Clearly W is stationary, and hence also WSS. Is W mean ergodic? One approach to answering this is the direct one. Clearly µW = E [Wk ] = E [Wk |S = 0]P [S = 0] + E [Wk | S = 1]P [S = 1] = So the question is whether 1 n n k=1 1 m.s. 2 ? Wk → But by the strong law of large numbers 1 n n Wk = k=1 = m.s. → 1 n n k=1 ((1 − S )Uk + S Vk ) (1 − S ) 1 n n Uk k=1 3 1 (1 − S ) + S 4 4 115 = +S 1 n 3S −. 4 2 n Vk k=1 31 11 ·+· 42 42 = 1 . 2 Thus, the limit is a random variable, rather than the constant 1 . Intuitively, the process W has 2 such strong memory due to the switch mechanism that even averaging over long time intervals does not diminish the randomness due to the switch. Another way to show that W is not mean ergodic is to find the covariance function CW and use 2 the necessary and sufficient condition (5.19) for mean ergodicity. Note that for k fixed, Wk = Wk 2 ] = 1 . If k = l, then with probability one, so E [Wk 2 E [Wk Wl ] = E [Wk Wl | S = 0]P [S = 0] + E [Wk Wl | S = 1]P [S = 1] 1 1 = E [Uk Ul ] + E [Vk Vl ] 2 2 1 1 = E [Uk ]E [Ul ] + E [Vk ]E [Vl ] 2 2 3 21 1 21 5 = + = . 4 2 4 2 16 Therefore, CW (n) = 1 4 1 16 if n = 0 if n = 0 Since limn→∞ CW (n) exists and is not zero, W is not mean ergodic. In many applications, we are interested in averages of functions that depend on multiple random variables. We discuss this topic for a discrete time stationary random process, (Xn : n ∈ Z). Let h be a bounded, Borel measurable function on Rk for some k . What time average would we expect to be a good approximation to the statistical average E [h(X1 , . . . , Xk )]? A natural choice is 1 n n j =1 h(Xj , Xj +1 , . . . , Xj +k−1 ). We define a stationary random process (Xn : n ∈ Z) to be ergodic if 1 n→∞ n n lim j =1 h(Xj , . . . , Xj +k−1 ) = E [h(X1 , . . . , Xk )] for every k ≥ 1 and for every bounded Borel measurable function h on Rk , where the limit is taken in any of the three senses a.s., p. or m.s. (The mathematics literature uses a different definition of ergodicity, which is equivalent.) An interpretation of the definition is that if X is ergodic then all of its finite dimensional distributions are determined as time averages. As an example, suppose h(x1 , x2 ) = 1 if x1 > 0 ≥ x2 . 0 else Then h(X1 , X2 ) is one if the process (Xk ) makes a “down crossing” of level 0 between times one and two. If X is ergodic then with probability 1, 1 lim (number of down crossings between times 1 and n + 1) = P [X1 > 0 ≥ X2 ]. n→∞ n Ergodicity is a strong property. Two types of ergodic random processes are the following: • a process X = (Xk ) such that the Xk ’s are iid. • a stationary Gaussian random process X such that limn→∞ RX (n) = 0 or, limn→∞ CX (n) = 0. 116 5.5 Complexification, Part I In some application areas, primarily in connection with spectral analysis as we shall see, complex valued random variables naturally arise. Vectors and matrices over C are reviewed in the appendix. A complex random variable X = U + j V can be thought of as essentially a two dimensional random variable with real coordinates U and V . Similarly, a random complex n-dimensional vector X can be written as X = U + j V , where U and V are each n-dimensional real vectors. As far as distributions are concerned, a random vector in n-dimensional complex space Cn is equivalent to a random vector with 2n real dimensions. 
For example, if the 2n real variables in U and V are jointly continuous, then X is a continuous type complex random vector and its density is given by a function fX (x) for x ∈ Cn . The density fX is related to the joint density of U and V by fX (u + j v ) = fU V (u, v ) for all u, v ∈ Rn . As far as moments are concerned, all the second order analysis covered in the notes up to this point can be easily modified to hold for complex random variables, simply by inserting complex conjugates in appropriate places. To begin, if X and Y are complex random variables, we define their correlation by E [X Y ∗ ] and similarly their covariance as E [(X − E [X ])(Y − E [Y ])∗ ]. The Schwarz inequality becomes |E [X Y ∗ ]| ≤ E [|X |2 ]E [|Y |2 ] and its proof is essentially the same as for real valued random variables. The cross correlation matrix for two complex random vectors X and Y is given by E [X Y ∗ ], and similarly the cross covariance matrix is given by Cov(X, Y ) = E [(X − E [X ])(Y − E [Y ])∗ ]. As before, Cov(X ) = Cov(X, X ). The various formulas for covariance still apply. For example, if A and C are complex matrices and b and d are complex vectors, then Cov(AX + b, C Y + d) = ACov(X, Y )C ∗ . Just as in the case of real valued random variables, a matrix K is a valid covariance matrix (in other words, there exits some random vector X such that K = Cov(X )) if and only if K is Hermetian symmetric and positive semidefinite. Complex valued random variables X and Y with finite second moments are said to be orthogonal if E [X Y ∗ ] = 0, and with this definition the orthogonality principle holds for complex valued random variables. If X and Y are complex random vectors, then again E [X |Y ] is the MMSE estimator of X given Y , and the covariance matrix of the error vector is given by Cov(X ) − Cov(E [X |Y ]). The MMSE estimator for X of the form AY + b (i.e. the best linear estimator of X based on Y ) and the covariance of the corresponding error vector are given just as for vectors made of real random variables: ˆ E [X |Y ] = E [X ] + Cov(X, Y )Cov(Y )−1 (Y − E [Y ]) ˆ Cov(X − E [X |Y ]) = Cov(X ) − Cov(X, Y )Cov(Y )−1 Cov(Y , X ) By definition, a sequence X1 , X2 , . . . of complex valued random variables converges in the m.s. sense to a random variable X if E [|Xn |2 ] < ∞ for all n and if limn→∞ E [|Xn − X |2 ] = 0. The various Cauchy criteria still hold with minor modification. A sequence with E [|Xn |2 ] < ∞ for all n is a Cauchy sequence in the m.s. sense if limm,n→∞ E [|Xn − Xm |2 ] = 0. As before, a sequence converges in the m.s. sense if and only if it is a Cauchy sequence. In addition, a sequence X1 , X2 , . . . of complex valued random variables with E [|Xn |2 ] < ∞ for all n converges in the m.s. sense if ∗ and only if limm,n→∞ E [Xm Xn ] exits and is a finite constant c. If the m.s. limit exists, then the limiting random variable X satisfies E [|X |2 ] = c. Let X = (Xt : t ∈ T) be a complex random process. We can write Xt = Ut + j Vt where U and V are each real valued random processes. The process X is defined to be a second order process if E [|Xt |2 ] < ∞ for all t. Since |Xt |2 = Ut2 + Vt2 for each t, X being a second order process is 117 equivalent to both U and V being second order processes. The correlation function of a second ∗ order complex random process X is defined by RX (s, t) = E [Xs Xt ]. The covariance function is given by CX (s, t) = Cov(Xs , Xt ) where the definition of Cov for complex random variables is used. The definitions and results given for m.s. 
continuity, m.s. differentiation, and m.s. integration all carry over to the case of complex processes, since they are based on the use of the Cauchy criteria for m.s. convergence which also carries over. For example, a complex valued random process is m.s. continuous if and only if its correlation function RX is continuous. Similarly the cross correlation function for two second order random processes X and Y is defined by RX Y (s, t) = E [Xs Yt∗ ]. Note ∗ that RX Y (s, t) = RY X (t, s). Let X = (Xt : t ∈ T) be a complex random process such that T is either the real line or the set of integers, and write Xt = Ut + j Vt where U and V are each real valued random processes. By definition, X is stationary if and only if for any t1 , . . . , tn ∈ T, the joint distribution of (Xt1 +s , . . . , Xtn +s ) is the same for all s ∈ T. Equivalently, X is stationary if and only if U and V are jointly stationary. The process X is defined to be WSS if X is a second order process such that E [Xt ] does not depend on t, and RX (s, t) is a function of s − t alone. If X is WSS we use RX (τ ) to denote RX (s, t), where τ = s − t. A pair of complex-valued random processes X and Y are defined to be jointly WSS if both X and Y are WSS and if the cross correlation function RX Y (s, t) is a ∗ function of s − t. If X and Y are jointly WSS then RX Y (−τ ) = RY X (τ ). In summary, everything we’ve discussed in this section regarding complex random variables, vectors, and processes can be considered a simple matter of notation. One simply needs to add lines to indicate complex conjugates, to use |X |2 instead of X 2 , and to use a star “∗ ” for Hermetian transpose in place of “T ” for transpose. We shall begin using the notation at this point, and return to a discussion of the topic of complex valued random processes in a later section. In particular, we will examine complex normal random vectors and their densities, and we shall see that there is somewhat more to complexification than just notation. 5.6 The Karhunen-Lo`ve expansion e We’ve seen that under a change of coordinates, an n-dimensional random vector X is transformed into a vector Y = U ∗ X such that the coordinates of Y are othogonal random variables. Here U is the unitary matrix such that E [X X ∗ ] = U ΛU ∗ . The columns of U are eigenvectors of the Hermetian symmetric matrix E [X X ∗ ] and the corresponding nonnegative eigenvalues of E [X X † ] comprise the diagonal of the diagonal matrix Λ. The columns of U form an orthonormal basis for Cn . The Karhunen-Lo`ve expansion gives a similar change of coordinates for a random process on e a finite interval, using an orthonormal basis of functions instead of an othonormal basis of vectors. Fix an interval [a, b]. The L2 norm of a real or complex valued function f on the interval [a, b] is defined by b ||f || = a |f (t)|2 dt We write L2 [a, b] for the set of all functions on [a, b] which have finite L2 norm. The inner product of two functions f and g in L2 [a, b] is defined by b (f , g ) = f (t)g ∗ (t)dt a 118 The functions f and g are said to be orthogonal if (f , g ) = 0. Note that ||f || = (f , f ) and the Schwarz inequality holds: |(f , g )| ≤ ||f || · ||g ||. An orthonormal basis for L2 [a, b] is a sequence of functions φ1 , φ2 , . . . such that (φi , φj ) = I{i=j } . 
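As a side check (not in the original notes), orthonormality of a candidate basis can be verified numerically by approximating the inner products with a quadrature rule. The sketch below does this for the real trigonometric functions given in (5.21) below, on an arbitrarily chosen interval [0, T]; since these functions are real valued, the complex conjugate in the inner product can be ignored.

    import numpy as np

    T = 2.0                                   # arbitrary interval length
    t = np.linspace(0.0, T, 20001)

    def phi(n, t):
        # Trigonometric basis of (5.21): phi_1 is constant, then cosine/sine pairs.
        if n == 1:
            return np.ones_like(t) / np.sqrt(T)
        k = n // 2
        trig = np.cos if n % 2 == 0 else np.sin
        return np.sqrt(2.0 / T) * trig(2 * np.pi * k * t / T)

    N = 7
    G = np.array([[np.trapz(phi(m, t) * phi(n, t), t) for n in range(1, N + 1)]
                  for m in range(1, N + 1)])
    print(np.round(G, 6))                     # close to the 7 x 7 identity matrix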
An orthonormal basis is complete if any function f in L2 [a, b] can be written as ∞ f (t) = (f , φn )φn (t) (5.20) n=1 In many instances encountered in practice, the sum (5.20) converges for each t, but in general what is meant is that the convergence is in the sense of the L2 [a, b] norm: b lim N →∞ a N |f (t) − n=1 (f , φn )φn (t)|2 dt = 0 or equivalently N lim ||f − N →∞ n=1 (f , φn )φn || = 0 The representation (5.20) can be thought of as a representation of the function f relative to a coordinate system given by the orthonormal basis. The nth coordinate of f is given by the inner product (f , φn ) and we write f ↔ ((f , φ1 ), (f , φ2 ), . . .)T . A simple example of an orthonormal basis for an interval [0, T ] is given by φ1 (t) = φ2k (t) 1 √, T 2 =T φ2 (t) = 2 T π cos( 2T t ), cos( 2πkt ), φ2k+1 (t) = T φ3 (t) = 2 T π sin( 2T t ), sin( 2πkt ) for k ≥ 1. T 2 T (5.21) What happens if we try replacing f by a random process X ? Suppose that X is a m.s. continuous random process on the interval [a, b]. Simply replacing f by X in the series representation (5.20) yields the representation ∞ Xt = (X, φn )φn (t) (5.22) n=1 where the coordinates of X are now random variables given by integrals of X : b (X, φn ) = a Xt φ∗ (t)dt. n The first moments of the coordinates of X can be expressed using the mean function given by µX (t) = E [Xt ]: b E [(X, φn )] = a µX (t)φ∗ (t)dt = (µX , φn ) n Therefore, the mean of the nth coordinate of X is the nth coordinate of the mean function of X . The correlations of the coordinates are given by b ∗ E [(X, φm )(X, φn ) ] = E a b = a b a Xt φ∗ (t)dt m b a Xs φ∗ (s)ds n RX (t, s)φ∗ (t)φn (s)dsdt m 119 ∗ (5.23) There is no reason to expect that the coordinates of X relative to an arbitrary orthonormal basis are orthogonal random variables. However, if the basis functions are eigenfunctions for the correlation function of RX , the story is different. Suppose for each n that the basis function φn is an eigenfunction of RX with corresponding eigenvalue λn , meaning b RX (t, s)φn (s)ds = λn φn (t) (5.24) a Substituting this eigen relation into (5.23) yields b E [(X, φm )(X, φn )∗ ] = a λn φ∗ (t)φn (t)dt = λn I{m=n} m (5.25) Theorem 5.6.1 Karhunen-Lo`ve expansion If X = (Xt : a ≤ t ≤ b) is m.s. continuous, equivae lently if RX is continuous, then there exists a complete orthonormal basis (φn : n ≥ 1) of continuous functions and corresponding nonnegative eigenvalues (λn : n ≥ 1) for RX , and RX is given by the fol lowing series expansion: ∞ RX (s, t) = λn φn (s)φ∗ (t) n n=1 The series converges uniformly in s, t, meaning that N lim max |RX (s, t) − N →∞ a≤s,t≤b n=1 λn φn (s)φ∗ (t)| = 0 n The process X can be represented as Xt = ∞ φn (t)(X, φn ) n=1 and the coordinates (X, φn ) satisfy (5.25). The series converges in the m.s. sense, uniformly in t. Pro of The part of the theorem refering to the representation of RX is called Mercer’s theorem and is not proved here. To establish the statement about the convergence of the representation of X , begin by observing that b E [Xt (X, φn )∗ ] = E [Xt a b = ∗ Xs φn (s)ds] RX (t, s)φn (s)ds = λn φn (t). (5.26) a Equations (5.25) and (5.26) imply that N E Xt − n=1 2 φn (t)(X, φn ) = RX (t, t) − N n=1 λn |φn (t)|2 , (5.27) which, since the series on the right side of (5.27) converges uniformly in t, implies the stated convergence property for the representation of X . 2 120 Remarks (1) Symbolically, mimicking matrix notation, we can write the representation of RX as ∗ λ1 φ1 (t) φ∗ (t) λ2 2 . λ3 RX (s, t) = [φ1 (s)|φ2 (s)| · · · ] . 
. .. . (2) If f ∈ L2 [a, b] and f (t) represents a voltage or current across a resistor, then the energy dissipated during the interval [a, b] is, up to a multiplicative constant, given by b (Energy of f ) = ||f ||2 = |f (t)|2 dt = a n=1 |(f , φn )|2 . The mean total energy of (Xt : a < t < b) is thus given by b E[ a b |Xt |2 dt] = RX (t, t)dt a bN = a n=1 ∞ = λn |φn (t)|2 dt λn n=1 (3) If (Xt : a ≤ t ≤ b) is a real valued mean zero Gaussian process and if the orthonormal basis functions are real valued, then the coordinates (X, φn ) are uncorrelated, real valued, jointly Gaussian random variables, and therefore are independent. Example 5.7 Let W = (Wt : t ≥ 0) be a Brownian motion with parameter σ 2 . Let us find the KL expansion of W over the interval [0, T ]. Substituting RX (s, t) = σ 2 (s ∧ t) into the eigen relation (5.24) yields t T σ 2 sφn (s)ds + 0 σ 2 tφn (s)ds = λn φn (t) (5.28) t Differentiating (5.28) with respect to t yields σ 2 tφn (t) − σ 2 tφn (t) + T t σ 2 φn (s)ds = λn φn (t), (5.29) and differentiating a second time yields that the eigenfunctions satisfy the differential equation λφ = −σ 2 φ. Also, setting t = 0 in (5.28) yields the boundary condition φn (0) = 0, and setting t = T in (5.29) yields the boundary condition φn (T ) = 0. Solving yields that the eigenvalue and eigenfunction pairs for W are λn = 4σ 2 T 2 (2n + 1)2 π 2 φn (t) = 2 sin T (2n + 1)π t 2T These functions form a complete orthonormal basis for L2 [0, T ]. 121 n≥0 Example 5.8 Let X be a white noise process. Such a process is not a random process as defined in these notes, but can be defined as a generalized process in the same way that a delta function can be defined as a generalized function. Generalized random processes, just like generalized functions, only make sense when mutliplied by a suitable function and then integrated. For example, the delta function δ is defined by the requirement that for any function f that is continous at t = 0, ∞ f (t)δ (t)dt = f (0) −∞ ∞ A white noise process X is such that integrals of the form −∞ f (t)X (t)dt exist for functions f with finite L2 norm ||f ||. The integrals are random variables with finite second moments, mean zero and correlations given by E ∞ −∞ f (s)Xs ds ∞ g (t)Xt dt −∞ ∗ = σ2 ∞ f (t)g ∗ (t)dt −∞ In a formal or symbolic sense, this means that X is a WSS process with mean µX = 0 and ∗ autocorrelation function RX (s, t) = E [Xs Xt ] given by RX (τ ) = σ 2 δ (τ ). What would the KL expansion be for a white noise process over some fixed interval [a,b]? The eigen relation (5.24) becomes simply σ 2 φ(t) = λn φ(t) for all t in the interval. Thus, all the eigenvalues of a white noise process are equal to σ 2 , and any function f with finite norm is an eigen function. Thus, if (φn : n ≥ 1) is an arbitrary complete orthonormal basis for the square integrable functions on [a, b], then the coordinates of the white noise process X , formally given by Xn = (X, φn ), satisfy ∗ E [Xn Xm ] = σ 2 I{n=m} . (5.30) This offers a reasonable interpretation of white noise. It is a generalized random process such that its coordinates (Xn : n ≥ 1) relative to an arbitrary orthonormal basis for a finite interval have mean zero and satisfy (5.30). Example 5.9 (Periodic WSS random processes) Let X = (Xt : t ∈ R) be a WSS random process and let T be a positive constant. It is shown next that the following three conditions are equivalent: (a) RX (T ) = RX (0) (b) P [XT +τ = Xτ ] = 1 for all τ ∈ R (c) RX (T + τ ) = RX (τ ) for all τ ∈ R (i.e. 
RX (τ ) is periodic with period T ) Suppose (a) is true. Since RX (0) is real valued, so is RX (T ), yielding ∗ ∗ ∗ ∗ E [|XT +τ − Xτ |2 ] = E [XT +τ XT +τ − XT +τ Xτ − Xτ XT +τ + Xτ Xτ ] ∗ = RX (0) − RX (T ) − RX (T ) + RX (0) = 0 Therefore, (a) implies (b). Next, suppose (b) is true and let τ ∈ R. Since two random variables that are equal with probability one have the same expectation, (b) implies that ∗ ∗ RX (T + τ ) = E [XT +τ X0 ] = E [Xτ X0 ] = RX (τ ). Therefore (b) imples (c). Trivially (c) implies (a), so the equivalence of (a) through (c) is proved. We shall call X a periodic, WSS process of period T if X is WSS and any of the three equivalent properties (a), (b), or (c) hold. 122 Property (b) almost implies that the sample paths of X are periodic. However, for each τ it can be that Xτ = Xτ +T on an event of probability zero, and since there are uncountably many real numbers τ , the sample paths need not be periodic. However, suppose (b) is true and define a process Y by Yt = X(t mod T ) . (Recall that by definition, (t mod T ) is equal to t + nT , where n is selected so that 0 ≤ t + nT < T .) Then Y has periodic sample paths, and Y is a version of X , which by definition means that P [Xt = Yt ] = 1 for any t ∈ R. Thus, the properties (a) through (c) are equivalent to the condition that X is WSS and there is a version of X with periodic sample paths of period T . Suppose X is a m.s. continuous, periodic, WSS random process. Due to the periodicity of X , it is natural to consider the restriction of X to the interval [0, T ]. The Karhunen-Lo`ve expansion e of X restricted to [0, T ] is described next. Let φn be the function on [0, T ] defined by φn (t) = e2πj nt/T √ T The functions (φn : n ∈ Z) form a complete orthonormal basis for L2 [0, T ]. In addition, for any n fixed, both RX (τ ) and φn are periodic with period dividing T , so T T RX (s, t)φn (t)dt = 0 0 T = 0 RX (s − t)φn (t)dt RX (t)φn (s − t)dt T 1 √ RX (t)e2πj ns/T e−2πj nt/T dt T0 = λn φn (s). = Therefore φn is an eigenfunction of RX , and the corresponding eigenvalue λn is given by T λn = RX (t)e−2πj nt/T dt = √ T (RX , φn ). 0 The Karhunen-Lo`ve expansion (5.20) of X over the interval [0, T ] can be written as e ˆ Xn e2πj nt/T Xt = (5.31) n ˆ where Xn is defined by 1 1 ˆ Xn = √ (X, φn ) = T T Note that T Xt e−2πj nt/T dt 0 1 λn ˆ ˆ∗ E [Xm Xn ] = E [(X, φm )(X, φn )∗ ] = I T T {m=n} Although the representation (5.31) has been derived only for 0 ≤ t ≤ T , both sides of (5.31) are periodic with period T . Therefore, the representation (5.31) holds for all t. It is called the spectral representation of the periodic, WSS process X . 123 Let pX be the function on the real line R = (ω : −∞ < ω < ∞), pX (ω ) = λn /T 0 ω= else 2π n T 1 defined by for some integer n The function pX is called the power spectral mass function of X . It is similar to a probability mass π function, in that it is positive for at most a countable infinity of values. The value pX ( 2Tn ) is equal th term in the representation (5.31): to the power of the n 2π n ˆ ˆ E [|Xn e2πj nt/T |2 ] = E [|Xn |2 ] = pX ( ) T and the total mass of pX is the total power of X , E [|Xt |]2 . Periodicity is a rather restrictive assumption to place on a WSS process. In the next chapter we shall further investigate spectral properties of WSS processes. We shall see that many WSS random processes have a power spectral density. A given random variable might have a pmf or a pdf, and it definitely has a CDF. 
In the same way, a given WSS process might have a power spectral mass function or a power spectral density function, and it definitely has a cumulative power spectral distribution function. The periodic WSS processes of period T are precisely those WSS processes that have a power spectral mass function that is concentrated on the integer multiples of 2π . T 1 The Greek letter ω is used here as it is traditionally used for frequency measured in radians per second. It is related to the frequency f measured in cycles per second by ω = 2π f . Here ω is not the same as a typical element of the underlying space of all outcomes, Ω. The meaning of ω should be clear from the context. 124 5.7 Problems 5.1. Calculus for a simple Gaussian random pro cess Define X = (Xt : t ∈ R) by Xt = A + B t + C t2 , where A, B , and C are independent, N (0, 1) 1 random variables. (a) Verify directly that X is m.s. differentiable. (b) Express P { 0 Xs ds ≥ 1} in terms of Q, the standard normal complementary CDF. 5.2. Lack of sample path continuity of a Poisson pro cess Let N = (Nt : t ≥ 0) be a Poisson process with rate λ > 0. (a) Find the following two probabilities, explaining your reasoning: P {N is continuous over the interval [0,T] } for a fixed T > 0, and P {N is continuous over the interval [0, ∞)}. (b) Is N sample path continuous a.s.? Is N m.s. continuous? 5.3. Prop erties of a binary valued pro cess Let Y = (Yt : t ≥ 0) be given by Yt = (−1)Nt , where N is a Poisson process with rate λ > 0. (a) Is Y a Markov process? If so, find the transition probability function pi,j (s, t) and the transition rate matrix Q. (b) Is Y mean square continuous? (c) Is Y mean square differentiable? (d) Does 1T limT →∞ T 0 yt dt exist in the m.s. sense? If so, identify the limit. 5.4. Some statements related to the basic calculus of random pro cesses Classify each of the following statements as either true (meaning always holds) or false, and justify your answers. (a) Let Xt = Z , where Z is a Gaussian random variable. Then X = (Xt : t ∈ R) is mean ergodic in the m.s. sense. σ 2 |τ | ≤ 1 is a valid autocorrelation function. (b) The function RX defined by RX (τ ) = 0 τ >1 (c) Suppose X = (Xt : t ∈ R) is a mean zero stationary Gaussian random process, and suppose X is m.s. differentiable. Then for any fixed time t, Xt and Xt are independent. 5.5. Differentiation of the square of a Gaussian random pro cess (a) Show that if random variables (An : n ≥ 0) are mean zero and jointly Gaussian and if limn→∞ An = A m.s., then limn→∞ A2 = A2 m.s. (Hint: If A, B , C , and D are mean zero and n jointly Gaussian, then E [AB C D] = E [AB ]E [C D] + E [AC ]E [B D] + E [AD]E [B C ].) (b) Show that if random variables (An , Bn : n ≥ 0) are jointly Gaussian and limn→∞ An = A m.s. and limn→∞ Bn = B m.s. then limn→∞ An Bn = AB m.s. (Hint: Use part (a) and the identity 2− 2 2 ab = (a+b) 2 a −b .) 2 (c) Let X be a mean zero, m.s. differentiable Gaussian random process, and let Yt = Xt for all t. Is Y m.s. differentiable? If so, justify your answer and express the derivative in terms of Xt and Xt . 5.6. Cross correlation b etween a pro cess and its m.s. derivative Suppose X is a m.s. differentiable random process. Show that RX X = ∂1 RX . (It follows, in particular, that ∂1 RX exists.) 5.7. Fundamental theorem of calculus for m.s. calculus Suppose X = (Xt : t ≥ 0) is a m.s. continuous random process. Let Y be the process defined by t Yt = 0 Xu du for t ≥ 0. Show that X is the m.s. derivative of Y . (It follows, in particular, that Y 125 is m.s. 
differentiable.) 5.8. A windowed Poisson pro cess Let N = (Nt : t ≥ 0) be a Poisson process with rate λ > 0, and let X = (Xt : t ≥ 0) be defined by Xt = Nt+1 − Nt . Thus, Xt is the number of counts of N during the time window (t, t + 1]. (a) Sketch a typical sample path of N , and the corresponding sample path of X . (b) Find the mean function µX (t) and covariance function CX (s, t) for s, t ≥ 0. Express your answer in a simple form. (c) Is X Markov? Why or why not? (d) Is X mean-square continuous? Why or why not? t (e) Determine whether 1 0 Xs ds converges in the mean square sense as t → ∞. t 5.9. An integral of white noise times an exp onential t Let X = 0 Zu e−u du, where Z is white Gaussian noise with autocorrelation function δ (τ )σ 2 , for 2 > 0. (a) Find the auto correlation function, R (s, t) for s, t ≥ 0. (b) Is X mean square some σ X differentiable? Justify your answer. (c) Does Xt converge in the mean square sense as t → ∞? Justify your answer. 5.10. A singular integral with a Brownian motion 1 Consider the integral 0 wt dt, where w = (wt : t ≥ 0) is a standard Brownian motion. Since t 1 wt Var( wt ) = 1 diverges as t → 0, we define the integral as lim →0 t t t dt m.s. if the limit exists. (a) Does the limit exist? If so, what is the probability distribution of the limit? ∞ T (b) Similarly, we define 1 wt dt to be limT →∞ 1 wt dt m.s. if the limit exists. Does the limit t t exist? If so, what is the probability distribution of the limit? 5.11. An integrated Poisson pro cess t Let N = (Nt : t ≥ 0) denote a Poisson process with rate λ > 0, and let Yt = 0 Ns ds for s ≥ 0. (a) Sketch a typical sample path of Y . (b) Compute the mean function, µY (t), for t ≥ 0. (c) Compute Var(Yt ) for t ≥ 0. (c) Determine the value of the limit, limt→∞ P [Yt < t]. 5.12. Recognizing m.s. prop erties Suppose X is a mean zero random process. For each choice of autocorrelation function shown, indicate which of the following properties X has: m.s. continuous, m.s. differentiable, m.s. integrable over finite length intervals, and mean ergodic in the the m.s. sense. (a) X is WSS with RX (τ ) = (1 − |τ |)+ , (b) X is WSS with RX (τ ) = 1 + (1 − |τ |)+ , (c) X is WSS with RX (τ ) = cos(20π τ ) exp(−10|τ |), 1 if s = t (d) RX (s, t) = , (not WSS, you don’t need to check for mean ergodic property) 0 else √ (e) RX (s, t) = s ∧ t for s, t ≥ 0. (not WSS, you don’t need to check for mean ergodic property) 5.13. A random Taylor’s approximation Suppose X is a mean zero WSS random process such that RX is twice continuously differentiable. Guided by Taylor’s approximation for deterministic functions, we might propose the following estimator of Xt given X0 and X0 : Xt = X0 + tX0 . 126 (a) Express the covariance matrix for the vector (X0 , X0 , Xt )T in terms of the function RX and its derivatives. (b) Express the mean square error E [(Xt − Xt )2 ] in terms of the function RX and its derivatives. (c) Express the optimal linear estimator E [Xt |X0 , X0 ] in terms of X0 , X0 , and the function RX and its derivatives. (d) (This part is optional - not required.) Compute and compare limt→0 (mean square error)/t4 for the two estimators, under the assumption that RX is four times continuously differentiable. 5.14. Correlation ergo dicity of Gaussian pro cesses A WSS random process X is called correlation ergodic (in the m.s. sense) if for any constant h, lim m.s. t→∞ 1 t t Xs+h Xs ds = E [Xs+h Xs ] 0 Suppose X is a mean zero, real-valued Gaussian process such that RX (τ ) → 0 as |τ | → ∞. Show that X is correlation ergodic. 
(Hints: Let Yt = Xt+h Xt . Then correlation ergodicity of X is equivalent to mean ergodicity of Y . If A, B , C, and D are mean zero, jointly Gaussian random variables, then E [AB C D] = E [AB ]E [C D] + E [AC ]E [B D] + E [AD]E [B C ]. 5.15. A random pro cess which changes at a random time Let Y = (Yt : t ∈ R) and Z = (Zt : t ∈ R) be stationary Gaussian Markov processes with mean zero and autocorrelation functions RY (τ ) = RZ (τ ) = e−|τ | . Let U be a real-valued random variable and suppose Y , Z , and U , are mutually independent. Finally, let X = (Xt : t ∈ R) be defined by Xt = Yt t < U Zt t ≥ U (a) Sketch a typical sample path of X . (b) Find the first order distributions of X . (c) Express the mean and autocorrelation function of X in terms of the CDF, FU , of U . (d) Under what condition on FU is X m.s. continuous? (e) Under what condition on FU is X a Gaussian random process? 5.16. Gaussian review question Let X = (Xt : t ∈ R) be a real-valued stationary Gauss-Markov process with mean zero and autocorrelation function CX (τ ) = 9 exp(−|τ |). (a) A fourth degree polynomial of two variables is given by p(x, y ) = a+bx+cy +dxy +ex2 y +f xy 2 +... such that all terms have the form cxi y j with i + j ≤ 4. Suppose X2 is to be estimated by an estimator of the form p(X0 , X1 ). Find the fourth degree polynomial p to minimize the MSE: E [(X2 − p(X0 , X1 ))2 ] and find the resulting MMSE. (Hint: Think! Very little computation is needed.) 1 (b) Find P [X2 ≥ 4|X0 = π , X1 = 3]. You can express your answer using the Gaussian Q function 2 ∞ Q(c) = c √1 π e−u /2 du. (Hint: Think! Very little computation is needed.) 2 5.17. First order differential equation driven by Gaussian white noise Let X be the solution of the ordinary differential equation X = −X + N , with initial condition x0 , 127 where N = (Nt : t ≥ 0) is a real valued Gaussian white noise with RN (τ ) = σ 2 δ (τ ) for some constant σ 2 > 0. Although N is not an ordinary random process, we can interpret this as the condition that N is a Gaussian random process with mean µN = 0 and correlation function RN (τ ) = σ 2 δ (τ ). (a) Find the mean function µX (t) and covariance function CX (s, t). (b) Verify that X is a Markov process by checking the necessary and sufficient condition: CX (r, s)CX (s, t) = CX (r, t)CX (s, s) whenever r < s < t. (Note: The very definition of X also suggests that X is a Markov process, because if t is the “present time,” the future of X depends only on Xt and the future of the white noise. The future of the white noise is independent of the past (Xs : s ≤ t). Thus, the present value Xt contains all the information from the past of X that is relevant to the future of X . This is the continuous-time analog of the discrete-time Kalman state equation.) (c) Find the limits of µX (t) and RX (t + τ , t) as t → ∞. (Because these limits exist, X is said to be asymptotically WSS.) 5.18. Karhunen-Lo`ve expansion of a simple random pro cess e Let X be a WSS random process with mean zero and autocorrelation function RX (τ ) = 100(cos(10π τ ))2 = 50 + 50 cos(20π τ ). (a) Is X mean square differentiable? (Justify your answer.) (b) Is X mean ergodic in the m.s. sense? (Justify your answer.) (c) Describe a set of eigenfunctions and corresponding eigenvalues for the Karhunen-Lo`ve expane sion of (Xt : 0 ≤ t ≤ 1). 5.19. Karhunen-Lo`ve expansion of a finite rank pro cess e Suppose Z = (Zt : 0 ≤ t ≤ T ) has the form Zt = N=1 Xn ξn (t) such that the functions ξ1 , . . . 
, ξN n are orthonormal over the interval [0, T ], and the vector X = (X1 , ..., XN )T has a correlation matrix K with det(K ) = 0. The process Z is said to have rank N . Suppose K is not diagonal. Describe the Karhunen-Lo`ve expansion of Z . That is, describe an orthornormal basis (φn : n ≥ 1), and e eigenvalues for the K-L expansion of X , in terms of the given functions (ξn ) and correlation matrix K . Also, describe how the coordinates (Z, φn ) are related to X . 5.20. Mean ergo dicity of a p erio dic WSS random pro cess Let X be a mean zero periodic WSS random process with period T > 0. Recall that X has a power spectral representation Xt = Xn e2πj nt/T . n∈Z where the coefficients Xn are orthogonal random variables. The power spectral mass function of X π is the discrete mass function pX supported on frequencies of the form 2Tn , such that E [|Xn |2 ] = 2π n pX ( T ). Under what conditions on pX is the process X mean ergodic in the m.s. sense? Justify your answer. 5.21. Application of the Karhunen-Lo`ve expansion to estimation e Let X = (Xt : 0 ≤ T ) be a random process given by Xt = AB sin( πt ), where A and T are positive T constants and B is a N (0, 1) random variable. Think of X as an amplitude modulated random signal. (a) What is the expected total energy of X ? (b) What are the mean and covariance functions of X ? 128 (c) Describe the Karhunen-Lo´ve expansion of X . (Hint: Only one eigenvalue is nonzero, call it e λ1 . What are λ1 , the corresponding eigenfunction φ1 , and the first coordinate X1 = (X, φ1 )? You don’t need to explicitly identify the other eigenfunctions φ2 , φ3 , . . .. They can simply be taken to fill out a complete orthonormal basis.) (d) Let N = (Xt : 0 ≤ T ) be a real-valued Gaussian white noise process independent of X with RN (τ ) = σ 2 δ (τ ), and let Y = X + N . Think of Y as a noisy observation of X . The same basis functions used for X can be used for the Karhunen-Lo`ve expansions of N and Y . Let N1 = (N , φ1 ) e and Y1 = (Y , φ1 ). Note that Y1 = X1 + N1 . Find E [B |Y1 ] and the resulting mean square error. (Remark: The other coordinates Y2 , Y3 , . . . are independent of both X and Y1 , and are thus useless for the purpose of estimating B . Thus, E [B |Y1 ] is equal to E [B |Y ], the MMSE estimate of B given the entire observation process Y .) 5.22*. An auto correlation function or not? Let RX (s, t) = cosh(a(|s − t| − 0.5)) for −0.5 ≤ s, t 0.5 where a is a positive constant. Is RX the autocorrelation function of a random process of the form X = (Xt : −0.5 ≤ t ≤ 0.5)? If not, explain why not. If so, give the Karhunen-Lo`ve expansion for X . e 5.23*. On the conditions for m.s. differentiability t2 sin(1/t2 ) t = 0 (a) Let f (t) = . Sketch f and show that f is differentiable over all of R, and 0 t=0 1 find the derivative function f . Note that f is not continuous, and −1 f (t)dt is not well defined, whereas this integral would equal f (1) − f (−1) if f were continuous. (b) Let Xt = Af (t), where A is a random variable with mean zero and variance one. Show that X is m.s. differentiable. (c) Find RX . Show that ∂1 RX and ∂2 ∂1 RX exist but are not continuous. 129 130 Chapter 6 Random pro cesses in linear systems and sp ectral analysis Random processes can be passed through linear systems in much the same way as deterministic signals can. A time-invariant linear system is described in the time domain by an impulse response function, and in the frequency domain by the Fourier transform of the impulse response function. 
In a sense we shall see that Fourier transforms provide a diagonalization of WSS random processes, just as the Karhunen-Lo`ve expansion allows for the diagonalization of a random process defined e on a finite interval. While a m.s continuous random process on a finite interval has a finite average energy, a WSS random process has a finite mean average energy per unit time, called the power. Nearly all the definitions and results of this chapter can be carried through in either discrete time or continuous time. The set of frequencies relevant for contiuous-time random processes is all of R, while the set of frequencies relevant for discrete-time random processes is the interval [−π , π ]. For ease of notation we shall primarily concentrate on continuous time processes and systems in the first two sections, and give the corresponding definition for discrete time in the third section. Representations of baseband random processes and narrowband random processes are discussed in Sections 6.4 and 6.5. Roughly speaking, baseband random processes are those which have power only in low frequencies. A baseband random process can be recovered from samples taken at a sampling frequency that is at least twice as large as the largest frequency component of the process. Thus, operations and statistical calculations for a continuous-time baseband process can be reduced to considerations for the discrete time sampled process. Roughly speaking, narrowband random processes are those processes which have power only in a band (i.e. interval) of frequencies. A narrowband random process can be represented as baseband random processes that is modulated by a deterministic sinusoid. Complex random processes naturally arise as baseband equivalent processes for real-valued narrowband random processes. A related discussion of complex random processes is given in the last section of the chapter. 6.1 Basic definitions The output (Yt : t ∈ R) of a linear system with impluse response function h(s, t) and a random process input (Xt : t ∈ R) is defined by Ys = ∞ h(s, t)Xt dt −∞ 131 (6.1) See Figure 6.1. For example, the linear system could be a simple integrator from time zero, defined X Y h Figure 6.1: A linear system with input X , impulse response function h, and output Y by s 0 Xt dt Ys = 0 s≥0 s < 0, in which case the impulse response function is 1 s≥t≥0 0 otherwise. h(s, t) = The integral (6.1) defining the output Y will be interpreted in the m.s. sense. Thus, the integral defining Ys for s fixed exists if and only if the following Riemann integral exists and is finite: ∞ ∞ −∞ −∞ h∗ (s, τ )h(s, t)RX (t, τ )dtdτ (6.2) A sufficient condition for Ys to be well defined is that RX is a bounded continuous function, and ∞ h(s, t) is continuous in t with −∞ |h(s, t)|dt < ∞. The mean function of the output is given by µY (s) = E [ ∞ h(s, t)Xt dt] = −∞ ∞ h(s, t)µX (t)dt (6.3) −∞ As illustrated in Figure 6.2, the mean function of the output is the result of passing the mean function of the input through the linear system. The cross correlation function between the output µX µY h Figure 6.2: A linear system with input µX and impulse response function h and input processes is given by ∞ RY X (s, τ ) = E [ −∞ ∞ = ∗ h(s, t)Xt dtXτ ] h(s, t)RX (t, τ )dt (6.4) −∞ and the correlation function of the output is given by ∞ RY (s, u) = E Ys = = ∞ h(u, τ )Xτ dτ ∗ −∞ h∗ (u, τ )RY X (s, τ )dτ −∞ ∞ ∞ −∞ −∞ h∗ (u, τ )h(s, t)RX (t, τ )dtdτ 132 (6.5) (6.6) Recall that Ys is well defined as a m.s. 
integral if and only if the integral (6.2) is well defined and finite. Comparing with (6.6), it means that Ys is well defined if and only if the right side of (6.6) with u = s is well defined and gives a finite value for E [|Ys |2 ]. The linear system is time invariant if h(s, t) depends on s, t only through s − t. If the system is time invariant we write h(s − t) instead of h(s, t), and with this substitution the defining relation (6.1) becomes a convolution: Y = h ∗ X . A linear system is called bounded input bounded output (bibo) stable if the output is bounded whenever the input is bounded. In case the system is time invariant, bibo stability is equivalent to the condition ∞ −∞ |h(τ )|dτ < ∞. (6.7) In particular, if (6.7) holds and if an input signal x satisfies |xs | < L for all s, then the output signal y = x ∗ h satisfies |y (t)| ≤ ∞ −∞ |h(t − s)|Lds = L ∞ −∞ |h(τ )|dτ for all t. If X is a WSS random process then by the Schwarz inequality, RX is bounded by RX (0). Thus, if X is WSS and m.s. continuous, and if the linear system is time-invariant and bibo stable, the integral in (6.2) exists and is bounded by ∞ ∞ −∞ RX (0) −∞ |h(s − τ )||h(s − t)|dtdτ = RX (0)( ∞ −∞ |h(τ )|dτ )2 < ∞ Thus, the output of a linear, time-invariant bibo stable system is well defined in the m.s. sense if the input is a stationary, m.s. continuous process. A paragraph about convolutions is in order. It is useful to be able to recognize convolution integrals in disguise. If f and g are functions on R, the convolution is the function f ∗ g defined by f ∗ g (t) = or equivalently f ∗ g (t) = or equivalently, for any real a and b f ∗ g (a + b) = ∞ −∞ ∞ −∞ ∞ −∞ f (s)g (t − s)ds f (t − s)g (s)ds f (a + s)g (b − s)ds. A simple change of variable shows that the above three expressions are equivalent. However, in order to immediately recognize a convolution, the salient feature is that the convolution is the integral of the product of f and g , with the arguments of both f and g ranging over R in such a way that the sum of the two arguments is held constant. The value of the constant is the value at which the convolution is being evaluated. Convolution is commutative: f ∗ g = g ∗ f and associative: (f ∗ g ) ∗ k = f ∗ (g ∗ k ) for three functions f , g , k . We simply write f ∗ g ∗ k for (f ∗ g ) ∗ k . The convolution f ∗ g ∗ k is equal to a double integral of the product of f ,g , and k , with the arguments 133 of the three functions ranging over all triples in R3 with a constant sum. The value of the constant is the value at which the convolution is being evaluated. For example, ∞ ∞ −∞ f ∗ g ∗ k (a + b + c) = −∞ f (a + s + t)g (b − s)k (c − t)dsdt. Suppose that X is WSS and that the linear system is time invariant. Then (6.3) becomes µY (s) = ∞ −∞ h(s − t)µX dt = µX ∞ h(t)dt −∞ Observe that µY (s) does not depend on s. Equation (6.4) becomes ∞ RY X (s, τ ) = −∞ h(s − t)RX (t − τ )dt = h ∗ RX (s − τ ), (6.8) which in particular means that RY X (s, τ ) is a function of s − τ alone. Equation (6.5) becomes RY (s, u) = ∞ −∞ h∗ (u − τ )RY X (s − τ )dτ . (6.9) The right side of (6.9) looks nearly like a convolution, but as τ varies the sum of the two arguments is u − τ + s − τ , which is not constant as τ varies. To arrive at a true convolution, define the new function h by h(v ) = h∗ (−v ). Using the definition of h and (6.8) in (6.9) yields RY (s, u) = ∞ −∞ h(τ − u)h ∗ RX (s − τ )dτ = h ∗ (h ∗ RX )(s − u) = h ∗ h ∗ RX (s − u) which in particular means that RY (s, u) is a function of s − u alone. 
To summarize, if X is WSS and if the linear system is time invariant, then X and Y are jointly WSS with ∞ µY = µX h(t)dt RY X = h ∗ RX RY = h ∗ h ∗ RX . (6.10) −∞ The convolution h ∗ h, equal to h ∗ h, can also be written as h ∗ h(t) = = ∞ −∞ ∞ −∞ h(s)h(t − s)ds h(s)h∗ (s − t)ds (6.11) The expression shows that h ∗ h(t) is the correlation between h and h∗ translated by t from the origin. The equations derived in this section for the correlation functions RX , RY X and RY also hold for the covariance functions CX , CY X , and CY . The derivations are the same except that covariances rather than correlations are computed. In particular, if X is WSS and the system is linear and time invariant, then CY X = h ∗ CX and CY = h ∗ h ∗ CX . 134 6.2 Fourier transforms, transfer functions and p ower sp ectral densities Fourier transforms convert convolutions into products, so this is a good point to begin using Fourier transforms. The Fourier transform of a function g mapping R to the complex numbers C is formally defined by ∞ g (ω ) = e−j ωt g (t)dt (6.12) −∞ Some important properties of Fourier transforms are stated next. Linearity: ag + bh = ag + bh Inversion: g (t) = ∞ j ωt g (ω ) dω 2π −∞ e Convolution to multiplication: g ∗ h = g h and g ∗ h = 2π g h Parseval’s identity: ∞ ∗ −∞ g (t)h (t)dt = ∞ dω ∗ −∞ g (ω )h (ω ) 2π Transform of time reversal: h = h∗ , where h(t) = h∗ (−t) Differentiation to multiplication by j ω : dg dt (ω ) = (j ω )g (ω ) Pure sinusoid to delta function: For ωo fixed: ej ωo t (ω ) = 2π δ (ω − ωo ) Delta function to pure sinusoid: For to fixed: δ (t − to )(ω ) = e−j ωto The inversion formula above shows that a function g can be represented as an integral (basically a limiting form of linear combination) of sinusoidal functions of time ej ωt , and g (ω ) is the coefficient in the representation for each ω . Paresval’s identity applied with g = h yields that the total energy of g (the square of the L2 norm) can be computed in either the time or frequency domain: ∞ ∞ ||g ||2 = −∞ |g (t)|2 dt = −∞ |g (ω )|2 dω . The factor 2π in the formulas can be attributed to the use 2π of frequency ω in radians. If ω = 2π f , then f is the frequency in Hertz (Hz) and dω is simply df . 2π The Fourier transform can be defined for a very large class of functions, including generalized functions such as delta functions. In these notes we won’t attempt a systematic treatment, but will use Fourier transforms with impunity. In applications, one is often forced to determine in what senses the transform is well defined on a case-by-case basis. Two sufficient conditions for the Fourier transform of g to be well defined are mentioned in the remainder of this paragraph. The relation (6.12) defining a Fourier transform of g is well defined if, for example, g is a continuous function ∞ which is integrable: −∞ |g (t)|dt < ∞, and in this case the dominated convergence theorem implies that g is a continuous function. The Fourier transform can also be naturally defined whenever g has a finite L2 norm, through the use of Parseval’s identity. The idea is that if g has finite L2 norm, then it is the limit in the L2 norm of a sequence of functions gn which are integrable. Owing to Parseval’s identity, the Fourier transforms gn form a Cauchy sequence in the L2 norm, and hence have a limit, which is defined to be g . Return now to consideration of a linear time-invariant system with an impulse response function h = (h(τ ) : τ ∈ R). 
The Fourier transform of h is used so often that a special notation, H (ω ), is used for it. The output signal y = (yt : t ∈ R) for an input signal x = (xt : t ∈ R) is given in the 135 time domain by the convolution y = x ∗ h. In the frequency domain this becomes y (ω ) = H (ω )x(ω ). For example, given a < b let H[a,b] (ω ) be the ideal bandpass transfer function for frequency band [a, b], defined by 1 a≤ω≤b H[a,b] (ω ) = (6.13) 0 otherwise. If x is the input and y is the output of a linear system with transfer function H[a,b] , then the relation y (ω ) = H[a,b] (ω )x(ω ) shows that the frequency components of x in the frequency band [a, b] pass through the filter unchanged, and the frequency components of x outside of the band are completely nulled. The total energy of the output function y can therefore be interpreted as the energy of x in the frequency band [a, b]. Therefore, Energy of x in frequency interval [a, b] = ||y ||2 = ∞ −∞ |H[a,b] (ω )|2 |x(ω )|2 b dω = 2π a |x(ω )|2 dω . 2π Consequently, it is appropriate to call |x(ω )|2 the energy spectral density of the deterministic signal x. Given a WSS random process X = (Xt : t ∈ R), the Fourier transform of its correlation function RX is denoted by SX . For reasons that we will soon see, the function SX is called the power spectral density of X . Similarly, if Y and X are jointly WSS, then the Fourier transform of RY X is denoted by SY X , called the cross power spectral density function of Y and X . The Fourier transform of the time reverse complex conjugate function h is equal to H ∗ , so |H (ω )|2 is the Fourier transform of h ∗ h. With the above notation, the second moment relationships in (6.10) become: SY X (ω ) = H (ω )SX (ω ) SY (ω ) = |H (ω )|2 SX (ω ) ∞ Let us examine some of the properties of the power spectral density, SX . If −∞ |RX (t)|dt < ∞ then SX is well defined and is a continuous function. Because RY X = RX Y , it follows that ∗ SY X = SX Y . In particular, taking Y = X yields RX = RX and SX = S ∗ , meaning that SX is real-valued. ∞ The Fourier inversion formula applied to SX yields that RX (τ ) = −∞ ej ωτ SX (ω ) dω . In partic2π ular, ∞ dω (6.14) E [|Xt |2 ] = RX (0) = SX (ω ) . 2π −∞ The expectation E [|Xt |2 ] is called the power (or total power) of X , because if Xt is a voltage or current accross a resistor, |Xt |2 is the instantaneous rate of dissipation of heat energy. Therefore, (6.14) means that the total power of X is the integral of SX over R. This is the first hint that the name power spectral density for SX is justified. Let a < b and let Y denote the output when the WSS process X is passed through the linear time-invariant system with transfer function H[a,b] defined by (6.13). The process Y represents the part of X in the frequency band [a, b]. By the relation SY = |H[a,b] |2 SX and the power relationship (6.14) applied to Y , we have Power of X in frequency interval [a, b] = E [|Yt |2 ] = ∞ −∞ SY (ω ) dω = 2π b a SX (ω ) dω 2π (6.15) Two observations can be made concerning (6.15). First, the integral of SX over any interval [a, b] is nonnegative. If SX is continuous, this implies that SX is nonnegative. Even if SX is not continuous, we can conclude that SX is nonnegative except possibly on a set of zero measure. The 136 second observation is that (6.15) fully justifies the name “power spectral density of X ” given to SX . 
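As a numerical illustration (not in the notes), the power relationships (6.14) and (6.15) can be checked for a specific spectral density. Take RX(τ) = A²e^{−α|τ|}, whose Fourier transform SX(ω) = 2A²α/(ω² + α²) is computed in Example 6.3 below; the constants A² = 1 and α = 1.5 and the band [0, α] are arbitrary choices made for this sketch.

    import numpy as np

    A2, alpha = 1.0, 1.5
    S_X = lambda w: 2 * A2 * alpha / (w**2 + alpha**2)   # Fourier transform of A^2 e^{-alpha |tau|}

    w = np.linspace(-4000.0, 4000.0, 2_000_001)
    total_power = np.trapz(S_X(w), w) / (2 * np.pi)
    print("integral of S_X over R / 2pi:", total_power)   # approx R_X(0) = A^2 = 1, as in (6.14)

    a, b = 0.0, alpha                                     # power in the band [0, alpha] rad/s
    wb = np.linspace(a, b, 10001)
    band_power = np.trapz(S_X(wb), wb) / (2 * np.pi)
    print("power in the band [0, alpha]:", band_power)    # analytically (1/pi) arctan(1) = 1/4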
Example 6.1 Suppose X is a WSS process and that Y is a moving average of X with averaging window duration T for some T > 0: Yt = 1 T t Xs d s t−T Equivalently, Y is the output of the linear time-invariant system with input X and impulse response function h given by 1 0≤τ ≤T T h(τ ) = 0 else The output correlation function is given by RY = h ∗ h ∗ RX . Using (6.11) and referring to Figure 6.3 we find that h ∗ h is a triangular shaped waveform: h ∗ h(τ ) = 1 |τ | (1 − )+ T T Similarly, CY = h ∗ h ∗ CX . Let’s find in particular an expression for the variance of Yt in terms h(s) h(s!t) 1 T ~ h*h s 0 t T !T T Figure 6.3: Convolution of two rectangle functions of the function CX . ∞ Var(Yt ) = CY (0) = = 1 T T −T −∞ (1 − (h ∗ h)(0 − τ )CX (τ )dτ |τ | )CX (τ )dτ T (6.16) The expression in (6.16) arose earlier in these notes, in the section on mean ergodicity. Let’s see the effect of the linear system on the power spectral density of the input. Observe that H (ω ) = ∞ e−j ωt h(t)dt = −∞ = 2e−j ωT /2 Tω = e−j ωT /2 1 T e−j ωT − 1 −j ω ej ωT /2 − e−j ωT /2 2j T sin( ω2 ) ωT 2 Equivalently, using the substitution ω = 2π f , H (2π f ) = e−j πf T sinc(f T ) 137 where in these notes the sinc function is defined by sinc(u) = sin(π u) πu u=0 u = 0. 1 (6.17) (Some authors use somewhat different definitions for the sinc function.) Therefore |H (2π f )|2 = |sinc(f T )|2 , so that the output power spectral density is given by SY (2π f ) = SX (2π f )|sinc(f T )|2 . See Figure 6.4. sinc( u) H(2! f) f u 0 1 2 0 1 T 2 T Figure 6.4: The sinc function and the impulse response function Example 6.2 Consider two linear time-invariant systems in parallel as shown in Figure 6.5. The X Y h k U V Figure 6.5: Parallel linear systems first has input X , impulse response function h, and output U . The second has input Y , impulse response function k , and output V . Suppose that X and Y are jointly WSS. We can find RU V as follows. The main trick is notational: to use enough different variables of integration so that none are used twice. ∞ RU V (t, τ ) = E = = ∞ −∞ ∞ −∞ h(t − s)Xs ds −∞ ∞ −∞ ∞ −∞ k (τ − v )Yv dv ∗ h(t − s)RX Y (s − v )k ∗ (τ − v )dsdv {h ∗ RX Y (t − v )} k ∗ (τ − v )dsdv = h ∗ k ∗ RX Y (t − τ ). Note that RU V (t, τ ) is a function of t − τ alone. Together with the fact that U and V are individually WSS, this implies that U and V are jointly WSS, and RU V = h ∗ k ∗ RX Y . The relationship is expressed in the frequency domain as SU V = H K ∗ SX Y , where K is the Fourier transform of k . Special cases of this example include the case that X = Y or h = k . Example 6.3 Consider the circuit with a resistor and a capacitor shown in Figure 6.6. Take as the input signal the voltage difference on the left side, and as the output signal the voltage across 138 R + C x(t) q (t) + ! y(t) ! Figure 6.6: An RC circuit modeled as a linear system the capacitor. Also, let qt denote the charge on the upper side of the capacitor. Let us first identify the impulse response function by assuming a deterministic input x and a corresponding output y . The elementary equations for resistors and capacitors yield dq 1 qt = (xt − yt ) and yt = dt R C Therefore dy 1 = (xt − yt ) dt RC which in the frequency domain is j ω y (ω ) = 1 (x(ω ) − y (ω )) RC so that y = H x for the system transfer function H given by H (ω ) = 1 1 + RC j ω Suppose, for example, that the input X is a real-valued, stationary Gaussian Markov process, so that its autocorrelation function has the form RX (τ ) = A2 e−α|τ | for some constants A2 and α > 0. 
Then 2 A2 α SX (ω ) = 2 ω + α2 and 2 A2 α SY (ω ) = SX (ω )|H (ω )|2 = 2 (ω + α2 )(1 + (RC ω )2 ) Example 6.4 A random signal, modeled by the input random process X , is passed into a linear time-invariant system with feedback and with noise modeled by the random process N , as shown in Figure 6.7. The output is denoted by Y . Assume that X and N are jointly WSS and that the random variables comprising X are orthogonal to the random variables comprising N : RX N = 0. Assume also, for the sake of system stability, that the magnitude of the gain around the loop satisfies |H3 (ω )H1 (ω )H2 (ω )| < 1 for all ω such that SX (ω ) > 0 or SN (ω ) > 0. We shall express the output power spectral density SY in terms the power spectral densities of X and N , and the three transfer functions H1 , H2 , and H3 . An expression for the signal-to-noise power ratio at the output will also be computed. 139 Nt Xt H (!) 1 + Y H2(!) t H3(!) Figure 6.7: A feedback system Xt H1(!)H2(!) ~ Xt 1!H (!)H ( )H2(!) 3 1! ~~~ + Nt H2(!) !)H ( )H2(!) 1!H ( 1 ! 3 Yt =Xt+Nt ~ Nt Figure 6.8: An equivalent representation Under the assumed stability condition, the linear system can be written in the equivalent form ˜ ˜ shown in Figure 6.8. The process X is the output due to the input signal X , and N is the output due to the input noise N . The structure in Figure 6.8 is the same as considered in Example 6.2. Since RX N = 0 it follows that RX N = 0, so that SY = SX + SN . Consequently, ˜˜ ˜ ˜ SY (ω ) = SX (ω ) + SN (ω ) = ˜ ˜ |H2 (ω )2 | |H1 (ω )2 |SX (ω ) + SN (ω ) |1 − H3 (ω )H1 (ω )H2 (ω )|2 The output signal-to-noise ratio is the ratio of the power of the signal at the output to the power of the noise at the output. For this example it is given by ˜ E [|Xt |2 ] = ˜ E [|Nt |2 ] ∞ |H2 (ω )H1 (ω )|2 SX (ω ) −∞ |1−H3 (ω )H1 (ω )H2 (ω )|2 ∞ |H2 (ω )|2 SN (ω ) −∞ |1−H3 (ω )H1 (ω )H2 (ω )|2 dω 2π dω 2π Example 6.5 Consider the linear time-invariant system defined as follows. For input signal x the output signal y is defined by y + y + y = x + x . We seek to find the power spectral density of the output process if the input is a white noise process X with RX (τ ) = σ 2 δ (τ ) and SX (ω ) = σ 2 for all ω . To begin, we identify the transfer function of the system. In the frequency domain, the system is described by ((j ω )3 + j ω + 1)y (ω ) = (1 + j ω )x(ω ), so that H (ω ) = Hence, 1 + jω 1 + jω = 3 1 + j ω + (j ω ) 1 + j (ω − ω 3 ) SY (ω ) = SX (ω )|H (ω )|2 = σ 2 (1 + ω 2 ) σ 2 (1 + ω 2 ) = . 1 + (ω − ω 3 )2 1 + ω 2 − 2ω 4 + ω 6 140 Observe that ∞ output power = SY (ω ) −∞ 6.3 dω < ∞. 2π Discrete-time pro cesses in linear systems The basic definitions and use of Fourier transforms described above carry over naturally to discrete time. In particular, if the random process X = (Xk : k ∈ Z) is the input of a linear, discrete-time system with impule response function h, then the output Y is the random process given by ∞ Yk = h(k , n)Xn . n=−∞ The equations in Section 6.1 can be modified to hold for discrete time simply by replacing integration over R by summation over Z. In particular, if X is WSS and if the linear system is time-invariant then (6.10) becomes µY = µX ∞ h(n) n=−∞ RY X = h ∗ R X RY = h ∗ h ∗ RX , (6.18) where the convolution in (6.18) is defined for functions g and h on Z by ∞ g ∗ h(n) = k=−∞ g (n − k )h(k ) Again, Fourier transforms can be used to convert convolution to multiplication. The Fourier transform of a function g = (g (n) : n ∈ Z) is the function g on [−π , π ] defined by g (ω ) = ∞ e−j ωn g (n). 
−∞ Some of the most basic properties are: Linearity: ag + bh = ag + bh Inversion: g (n) = π j ω n g (ω ) dω 2π −π e Convolution to multiplication: g ∗ h = g h and g ∗ h = Parseval’s identity: ∞ ∗ n=−∞ g (n)h (n) = π −π 1 2π g h g (ω )h∗ (ω ) dω 2π Transform of time reversal: h = h∗ , where h(t) = h(−t)∗ Pure sinusoid to delta function: For ωo ∈ [−π , π ] fixed: ej ωo n (ω ) = 2π δ (ω − ωo ) Delta function to pure sinusoid: For no fixed: I{n=no } (ω ) = e−j ωno 141 The inversion formula above shows that a function g on Z an be represented as an integral (basically a limiting form of linear combination) of sinusoidal functions of time ej ωn , and g (ω ) is the coefficient in the representation for each ω . Paresval’s identity applied with g = h yields that the total energy of g (the square of the L2 norm) can be computed in either the time or frequency π domain: ||g ||2 = ∞ −∞ |g (n)|2 = −π |g (ω )|2 dω . n= 2π The Fourier transform and its inversion formula for discrete-time functions are equivalent to the Fourier series representation of functions in L2 [− , π ] using the complete orthogonal basis (ej ωn : n ∈ Z) for L2 [−π , π ], as discussed in connection with the Karhunen-Lo`ve expansion. The e functions in this basis all have norm 2π . Recall that when we considered the Karhunen-Lo`ve e expansion for a periodic WSS random process of period T , functions on a time interval were 1 important and the power was distributed on the integers Z scaled by T . In this section, Z is considered to be the time domain and the power is distributed over an interval. That is, the role of Z and a finite interval are interchanged. The transforms used are essentially the same, but with j replaced by −j . Given a linear time-invariant system in discrete time with an impulse response function h = (h(τ ) : τ ∈ Z), the Fourier transform of h is denoted by H (ω ). The defining relation for the system in the time domain, y = h ∗ x, becomes y (ω ) = H (ω )x(ω ) in the frequency domain. For −π ≤ a < b ≤ π , b dω Energy of x in frequency interval [a, b] = |x(ω )|2 . 2π a so it is appropriate to call |x(ω )|2 the energy spectral density of the deterministic, discrete-time signal x. Given a WSS random process X = (Xn : n ∈ Z), the Fourier transform of its correlation function RX is denoted by SX , and is called the power spectral density of X . Similarly, if Y and X are jointly WSS, then the Fourier transform of RY X is denoted by SY X , called the cross power spectral density function of Y and X . With the above notation, the second moment relationships in (6.18) become: SY X (ω ) = H (ω )SX (ω ) SY (ω ) = |H (ω )|2 SX (ω ) The Fourier inversion formula applied to SX yields that RX (n) = ular, π dω E [|Xn |2 ] = RX (0) = SX (ω ) . 2π −π π j ω n S (ω ) dω . X 2π −π e In partic- The expectation E [|Xn |2 ] is called the power (or total power) of X , and for −π < a < b ≤ π we have b dω Power of X in frequency interval [a, b] = SX (ω ) 2π a 6.4 Baseband random pro cesses Deterministic baseband signals are considered first. Let x be a continuous-time signal (i.e. a ∞ function on R) such that its energy, −∞ |x(t)|2 dt, is finite. By the Fourier inversion formula, the signal x is an integral, which is essentially a sum, of sinusoidal functions of time, ej ωt . The weights are given by the Fourier transform x(w). Let fo > 0 and let ωo = 2π fo . The signal x is called a 142 baseband signal, with one-sided band limit fo H z , or equivalently ωo radians/second, if x(ω ) = 0 for |ω | ≥ ωo . 
For such a signal, the Fourier inversion formula becomes ωo x(t) = ej ωt x(ω ) −ωo dω 2π (6.19) Equation (6.19) displays the baseband signal x as a linear combination of the functions ej ωt indexed by ω ∈ [−ωo , ωo ]. A celebrated theorem of Nyquist states that the baseband signal x is completely determined by 1 its samples taken at sampling frequency 2fo . Specifically, define T by T = 2fo . Then x(t) = ∞ t − nT T x(nT ) sinc n=−∞ (6.20) . where the sinc function is defined by (6.17). Nyquist’s equation (6.20) is indeed elegant. It obviously holds by inspection if t = mT for some integer m, because for t = mT the only nonzero term in the sum is the one indexed by n = m. The equation shows that the sinc function gives the correct interpolation of the narrowband signal x for times in between the integer multiples of T . We shall give a proof of (6.20) for deterministic signals, before considering its extension to random processes. A proof of (6.20) goes as follows. Henceforth we will use ωo more often than fo , so it is worth remembering that ωo T = π . Taking t = nT in (6.19) yields ωo x(nT ) = −ωo ωo = ej ωnT x(ω ) dω 2π x(ω )(e−j ωnT )∗ −ωo dω 2π (6.21) Equation (6.21) shows that x(nT ) is given by an inner product of x and e−j ωnT . The functions ˆ e−j ωnT , considered on the interval −ωo < ω < ωo and indexed by n ∈ Z, form a complete orthogonal ωo basis for L2 [−ωo , ωo ], and −ωo T |e−j ωnT |2 dω = 1. Therefore, x over the interval [−ωo , ωo ] has the 2π following Fourier series representation: x(ω ) = T ∞ e−j ωnT x(nT ) n=−∞ ω ∈ [−ωo , ωo ] (6.22) Plugging (6.22) into (6.19) yields ∞ x(t) = x(nT )T ωo ej ωt e−j ωnT −ωo n=−∞ dω . 2π (6.23) The integral in (6.23) can be simplified using ωo T −ωo ej ωτ dω τ = sinc 2π T . (6.24) with τ = t − nT to yield (6.20) as desired. The sampling theorem extends naturally to WSS random processes. A WSS random process X with spectral density SX is said to be a baseband random process with one-sided band limit ωo if SX (ω ) = 0 for | ω |≥ ωo . 143 Prop osition 6.4.1 Suppose X is a WSS baseband random process with one-sided band limit ωo and let T = π /ωo . Then for each t ∈ R Xt = ∞ XnT sinc n=−∞ t − nT T (6.25) m.s. If B is the process of samples defined by Bn = XnT , then the power spectral densities of B and X are related by ω 1 SX T T SB (ω ) = Pro of Fix t ∈ R. It must be shown that zero as N → ∞: εN = E Xt − for | ω |≤ π (6.26) defined by the following expectation converges to N N XnT sinc n=−N 2 t − nT t ∗ When the square is expanded, terms of the form E [Xa Xb ] arise, where a and b take on the values t or nT for some n. But ∗ E [Xa Xb ] = RX (a − b) = ∞ ej ωa (ej ωb )∗ SX (ω ) −∞ dω . 2π Therefore, εN can be expressed as an integration over ω rather than as an expectation: εN = ∞ N j ωt e −∞ − sinc j ω nT e n=−N t − nT T 2 SX (ω ) dω . 2π (6.27) For t fixed, the function (ej ωt : −ωo < ω < ωo ) has a Fourier series representation (use (6.24)) ej ωt = T = ∞ −∞ ∞ j ω nT e ωo ej ωnT ej ωt e−j ωnT −ωo sinc −∞ t − nT T dω 2π . so that the quantity inside the absolute value signs in (6.27) is the approximation error for the N th partial Fourier series sum for ej ωt . Since ej ωt is continuous in ω , a basic result in the theory of Fourier series yields that the Fourier approximation error is bounded by a single constant for all N and ω , and as N → ∞ the Fourier approximation error converges to 0 uniformly on sets of the form | ω |≤ ωo − ε. Thus εN → 0 as N → ∞ by the dominated convergence theorem. The representation (6.25) is proved. 
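Before the proof continues with the spectral relation (6.26), here is a small numerical illustration of the sampling formula (6.20). The test signal below is an arbitrary band-limited choice made only for this sketch, and the series is truncated, so the only error remaining should be truncation error.

```python
import numpy as np

# A sketch of the sampling representation (6.20). The test signal is an assumed,
# purely illustrative choice: a sum of two shifted sinc pulses, band-limited to
# f1 = 0.4 Hz, sampled at rate 2*fo = 1 Hz (fo = 0.5 Hz), so T = 1.
# np.sinc(u) = sin(pi u)/(pi u), the same convention as (6.17).
def x(t):
    f1 = 0.4
    return np.sinc(2 * f1 * t) + 0.5 * np.sinc(2 * f1 * (t - 0.7))

T = 1.0
n = np.arange(-2000, 2001)                 # truncate the series in (6.20) to |n| <= 2000
samples = x(n * T)

t = np.linspace(-5.0, 5.0, 501)            # includes many points between the samples
x_hat = (samples[None, :] * np.sinc((t[:, None] - n[None, :] * T) / T)).sum(axis=1)

print("max reconstruction error:", np.max(np.abs(x_hat - x(t))))
# only series-truncation error remains; roughly 1e-3 or smaller here
```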
Clearly B is a WSS discrete time random process with µB = µX and RB (n) = RX (nT ) = = 144 ∞ dω 2π −∞ ωo dω ej nT ω SX (ω ) , 2π −ωo ej nT ω SX (ω ) so, using a change of variable ν = T ω and the fact T = π RB (n) = ej nν −π π ωo yields 1 ν dν SX ( ) . T T 2π (6.28) But SB (ω ) is the unique function on [π , π ] such that RB (n) = π ej nω SB (ω ) −π dω 2π so (6.26) holds. The proof of Proposition 6.4.1 is complete.2 As a check on (6.26), we note that B (0) = X (0), so the processes have the same total power. Thus, it must be that π SB (ω ) −π dω 2π ∞ = −∞ SX (ω ) dω , 2π (6.29) which is indeed consistent with (6.26). Example 6.6 If µX = 0 and the spectral density SX of X is constant over the interval [−ωo , ωo ], then µB = 0 and SB (ω ) is constant over the interval [−π , π ]. Therefore RB (n) = CB (n) = 0 for n = 0, and the samples (B (n)) are mean zero, uncorrelated random variables. Theoretical Exercise What does (6.26) become if X is W S S and has a power spectral density, but X is not a baseband signal? 6.5 Narrowband random pro cesses Deterministic narrowband signals are considered first. Let ωc > ωo > 0 and let u and v be realvalued baseband signals, each with one-sided bandwidth less than ωo , as defined in the beginning of the previous section. Define a signal x by x(t) = u(t) cos(ωc t) − v (t) sin(ωc t). (6.30) Since cos(ωc t) = (ej ωc t + e−j ωc t )/2 and − sin(ωc t) = (j ej ωc t − j e−j ωc t )/2, (6.30) becomes x(ω ) = 1 {u(ω − ωc ) + u(ω + ωc ) + j v (ω − ωc ) − j v (ω + ωc )} 2 (6.31) j 1 Graphically, x is obtained by sliding 1 u to the right by ωc , 2 u to the left by ωc , 2 v to the right by 2 ωc , and −j v to the left by ωc , and then adding. Of course x is real-valued by its definition. The 2 reader is encouraged to verify from (6.31) that x(ω ) = x∗ (−ω ). Note that x is a narrowband signal, by which we mean x(ω ) = 0 unless ω is in the union of two intervals: the upper band, (ωc − ωo , ωc + ωo ), and the lower band, (−ωc − ωo , −ωc + ωo ). More compactly, x(ω ) = 0 if || ω | −ωc | ≥ ωo . A convenient alternative expression for x is obtained by defining a complex valued baseband signal z by z (t) = u(t) + j v (t). Then x(t) = Re(z (t)ej ωc t ). It is a good idea to keep in mind the case that ωc is much larger than ωo (written ωc ωo ). Then z varies slowly compared to the 145 complex sinusoid ej ωc t . In a small neighborhood of a fixed time t, x is approximately a sinusoid with frequency ωc , peak amplitude |z (t)|, and phase given by the argument of z (t). The signal z is called the complex envelope of x and |z (t)| is called the real envelope of x. So far we have shown that a real-valued narrowband signal x results by modulating a pair of real-valued baseband signals, or equivalently, by modulating a complex-valued baseband signal. Does every real-valued narrowband signal have such a representation? The answer is yes, as we now show. Let x be a real-valued narrowband signal with finite energy. One attempt to obtain a baseband signal from x is to consider e−j ωc t x(t). This has Fourier transform x(ω + ωc ), and the graph of this transform is obtained by sliding the graph of x(ω ) to the left by ωc . As desired, that shifts the portion of x in the upper band to the baseband interval (−ωo , ωo ). However, the portion of x in the lower band gets shifted to an interval centered about −2ωc , so that e−j ωc t x(t) is not a baseband signal. An elegant solution to this problem is to use the Hilbert transform of x, denoted by x. 
By ˇ definition, x(ω ) is the signal with Fourier transform −j sgn(ω )x(ω ), where ˇ sgn(ω ) = 1 ω>0 0 ω=0 −1 ω < 0 Therefore x can be viewed as the result of passing x through a linear, time-invariant system with ˇ transfer function −j sgn(ω ) as pictured in Figure 6.9. Since this transfer function satisfies H ∗ (ω ) = H (−ω ), the output signal x is again real-valued. In addition, |H (ω )| = 1 for all ω , except ω = 0, so ˇ x !j sgn(!) x Figure 6.9: The Hilbert transform as a linear, time-invariant system that the Fourier transforms of x and x have the same magnitude for all nonzero ω . In particular, ˇ x and x have equal energies. ˇ Consider the Fourier transform of x + j x. It is equal to 2x(ω ) in the upper band and it is zero ˇ elsewhere. Thus, z defined by z (t) = (x(t) + j x(t))e−j ωc t is a baseband complex valued signal. Note ˇ that x(t) = Re(x(t)) = Re(x(t) + j x(t)), or equivalently ˇ x(t) = Re z (t)ej ωc t (6.32) If we let u(t) = Re(z (t)) and v (t) = I m(z (t)), then u and v are real-valued baseband signals such that z (t) = u(t) + j v (t), and (6.32) becomes (6.30). In summary, any finite energy real-valued narrowband signal x can be represented as (6.30) or (6.32), where z (t) = u(t) + j v (t). The Fourier transform z can be expressed in terms of x by z (ω ) = 2x(ω + ωc ) |ω | ≤ ωo 0 else, and u is the Hermetian symmetric part of z and v is −j times the Hermetian antisymmetric part of z : 146 u(ω ) = 1 (z (ω ) + z ∗ (−ω )) 2 v (ω ) = −j (z (ω ) − z ∗ (−ω )) 2 In the other direction, x can be expressed in terms of u and v by (6.31). A similar development is considered next for WSS random processes. Let U and V be jointly WSS real-valued baseband random processes, and let X be defined by Xt = Ut cos(ωc t) − Vt sin(ωc t) (6.33) or equivalently, defining Zt by Zt = Ut + j Vt , Xt = Re Zt ej ωc t (6.34) In some sort of generalized sense, we expect that X is a narrowband process. However, such an X need not even be WSS. Let us find the conditions on U and V that make X WSS. First, in order that µX (t) not depend on t, it must be that µU = µV = 0. Using the notation ct = cos(ωc t), st = sin(ωc t), and τ = a − b, RX (a, b) = RU (τ )ca cb − RU V (τ )ca sb − RV U (τ )sa cb + RV (τ )sa sb . Using the trigonometric identities such as ca cb = (ca−b + ca+b )/2, this can be rewritten as RX (a, b) = + RU (τ ) + RV (τ ) 2 RU (τ ) − RV (τ ) 2 ca−b + ca+b − RU V (τ ) − RV U (τ ) 2 RU V (τ ) + RV U (τ ) 2 sa−b sa+b . Therefore, in order that RX (a, b) is a function of a − b, it must be that RU = RV and RU V = −RV U . Since in general RU V (τ ) = RV U (−τ ), the condition RU V = −RV U means that RU V is an odd function: RU V (τ ) = −RU V (−τ ). We summarize the results as a proposition. Prop osition 6.5.1 Suppose X is given by (6.33) or (6.34), where U and V are jointly WSS. Then X is WSS if and only if U and V are mean zero with RU = RV and RU V = −RV U . Equivalently, X is WSS if and only if Z = U + j V is mean zero and E [Za Zb ] = 0 for al l a, b. If X is WSS then RX (τ ) = RU (τ ) cos(ωc τ ) + RU V (τ ) sin(ωc τ ) 1 [SU (ω − ωc ) + SU (ω + ωc ) − j SU V (ω − ωc ) + j SU V (ω + ωc )] SX (ω ) = 2 ∗ and, with RZ (τ ) defined by RZ (a − b) = E [Za Zb ], RX (τ ) = 1 Re(RZ (τ )ej ωc τ ) 2 The functions SX , SU , and SV are nonnegative, even functions, and SU V is a purely imaginary odd function (i.e. SU V (ω ) = I m(SU V (ω )) = −SU V (−ω ).) 147 Let X by any WSS real-valued random process with a spectral density SX , and continue to let ωc > ωo > 0. 
Then X is defined to be a narrowband random process if SX (ω ) = 0 whenever | |ω | − ωc |≥ ωo . Equivalently, X is a narrowband random process if RX (t) is a narrowband function. We’ve seen how such a process can be obtained by modulating a pair of jointly WSS baseband random processes U and V . We show next that all narrowband random processes have such a representation. To proceed as in the case of deterministic signals, we first wish to define the Hilbert transform ˇ ˇ of X , denoted by X . A slight concern about defining X is that the function −j ω does not have finite energy. However, we can replace this function by the function given by H (ω ) = −j sgn(ω )I|ω|≤ωo +ωc , ˇ which has finite energy and it has a real-valued inverse transform h. Define X as the output when X is passed through the linear system with impulse response h. Since X and h are real valued, the ˇ random process X is also real valued. As in the deterministic case, define random processes Z , U , ˇ and V by Zt = (Xt + j Xt )e−j ωc t , Ut = Re(Zt ), and Vt = I m(Zt ). Prop osition 6.5.2 Let X be a narrowband WSS random process, with spectral density SX satisfying SX (ω ) = 0 unless ωc − ωo ≤ |ω | ≤ ωc + ωo , where ωo < ωc . Then µX = 0 and the fol lowing representations hold Xt = Re(Zt ej ωc t ) = Ut cos(ωc t) − Vt sin(ωc t) where Zt = Ut + j Vt , and U and V are jointly WSS real-valued random processes with mean zero and SU (ω ) = SV (ω ) = [SX (ω − ωc ) + SX (ω + ωc )] I|ω|≤ωo (6.35) and (6.36) ˇ RU (τ ) = RV (τ ) = RX (τ ) cos(ωc τ ) + RX (τ ) sin(ωc τ ) (6.37) ˇ RU V (τ ) = RX (τ ) sin(ωc τ ) − RX (τ ) cos(ωc τ ) Equivalently, SU V (ω ) = j [SX (ω + ωc ) − SX (ω − ωc )] I|ω|≤ωo (6.38) and . Pro of To show that µX = 0, consider passing X through a linear, time-invariant system with transfer function H (ω ) = 1 if ω is in either the upper band or lower band, and H (ω ) = 0 otherwise. ∞ Then µY = µX −∞ h(τ )dτ = µX H (0) = 0. Since H (ω ) = 1 for all ω such that SX (ω ) > 0, it follows that RX = RY = RX Y = RY X . Therefore E [|Xt − Yt |2 ] = 0 so that Xt has the same mean as Yt , namely zero, as claimed. By the definitions of the processes Z , U , and V , using the notation ct = cos(ωc t) and st = sin(ωc t), we have ˇ Ut = Xt ct + Xt st 148 ˇ Vt = −Xt st + Xt ct The remainder of the proof consists of computing RU , RV , and RU V as functions of two variables, since it is not yet clear that U and V are jointly WSS. ˇ ˇ By the fact X is WSS and the definition of X , the processes X and X are jointly WSS, and the various spectral densities are given by SX X = H ∗ SX = −H SX ˇ SX X = H S X ˇ SX = |H |2 SX = SX ˇ Therefore, ˇ RX X = RX ˇ ˇ RX X = −RX ˇ RX = R X ˇ Thus, for real numbers a and b, ˇ X (a)ca + X (a)sa ˇ X (b)cb + X (b)sb ˇ = RX (a − b)(ca cb + sa sb ) + RX (a − b)(sa cb − ca sb ) ˇ = RX (a − b)ca−b + RX (a − b)sa−b RU (a, b) = E Thus, RU (a, b) is a function of a − b, and RU (τ ) is given by the right side of (6.37). The proof that RV also satisfies (6.37), and the proof of (6.38) are similar. Finally, it is a simple matter to derive (6.35) and (6.36) from (6.37) and (6.38), respectively. 2 Equations (6.35) and (6.36) have simple graphical interpretations, as illustrated in Figure 6.10. Equation (6.35) means that SU and SV are each equal to the sum of the upper lobe of SX shifted SX SU=SV + = SUV + j = j j Figure 6.10: A narrowband power spectral density and associated baseband spectral densities to the left by ωc and the lower lobe of SX shifted to the right by ωc . 
Similarly, equation (6.35) means that SU V is equal to the sum of j times the upper lobe of SX shifted to the left by ωc and −j times the lower lobe of SX shifted to the right by ωc . Equivalently, SU and SV are each twice the symmetric part of the upper lobe of SX , and SU V is j times the antisymmetric part of the upper lobe of SX . Since RU V is an odd function of τ , if follows that RU V (0) = 0. Thus, for any fixed time t, Ut and Vt are uncorrelated. That does not imply that Us and Vt are uncorrelated for all s and t, for the cross correlation function RX Y is identically zero if and only if the upper lobe of SX is symmetric about ωc . 149 Example 6.7 Simulation of a narrowband random process Let ωo and ωc be postive numbers with 0 < ωo < ωc . Suppose SX is a nonnegative function which is even (i.e. SX (ω ) = SX (−ω ) for all ω ) with SX (ω ) = 0 if ||ω | − ωc | ≥ ωo . We discuss briefly the problem of writing a computer simulation to generate a real-valued WSS random process X with power spectral density SX . By Proposition 6.5.1, it suffices to simulate baseband random processes U and V with the power spectral densities specified by (6.35) and cross power spectral density specified by (6.36). For increased tractability, we impose an additional assumption on SX , namely that the upper lobe of SX is symmetric about ωc . This assumption is equivalent to the assumption that SU V vanishes, and therefore that the processes U and V are uncorrelated with each other. Thus, the processes U and V can be generated independently. In turn, the processes U and V can be simulated by first generating sequences of random 1 variables UnT and VnT for sampling frequency T = 2fo = ωo . A discrete time random process π with power spectral density SU can be generated by passing a discrete-time white noise sequence with unit variance through a discrete-time linear time-invariant system with real-valued impulse response function such that the transfer function H satisfies SU = |H |2 . For example, taking H (ω ) = SU (ω ) works, though it might not be the most well behaved linear system. (The problem of finding a transfer function H with additional properties such that SU = |H |2 is called the problem of spectral factorization, which we shall return to in the next chapter.) The samples VkT can be generated similarly. For a specific example, suppose that (using k H z for kilohertz, or thousands of Hertz) SX (2π f ) = 1 9, 000 k H z < |f | < 9, 020 k H z . 0 else (6.39) Notice that the parameters ωo and ωc are not uniquely determined by SX . They must simply be positive numbers with ωo < ωc such that (9, 000 k H z , 9, 020 k H z ) ⊂ (fc − fo , fc + fo ) However, only the choice fc = 9, 010 k H z makes the upper lobe of SX symmetric around fc . Therefore we take fc = 9, 010 k H z . We take the minmum allowable value for fo , namely fo = 10 k H z . For this choice, (6.35) yields SU (2π f ) = SV (2π f ) = 2 |f | < 10 k H z 0 else (6.40) and (6.36) yields SU V (2π f ) = 0 for all f . The processes U and V are continuous-time baseband random processes with one-sided bandwidth limit 10 k H z . To simulate these processes it is therefore enough to generate samples of them with sampling period T = 0.5 × 10−4 , and then use the Nyquist sampling representation described in Section 6.4. The processes of samples will, according to (6.26), have power spectral density equal to 4 × 104 over the interval [−π , π ]. Consequently, the samples can be taken to be uncorrelated with E [|Ak |2 ] = E [|Bk |2 ] = 4 × 104 . 
For example, these variables can be taken to be independent real Gaussian random variables. Putting the steps together, we find the following representation for X : Xt = cos(ωc t) ∞ n=−∞ An sinc t − nT T − sin(ωc t) 150 ∞ n=−∞ Bn sinc t − nT T 6.6 Complexification, Part I I A complex random variable Z is said to be circularly symmetric if Z has the same distribution as ej θ Z for every real value of θ. If Z has a pdf fZ , circular symmetry of Z means that fZ (z ) is invariant under rotations about zero, or, equivalently, fZ (z ) depends on z only through |z |. A collection of random variables (Zi : i ∈ I ) is said to be jointly circularly symmetric if for every real value of θ, the collection (Zi : i ∈ I ) has the same finite dimensional distributions as the collection (Zi ej θ : i ∈ I ). Note that if (Zi : i ∈ I ) is jointly circularly symmetric, and if (Yj : j ∈ J ) is another collection of random variables such that each Yj is a linear combination of Zi ’s (with no constants added in) then the collection (Yj : j ∈ J ) is also jointly circularly symmetric. Recall that a complex random vector Z , expressed in terms of real random vectors U and V as Z = U + j V , has mean E Z = E U + j E V and covariance matrix Cov(Z ) = E [(Z − E Z )(Z − E Z )∗ ]. The pseudo-covariance matrix of Z is defined by Covp (Z ) = E [(Z − E Z )(Z − E Z )T ], and it differs from the covariance of Z in that a transpose, rather than a Hermitian transpose, is involved. Note that Cov(Z ) and Covp (Z ) are readily expressed in terms of Cov(U ), Cov(V ), and Cov(U, V ) as: Cov(Z ) = Cov(U ) + Cov(V ) + j (Cov(V , U ) − Cov(U, V )) Covp (Z ) = Cov(U ) − Cov(V ) + j (Cov(V , U ) + Cov(U, V )) where Cov(V , U ) = Cov(U, V )T . Conversely, Cov(U ) = Re (Cov(Z ) + Covp (Z )) /2, Cov(V ) = Re (Cov(Z ) − Covp (Z )) /2, and Cov(U, V ) = I m (−Cov(Z ) + Covp (Z )) /2. The vector Z is defined to be Gaussian if the random vectors U and V are jointly Gaussian. Suppose that Z is a complex Gaussian random vector. Then its distribution is fully determined by its mean and the matrices Cov(U ), Cov(V ), and Cov(U, V ), or equivalently by its mean and the matrices Cov(Z ) and Covp (Z ). Therefore, for a real value of θ, Z and ej θ Z have the same distribution if and only if they have the same mean, covariance matrix, and pseudo-covariance matrix. Since E [ej θ Z ] = ej θ E Z , Cov(ej θ Z ) = Cov(Z ), and Covp (ej θ Z ) = ej 2θ Covp (Z ), Z and ej θ Z have the same distribution if and only if (ej θ − 1)E Z = 0 and (ej 2θ − 1)Covp (Z ) = 0. Hence, if θ is not a multiple of π , Z and ej θ Z have the same distribution if and only if E Z = 0 and Covp (Z ) = 0. Consequently, a Gaussian random vector Z is circularly symmetric if and only if its mean vector and pseudo-covariance matrix are zero. The joint density function of a circularly symmetric complex random vector Z with n complex dimensions and covariance matrix K , with det K = 0, has the particularly elegant form: fZ (z ) = exp(−z ∗ K −1 z ) . π n det(K ) (6.41) Equation (6.41) can be derived in the same way the density for Gaussian vectors with real components is derived. Namely, (6.41) is easy to verify if K is diagonal. If K is not diagonal, the Hermetian symmetric positive definite matrix K can be expressed as K = U ΛU ∗ , where U is a unitary matrix and Λ is a diagonal matrix with strictly positive diagonal entries. 
The random vector Y defined by Y = U ∗ Z is Gaussian and circularly symmetric with covariance matrix Λ, and 151 ∗ −1 Λ since det(Λ) = det(K ), it has pdf fY (y ) = exp(−y t(K ) y) . Since | det(U )| = 1, fZ (z ) = fY (U ∗ x), π n de which yields (6.41). Let us switch now to random processes. Let Z be a complex-valued random process and let U and V be the real-valued random processes such that Zt = Ut + j Vt . Recall that Z is Gaussian if U and V are jointly Gaussian, and the covariance function of Z is defined by CZ (s, t) = Cov(Zs , Zt ). p The pseudo-covariance function of Z is defined by CZ (s, t) = Covp (Zs , Zt ). As for covariance p matrices of vectors, both CZ and CZ are needed to determine CU , CV , and CU V . Following the vast ma jority of the literature, we define Z to be WSS if µZ (t) is constant and if CZ (s, t) (or RZ (s, t)) is a function of s − t alone. Some authors use a stronger definition of WSS, by defining Z to be WSS if either of the following two equivalent conditions is satisfied: p • µZ (t) is constant, and both CZ (s, t) and CZ (s, t) are functions of s − t • U and V are jointly WSS If Z is Gaussian then it is stationary if and only if it satisfies the stronger definition of WSS. A complex random process Z = (Zt : t ∈ T) is called circularly symmetric if the random variables of the process, (Zt : t ∈ T), are jointly circularly symmetric. If Z is a complex Gaussian random process, it is circularly symmetric if and only if it has mean zero and Covp (s, t) = 0 for Z all s, t. Proposition 6.5.2 shows that the baseband equivalent process Z for a Gaussian real-valued narrowband WSS random process X is circularly symmetric. Nearly all complex valued random processes in applications arise in this fashion. For circularly symmetric complex random processes, the definition of WSS we adopted, and the stronger definition mentioned in the previous paragraph, are equivalent. A circularly symmetric complex Gaussian random process is stationary if and only if it is WSS. The interested reader can find more related to the material in this section in Neeser and Massey, “Proper Complex Random Processes with Applications to Information Theory,” IEEE Transactions on Information Theory, vol. 39, no. 4, July 1993. 152 6.7 Problems 6.1. On filtering a WSS random pro cess Suppose Y is the output of a linear time-invariant system with WSS input X , impulse response function h, and transfer function H . Indicate whether the following statements are true or false. Justify your answers. (a) If |H (ω )| ≤ 1 for all ω then the power of Y is less than or equal to the power of X . (b) If X is periodic (in addition to being WSS) then Y is WSS and periodic. (c) If X has mean zero and strictly positive total power, and if ||h||2 > 0, then the output power is strictly positive. 6.2. On the cross sp ectral density Suppose X and Y are jointly WSS such that the power spectral densities SX , SY , and SX Y are continuous. Show that for each ω , |SX Y (ω )|2 ≤ SX (ω )SY (ω ). Hint: Fix ωo , let > 0, and let J denote the interval of length centered at ωo . Consider passing both X and Y through a linear time-invariant system with transfer function H (ω ) = IJ (ω ). Apply the Schwarz inequality to the output processes sampled at a fixed time, and let → 0. 6.3. Mo dulating and filtering a stationary pro cess 2 Let X = (Xt : t ∈ Z ) be a discrete-time mean-zero stationary random process with power E [X0 ] = 1. 
Let Y be the stationary discrete time random process obtained from X by modulation as follows: Yt = Xt cos(80π t + Θ), where Θ is independent of X and is uniformly distributed over [0, 2π ]. Let Z be the stationary discrete time random process obtained from Y by the linear equations: Zt+1 = (1 − a)Zt + aYt+1 for all t, where a is a constant with 0 < a < 1. (a) Why is the random process Y stationary? (b) Express the autocorrelation function of Y , RY (τ ) = E [Yτ Y0 ], in terms of the autocorrelation function of X . Similarly, express the power spectral density of Y , SY (ω ), in terms of the power spectral density of X , SX (ω ). (c) Find and sketch the transfer function H (ω ) for the linear system describing the mapping from Y to Z . (d) Can the power of Z be arbitrarily large (depending on a)? Explain your answer. (e) Describe an input X satisfying the assumptions above so that the power of Z is at least 0.5, for any value of a with 0 < a < 1. 6.4. Filtering a Gauss Markov pro cess Let X = (Xt : −∞ < t < +∞) be a stationary Gauss Markov process with mean zero and autocorrelation function RX (τ ) = exp(−|τ |). Define a random process Y = (Yt : t ∈ R) by the differential ˙ equation Yt = Xt − Yt . (a) Find the cross correlation function RX Y . Are X and Y jointly stationary? (b) Find E [Y5 |X5 = 3]. What is the approximate numerical value? (c) Is Y a Gaussian random process? Justify your answer. (d) Is Y a Markov process? Justify your answer. 153 6.5. Slight smo othing Suppose Y is the output of the linear time-invariant system with input X and impulse response 1 function h, such that X is WSS with RX (τ ) = exp(−|τ |), and h(τ ) = a I{|τ |≤ a } for a > 0. If a 2 is small, then h approximates the delta function δ (τ ), and consequently Yt ≈ Xt . This problem explores the accuracy of the approximation. (a) Find RY X (0), and use the power series expansion of eu to show that RY X (0) = 1 − a + o(a) as 4 a → 0. Here, o(a) denotes any term such that o(a)/a → 0 as a → 0. (b) Find RY (0), and use the power series expansion of eu to show that RY (0) = 1 − a + o(a) as 3 a → 0. (c) Show that E [|Xt − Yt |2 ] = a + o(a) as a → 0. 6 6.6. A stationary two-state Markov pro cess Let X = (Xk : k ∈ Z) be a stationary Markov process with state space S = {1, −1} and one-step transition probability matrix 1−p p P= , p 1−p where 0 < p < 1. Find the mean, correlation function and power spectral density function of X . Hint: For nonnegative integers k : Pk = 1 2 1 2 1 2 1 2 1 2 + (1 − 2p)k −1 2 −1 2 1 2 . 6.7. A stationary two-state Markov pro cess in continuous time Let X = (Xt : t ∈ R) be a stationary Markov process with state space S = {1, −1} and Q matrix Q= −α α α −α , where α > 0. Find the mean, correlation function and power spectral density function of X . (Hint: Recall from the example in the chapter on Markov processes that for s < t, the matrix of transition probabilities pij (s, t) is given by H (τ ), where τ = t − s and H (τ ) = 1+e−2ατ 2 1−e−2ατ 2 1−e−2ατ 2 1+e−2ατ 2 . 6.8. A linear estimation problem Suppose X and Y are possibly complex valued jointly WSS processes with known autocorrelation functions, cross-correlation function, and associated spectral densities. Suppose Y is passed through a linear time-invariant system with impulse response function h and transfer function H , and let Z be the output. The mean square error of estimating Xt by Zt is E [|Xt − Zt |2 ]. (a) Express the mean square error in terms of RX , RY , RX Y and h. 
(b) Express the mean square error in terms of SX , SY , SX Y and H . (c) Using your answer to part (b), find the choice of H that minimizes the mean square error. (Hint: Try working out the problem first assuming the processes are real valued. For the complex case, 2 note that for σ 2 > 0 and complex numbers z and zo , σ 2 |z |2 − 2Re(z ∗ zo ) is equal to |σ z − zo |2 − |zo2| , σ σ z which is minimized with respect to z by z = σo .) 2 154 6.9. Linear time invariant, uncorrelated scattering channel A signal transmitted through a scattering environment can propagate over many different paths on its way to a receiver. The channel gains along distinct paths are often modeled as uncorrelated. The paths may differ in length, causing a delay spread. Let h = (hu : u ∈ Z) consist of uncorrelated, possibly complex valued random variables with mean zero and E [|hu |2 ] = gu . Assume that G = u gu < ∞. The variable hu is the random complex gain for delay u, and g = (gu : u ∈ Z) is the energy gain delay mass function with total gain G. Given a deterministic signal x, the channel output is the random signal Y defined by Yi = ∞ −∞ hu xi−u . u= (a) Determine the mean and autocorrelation function for Y in terms of x and g . (b) Express the average total energy of Y : E [ i Yi2 ], in terms of x and g . (c) Suppose instead that the input is a WSS random process X with autocorrelation function RX . The input X is assumed to be independent of the channel h. Express the mean and autocorrelation function of the output Y in terms of RX and g . Is Y WSS? (d) Since the impulse response function h is random, so is its Fourier transform, H = (H (ω ) : −π ≤ ω ≤ π ). Express the autocorrelation function of the random process H in terms of g . 6.10. The accuracy of approximate differentiation Let X be a WSS baseband random process with power spectral density SX , and let ωo be the one-sided band limit of X . The process X is m.s. differentiable and X can be viewed as the output of a time-invariant linear system with transfer function H (ω ) = j ω . (a) What is the power spectral density of X ? − (b) Let Yt = Xt+a2aXt−a , for some a > 0. We can also view Y = (Yt : t ∈ R) as the output of a time-invariant linear system, with input X . Find the impulse response function k and transfer function K of the linear system. Show that K (ω ) → j ω as a → 0. (c) Let Dt = Xt − Yt . Find the power spectral density of D. (d) Find a value of a, depending only on ωo , so that E [|Dt |2 ] ≤ (0.01)E [|Xt |]2 . In other words, for such a, the m.s. error of approximating Xt by Yt is less than one percent of E [|Xt |2 ]. You can use the 2 ( fact that 0 ≤ 1 − sinuu) ≤ u for all real u. (Hint: Find a so that SD (ω ) ≤ (0.01)SX (ω ) for |ω | ≤ ωo .) 6 6.11. Sampling a cub ed Gaussian pro cess Let X = (Xt : t ∈ R) be a baseband mean zero stationary real Gaussian random process with 1 3 one-sided band limit fo Hz. Thus, Xt = ∞ −∞ XnT sinc t−nT where T = 2fo . Let Yt = Xt for n= T each t. (a) Is Y stationary? Express RY in terms of RX , and SY in terms of SX and/or RX . (Hint: If A, B are jointly Gaussian and mean zero, Cov(A3 , B 3 ) = 6Cov(A, B )3 + 9E [A2 ]E [B 2 ]Cov(A, B ).) 1 ? (b) At what rate T should Y be sampled in order that Yt = ∞ −∞ YnT sinc t−nT n= T (c) Can Y be recovered with fewer samples than in part (b)? Explain. 6.12. An approximation of white noise White noise in continuous time can be approximated by a piecewise constant process as follows. 
Let T be a small positive constant, let AT be a positive scaling constant depending on T , and let (Bk : k ∈ Z) be a discrete-time white noise process with RB (k ) = σ 2 I{k=0} . Define (Nt : t ∈ R) by Nt = Bk for t ∈ [k T , (k + 1)T ). 1 (a) Sketch a typical sample path of N and express E [| 0 Ns ds|2 ] in terms of AT , T and σ 2 . For 1 simplicity assume that T = K for some large integer K . 155 (b) What choice of AT makes the expectation found in part (a) equal to σ 2 ? This choice makes N a good approximation to a continuous-time white noise process with autocorrelation function σ 2 δ (τ ). (c) What happens to the expectation found in part (a) as T → 0 if AT = 1 for all T ? 6.13. Simulating a baseband random pro cess Suppose a real-valued Gaussian baseband process X = (Xt : t ∈ R) with mean zero and power spectral density 1 if |f | ≤ 0.5 SX (2π f ) = 0 else is to be simulated over the time interval [−500, 500] through use of the sampling theorem with sampling time T = 1. (a) What is the joint distribution of the samples, Xn : n ∈ Z ? (b) Of course a computer cannot generate infinitely many random variables in a finite amount of time. Therefore, consider approximating X by X (N ) defined by (N ) Xt N = n=−N Xn sinc(t − n) (N ) Find a condition on N to guarantee that E [(Xt − Xt )2 ] ≤ 0.01 for t ∈ [−500, 500]. (Hint: Use 1 |sinc(τ )| ≤ π|τ | and bound the series by an integral. Your choice of N should not depend on t because the same N should work for all t in the interval [−500, 500] ). 6.14. Filtering to maximize signal to noise ratio Let X and N be continuous time, mean zero WSS random processes. Suppose that X has power spectral density SX (ω ) = |ω |I{|ω|≤ωo } , and that N has power spectral density SN (ω ) = σ 2 for all ω . Suppose also that X and N are uncorrelated with each other. Think of X as a signal, and N as noise. Suppose the signal plus noise X + N is passed through a linear time-invariant filter with transfer function H , which you are to specify. Let X denote the output signal and N denote the output noise. What choice of H , sub ject the constraints (i) |H (ω )| ≤ 1 for all ω , and (ii) (power of X ) ≥ (power of X )/2, minimizes the power of N ? 6.15. Finding the envelop e of a deterministic signal (a) Find the complex envelope z (t) and real envelope |z (t)| of x(t) = cos(2π (1000)t)+cos(2π (1001)t), using the carrier frequency fc = 1000.5H z . Simplify your answer as much as possible. (b) Repeat part (a), using fc = 995H z . (Hint: The real envelope should be the same as found in part (a).) (c) Explain why, in general, the real envelope of a narrowband signal does not depend on which frequency fc is used to represent the signal (as long as fc is chosen so that the upper band of the signal is contained in an interval [fc − a, fc + a] with a << fc .) 6.16. Sampling a signal or pro cess that is not band limited (a) Fix T > 0 and let ωo = π /T . Given a finite energy signal x, let xo be the band-limited signal with Fourier transform xo (ω ) = I{|ω|≤ωo } ∞ −∞ x(ω + 2nωo ). Show that x(nT ) = xo (nT ) for all n= integers n. (b) Explain why xo (t) = ∞ −∞ x(nT )sinc t−nT . n= T o (c) Let X be a mean zero WSS random process, and let RX be the autocorrelation function 156 o o for power spectral density SX (ω ) defined by SX (ω ) = I{|ω|≤ωo } ∞ −∞ SX (ω + 2nωo ). Show n= o that RX (nT ) = RX (nT ) for all integers n. (d) Explain why the random process Y defined by ∞ t−nT o o is WSS with autocorrelation function RX . 
(e) Find SX in case Yt = n=−∞ XnT sinc T SX (ω ) = exp(−α|ω |) for ω ∈ R. 6.17. A narrowband Gaussian pro cess Let X be a real-valued stationary Gaussian process with mean zero and RX (τ ) = cos(2π (30τ ))(sinc(6τ ))2 . (a) Find and carefully sketch the power spectral density of X . (b) Sketch a sample path of X . (c) The process X can be represented by Xt = Re(Zt e2πj 30t ), where Zt = Ut + j Vt for jointly stationary narrowband real-valued random processes U and V . Find the spectral densities SU , SV , and SU V . (d) Find P [|Z33 | > 5]. Note that |Zt | is the real envelope process of X . 6.18. Another narrowband Gaussian pro cess Suppose a real-valued Gaussian random process R = (Rt : t ∈ R) with mean 2 and power spectral 4 density SR (2π f ) = e−|f |/10 is fed through a linear time-invariant system with transfer function H (2π f ) = 0.1 5000 ≤ |f | ≤ 6000 0 else (a) Find the mean and power spectral density of the output process X = (Xt : t ∈ R). (b) Find P [X25 > 6]. (c) The random process X is a narrowband random process. Find the power spectral densities SU , SV and the cross spectral density SU V of jointly WSS baseband random processes U and V so that Xt = Ut cos(2π fc t) − Vt sin(2π fc t), using fc = 5500. (d) Repeat part (c) with fc = 5000. 6.19. Another narrowband Gaussian pro cess (version 2) Suppose a real-valued Gaussian white noise process N (we assume white noise has mean zero) with power spectral density SN (2π f ) ≡ No for f ∈ R is fed through a linear time-invariant system with 2 transfer function H specified as follows, where f represents the frequency in gigahertz (GHz) and a gigahertz is 109 cycles per second. 1 19.10 ≤ |f | ≤ 19.11 19.12−|f | H (2π f ) = 19.11 ≤ |f | ≤ 19.12 0.01 0 else (a) Find the mean and power spectral density of the output process X = (Xt : t ∈ R). (b) Express P [X25 > 2] in terms of No and the standard normal complementary CDF function Q. (c) The random process X is a narrowband random process. Find and sketch the power spectral densities SU , SV and the cross spectral density SU V of jointly WSS baseband random processes U and V so that Xt = Ut cos(2π fc t) − Vt sin(2π fc t), using fc = 19.11 GHz. (d) The complex envelope process is given by Z = U + j V and the real envelope process is given by |Z |. Specify the distributions of Zt and |Zt | for t fixed. 157 6.20. Declaring the center frequency for a given random pro cess Let a > 0 and let g be a nonnegative function on R which is zero outside of the interval [a, 2a]. Suppose X is a narrowband WSS random process with power spectral density function SX (ω ) = g (|ω |), or equivalently, SX (ω ) = g (ω ) + g (−ω ). The process X can thus be viewed as a narrowband signal for carrier frequency ωc , for any choice of ωc in the interval [a, 2a]. Let U and V be the baseband random processes in the usual complex envelope representation: Xt = Re((Ut + j Vt )ej ωc t ). (a) Express SU and SU V in terms of g and ωc . ∞ (b) Describe which choice of ωc minimizes −∞ |SU V (ω )|2 dω . (Note: If g is symmetric arround dπ some frequency ν , then ωc = ν . But what is the answer otherwise?) 6.21*. Cyclostationary random pro cesses A random process X = (Xt : t ∈ R) is said to be cyclostationary with period T , if whenever s is an integer multiple of T , X has the same finite dimensional distributions as (Xt+s : t ∈ R). This property is weaker than stationarity, because stationarity requires equality of finite dimensional distributions for all real values of s. 
(a) What properties of the mean function µX and autocorrelation function RX does any second order cyclostationary process possess? A process with these properties is called a wide sense cyclostationary process. (b) Suppose X is cyclostationary and that U is a random variable independent of X that is uniformly distributed on the interval [0, T ]. Let Y = (Yt : t ∈ R) be the random process defined by Yt = Xt+U . Argue that Y is stationary, and express the mean and autocorrelation function of Y in terms of the mean function and autocorrelation function of X . Although X is not necessarily WSS, it is reasonable to define the power spectral density of X to equal the power spectral density of Y . (c) Suppose B is a stationary discrete-time random process and that g is a deterministic function. Let X be defined by Xt = ∞ −∞ g (t − nT )Bn . Show that X is a cyclostationary random process. Find the mean function and autocorrelation function of X in terms g , T , and the mean and autocorrelation function of B . If your answer is complicated, identify special cases which make the answer nice. (d) Suppose Y is defined as in part (b) for the specific X defined in part (c). Express the mean µY , autocorrelation function RY , and power spectral density SY in terms of g , T , µB , and SB . 6.22*. Zero crossing rate of a stationary Gaussian pro cess Consider a zero-mean stationary Gaussian random process X with SX (2π f ) = |f | − 50 for 50 ≤ |f | ≤ 60, and SX (2π f ) = 0 otherwise. Assume the process has continuous sample paths (it can be shown that such a version exists.) A zero crossing from above is said to occur at time t if X (t) = 0 and X (s) > 0 for all s in an interval of the form [t − , t) for some > 0. Determine the mean rate of zero crossings from above for X . If you can find an analytical solution, great. Alternatively, you can estimate the rate (aim for three significant digits) by Monte Carlo simulation of the random process. 158 Chapter 7 Wiener filtering 7.1 Return of the orthogonality principle Consider the problem of estimating a random process X at some fixed time t given observation of a random process Y over an interval [a, b]. Suppose both X and Y are mean zero second order random processes and that the minimum mean square error is to be minimized. Let Xt denote the best linear estimator of Xt based on the observations (Ys : a ≤ s ≤ b). In other words, define Vo = {c1 Ys1 + · · · + cn Ysn : for some constants n, a ≤ s1 , . . . , sn ≤ b, and c1 , . . . , cn } and let V be the m.s. closure of V , which includes Vo and any random variable that is the m.s. limit of a sequence of random variables in Vo . Then Xt is the random variable in V that minimizes the mean square error, E [|Xt − Xt |2 ]. By the orthogonality principle, the estimator Xt exists and it is unique in the sense that any two solutions to the estimation problem are equal with probability one. Perhaps the most useful part of the orthogonality principle is that a random variable W is equal to Xt if and only if (i) W ∈ V and (ii) (Xt − W ) ⊥ Z for all Z ∈ V . Equivalently, W is equal to Xt if and only if (i) W ∈ V and (ii) (Xt − W ) ⊥ Yu for all u ∈ [a, b]. Furthermore, the minimum mean square error (i.e. the error for the optimal estimator Xt ) is given by E [|Xt |2 ] − E [|Xt |2 ]. b Note that m.s. integrals of the form a h(t, s)Ys ds are in V , because m.s. integrals are m.s. limits of finite linear combinations of the random variables of Y . Typically the set V is larger than the set of all m.s. integrals of Y . 
For example, if u is a fixed time in [a, b] then Yu ∈ V . In addition, if Y is m.s. differentiable, then Yu is also in V . Typically neither Yu nor Yu can be expressed as a m.s. integral of (Ys : s ∈ R). However, Yu can be obtained as an integral of the process Y multiplied by a delta function, though the integration has to be taken in a generalized sense. b The integral a h(t, s)Ys ds is the MMSE estimator if and only if b Xt − a or equivalently E [(Xt − or equivalently h(t, s)Ys ds ⊥ Yu b a for u ∈ [a, b] ∗ h(t, s)Ys ds)Yu ] = 0 for u ∈ [a, b] b RX Y (t, u) = a h(t, s)RY (s, u)ds for u ∈ [a, b]. 159 Suppose now that the observation interval is the whole real line R and suppose that X and Y are jointly WSS. Then for t and v fixed, the problem of estimating Xt from (Ys : s ∈ R) is the same as the problem of estimating Xt+v from (Ys+v : s ∈ R). Therefore, if h(t, s) for t fixed is the optimal function to use for estimating Xt from (Ys : s ∈ R), then it is also the optimal function to use for estimating Xt+v from (Ys+v : s ∈ R). Therefore, h(t, s) = h(t + v , s + v ), so that h(t, s) is a function of t − s alone, meaning that the optimal impulse response function h corresponds to a time∞ ˆ invariant system. Thus, we seek to find an optimal estimator of the form Xt = −∞ h(t − s)Ys ds. The optimality condition becomes Xt − ∞ −∞ h(t − s)Ys ds ⊥ Yu for u ∈ R which is equivalent to the condition RX Y (t − u) = ∞ −∞ h(t − s)RY (s − u)ds for u ∈ R or RX Y = h ∗ RY . In the frequency domain the optimality condition becomes SX Y (ω ) = H (ω )SY (ω ) for all ω . Consequently, the optimal filter H is given by H (ω ) = SX Y (ω ) SY (ω ) and the corresponding minimum mean square error is given by E [|Xt − Xt |2 ] = E [|Xt |2 ] − E [|Xt |2 ] = ∞ −∞ SX (ω ) − |SX Y (ω )|2 SY (ω ) dω 2π Example 7.1 Consider estimating a random process from observation of the random process plus noise, as shown in Figure 7.1. Assume that X and N are jointly WSS with mean zero. Suppose X X Y + h ^ X N Figure 7.1: An estimator of a signal from signal plus noise, as the output of a linear filter. and N have known autocorrelation functions and suppose that RX N ≡ 0, so the variables of the process X are uncorrelated with the variables of the process N . The observation process is given by Y = X + N . Then SX Y = SX and SY = SX + SN , so the optimal filter is given by H (ω ) = SX Y (ω ) SX (ω ) = SY (ω ) SX (ω ) + SN (ω ) The associated minimum mean square error is given by E [|Xt − Xt |2 ] = = ∞ −∞ ∞ −∞ SX (ω )2 SX (ω ) + SN (ω ) SX (ω )SN (ω ) dω SX (ω ) + SN (ω ) 2π SX (ω ) − 160 dω 2π Example 7.2 This example is a continuation of the previous example, for a particular choice of power spectral densities. Suppose that the signal process X is WSS with mean zero and power 1 spectral density SX (ω ) = 1+ω2 , suppose the noise process N is WSS with mean zero and power −|τ | 4 spectral density 4+ω2 , and suppose SX N ≡ 0. Equivalently, RX (τ ) = e 2 , RN (τ ) = e−2|τ | and RX N ≡ 0. We seek the optimal linear estimator of Xt given (Ys : s ∈ R), where Y = X + N . Seeking an estimator of the form Xt = ∞ −∞ h(t − s)Ys ds we find from the previous example that the transform H of h is given by H (ω ) = 1 1+ω 2 SX (ω ) = SX (ω ) + SN (ω ) + 1 1+ω 2 4 4+ω 2 = 4 + ω2 8 + 5ω 2 We will find h by finding the inverse transform of H . First, note that 8 12 12 + ω2 1 4 + ω2 5 5 =5 + =+ 8 + 5ω 2 8 + 5ω 2 8 + 5ω 2 5 8 + 5ω 2 We know that 1 δ (t) ↔ 1 . 
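The formulas of Example 7.1 are straightforward to evaluate numerically. The sketch below uses assumed spectral densities chosen only for illustration: S_X(ω) = 2/(1 + ω^2), that is R_X(τ) = e^{−|τ|}, and white noise with S_N(ω) = 1. For these particular choices the minimum mean square error integral also has the closed form 1/√3, which the numerical integration should reproduce.

```python
import numpy as np
from scipy.integrate import quad

# A sketch of the formulas in Example 7.1, with assumed spectral densities chosen
# only for illustration:
#   S_X(omega) = 2/(1 + omega^2)   (i.e. R_X(tau) = e^{-|tau|}),
#   S_N(omega) = 1                 (white noise), with R_XN = 0.
S_X = lambda w: 2.0 / (1.0 + w ** 2)
S_N = lambda w: 1.0

H = lambda w: S_X(w) / (S_X(w) + S_N(w))          # optimal noncausal filter

# MMSE = integral of S_X S_N / (S_X + S_N) d omega / (2 pi)
mmse, _ = quad(lambda w: S_X(w) * S_N(w) / (S_X(w) + S_N(w)) / (2.0 * np.pi),
               -np.inf, np.inf)

print("H(0)           =", H(0.0))                 # 2/3 for these densities
print("numerical MMSE =", mmse)
print("closed form    =", 1.0 / np.sqrt(3.0))     # 1/sqrt(3), for these densities
```

The same computation, run instead with the densities used in Example 7.2 below, reproduces the value 1/√10 obtained there.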
Also, for any α > 0, 5 5 e−α|t| ↔ 2α , ω 2 + α2 (7.1) so 1 = 8 + 5ω 2 1 5 8 5 + ω2 = 1 5·2 5 8 2 8 (5 8 5 + ω2) ↔ 1 √ 4 10 − e q 8 |t| 5 Therefore the optimal filter is given in the time domain by 1 h(t) = δ (t) + 5 3 √ 5 10 − e q 8 |t| 5 The associated minimum mean square error is given by (one way to do the integration is to use the ∞ fact that if k ↔ K then −∞ K (ω ) dω = k (0)): 2π E [|Xt − Xt |2 ] = ∞ −∞ SX (ω )SN (ω ) dω = SX (ω ) + SN (ω ) 2π ∞ −∞ 4 dω =4 8 + 5 ω 2 2π 1 √ 4 10 1 =√ 10 In an example later in this chapter we will return to the same random processes, but seek the best linear estimator of Xt given (Ys : s ≤ t). 7.2 The causal Wiener filtering problem A linear system is causal if the value of the output at any given time does not depend on the future of the input. That is to say that the impulse response function satisfies h(t, s) = 0 for s > t. In the case of a linear, time-invariant system, causality means that the impulse response function 161 satisfies h(τ ) = 0 for τ < 0. Suppose X and Y are mean zero and jointly WSS. In this section we will consider estimates of X given Y obtained by passing Y through a causal linear time-invariant system. For convenience in applications, a fixed parameter T is introduced. Let Xt+T |t be the minimum mean square error linear estimate of Xt+T given (Ys : s ≤ t). Note that if Y is the same process as X and T > 0, then we are addressing the problem of predicting Xt+T from (Xs : s ≤ t). ∞ An estimator of the form −∞ h(t − s)Ys ds is sought such that h corresponds to a causal system. Once again, the orthogonality principle implies that the estimator is optimal if and only if it satisfies Xt+T − ∞ −∞ h(t − s)Ys ds ⊥ Yu for u ≤ t which is equivalent to the condition RX Y (t + T − u) = ∞ −∞ h(t − s)RY (s − u)ds for u ≤ t or RX Y (t + T − u) = h ∗ RY (t − u). Setting τ = t − u and combining this optimality condition with the constraint that h is a causal function, the problem is to find an impulse response function h satisfying: RX Y (τ + T ) = h ∗ RY (τ ) for τ ≥ 0 h(v ) = 0 for v < 0 (7.2) (7.3) Equations (7.2) and (7.3) are called the Wiener-Hopf equations. We shall show how to solve them in the case the power spectral densities are rational functions by using the method of spectral factorization. The next section describes some of the tools needed for the solution. 7.3 Causal functions and sp ectral factorization A function h on R is said to be causal if h(τ ) = 0 for τ < 0, and it is said to be anticausal if h(τ ) = 0 for τ > 0. Any function h on R can be expressed as the sum of a causal function and an anticausal function as follows. Simply let u(t) = I{t≥0} and notice that h(t) is the sum of the causal function u(t)h(t) and the anticausal function (1 − u(t))h(t). More compactly, we have the representation h = uh + (1 − u)h. A transfer function H is said to be of positive type if the corresponding impulse response function h is causal, and H is said to be of negative type if the corresponding impulse response function is anticausal. Any transfer function can be written as the sum of a positive type transfer function and a negative type transfer function. Indeed, suppose H is the Fourier transform of an impulse response function h. Define [H ]+ to be the Fourier transform of uh and [H ]− to be the Fourier transform of (1 − u)h. Then [H ]+ is called the positive part of H and [H ]− is called the negative part of H . 
The following properties hold:

• $H = [H]_+ + [H]_-$ (because $h = uh + (1-u)h$)
• $[H]_+ = H$ if and only if $H$ is positive type
• $[H]_- = 0$ if and only if $H$ is positive type
• $[[H]_+]_- = 0$ for any $H$
• $[[H]_+]_+ = [H]_+$ and $[[H]_-]_- = [H]_-$
• $[H + G]_+ = [H]_+ + [G]_+$ and $[H + G]_- = [H]_- + [G]_-$

Note that $uh$ is the causal function that is closest to $h$ in the $L^2$ norm. That is, $uh$ is the projection of $h$ onto the space of causal functions. Indeed, if $k$ is any causal function, then
$$\int_{-\infty}^{\infty} |h(t) - k(t)|^2\,dt = \int_{-\infty}^{0} |h(t)|^2\,dt + \int_{0}^{\infty} |h(t) - k(t)|^2\,dt \;\ge\; \int_{-\infty}^{0} |h(t)|^2\,dt \qquad (7.4)$$
and equality holds in (7.4) if and only if $k = uh$ (except possibly on a set of measure zero). By Parseval's relation, it follows that $[H]_+$ is the positive type function that is closest to $H$ in the $L^2$ norm. Equivalently, $[H]_+$ is the projection of $H$ onto the space of positive type functions. Similarly, $[H]_-$ is the projection of $H$ onto the space of negative type functions.

Up to this point in these notes, Fourier transforms have been defined for real values of $\omega$ only. However, for the purposes of factorization to be covered later, it is useful to consider the analytic continuation of the Fourier transforms to larger sets in $\mathbb{C}$. We use the same notation $H(\omega)$ for the function $H$ defined for real values of $\omega$ only, and for its continuation defined for complex $\omega$. The following examples illustrate the use of the projections $[\;]_+$ and $[\;]_-$, and consideration of transforms for complex $\omega$.

Example 7.3 Let $g(t) = e^{-\alpha|t|}$ for a constant $\alpha > 0$. The functions $g$, $ug$ and $(1-u)g$ are pictured in Figure 7.2.

[Figure 7.2: Decomposition of a two-sided exponential function, showing $g(t)$, $u(t)g(t)$, and $(1-u(t))g(t)$.]

The corresponding transforms are given by:
$$[G]_+(\omega) = \int_0^{\infty} e^{-\alpha t} e^{-j\omega t}\,dt = \frac{1}{j\omega + \alpha}$$
$$[G]_-(\omega) = \int_{-\infty}^{0} e^{\alpha t} e^{-j\omega t}\,dt = \frac{1}{-j\omega + \alpha}$$
$$G(\omega) = [G]_+(\omega) + [G]_-(\omega) = \frac{2\alpha}{\omega^2 + \alpha^2}$$
Note that $[G]_+$ has a pole at $\omega = j\alpha$, so that the imaginary part of the pole of $[G]_+$ is positive. Equivalently, the pole of $[G]_+$ is in the upper half plane.

More generally, suppose that $G(\omega)$ has the representation
$$G(\omega) = \sum_{n=1}^{N_1} \frac{\gamma_n}{j\omega + \alpha_n} + \sum_{n=N_1+1}^{N} \frac{\gamma_n}{-j\omega + \alpha_n}$$
where $\mathrm{Re}(\alpha_n) > 0$ for all $n$. Then
$$[G]_+(\omega) = \sum_{n=1}^{N_1} \frac{\gamma_n}{j\omega + \alpha_n} \qquad [G]_-(\omega) = \sum_{n=N_1+1}^{N} \frac{\gamma_n}{-j\omega + \alpha_n}$$

Example 7.4 Let $G$ be given by
$$G(\omega) = \frac{1 - \omega^2}{(j\omega + 1)(j\omega + 3)(j\omega - 2)}$$
Note that $G$ has only three simple poles. The numerator of $G$ has no factors in common with the denominator, and the degree of the numerator is smaller than the degree of the denominator. By the theory of partial fraction expansions in complex analysis, it therefore follows that $G$ can be written as
$$G(\omega) = \frac{\gamma_1}{j\omega + 1} + \frac{\gamma_2}{j\omega + 3} + \frac{\gamma_3}{j\omega - 2}$$
In order to identify $\gamma_1$, for example, multiply both expressions for $G$ by $(j\omega + 1)$ and then let $j\omega = -1$. The other constants are found similarly. Thus
$$\gamma_1 = \left.\frac{1 - \omega^2}{(j\omega + 3)(j\omega - 2)}\right|_{j\omega = -1} = \frac{1 + (-1)^2}{(-1+3)(-1-2)} = -\frac{1}{3}$$
$$\gamma_2 = \left.\frac{1 - \omega^2}{(j\omega + 1)(j\omega - 2)}\right|_{j\omega = -3} = \frac{1 + 3^2}{(-3+1)(-3-2)} = 1$$
$$\gamma_3 = \left.\frac{1 - \omega^2}{(j\omega + 1)(j\omega + 3)}\right|_{j\omega = 2} = \frac{1 + 2^2}{(2+1)(2+3)} = \frac{1}{3}$$
Consequently,
$$[G]_+(\omega) = -\frac{1}{3(j\omega + 1)} + \frac{1}{j\omega + 3} \qquad \text{and} \qquad [G]_-(\omega) = \frac{1}{3(j\omega - 2)}$$

Example 7.5 Suppose that $G(\omega) = \frac{e^{-j\omega T}}{j\omega + \alpha}$. Multiplication by $e^{-j\omega T}$ in the frequency domain represents a shift by $T$ in the time domain, so that
$$g(t) = \begin{cases} e^{-\alpha(t-T)} & t \ge T \\ 0 & t < T \end{cases}$$
as pictured in Figure 7.3. Consider two cases. First, if $T \ge 0$, then $g$ is causal, $G$ is positive type, and therefore $[G]_+ = G$ and $[G]_- = 0$.
Second, if $T \le 0$ then
$$g(t)u(t) = \begin{cases} e^{\alpha T} e^{-\alpha t} & t \ge 0 \\ 0 & t < 0 \end{cases}$$

[Figure 7.3: Exponential function shifted by $T$, for the cases $T > 0$ and $T < 0$.]

so that $[G]_+(\omega) = \frac{e^{\alpha T}}{j\omega + \alpha}$ and $[G]_-(\omega) = G(\omega) - [G]_+(\omega) = \frac{e^{-j\omega T} - e^{\alpha T}}{j\omega + \alpha}$. We can also find $[G]_-$ by computing the transform of $(1 - u(t))g(t)$ (still assuming that $T \le 0$):
$$[G]_-(\omega) = \int_T^0 e^{\alpha(T-t)} e^{-j\omega t}\,dt = \left.\frac{e^{\alpha T} e^{-(\alpha + j\omega)t}}{-(\alpha + j\omega)}\right|_{t=T}^{0} = \frac{e^{-j\omega T} - e^{\alpha T}}{j\omega + \alpha}$$

Example 7.6 Suppose $H$ is the transfer function for impulse response function $h$. Let us unravel the notation and express
$$\int_{-\infty}^{\infty} \left|\left[e^{j\omega T} H(\omega)\right]_+\right|^2 \frac{d\omega}{2\pi}$$
in terms of $h$ and $T$. (Note that the factor $e^{j\omega T}$ is used, rather than $e^{-j\omega T}$ as in the previous example.) Multiplication by $e^{j\omega T}$ in the frequency domain corresponds to shifting by $-T$ in the time domain, so that $e^{j\omega T} H(\omega) \leftrightarrow h(t+T)$ and thus
$$\left[e^{j\omega T} H(\omega)\right]_+ \;\leftrightarrow\; u(t)h(t+T)$$
Applying Parseval's identity, the definition of $u$, and a change of variables yields
$$\int_{-\infty}^{\infty} \left|\left[e^{j\omega T} H(\omega)\right]_+\right|^2 \frac{d\omega}{2\pi} = \int_{-\infty}^{\infty} |u(t)h(t+T)|^2\,dt = \int_0^{\infty} |h(t+T)|^2\,dt = \int_T^{\infty} |h(t)|^2\,dt$$
The integral decreases from the energy of $h$ to zero as $T$ ranges from $-\infty$ to $\infty$.

Example 7.7 Suppose $[H]_- = [K]_- = 0$. Let us find $[HK]_-$. As usual, let $h$ denote the inverse transform of $H$, and $k$ the inverse transform of $K$. The supposition implies that $h$ and $k$ are both causal functions. Therefore the convolution $h * k$ is also a causal function. Since $HK$ is the transform of $h * k$, it follows that $HK$ is a positive type function. Equivalently, $[HK]_- = 0$.

The decomposition $H = [H]_+ + [H]_-$ is an additive one. Next we turn to multiplicative decomposition, concentrating on rational functions. A function $H$ is said to be rational if it can be written as the ratio of two polynomials. Since polynomials can be factored over the complex numbers, a rational function $H$ can be expressed in the form
$$H(\omega) = \gamma\,\frac{(j\omega + \beta_1)(j\omega + \beta_2)\cdots(j\omega + \beta_K)}{(j\omega + \alpha_1)(j\omega + \alpha_2)\cdots(j\omega + \alpha_N)}$$
for complex constants $\gamma, \alpha_1, \ldots, \alpha_N, \beta_1, \ldots, \beta_K$. Without loss of generality, we assume that $\{\alpha_i\} \cap \{\beta_j\} = \emptyset$. We also assume that the real parts of the constants $\alpha_1, \ldots, \alpha_N, \beta_1, \ldots, \beta_K$ are nonzero. The function $H$ is positive type if and only if $\mathrm{Re}(\alpha_i) > 0$ for all $i$, or equivalently, if and only if all the poles of $H(\omega)$ are in the upper half plane $\mathrm{Im}(\omega) > 0$.

A positive type function $H$ is said to have minimum phase if $\mathrm{Re}(\beta_i) > 0$ for all $i$. Thus, a positive type function $H$ is minimum phase if and only if $1/H$ is also positive type.

Suppose that $S_Y$ is the power spectral density of a WSS random process and that $S_Y$ is a rational function. The function $S_Y$, being nonnegative, is also real-valued, so $S_Y = S_Y^*$. Thus, if the denominator of $S_Y$ has a factor of the form $j\omega + \alpha$ then the denominator must also have a factor of the form $-j\omega + \alpha^*$. Similarly, if the numerator of $S_Y$ has a factor of the form $j\omega + \beta$ then the numerator must also have a factor of the form $-j\omega + \beta^*$.

Example 7.8 The function $S_Y$ given by
$$S_Y(\omega) = \frac{8 + 5\omega^2}{(1 + \omega^2)(4 + \omega^2)}$$
can be factored as
$$S_Y(\omega) = \underbrace{\frac{\sqrt{5}\,\big(j\omega + \sqrt{8/5}\,\big)}{(j\omega + 2)(j\omega + 1)}}_{S_Y^+(\omega)}\;\underbrace{\frac{\sqrt{5}\,\big(-j\omega + \sqrt{8/5}\,\big)}{(-j\omega + 2)(-j\omega + 1)}}_{S_Y^-(\omega)} \qquad (7.5)$$
where $S_Y^+$ is a positive type, minimum phase function and $S_Y^-$ is a negative type function with $S_Y^- = (S_Y^+)^*$.

Note that the operators $[\;]_+$ and $[\;]_-$ give us an additive decomposition of a function $H$ into the sum of a positive type and a negative type function, whereas spectral factorization has to do with products.
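The factorization in Example 7.8 is easy to check numerically. The following minimal sketch evaluates the claimed positive type factor on a grid of real frequencies and verifies that $S_Y^+(\omega)\,S_Y^-(\omega) = |S_Y^+(\omega)|^2$ reproduces $S_Y(\omega)$; the grid itself is an arbitrary choice.

```python
import numpy as np

# Check the spectral factorization of S_Y(omega) = (8 + 5 omega^2) / ((1 + omega^2)(4 + omega^2)).
omega = np.linspace(-20.0, 20.0, 2001)
S_Y = (8 + 5 * omega**2) / ((1 + omega**2) * (4 + omega**2))

# Positive type, minimum phase factor from (7.5): written in terms of s = j*omega,
# it has its zero at s = -sqrt(8/5) and its poles at s = -1 and s = -2.
s = 1j * omega
S_Y_plus = np.sqrt(5) * (s + np.sqrt(8 / 5)) / ((s + 1) * (s + 2))

# For real omega, S_Y^-(omega) = conj(S_Y^+(omega)), so the product should recover S_Y.
assert np.allclose(S_Y_plus * np.conj(S_Y_plus), S_Y)
print("max factorization error:", np.max(np.abs(S_Y_plus * np.conj(S_Y_plus) - S_Y)))
```

The same kind of check applies to any candidate factorization of a rational power spectral density: place the poles and zeros of the positive type factor so that, as functions of $j\omega$, they lie in the left half plane, and confirm the product against $S_Y$ on a frequency grid.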
At least formally, the factorization can be accomplished by taking a logarithm, doing an additive decomposition, and then exponentiating: SX (ω ) = exp([ln SX (ω )]+ ) exp([ln SX (ω )]− ) . − S X (ω ) + SX (ω ) Notice that if h ↔ H then, formally, 1+h+ h∗h h∗h∗h H2 H2 + · · · ↔ exp(H ) = 1 + H + + ··· 2! 3! 2! 3! 166 (7.6) + so that if H is positive type, then exp(H ) is also positive type. Thus, the factor SX in (7.6) is − indeed a positive type function, and the factor SX is a negative type function. Use of (7.6) is called the cepstrum method. Unfortunately, there is a host of problems, both numerical and analytical, in using the method, so that it will not be used further in these notes. 7.4 Solution of the causal Wiener filtering problem for rational p ower sp ectral densities The Wiener-Hopf equations (7.2) and ( 7.3) can be formulated in the frequency domain as follows: Find a positive type transfer function H such that ej ωT SX Y − H SY + =0 (7.7) + +− Suppose SY is factored as SY = SY SY such that SY is a minimum phase, positive type transfer − + − function and SY = (SY )∗ . Then SY and S1 are negative type functions. Since the product of − Y two negative type functions is again negative type, (7.7) is equivalent to the equation obtained by multiplying the quantity within square brackets in (7.7) by S1 , yielding the equivalent problem: − Y Find a positive type transfer function H such that ej ωT SX Y + − H SY − SY =0 (7.8) + + The function H SY , being the product of two positive type functions, is itself positive type. Thus (7.8) becomes ej ωT SX Y + − H SY = 0 − SY + Solving for H yields that the optimal transfer function is given by H= 1 ej ωT SX Y + − SY SY (7.9) + The orthogonality principle yields that the mean square error satisfies E [|Xt+T − Xt+T |t |2 ] = E [|Xt+T |2 ] − E [|Xt+T |t |2 ] = RX (0) − = RX (0) − ∞ −∞ ∞ −∞ |H (ω )|2 SY (ω ) ej ωT SX Y − SY dω 2π 2 + dω 2π + where we used the fact that |SY |2 = SY . Another expression for the MMSE, which involves the optimal filter h, is the following: MMSE = E [(Xt+T − Xt+T |t )(Xt+T − Xt+T |t )∗ ] ∗ = E [(Xt+T − Xt+T |t )Xt+T ] = RX (0) − RX X (t, t + T ) b = RX (0) − ∞ −∞ ∗ h(s)RX Y (s + T )ds. 167 (7.10) Exercise Evaluate the limit as T → −∞ and the limit as T → ∞ in (7.10). Example 7.9 This example involves the same model as in an example in Section 7.1, but here a causal estimator is sought. The observed random process is Y = X + N , were X is WSS with 1 mean zero and power spectral density SX (ω ) = 1+ω2 , N is WSS with mean zero and power spectral 4 density SN (ω ) = 4+ω2 , and SX N = 0. We seek the optimal casual linear estimator of Xt given (Ys : s ≤ t). The power spectral density of Y is given by SY (ω ) = SX (ω ) + SN (ω ) = 8 + 5ω 2 (1 + ω 2 )(4 + ω 2 ) + − and its spectral factorization is given by (7.5), yielding SY and SY . Since RX N = 0 it follows that SX Y (ω ) = SX (ω ) = Therefore SX Y (ω ) − SY (ω ) = = 1 . 
(j ω + 1)(−j ω + 1) (−j ω + 2) √ 5(j ω + 1)(−j ω + γ1 γ2 + j ω + 1 −j ω + 8 8 5) 5 where γ1 = γ2 = Therefore and thus √ −j ω + 2 5(−j ω + −j ω + 2 √ 5(j ω + 1) 8 5 ) j ω = −1 j ω= SX Y (ω ) − SY (ω ) q = + 8 5 =√ − =√ 3 √ 5+ 8 +2 √ 5+ 8 8 5 γ1 jω + 1 (7.11) 2− γ1 (j ω + 2) 3 √ 1 + H (ω ) = √ = 5 + 2 10 5(j ω + 8 ) jω + 5 8 5 8 5 so that the optimal causal filter is 3 √ h(t) = 5 + 2 10 δ (t) + (2 − q 8 −t )u(t)e 5 8 5 Finally, by (7.10) with T = 0, (7.11), and (7.1), the minimum mean square error is given by E [|Xt − Xt |2 ] = RX (0) − ∞ −∞ 2 γ1 dω 1 γ2 = − 1 ≈ 0.3246 2 2π 1+ω 2 2 168 which is slightly larger than √1 10 ≈ 0.3162, the MMSE found for the best noncausal estimator (see the example in Section 7.1), and slightly smaller than 1 , the MMSE for the best “instantaneous” 3 estimator of Xt given Yt , which is Xt . 3 Example 7.10 A special case of the causal filtering problem formulated above is when the observed process Y is equal to X itself. This leads to the pure prediction problem. Let X be a WSS mean zero random process and let T > 0. Then the optimal linear predictor of Xt+T given (Xs : s ≤ t) corresponds to a linear time-invariant system with transfer function H given by (since SX Y = SX , − − + + SY = SX , SY = SX , and SY = SX ): H= To be more specific, suppose that SX (ω ) = SX (ω ) = 1 + j ωT +Se SX X 1 , ω 4 +4 (7.12) + The factorization of SX is given by 1 1 (j ω + (1 + j ))(j ω + (1 − j )) (−j ω + (1 + j ))(−j ω + (1 − j )) − SX ( ω ) + SX (ω ) so that 1 (j ω + (1 + j ))(j ω + (1 − j )) γ1 γ2 + j ω + (1 + j ) j ω + (1 − j ) + SX (ω ) = = where γ1 = γ2 = 1 j ω + (1 − j ) 1 j ω + (1 + j ) = j ω =−(1+j ) = j ω =−1+j j 2 −j 2 + yielding that the inverse Fourier transform of SX is given by + SX ↔ Hence + SX (ω )ej ωT ↔ j −(1+j )t j e u(t) − e−(1−j )t u(t) 2 2 j −(1+j )(t+T ) 2e j − 2 e−(1−j )(t+T ) t ≥ −T 0 else so that j e−(1−j )T j e−(1+j )T − + 2(j ω + (1 + j )) 2(j ω + (1 − j )) The formula (7.12) for the optimal transfer function yields + SX (ω )ej ωT = j e−(1+j )T (j ω + (1 − j )) j e−(1−j )T (j ω + (1 + j )) − 2 2 j T − e−j T ) j T (1 + j ) − e−j T (1 − j ) j ω (e e + = e−T 2j 2j H (ω ) = = e−T [cos(T ) + sin(T ) + j ω sin(T )] 169 so that the optimal predictor for this example is given by Xt+T |t = Xt e−T (cos(T ) + sin(T )) + Xt e−T sin(T ) 7.5 Discrete time Wiener filtering Causal Wiener filtering for discrete-time random processes can be handled in much the same way that it is handled for continuous time random processes. An alternative approach can be based on the use of whitening filters and linear innovations sequences. Both of these approaches will be discussed in this section, but first the topic of spectral factorization for discrete-time processes is discussed. Spectral factorization for discrete time processes naturally involves z -transforms. The z transform of a function (hk : k ∈ Z) is given by ∞ H(z ) = h(k )z −k k=−∞ for z ∈ C. Setting z = ej ω yields the Fourier transform: H (ω ) = H(ej ω ) for 0 ≤ ω ≤ 2π . Thus, the z -transform H restricted to the unit circle in C is equivalent to the Fourier transform H on [0, 2π ], and H(z) for other z ∈ C is an analytic continuation of its values on the unit circle. Let h(k ) = h∗ (−k ) as before. Then the z -transform of h is related to the z -transform H of h as follows: ∞ k=−∞ h(k )z −k = ∞ k=−∞ ∗ h (−k )z −k ∞ = ∗ h (l)z = l l=−∞ ∞ ∗ −l h(l)(1/z ) l=−∞ ∗ = H∗ (1/z ∗ ) The impulse response function h is called causal if h(k ) = 0 for k < 0. The z -transform H is said to be positive type if h is causal. 
Note that if H is positive type, then lim|z |→∞ H(z ) = h(0). The pro jection [H]+ is defined as it was for Fourier transforms–it is the z transform of the function u(k )h(k ), where u(k ) = I{k≥0} . (We will not need to define or use [ ]− for discrete time functions.) If X is a discrete-time WSS random process with correlation function RX , the z -transform of RX is denoted by SX . Similarly, if X and Y are jointly WSS then the z -transform of RX Y is denoted by SX Y . Recall that if Y is the output random process when X is passed through a linear time-invariant system with impulse response function h, then X and Y are jointly WSS and RY X = h ∗ RX RX Y = h ∗ RX RY = h ∗ h ∗ RX which in the z -transform domain becomes: SY X (z ) = H(z )SX (z ) SX Y (z ) = H∗ (1/z ∗ )SX (z ) SY (z ) = H(z )H∗ (1/z ∗ )SX (z ) Example 7.11 Suppose Y is the output process when white noise W with RW (k ) = I{k=0} is passed through a linear time invariant system with impulse response function h(k ) = ρk I{k≥0} , where ρ is a complex constant with |ρ| < 1. Let us find H, SY , and RY . To begin, H(z ) = ∞ (ρ/z )k = k=0 170 1 1 − ρ/z and the z -transform of h is 1 1 − ρ∗ z . Note that the z -transform for h converges absolutely for |z | > |ρ|, whereas the z -transform for h converges absolutely for |z | < 1/|ρ|. Then SY (z ) = H(z )H∗ (1/z ∗ )SX (z ) = 1 (1 − ρ/z )(1 − ρ∗ z ) The autocorrelation function RY can be found either in the time domain using RY = h ∗ h ∗ RW or by inverting the z -transform SY . Taking the later approach, factor out z and use the method of partial fraction expansion to obtain z (z − ρ)(1 − ρ∗ z ) 1 1 =z + 2 )(z − ρ) ∗ ) − ρ)(1 − ρ∗ z ) (1 − |ρ| ((1/ρ 1 1 z ρ∗ = + (1 − |ρ|2 ) 1 − ρ/z 1 − ρ∗ z SY (z ) = which is the z -transform of RY (k ) = ρk 1−|ρ|2 (ρ∗ )−k 1−|ρ|2 k≥0 k<0 The z -transform SY of RY converges absolutely for |ρ| < z < 1/|ρ|. Suppose that H(z ) is a rational function of z , meaning that it is a ratio of two polynomials of z with complex coefficients. We assume that the numerator and denominator have no zeros in common, and that neither has a root on the unit circle. The function H is positive type (the z -transform of a causal function) if its poles (the zeros of its denominator polynomial) are inside the unit circle in the complex plane. If H is positive type and if its zeros are also inside the unit circle, then h and H are said to be minimum phase functions (in the time domain and z -transform domain, respectively). A positive-type, minimum phase function H has the property that both H and its inverse 1/H are causal functions. Two linear time-invariant systems in series, one with transfer function H and one with transfer function 1/H, passes all signals. Thus if H is positive type and minimum phase, we say that H is causal and causally invertible. Assume that SY corresponds to a WSS random process Y and that SY is a rational function with no poles or zeros on the unit circle in the complex plane. We shall investigate the symmetries of SY , with an eye towards its factorization. First, RY = RY so that ∗ SY (z ) = SY (1/z ∗ ) (7.13) ∗ Therefore, if z0 is a pole of SY with z0 = 0, then 1/z0 is also a pole. Similarly, if z0 is a zero of ∗ is also a zero of S . 
These observations imply that S can b e uniquely SY with z0 = 0, then 1/z0 Y Y factored as − + SY (z ) = SY (z )SY (z ) such that for some constant β > 0: + • SY is a minimum phase, positive type z -transform − + • SY (z ) = (SY (1/z ∗ ))∗ 171 + • lim|z |→∞ SY (z ) = β There is an additional symmetry if RY is real-valued: SY (z ) = ∞ k=−∞ ∞ RY (k )z −k = ∗ (RY (k )(z ∗ )−k )∗ = SY (z ∗ ) (for real-valued RY ) (7.14) k=−∞ ∗ Therefore, if RY is real and if z0 is a nonzero pole of SY , then z0 is also a pole. Combining (7.13) and (7.14) yields that if RY is real then the real-valued nonzero poles of SY come in pairs: z0 and ∗ ∗ 1/z0 , and the other nonzero poles of SY come in quadruples: z0 , z0 , 1/z0 , and 1/z0 . A similar statement concerning the zeros of SY also holds true. Some example factorizations are as follows (where |ρ| < 1 and β > 0): SY (z ) = β β 1 − ρ/z 1 − ρ∗ z − S Y (z ) + S Y (z ) SY (z ) = β (1 − .8z ) β (1 − .8/z ) (1 − .6/z )(1 − .7/z ) (1 − .6z )(1 − .7z ) + S Y (z ) + S Y (z ) SY (z ) = − S Y (z ) − S Y (z ) β β (1 − ρ/z )(1 − ρ∗ /z ) (1 − ρz )(1 − ρ∗ z ) An important application of spectral factorization is the generation of a discrete-time WSS random process with a specified correlation function RY . The idea is to start with a discrete-time white noise process W with RW (k ) = I{k=0} , or equivalently, with SW (z ) ≡ 1, and then pass it through an appropriate linear, time-invariant system. The appropriate filter is given by taking + H(z ) = SY (z ), for then the spectral density of the output is indeed given by + − H(z )H∗ (1/z ∗ )SW (z ) = SY (z )SY (z ) = SY (z ) The spectral factorization can be used to solve the causal filtering problem in discrete time. Arguing just as in the continuous time case, we find that if X and Y are jointly WSS random processes, then the best estimator of Xn+T given (Yk : k ≤ n) having the form Xn+T |n = ∞ k=−∞ Yk h(n − k ) for a causal function h is the function h satisfying the Wiener-Hopf equations (7.2) and (7.3), and the z transform of the optimal h is given by H= 1 z T SX Y + − SY SY (7.15) + Finally, an alternative derivation of (7.15) is given, based on the use of a whitening filter. The idea is the same as the idea of linear innovations sequence considered in Chapter 3. The first step 172 is to notice that the causal estimation problem is particularly simple if the observation process is white noise. Indeed, if the observed process Y is white noise with RY (k ) = I{k=0} then for each k ≥ 0 the choice of h(k ) is simply made to minimize the mean square error when Xn+T is estimated by the single term h(k )Yn−k . This gives h(k ) = RX Y (T + k )I{k≥0} . Another way to get the same result is to solve the Wiener-Hopf equations (7.2) and (7.3) in discrete time in case RY (k ) = I{k=0} . In general, of course, the observation process Y is not white, but the idea is to replace Y by an equivalent observation process Z that is white. Let Z be the result of passing Y through a filter with transfer function G (z ) = 1/S + (z ). Since + (z ) is a minimum phase function, G is a p ositive typ e function and the system is causal. Thus, S any random variable in the m.s. closure of the linear span of (Zk : k ≤ n) is also in the m.s. closure of the linear span of (Yk : k ≤ n). Conversely, since Y can be recovered from Z by passing Z through the causal linear time-invariant system with transfer function S + (z ), any random variable in the m.s. closure of the linear span of (Yk : k ≤ n) is also in the m.s. 
closure of the linear span of (Zk : k ≤ n). Hence, the optimal causal linear estimator of Xn+T based on (Yk : k ≤ n) is equal to the optimal causal linear estimator of Xn+T based on (Zk : k ≤ n). By the previous paragraph, such estimator is obtained by passing Z through the linear time-invariant system with impulse response function RX Z (T + k )I{k≥0} , which has z transform [z T SX Z ]+ . See Figure 7.4. Y 1 S +(z) Y Z [z TS (z)]+ XZ ^ Xt+T|t Figure 7.4: Optimal filtering based on whitening first. The transfer function for two linear, time-invariant systems in series is the product of their z -transforms. In addition, SX Z (z ) = G ∗ (1/z ∗ )SX Y (z ) = SX Y (z ) − SY (z ) Hence, the series system shown in Figure 7.4 is indeed equivalent to passing Y through the linear time invariant system with H(z ) given by (7.15). Example 7.12 Suppose that X and N are discrete-time mean zero WSS random processes such that RX N = 0. Suppose SX (z ) = (1−ρ/z1 −ρz ) where 0 < ρ < 1, and suppose that N is a discrete)(1 time white noise with SN (z ) ≡ σ 2 and RN (k ) = σ 2 I{k=0} . Let the observed process Y be given by Y = X + N . Let us find the minimum mean square error linear estimator of Xn based on (Yk : k ≤ n). We begin by factoring SY . SY (z ) = SX (z ) + SN (z ) = 2 = z + σ2 (z − ρ)(1 − ρz ) −σ 2 ρ z 2 − ( 1+ρ + ρ 1 )z σ2 ρ +1 (z − ρ)(1 − ρz ) The quadratic expression in braces can be expressed as (z − z0 )(z − 1/z0 ), where z0 is the smaller 173 root of the expression in braces, yielding the factorization SY (z ) = β (1 − z0 /z ) β (1 − z0 z ) (1 − ρ/z ) (1 − ρz ) where β2 = σ2ρ z0 − S Y (z ) + S Y (z ) Using the fact SX Y = SX , and appealing to a partial fraction expansion yields SX Y (z ) − SY (z ) = = 1 β (1 − ρ/z )(1 − z0 z ) 1 z + β (1 − ρ/z )(1 − z0 ρ) β ((1/z0 ) − ρ)(1 − z0 z ) (7.16) The first term in (7.16) is positive type, and the second term in (7.16) is the z transform of a XY . function that is supported on the negative integers. Thus, the first term is equal to SS − + Finally, dividing by SY yields that the z -transform of the optimal filter is given by H(z ) = 1 β 2 (1 − z0 ρ)(1 − z0 /z ) or in the time domain h(n) = n z0 I{n≥0} β 2 (1 − z0 ρ) 174 Y + 7.6 Problems 7.1. A quadratic predictor Suppose X is a mean zero, stationary discrete-time random process and that n is an integer with n ≥ 1. Consider estimating Xn+1 by a nonlinear one-step predictor of the form n n j h2 (j, k )Xj Xk h1 (k )Xk + Xn+1 = h0 + j =1 k=1 k=1 (a) Find equations in term of the moments (second and higher, if needed) of X for the triple (h0 , h1 , h2 ) to minimize the one step prediction error: E [(Xn+1 − Xn+1 )2 ]. (b) Explain how your answer to part (a) simplifies if X is a Gaussian random process. 7.2. A smo othing problem Suppose X and Y are mean zero, second order random processes in continuous time. Suppose the MMSE estimator of X5 is to be found based on observation of (Yu : u ∈ [0, 3] ∪ [7, 10]). Assuming the estimator takes the form of an integral, derive the optimality conditions that must be satisfied by the kernal function (the function that Y is multiplied by before integrating). Use the orthogonality principle. 7.3. A simple, noncausal estimation problem Let X = (Xt : t ∈ R) be a real valued, stationary Gaussian process with mean zero and autocorrelation function RX (t) = A2 sinc(fo t), where A and fo are positive constants. Let N = (Nt : t ∈ R) be a real valued Gaussian white noise process with RN (τ ) = σ 2 δ (τ ), which is independent of X . 
∞ Define the random process Y = (Yt : t ∈ R) by Yt = Xt + Nt . Let Xt = −∞ h(t − s)Ys ds, where 2 the impulse response function h, which can be noncausal, is chosen to minimize E [Dt ] for each t, where Dt = Xt − Xt . (a) Find h. (b) Identify the probability distribution of Dt , for t fixed. (c) Identify the conditional distribution of Dt given Yt , for t fixed. (d) Identify the autocorrelation function, RD , of the error process D, and the cross correlation function, RDY . 7.4. Interp olating a Gauss Markov pro cess Let X be a real-valued, mean zero stationary Gaussian process with RX (τ ) = e−|τ | . Let a > 0. Suppose X0 is estimated by X0 = c1 X−a + c2 Xa where the constants c1 and c2 are chosen to minimize the mean square error (MSE). (a) Use the orthogonality principle to find c1 , c2 , and the resulting minimum MSE, E [(X0 − X0 )2 ]. (Your answers should depend only on a.) (b) Use the orthogonality principle again to show that X0 as defined above is the minimum MSE estimator of X0 given (Xs : |s| ≥ a). (This implies that X has a two-sided Markov property.) 7.5. Estimation of a filtered narrowband random pro cess in noise Suppose X is a mean zero real-valued stationary Gaussian random process with the spectral density shown. S (2 ! f) X 8 Hz 1 8 Hz f 10 4 Hz 10 175 4 Hz (a) Explain how X can be simulated on a computer using a pseudo-random number generator that generates standard normal random variables. Try to use the minimum number per unit time. How many normal random variables does your construction require per simulated unit time? (b) Suppose X is passed through a linear time-invariant system with approximate transfer function H (2π f ) = 107 /(107 + f 2 ). Find an approximate numerical value for the power of the output. (c) Let Zt = Xt + Wt where W is a Gaussian white noise random process, independent of X , with RW (τ ) = δ (τ ). Find h to minimize the mean square error E [(Xt − Xt )2 ], where X = h ∗ Z . (d) Find the mean square error for the estimator of part (c). 7.6. Prop ortional noise Suppose X and N are second order, mean zero random processes such that RX N ≡ 0, and let Y = X + N . Suppose the correlation functions RX and RN are known, and that RN = γ 2 RX for some nonnegative constant γ 2 . Consider the problem of estimating Xt using a linear estimator based on (Yu : a ≤ u ≤ b), where a, b, and t are given times with a < b. (a) Use the orthogonality principle to show that if t ∈ [a, b], then the optimal estimator is given by Xt = κYt for some constant κ, and identify the constant κ and the corresponding MSE. (b) Suppose in addition that X and N are WSS and that Xt+T is to be estimated from (Ys : s ≤ t). Show how the equation for the optimal causal filter reduces to your answer to part (a) in case T ≤ 0. (c) Continue under the assumptions of part (b), except consider T > 0. How is the optimal filter for estimating Xt+T from (Ys : s ≤ t) related to the problem of predicting Xt+T from (Xs : s ≤ t)? 7.7. Predicting the future of a simple WSS pro cess Let X be a mean zero, WSS random process with power spectral density SX (ω ) = ω4 +131ω2 +36 . + + (a) Find the positive type, minimum phase rational function SX such that SX (ω ) = |SX (ω )|2 . (b) Let T be a fixed known constant with T ≥ 0. Find Xt+T |t , the MMSE linear estimator of Xt+T given (Xs : s ≤ t). Be as explicit as possible. (Hint: Check that your answer is correct in case T = 0 and in case T → ∞). (c) Find the MSE for the optimal estimator of part (b). 7.8. 
Short answer filtering questions (a) Prove or disprove: If H is a positive type function then so is H 2 . (b) Prove or disprove: Suppose X and Y are jointly WSS, mean zero random processes with continuous spectral densities such that SX (2π f ) = 0 unless |f | ∈[9012 MHz, 9015 MHz] and SY (2π f ) = 0 unless |f | ∈[9022 MHz, 9025 MHz]. Then the best linear estimate of X0 given (Yt : t ∈ R) is 0. (c) Let H (2π f ) = sinc(f ). Find [H ]+ . 7.9. On the MSE for causal estimation Recall that if X and Y are jointly WSS and have power spectral densities, and if SY is rational with a spectral factorization, then the mean square error for linear estimation of Xt+T using (Ys : s ≤ t) is given by (MSE) = RX (0) − ∞ −∞ ej ωT SX Y − SY 2 + dω . 2π Evaluate and interpret the limits of this expression as T → −∞ and as T → ∞. 176 7.10. A singular estimation problem Let Xt = Aej 2πfo t , where fo > 0 and A is a mean zero complex valued random variable with 2 2 E [A2 ] = 0 and E [|A|2 ] = σA . Let N be a white noise process with RN (τ ) = σN δ (τ ). Let Yt = Xt + Nt . Let X denote the output process when Y is filtered using the impulse response function h(τ ) = αe−(α−j 2πfo )t I{t≥0} . (a) Verify that X is a WSS periodic process, and find its power spectral density (the power spectral density only exists as a generalized function–i.e. there is a delta function in it). (b) Give a simple expression for the output of the linear system when the input is X . ˆ (c) Find the mean square error, E [|Xt − Xt |2 ]. How should the parameter α be chosen to approximately minimize the MSE? 7.11. Filtering a WSS signal plus noise Suppose X and N are jointly WSS, mean zero, continuous time random processes with RX N ≡ 0. The processes are the inputs to a system with the block diagram shown, for some transfer functions K1 (ω ) and K2 (ω ): X K1 + K2 Y=X out+Nout N Suppose that for every value of ω , Ki (ω ) = 0 for i = 1 and i = 2. Because the two subsystems are linear, we can view the output process Y as the sum of two processes, Xout , due to the input X , plus Nout , due to the input N . Your answers to the first four parts should be expressed in terms of K1 , K2 , and the power spectral densities SX and SN . (a) What is the power spectral density SY ? (b) Find the signal-to-noise ratio at the output (the power of Xout divided by the power of Nout ). (c) Suppose Y is passed into a linear system with transfer function H , designed so that the output at time t is Xt , the best linear estimator of Xt given (Ys : s ∈ R). Find H . (d) Find the resulting minimum mean square error. (e) The correct answer to part (d) (the minimum MSE) does not depend on the filter K2 . Why? 7.12. A prediction problem Let X be a mean zero WSS random process with correlation function RX (τ ) = e−|τ | . Using the Wiener filtering equations, find the optimal linear MMSE estimator (i.e. predictor) of Xt+T based on (Xs : s ≤ t), for a constant T > 0. Explain why your answer takes such a simple form. 7.13. Prop erties of a particular Gaussian pro cess Let X be a zero-mean, wide-sense stationary Gaussian random process in continuous time with autocorrelation function RX (τ ) = (1 + |τ |)e−|τ | and power spectral density SX (ω ) = (2/(1 + ω 2 ))2 . Answer the following questions, being sure to provide justification. (a) Is X mean ergodic in the m.s. sense? (b) Is X a Markov process? (c) Is X differentiable in the m.s. sense? 
(d) Find the causal, minimum phase filter h (or its transform H ) such that if white noise with autocorrelation function δ (τ ) is filtered using h then the output autocorrelation function is RX . (e) Express X as the solution of a stochastic differential equation driven by white noise. 177 7.14. Sp ectral decomp osition and factorization (a) Let x be the signal with Fourier transform given by x(2π f ) = sinc(100f )ej 2πf T + . Find the energy of x for all real values of the constant T . (b) Find the spectral factorization of the power spectral density S (ω ) = ω4 +161 2 +100 . (Hint: 1 + 3j ω is a pole of S .) 7.15. A continuous-time Wiener filtering problem Let (Xt ) and (Nt ) be uncorrelated, mean zero random processes with RX (t) = exp(−2|t|) and SN (ω ) ≡ No /2 for a positive constant No . Suppose that Yt = Xt + Nt . (a) Find the optimal (noncausal) filter for estimating Xt given (Ys : −∞ < s < +∞) and find the resulting mean square error. Comment on how the MMSE depends on No . (b) Find the optimal causal filter with lead time T , that is, the Wiener filter for estimating Xt+T given (Ys : −∞ < s ≤ t), and find the corresponding MMSE. For simplicity you can assume that T ≥ 0. Comment on the limiting value of the MMSE as T → ∞, as No → ∞, or as No → 0. 7.16. Estimation of a random signal, using the Karhunen-Lo`ve expansion e Suppose that X is a m.s. continuous, mean zero process over an interval [a, b], and suppose N is a white noise process, with RX N ≡ 0 and RN (s, t) = σ 2 δ (s − t). Let (φk : k ≥ 1) be a complete orthonormal basis for L2 [a, b] consisting of eigenfunctions of RX , and let (λk : k ≥ 1) denote the corresponding eigenvalues. Suppose that Y = (Yt : a ≤ t ≤ b) is observed. (a) Fix an index i. Express the MMSE estimator of (X, φi ) given Y in terms of the coordinates, (Y , φ1 ), (Y , φ2 ), . . . of Y , and find the corresponding mean square error. (b) Now suppose f is a function in L2 [a, b]. Express the MMSE estimator of (X, f ) given Y in terms of the coordinates ((f , φj ) : j ≥ 1) of f , the coordinates of Y , the λ’s, and σ . Also, find the mean square error. 7.17. Noiseless prediction of a baseband random pro cess Fix positive constants T and ωo , suppose X = (Xt : t ∈ R) is a baseband random process with k one-sided frequency limit ωo , and let H (n) (ω ) = n=0 (j ωT ) , which is a partial sum of the power k k! (n) series of ej ωT . Let Xt+T |t denote the output at time t when X is passed through the linear time (n) invariant system with transfer function H (n) . As the notation suggests, Xt+T |t is an estimator (not necessarily optimal) of Xt+T given (Xs : s ≤ t). (n) (a) Describe Xt+T |t in terms of X in the time domain. Verify that the linear system is causal. (b) Show that limn→∞ an = 0, where an = max|ω|≤ωo |ej ωT − H (n) (ω )|. (This means that the power series converges uniformly for ω ∈ [−ωo , ωo ].) (c) Show that the mean square error can be made arbitrarily small by taking n sufficiently large. (n) In other words, show that limn→∞ E [|Xt+T − Xt+T |t |2 ] = 0. (d) Thus, the future of a narrowband random process X can be predicted perfectly from its past. What is wrong with the following argument for general WSS processes? If X is an arbitrary WSS random process, we could first use a bank of (infinitely many) narrowband filters to split X into an equivalent set of narrowband random processes (call them “subprocesses”) which sum to X . By the above, we can perfectly predict the future of each of the subprocesses from its past. 
So adding together the predictions, would yield a perfect prediction of X from its past. 178 7.18. Linear innovations and sp ectral factorization Suppose X is a discrete time WSS random process with mean zero. Suppose that the z -transform version of its power spectral density has the factorization as described in the notes: SX (z ) = + − + − + SX (z )SX (z ) such that SX (z ) is a minimum phase, positive type function, SX (z ) = (SX (1/z ∗ ))∗ , + and lim|z |→∞ SX (z ) = for some β > 0. The linear innovations sequence of X is the sequence X such that Xk = Xk − Xk|k−1 , where Xk|k−1 is the MMSE predictor of Xk given (Xl : l ≤ k − 1). + − Note that there is no constant multiplying Xk in the definition of Xk . You should use SX (z ), SX (z ), and/or β in giving your answers. (a) Show that X can be obtained by passing X through a linear time-invariant filter, and identify the corresponding value of H. (b) Identify the mean square prediction error, E [|Xk − Xk|k−1 |2 ]. 7.19. A singular nonlinear estimation problem Suppose X is a standard Brownian motion with parameter σ 2 = 1 and suppose N is a Poisson random process with rate λ = 10, which is independent of X . Let Y = (Yt : t ≥ 0) be defined by Yt = Xt + Nt . (a) Find the optimal estimator of X1 among the estimators that are linear functions of (Yt : 0 ≤ t ≤ 1) and the constants, and find the corresponding mean square error. Your estimator can include a constant plus a linear combination, or limits of linear combinations, of Yt : 0 ≤ t ≤ 1. (Hint: There is a closely related problem elsewhere in this problem set.) (b) Find the optimal possibly nonlinear estimator of X1 given (Yt : 0 ≤ t ≤ 1), and find the corresponding mean square error. (Hint: No computation is needed. Draw sample paths of the processes.) 7.20. A discrete-time Wiener filtering problem Extend the discrete-time Wiener filtering problem considered at the end of the notes to incorporate a lead time T . Assume T to be integer valued. Identify the optimal filter in both the z -transform domain and in the time domain. (Hint: Treat the case T ≤ 0 separately. You need not identify the covariance of error.) 7.21. Causal estimation of a channel input pro cess Let X = (Xt : t ∈ R) and N = (Nt : t ∈ R) denote WSS random processes with RX (τ ) = 3 e−|τ | 2 and RN (τ ) = δ (τ ). Think of X as an input signal and N as noise, and suppose X and N are orthogonal to each other. Let k denote the impulse response function given by k (τ ) = 2e−3τ I{τ ≥0} , and suppose an output process Y is generated according to the block diagram shown: X k + Y N That is, Y = X ∗ k + N . Suppose Xt is to be estimated by passing Y through a causal filter with impulse response function h, and transfer function H . Find the choice of H and h in order to minimize the mean square error. 7.22. Estimation given a strongly correlated pro cess Suppose g and k are minimum phase causal functions in discrete-time, with g (0) = k (0) = 1, and 179 z -transforms G and K. Let W = (Wk : k ∈ Z) be a mean zero WSS process with SW (ω ) ≡ 1, let Xn = ∞ −∞ g (n − i)Wi and Yn = ∞ −∞ k (n − i)Wi . i= i= (a) Express RX , RY , RX Y , SX , SY , and SX Y in terms of g , k , G , K. (b) Find h so that Xn|n = ∞ −∞ Yi h(n − i) is the MMSE linear estimator of Xn given (Yi : i ≤ n). i= (c) Find the resulting mean square error. Give an intuitive reason for your answer. 7.23*. 
Resolution of Wiener and Kalman filtering Consider the state and observation models: Xn = F Xn−1 + Wn Yn = H T Xn + Vn where (Wn : −∞ < n < +∞) and (Vn : −∞ < n < +∞) are independent vector-valued random sequences of independent, identically distributed mean zero random variables. Let ΣW and ΣV denote the respective covariance matrices of Wn and Vn . (F , H and the covariance matrices must satisfy a stability condition. Can you find it? ) (a) What are the autocorrelation function RX and crosscorrelation function RX Y ? (b) Use the orthogonality principle to derive conditions for the causal filter h that minimizes E [ Xn+1 − ∞ h(j )Yn−j 2 ]. (i.e. derive the basic equations for the Wiener-Hopf method.) j =0 (c) Write down and solve the equations for the Kalman predictor in steady state to derive an expression for h, and verify that it satisfies the orthogonality conditions. 180 Chapter 8 App endix 8.1 Some notation The following notational conventions are used in these notes. Ac = AB = A−B = AB c Ai = {a : a ∈ Ai for some i} Ai = {a : a ∈ Ai for all i} a ∨ b = max{a, b} = a∧b = ∞ i=1 ∞ i=1 a+ = IA (x) = (a, b) = {x : a < x < b} [a, b) = {x : a ≤ x < b} complement of A A∩B a if a ≥ b b if a < b min{a, b} a ∨ 0 = max{a, 0} 1 if x ∈ A 0 else (a, b] = {x : a < x ≤ b} [a, b] = {x : a ≤ x ≤ b} a ≤ x, y ≤ b means a ≤ x ≤ b and a ≤ y ≤ b Z Z+ R R+ C − set of integers − set of real numbers = set of complex numbers − set of nonnegative integers − set of nonnegative real numbers 181 A1 × · · · × An = {(a1 , . . . , an )T : ai ∈ Ai for 1 ≤ i ≤ n} An = A × · · · × A t = t = n times greatest integer n such that n ≤ t least integer n such that n ≥ t All the trigonometric identities required in these notes can be easily derived from the two identities: cos(a + b) = cos(a) cos(b) − sin(a) sin(b) sin(a + b) = sin(a) cos(b) − cos(a) sin(b) and the facts cos(−a) = cos(a) and sin(−b) = − sin(b). A set of numbers is countably infinite if the numbers in the set can be listed in a sequence xi : i = 1, 2, . . .. For example, the set of rational numbers is countably infinite, but the set of all real numbers in any interval of positive length is not countably infinite. 8.2 Convergence of sequences of numb ers We begin with some basic definitions. Let (xn ) = (x1 , x2 , . . .) and (yn ) = (y1 , y2 , . . .) be sequences of numbers and let x be a number. By definition, xn converges to x as n goes to infinity if for each > 0 there is an integer n so that | xn − x |< for every n ≥ n . We write limn→∞ xn = x to denote that xn converges to x. +4 Example Let xn = 2n+1 . Let us verify that limn→∞ xn = 0. The inequality | xn |< holds if n2 2n + 4 ≤ (n2 + 1). Therefore it holds if 2n + 4 ≤ n2 . Therefore it holds if both 2n ≤ 2 n2 and 4 ≤ 2 n2 . So if n = max{ 4 , 8 } then n ≥ n implies that | xn |< . So limn→∞ xn = 0. Occasionally a two-dimensional array of numbers (am,n : m ≥ 1, n ≥ 1) is considered. By definition, amn converges to a number a as m and n jointly go to infinity if for each > 0 there is n > 0 so that | am,n − a |< for every m, n ≥ n . We write limm,n→∞ am,n = a to denote that am,n converges to a as m and n jointly go to infinity. Theoretical Exercise Let am,n = 1 if m = n and am,n = 0 if m = n. Show that limn→∞ (limm→∞ am,n ) = limm→∞ (limn→∞ amn ) = 0 but that limm,n→∞ am,n does not exist. m+n (−1) Theoretical Exercise Let am,n = min(m,n) . Show that limm→∞ am,n does not exist for any n and limn→∞ am,n does not exist for any m, but limm,n→∞ am,n = 0. 
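A brief numerical look at the array in the exercise just above can make the distinction concrete: along the diagonal the entries shrink like $1/n$, while each fixed row or column keeps oscillating, so the joint limit exists even though neither iterated limit does. The ranges printed below are arbitrary choices for the illustration.

```python
import numpy as np

# a_{m,n} = (-1)^{m+n} / min(m,n): joint limit 0, but no iterated limits.
def a(m, n):
    return (-1.0) ** (m + n) / min(m, n)

for k in [10, 100, 1000]:
    print(k, a(k, k), a(k, k + 1))          # entries with m, n >= k are at most 1/k in magnitude

print([a(5, n) for n in range(5, 13)])      # fixed m = 5: the value keeps flipping sign, no limit in n
```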
Theoretical Exercise If both limm,n→∞ amn and limn→∞ (limm→∞ amn ) exist, then they are equal. A sequence a1 , a2 , . . . is said to be nondecreasing if ai ≤ aj for i < j . Similarly a function f on the real line is nondecreasing if f (x) ≤ f (y ) whenever x < y . The sequence is called strictly increasing if ai < aj for i < j and the function is called strictly increasing if f (x) < f (y ) whenever 182 x < y . 1 A strictly increasing or strictly decreasing sequence is said to be strictly monotone, and a nondecreasing or nonincreasing sequence is said to be monotone. The sum of an infinite sequence is defined to be the limit of the partial sums. That is, by definition, ∞ n yk = x means that k=1 yk = x lim n→∞ k=1 Often we want to show that a sequence converges even if we don’t explicitly know the value of the limit. A sequence (xn ) is bounded if there is a number L so that | xn |≤ L for all n. Any sequence that is bounded and monotone converges to a finite number. Example Consider the sum ∞ k −α for a constant α > 1. For each n the nth partial sum can k=1 be bounded by comparison to an integral, based on the fact that for k ≥ 2, the k t h term of the sum is less than the integral of x−α over the interval [k − 1, k ]: n k=1 k −α ≤ 1 + n x−α dx = 1 + 1 1 α 1 − n 1− α ≤1+ = (α − 1) α−1 α−1 The partial sums are also monotone nondecreasing (in fact, strictly increasing). Therefore the sum ∞ −α exists and is finite. k=1 k By definition, (xn ) converges to +∞ as n goes to infinity if for every K > 0 there is an integer nK so that xn ≥ K for every n ≥ nK . Convergence to −∞ is defined in a similar way. A sequence (xn ) is a Cauchy sequence if limm,n→∞ | xm − xn |= 0. It is not hard to show that if xn converges to a finite limit x then (xn ) is a Cauchy sequence. More useful is the converse statement, called the Cauchy criteria for convergence: If (xn ) is a Cauchy sequence then xn converges to a finite limit as n goes to infinity. Theoretical Exercise 1. Show that if limn→∞ xn = x and limn→∞ yn = y then limn→∞ xn yn = xy . 2. Find the limits and prove convergence as n → ∞ for the following sequences: n2 n2 1 (c) zn = n=2 k log k (a) xn = cos(+1) , (b) yn = log n k n2 8.3 Continuity of functions Let f be a function on Rn for some n, and let xo ∈ Rn . The function has a limit y at xo , and such situation is denoted by limx→xo f (x) = y , if the following is true. Given > 0, there exists δ > 0 so that | f (x) − y |≤ whenever 0 < x − xo < δ . Equivalently, f (xn ) → y for any sequence x1 , x2 , . . . from Rn − xo such that xn → xo . The function f is said to be continuous at xo if limx→xo f (x) = f (xo ). The function f is simply said to be continuous if it is continuous at every point in Rn . 1 We avoid simply saying “increasing,” because for some authors it means strictly increasing and for other authors it means nondecreasing. While inelegant, our approach is safer. 183 Let n = 1, so consider a function f on R, and let xo ∈ R. The function has a right limit y at xo , and such situation is denoted by f (xo +) = y , if the following is true. Given > 0, there exists δ > 0 so that | f (x) − y |≤ whenever 0 < x − xo < δ . Equivalently, f (xo +) = y if f (xn ) → y for any sequence x1 , x2 , . . . from (xo , +∞) such that xn → xo . The left limit f (xo −) is defined similarly. If f is monotone nondecreasing, then the left and right limits exist, and f (xo −) ≤ f (xo ) ≤ f (xo +) for all xo . A function f is called right continuous at xo if f (xo ) = f (xo +). 
A function f is simply called right continuous if it is right continuous at all points. 8.4 Derivatives of functions Let f be a function on R and let xo ∈ R. Then f is differentiable at xo if the following limit exists and is finite: lim x→x0 f (x) − f (x0 ) x − x0 The value of the limit is the derivative of f at x0 written as f (x0 ). The function f is differentiable if it is differentiable at all points. We write f for the derivative of f . For an integer n ≥ 0 we write f (n) to denote the result of differentiating f n times. Theorem 8.4.1 (Mean value form of Taylor’s Theorem) Let f be a function on an interval (a, b) such that its nth derivative f (n) exists on (a, b). Then for a < x, x0 < b, f (x) = n−1 k=0 f (n) (y )(x − x0 )n f (k) (x0 ) (x − x0 )k + k! n! for some y between x and x0 . Let g be a function from Rn to Rm . Thus for each vector x ∈ Rn , g (x) is an m vector. The derivative ∂ gi ∂g matrix of g at a point x, ∂ x (x), is the n × m matrix with ij th entry ∂ xj (x). Sometimes for brevity we write y = g (x) and think of y as a variable depending on x, and we write the derivative matrix ∂y as ∂ x (x). ∂y Theorem 8.4.2 (Implicit Function Theorem) If m = n and if ∂ x is continuous in a neighborhood ∂y of x0 and if ∂ x (x0 ) is nonsingular, then the inverse mapping x = g −1 (y ) is defined in a neighborhood of y0 = g (x0 ) and ∂y (x0 ) ∂x ∂x (y0 ) = ∂y 8.5 −1 Integration Riemann Integration Let g be a bounded function on a closed interval [a, b]. Given a = t0 ≤ v1 ≤ t1 ≤ v2 ≤ · · · ≤ tn−1 ≤ vn ≤ tn = b 184 the Riemann sum is n k=1 g (vk )(tk − tk−1 ) If the Riemann sum converges as maxk | tk − tk−1 |→ 0 then g is said to be Riemann integrable b over [a, b], and the value of the integral, denoted a g (x)dx, is the limit of the Riemann sums. For example, if g is continuous or monotone over [a, b] then g is Reimann integrable over [a, b]. Next, suppose g is defined over the whole real line. If for every interval [a, b], g is bounded over [a, b] and Riemann integrable over [a, b], then the Riemann integral of g over R is defined by ∞ g (x)dx = −∞ lim b a,b→∞ −a g (x)dx provided that the indicated limit exist as a, b jointly converge to +∞. The values +∞ or −∞ are possible. We will have occasion to use Riemann integrals in two dimensions. Let g be a bounded function on a closed rectangle [a1 , a2 ] × [b1 , b2 ]. Given i i i ai = ti ≤ v1 ≤ ti ≤ v2 ≤ · · · ≤ ti i −1 ≤ vni ≤ ti i = bi 0 1 n n for i = 1, 2, the Riemann sum is n1 n2 j =1 k=1 12 g (vj , vk )(t1 − t1−1 )(t2 − t2 −1 ) j j k k If the Riemann sum converges as maxi maxk | ti − ti −1 |→ 0 then g is said to be Riemann integrable k k over [a1 , a2 ] × [b1 , b2 ], and the value of the integral, denoted the Riemann sums. b1 b2 a1 a2 g (x1 , x2 )dx1 dx2 , is the limit of Lebesgue Integration Lebesgue integration with respect to a probability measure is defined in the section defining the expectation of a random variable X and is written as E [X ] = X (ω )P (dω ) Ω The idea is to first define the expectation for simple random variables, then for nonnegative random variables, and then for general random variables by E [X ] = E [X+ ] − E [X− ]. The same approach can be used to define the Lebesgue integral ∞ g (ω )dω −∞ for Borel measurable functions g on R. Such an integral is well defined if either ∞ or −∞ g− (ω )dω < +∞. Riemann-Stieltjes Integration 185 ∞ −∞ g+ (ω )dω < +∞ Let g be a bounded function on a closed interval [a, b] and let F be a nondecreasing function on [a, b]. 
The Riemann-Stieltjes integral b g (x)dF (x) (Riemann-Stieltjes) a is defined the same way as the Riemann integral, except that the Riemann sums are changed to n k=1 g (vk )(F (tk ) − F (tk−1 )) Extension of the integral over the whole real line is done as it is for Riemann integration. An ∞ alternative definition of −∞ g (x)dF (x), preferred in the context of these notes, is given next. Lebesgue-Stieltjes Integration ˜ Let F be a CDF. As seen in Section 1.3, there is a corresponding probability measure P on the Borel subsets of R. Given a Borel measurable function g on R, the Lebesgue-Stieltjes integral of g ˜ with respect to F is defined to be the Lebesgue integral of g with respect to P : ∞ (Lebesgue-Stieltjes) ∞ g (x)dF (x) = −∞ ˜ g (x)P (dx) (Lebesgue) −∞ ∞ The same notation −∞ g (x)dF (x) is used for both Riemann-Stieltjes (RS) and Lebesgue-Stieltjes (LS) integration. If g is continuous and the LS integral is finite, then the integrals agree. In ∞ particular, −∞ xdF (x) is identical as either an LS or RS integral. However, for equivalence of the integrals g (X (ω ))P (dω ) and ∞ g (x)dF (x), −∞ Ω even for continuous functions g , it is essential that the integral on the right be understood as an LS integral. Hence, in these notes, only the LS interpretation is used, and RS integration is not needed. If F has a corresponding pdf f , then ∞ (Lebesgue-Stieltjes) g (x)dF (x) = −∞ ∞ g (x)f (x)dx −∞ for any Borel measurable function g . 8.6 Matrices An m × n matrix over the reals R has the form a11 a12 · · · a21 a22 · · · A=. . . . . . am1 am2 · · · 186 a1n a2n . . . amn (Lebesgue) where aij ∈ R for all i, j . This matrix has m rows and n columns. A matrix over the complex numbers C has the same form, with aij ∈ C for all i, j . The transpose of an m × n matrix A = (aij ) is the n × m matrix AT = (aj i ). For example 103 211 12 = 0 1 31 T The matrix A is symmetric if A = AT . Symmetry requires that the matrix A be square: m = n. The diagonal of a matrix is comprised by the entries of the form aii . A square matrix A is called diagonal if the entries off of the diagonal are zero. The n × n identity matrix is the n × n diagonal matrix with ones on the diagonal. We write I to denote an identity matrix of some dimension n. If A is an m × k matrix and B is a k × n matrix, then the product AB is the m × n matrix with k th element ij l=1 ail blj . A vector x is an m × 1 matrix, where m is the dimension of the vector. Thus, vectors are written in column form: x= x1 x2 . . . xm The set of all dimension m vectors over R is the m dimensional Euclidean space Rm . The inner product of two vectors x and y of the same dimension m is the number xT y , equal to m xi yi . i=1 The vectors x and y are orthogonal if xT y = 0. The Euclidean length or norm of a vector x is given 1 by x = (xT x) 2 . A set of vectors ϕ1 , . . . , ϕn is orthonormal if the vectors are orthogonal to each other and ϕi = 1 for all i. A set of vectors v1 , . . . , vn in Rm is said to span Rm if any vector in Rm can be expressed as a linear combination α1 v1 + α2 v2 + · · · + αn vn for some α1 , . . . , αn ∈ R. An orthonormal set of vectors ϕ1 , . . . , ϕn in Rm spans Rm if and only if n = m. An orthonormal basis for Rm is an orthonormal set of m vectors in Rm . An orthonormal basis ϕ1 , . . . , ϕm corresponds to a coordinate system for Rm . Given a vector v in Rm , the coordinates of v relative to ϕ1 , . . . , ϕm are given by αi = ϕT v . i The coordinates α1 , . . . , αm are the unique numbers such that v = α1 ϕ1 + · · · + αm ϕm . 
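The coordinate formula $\alpha_i = \varphi_i^T v$ is easy to verify numerically. The sketch below builds an orthonormal basis from the QR factorization of a random matrix (an arbitrary way to generate one) and checks the reconstruction $v = \alpha_1\varphi_1 + \cdots + \alpha_m\varphi_m$.

```python
import numpy as np

# Coordinates of a vector relative to an orthonormal basis of R^m.
rng = np.random.default_rng(0)
m = 4
U, _ = np.linalg.qr(rng.standard_normal((m, m)))   # columns of U form an orthonormal basis

v = rng.standard_normal(m)
alpha = U.T @ v                    # alpha_i = phi_i^T v
assert np.allclose(U @ alpha, v)   # v = alpha_1 phi_1 + ... + alpha_m phi_m
assert np.allclose(U.T @ U, np.eye(m))
```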
A square matrix U is called orthonormal if any of the following three equivalent conditions is satisfied: 1. U T U = I 2. U U T = I 3. the columns of U form an orthonormal basis. Given an m × m orthonormal matrix U and a vector v ∈ Rm , the coordinates of v relative to U are given by the vector U T v . Given a square matrix A, a vector ϕ is an eigenvector of A and λ is an eigenvalue of A if the eigen relation Aϕ = λϕ is satisfied. A permutation π of the numbers 1, . . . , m is a one-to-one mapping of {1, 2, . . . , m} onto itself. That is (π (1), . . . , π (m)) is a reordering of (1, 2, . . . , m). Any permutation is either even or odd. 187 A permutation is even if it can be obtained by an even number of transpositions of two elements. Otherwise a permutation is odd. We write 1 if π is even −1 if π is odd (−1)π = The determinant of a square matrix A, written det(A), is defined by m det(A) = (−1) π π aiπ(i) i=1 The absolute value of the determinant of a matrix A is denoted by | A |. Thus | A |=| det(A) |. Some important properties of determinants are the following. Let A and B be m × m matrices. 1. If B is obtained from A by multiplication of a row or column of A by a scaler constant c, then det(B ) = c det(A). 2. If U is a subset of Rm and V is the image of U under the linear transformation determined by A: V = {Ax : x ∈ U } then (the volume of U ) = | A | × (the volume of V ) 3. det(AB ) = det(A) det(B ) 4. det(A) = det(AT ) 5. | U |= 1 if U is orthonormal. 6. The columns of A span Rn if and only if det(A) = 0. 7. The equation p(λ) = det(λI − A) defines a polynomial p of degree m called the characteristic polynomial of A. 8. The zeros λ1 , λ2 , . . . , λm of the characteristic polynomial of A, repeated according to multiplicity, are the eigenvalues of A, and det(A) = n λi . The eigenvalues can be complex i=1 valued with nonzero imaginary parts. If K is a symmetric m × m matrix, then the eigenvalues λ1 , λ2 , . . . , λm , are real-valued (not necessarily distinct) and there exists an orthonormal basis consisting of the corresponding eigenvectors ϕ1 , ϕ2 , . . . , ϕm . Let U be the orthonormal matrix with columns ϕ1 , . . . , ϕm and let Λ be the diagonal matrix with diagonal entries given by the eigenvalues 0 λ1 ∼ λ2 Λ= .. . 0 ∼ λm 188 Then the relations among the eigenvalues and eigenvectors may be written as K U = U Λ. Therefore K = U ΛU T and Λ = U T K U . A symmetric m × m matrix A is positive semidefinite if αT Aα ≥ 0 for all m-dimensional vectors α. A symmetric matrix is positive semidefinite if and only if its eigenvalues are nonnegative. The remainder of this section deals with matrices over C. The Hermitian transpose of a matrix A is the matrix A∗ , obtained from AT by taking the complex conjugate of each element of AT . For example, 1 2 ∗ 1 0 3 + 2j 0 −j = 2j 1 3 − 2j 1 The set of all dimension m vectors over C is the m-complex dimensional space Cm . The inner product of two vectors x and y of the same dimension m is the complex number y ∗ x, equal to m ∗ ∗ i=1 xi yi . The vectors x and y are orthogonal if x y = 0. The length or norm of a vector x is 1 given by x = (x∗ x) 2 . A set of vectors ϕ1 , . . . , ϕn is orthonormal if the vectors are orthogonal to each other and ϕi = 1 for all i. A set of vectors v1 , . . . , vn in Cm is said to span Cm if any vector in Cm can be expressed as a linear combination α1 v1 + α2 v2 + · · · + αn vn for some α1 , . . . , αn ∈ C. An orthonormal set of vectors ϕ1 , . . . , ϕn in Cm spans Cm if and only if n = m. 
An orthonormal basis for Cm is an orthonormal set of m vectors in Cm . An orthonormal basis ϕ1 , . . . , ϕm corresponds to a coordinate system for Cm . Given a vector v in Rm , the coordinates of v relative to ϕ1 , . . . , ϕm are given by αi = ϕ∗ v . i The coordinates α1 , . . . , αm are the unique numbers such that v = α1 ϕ1 + · · · + αm ϕm . A square matrix U over C is called unitary (rather than orthonormal) if any of the following three equivalent conditions is satisfied: 1. U ∗ U = I 2. U U ∗ = I 3. the columns of U form an orthonormal basis. Given an m × m unitary matrix U and a vector v ∈ Cm , the coordinates of v relative to U are given by the vector U ∗ v . Eigenvectors, eigenvalues, and determinants of square matrices over C are defined just as they are for matrices over R. The absolute value of the determinant of a matrix A is denoted by | A |. Thus | A |=| det(A) |. Some important properties of determinants of matrices over C are the following. Let A and B by m × m matrices. 1. If B is obtained from A by multiplication of a row or column of A by a constant c ∈ C, then det(B ) = c det(A). 2. If U is a subset of Cm and V is the image of U under the linear transformation determined by A: V = {Ax : x ∈ U } then (the volume of U ) = | A |2 × (the volume of V ) 189 3. det(AB ) = det(A) det(B ) 4. det∗ (A) = det(A∗ ) 5. | U |= 1 if U is unitary. 6. The columns of A span Cn if and only if det(A) = 0. 7. The equation p(λ) = det(λI − A) defines a polynomial p of degree m called the characteristic polynomial of A. 8. The zeros λ1 , λ2 , . . . , λm of the characteristic polynomial of A, repeated according to multiplicity, are the eigenvalues of A, and det(A) = n λi . The eigenvalues can be complex i=1 valued with nonzero imaginary parts. A matrix K is called Hermitian symmetric if K = K ∗ . If K is a Hermitian symmetric m × m matrix, then the eigenvalues λ1 , λ2 , . . . , λm , are real-valued (not necessarily distinct) and there exists an orthonormal basis consisting of the corresponding eigenvectors ϕ1 , ϕ2 , . . . , ϕm . Let U be the unitary matrix with columns ϕ1 , . . . , ϕm and let Λ be the diagonal matrix with diagonal entries given by the eigenvalues 0 λ1 ∼ λ2 Λ= .. . 0 ∼ λm Then the relations among the eigenvalues and eigenvectors may be written as K U = U Λ. Therefore K = U ΛU ∗ and Λ = U ∗ K U . A Hermitian symmetric m × m matrix A is positive semidefinite if α∗ Aα ≥ 0 for all α ∈ Cm . A Hermitian symmetric matrix is positive semidefinite if and only if its eigenvalues are nonnegative. Many questions about matrices over C can be addressed using matrices over R. If Z is an m × m matrix over C, then Z can be expressed as Z = A + B j , for some m × m matrices A and B over R. Similarly, if x is a vector in Cm then it can be written as x = u + j v for vectors u, v ∈ Rm . Then Z x = (Au − B v ) + j (B u + Av ). There is a one-to-one and onto mapping from Cm to R2m defined by u + j v → u . Multiplication of x by the matrix Z is thus equivalent to multiplication of u by v v A −B Z= . We will show that BA |Z |2 = det(Z ) (8.1) so that Property 2 for determinants of matrices over C follows from Property 2 for determinants of matrices over R. It remains to prove (8.1). Suppose that A−1 exists and examine the two 2m × 2m matrices A −B BA A 0 B A + B A−1 B and . (8.2) The second matrix is obtained from the first by right multiplying each sub-block in the right column of the first matrix by A−1 B , and adding the result to the left column. 
Chapter 9

Solutions to even numbered problems

1.2. Independent vs. mutually exclusive
(a) If E is an event independent of itself, then P[E] = P[E intersect E] = P[E]P[E]. This can happen if P[E] = 0. If P[E] is not zero, then cancelling a factor of P[E] on each side yields P[E] = 1. In summary, either P[E] = 0 or P[E] = 1.
(b) In general, we have P[A union B] = P[A] + P[B] - P[AB]. If the events A and B are independent, then P[A union B] = P[A] + P[B] - P[A]P[B] = 0.3 + 0.4 - (0.3)(0.4) = 0.58. On the other hand, if the events A and B are mutually exclusive, then P[AB] = 0 and therefore P[A union B] = 0.3 + 0.4 = 0.7.
(c) If P[A] = 0.6 and P[B] = 0.8, then the two events could be independent. However, if A and B were mutually exclusive, then P[A] + P[B] = P[A union B] <= 1, so it would not be possible for A and B to be mutually exclusive if P[A] = 0.6 and P[B] = 0.8.

1.4. Frantic search
Let D, T, B, and O denote the events that the glasses are in the drawer, on the table, in the briefcase, or in the office, respectively. These four events partition the probability space.
(a) Let E denote the event that the glasses were not found in the first drawer search. Then

    P[T | E] = P[TE]/P[E] = P[E | T]P[T] / ( P[E | D]P[D] + P[E | D^c]P[D^c] )
             = (1)(0.06) / ( (0.1)(0.9) + (1)(0.1) ) = 0.06/0.19, approximately 0.316.

(b) Let F denote the event that the glasses were not found after the first drawer search and the first table search. Then

    P[B | F] = P[F | B]P[B] / ( P[F | D]P[D] + P[F | T]P[T] + P[F | B]P[B] + P[F | O]P[O] )
             = (1)(0.03) / ( (0.1)(0.9) + (0.1)(0.06) + (1)(0.03) + (1)(0.01) ), approximately 0.22.

(c) Let G denote the event that the glasses were not found after the two drawer searches, two table searches, and one briefcase search. Then

    P[O | G] = P[G | O]P[O] / ( P[G | D]P[D] + P[G | T]P[T] + P[G | B]P[B] + P[G | O]P[O] )
             = (1)(0.01) / ( (0.1)^2(0.9) + (0.1)^2(0.06) + (0.1)(0.03) + (1)(0.01) ), approximately 0.442.

1.6. Conditional probabilities - basic computations of iterative decoding
(a) Here is one of several approaches to this problem. Note that the n pairs (B_1, Y_1), ..., (B_n, Y_n) are mutually independent, and define

    lambda_i(b_i) := P[B_i = b_i | Y_i = y_i] = q_i(y_i | b_i) / ( q_i(y_i | 0) + q_i(y_i | 1) ).

Therefore

    P[B_1 = b_1, ..., B_n = b_n | Y_1 = y_1, ..., Y_n = y_n] = prod_{i=1}^n lambda_i(b_i),

and hence

    P[B = 1 | Y_1 = y_1, ..., Y_n = y_n] = sum over (b_1, ..., b_n) with b_1 xor ... xor b_n = 1 of prod_{i=1}^n lambda_i(b_i).

(b) Using the definitions,

    P[B = 1 | Z_1 = z_1, ..., Z_k = z_k] = p(1, z_1, ..., z_k) / ( p(0, z_1, ..., z_k) + p(1, z_1, ..., z_k) )
        = (1/2) prod_{j=1}^k r_j(1 | z_j) / [ (1/2) prod_{j=1}^k r_j(0 | z_j) + (1/2) prod_{j=1}^k r_j(1 | z_j) ]
        = eta / (1 + eta),   where eta = prod_{j=1}^k r_j(1 | z_j) / r_j(0 | z_j).
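The conditional probabilities in 1.4 above can be reproduced with a few lines of Python. The priors 0.9, 0.06, 0.03, 0.01 and the per-search miss probability 0.1 are the values appearing in that solution; the code itself is only an illustrative aside, not part of the original notes.

```python
# Posterior probabilities for the "frantic search" solution 1.4.
prior = {'drawer': 0.9, 'table': 0.06, 'briefcase': 0.03, 'office': 0.01}
miss = 0.1  # probability that a search of the correct location fails to find the glasses

def posterior(searches, target):
    """P[glasses in `target` | all searches so far failed].
    `searches` maps each location to the number of failed searches of it."""
    like = {loc: miss ** searches.get(loc, 0) for loc in prior}
    total = sum(like[loc] * prior[loc] for loc in prior)
    return like[target] * prior[target] / total

print(posterior({'drawer': 1}, 'table'))                               # ~0.316
print(posterior({'drawer': 1, 'table': 1}, 'briefcase'))               # ~0.221
print(posterior({'drawer': 2, 'table': 2, 'briefcase': 1}, 'office'))  # ~0.442
```

1.8.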
Blue corners (a) There are 24 ways to color 5 corners so that at least one face has four blue corners (there are 6 choices of the face, and for each face there are four choices for which additional corner to color blue.) Since there are 8 = 56 ways to select 5 out of 8 corners, P [B |exactly 5 corners colored blue] = 5 24/56 = 3/7. (b) By counting the number of ways that B can happen for different numbers of blue corners we find P [B ] = 6p4 (1 − p)4 + 24p5 (1 − p)3 + 24p6 (1 − p)2 + 8p7 (1 − p) + p8 . 1.10. Recognizing cumulative distribution√ functions √ √ √ (a) Valid (draw a sketch) P [X 2 ≤ 5] = P [X ≤ − 5] + P [X ≥ 5] = F1 (− 5) + 1 − F1 ( 5) = (b) Invalid. F (0) > 1. Another reason is that F is not nondecreasing (c) Invalid, not right continuous at 0. e−5 2. 1.12. CDF and characteristic function of a mixed typ e random variable (a) Range of X is [0, 0.5]. For 0 ≤ c ≤ 0.5, P [X ≤ c] = P [U ≤ c + 0.5] = c + 0.5 Thus, 0 c<0 c + 0.5 0 ≤ c ≤ 0.5 FX (c) = 1 c ≥ 0.5 (b) ΦX (u) = 0.5 + 0.5 j ux dx 0e = 0.5 + ej u/2 −1 ju 1.14. Conditional exp ectation for uniform density over a triangular region (a) The triangle has base and height one, so the area of the triangle is 0.5. Thus the joint pdf is 2 inside the triangle. (b) x/2 2dy = x if 0 < x < 1 ∞ 0 x/2 fX (x) = fX Y (x, y )dy = 2dy = 2 − x if 1 < x < 2 x− 1 −∞ 0 else 194 (c) In view of part (c), the conditional density fY |X (y |x) is not well defined unless 0 < x < 2. In general we have 2 if 0 < x ≤ 1 and y ∈ [0, x ] x 2 0 if 0 < x ≤ 1 and y ∈ [0, x ] 2 2 if 1 < x < 2 and y ∈ [x − 1, x ] fY |X (y |x) = 2−x 2 0 if 1 < x < 2 and y ∈ [x − 1, x ] 2 not defined if x ≤ 0 or x ≥ 2 Thus, for 0 < x ≤ 1, the conditional distribution of Y is uniform over the interval [0, x ]. For 2 1 < x ≤ 2, the conditional distribution of Y is uniform over the interval [x − 1, x ]. 2 (d) Finding the midpoints of the intervals that Y is conditionally uniformly distributed over, or integrating x against the conditional density found in part (c), yields: x if 0 < x ≤ 1 4 3 x− 2 if 1 < x < 2 E [Y |X = x] = 4 not defined if x ≤ 0 or x ≥ 2 1.16. Density of a function of a random variable 3 (a) P [X ≥ 0.4|X ≤ 0.8] = P [0.4 ≤ X ≤ 0.8|X ≤ 0.8] = (0.82 − 0.42 )/0.82 = 4 . (b) The range of Y is the interval [0, +∞). For c ≥ 0, 1 P {− log(X ) ≤ c} = P {log(X ) ≥ −c} = P {X ≥ e−c } = e−c 2xdx = 1 − e−2c so fY (c) = 2 exp(−2c) c ≥ 0 That is, Y is an exponential random variable with parameter 2. 0 else 1.18. Functions of indep endent exp onential random variables (a) Z takes values in the positive real line. So let z ≥ 0. P [Z ≤ z ] = P [min{X1 , X2 } ≤ z ] = P [X1 ≤ z or X2 ≤ z ] = 1 − P [X1 > z and X2 > z ] = 1 − P [X1 > z ]P [X2 > z ] = 1 − e−λ1 z e−λ2 z = 1 − e−(λ1 +λ2 )z Differentiating yields that (λ1 + λ2 )e−(λ1 +λ2 )z , z ≥ 0 0, z<0 fZ (z ) = That is, Z has the exponential distribution with parameter λ1 + λ2 . (b) R takes values in the positive real line and by independence the joint pdf of X1 and X2 is the product of their individual densities. So for r ≥ 0 , P [R ≤ r] = P [ X1 ≤ r] = P [X1 ≤ rX2 ] X2 ∞ = 0 = 0 ∞ r x2 λ1 e−λ1 x1 λ2 e−λ2 x2 dx1 dx2 0 (1 − e−rλ1 x2 )λ2 e−λ2 x2 dx2 = 1 − Differentiating yields that fR (r) = λ1 λ2 (λ1 r +λ2 )2 0, 195 r≥0 r<0 λ2 . r λ1 + λ2 1.20. Gaussians and the Q function (a) Cov(3X + 2Y , X + 5Y + 10) = 3Cov(X, X ) + 10Cov(Y , Y ) = 3Var(X ) + 10Var(Y ) = 13. +4 (b) X + 4Y is N (0, 17), so P {X + 4Y ≥ 2} = P { X√17Y ≥ √2 } = Q( √2 ). 
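The counting in 1.8 above is easy to confirm by brute force, and part (b) can be checked by simulation. The sketch below is an aside added here, and it assumes (as the counting in the solution indicates) that B is the event that some face of the cube has all four of its corners colored blue.

```python
import itertools, random

# Corners of the unit cube, identified with their (x, y, z) coordinates in {0, 1}.
corners = list(itertools.product((0, 1), repeat=3))
# Each face fixes one coordinate; collect its four corners.
faces = [frozenset(c for c in corners if c[axis] == val)
         for axis in range(3) for val in (0, 1)]

def some_face_all_blue(blue):
    blue = set(blue)
    return any(f <= blue for f in faces)

# Exact count: 5-corner colorings that make some face entirely blue.
count = sum(some_face_all_blue(s) for s in itertools.combinations(corners, 5))
print(count, "/ 56 =", count / 56)          # 24/56 = 3/7, matching part (a)

# Monte Carlo check of the polynomial in part (b), here with p = 0.5.
p, trials = 0.5, 100_000
rng = random.Random(1)
hits = sum(some_face_all_blue([c for c in corners if rng.random() < p])
           for _ in range(trials))
exact = 6*p**4*(1-p)**4 + 24*p**5*(1-p)**3 + 24*p**6*(1-p)**2 + 8*p**7*(1-p) + p**8
print(hits / trials, "vs", exact)
```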
17 17 − (c) X − Y is N (0, 2), so P {(X − Y )2 > 9} = P {(X − Y ) ≥ 3 orX − Y ≤ −3} = 2P { X√2Y ≥ 3 2Q( √2 ). 1.22. Working with a joint density (a) The density must integrate to one, so c = 4/19. (b) 2 4 19 1 (1 + xy )dy = fX (x) = 0 fY (y ) = 4 19 3 2 (1 + xy )dx = 0 4 19 [1 + 3x 2] + 5y 2] = 2≤x≤3 else 4 19 [1 3 √} 2 1≤y≤2 else Therefore fX |Y (x|y ) is well defined only if 1 ≤ y ≤ 2. For 1 ≤ y ≤ 2: fX |Y (x|y ) = 1+xy 1+ 5 y 2 0 2≤x≤3 for other x 1.24. Density of a difference (a) Method 1 The joint density is the product of the marginals, and for any c ≥ 0, the probability P {|X − Y | ≤ c} is the integral of the joint density over the region of the positive quadrant such that {|x − y | ≤ c}, which by symmetry is one minus twice the integral of the density over the region ∞ {y ≥ 0 and y ≤ y +c}. Thus, P {X −Y | ≤ c} = 1−2 0 exp(−λ(y +c))λ exp(−λy )dy = 1−exp(−λc). λ exp(−λc) c ≥ 0 Thus, fZ (c) = That is, Z has the exponential distribution with parameter 0 else λ. (Method 2 The problem can be solved without calculation by the memoryless property of the exponential distribution, as follows. Suppose X and Y are lifetimes of identical lightbulbs which are turned on at the same time. One of them will burn out first. At that time, the other lightbulb will be the same as a new light bulb, and |X − Y ] is equal to how much longer that lightbulb will last. 1.26. Some characteristic functions (a) Differentiation is straight-forward, yielding j E X = Φ (0) = 2j or E X = 2, and j 2 E [X 2 ] = Φ (0) = −14, so Var(x) = 14 − 22 = 10. In fact, this is the characteristic function of a N (10, 22 ) random variable. (b) Evaluation of the derivatives at zero requires l’Hospital’s rule, and is a little tedious. A simpler way is to use the Taylor series expansion exp(j u) = 1 + (j u) + (j u)2 /2! + (j u)3 /3!... The result is E X = 0.5 and Var(X ) = 1/12. In fact, this is the characteristic function of a U (0, 1) random variable. (c) Differentiation is straight-forward, yielding E X = Var(X ) = λ. In fact, this is the characteristic function of a P oi(λ) random variable. 1.28. A transformation of jointly continuous random variables (a) We are using the mapping, from the square region {(u, v ) : 0 ≤ u, v ≤ 1} in the u − v plane to 196 the triangular region with corners (0,0), (3,0), and (3,1) in the x − y plane, given by x = 3u y = uv . The mapping is one-to-one, meaning that for any (x, y ) in the range we can recover (u, v ). Indeed, the inverse mapping is given by x 3 3y . x u= v= The Jacobian determinant of the transformation is J (u, v ) = det ∂x ∂u ∂y ∂u ∂x ∂v ∂y ∂v = det 30 = 3u = 0, for all u, v ∈ (0, 1)2 . vu Therefore the required pdf is fX,Y (x, y ) = fU,V (u, v ) 9u2 v 2 9y 2 = = 3uv 2 = |J (u, v )| |3u| x within the triangle with corners (0,0), (3,0), and (3,1), and fX,Y (x, y ) = 0 elsewhere. Integrating out y from the joint pdf yields fX (x) = x 3 0 0 9y 2 x dy x2 9 = if 0 ≤ x ≤ 3 else Therefore the conditional density fY |X (y |x) is well defined only if 0 ≤ x ≤ 3. For 0 ≤ x ≤ 3, fX,Y (x, y ) fY |X (y |x) = = fX (x) 1.30. Jointly distributed variables ∞ 1 V2 (a) E [ 1+U ] = E [V 2 ]E [ 1+U ] = 0 v 2 λe−λv dv (b) P {U ≤ V } = 1∞ −λv dv du 0 u λe = 81y 2 x3 0 11 0 1+u du 1 −λu du 0e if 0 ≤ y ≤ else 2 = ( λ2 )(ln(2)) = x 3 2 ln 2 . λ2 = (1 − e−λ )/λ. (c) The support of both fU V and fY Z is the strip [0, 1] × [0, ∞), and the mapping (u, v ) → (y , z ) 1 defined by y = u2 and z = uv is one-to-one. Indeed, the inverse mapping is given by u = y 2 and 1 ∂ (x,y v = z y − 2 . 
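The memoryless-property argument in 1.24 (Method 2) above suggests a quick simulation check that |X - Y| is again exponential with the same parameter. The sketch below is an added illustration, not part of the solution; the value of lambda is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 1.5, 1_000_000
X = rng.exponential(1/lam, n)   # numpy parameterizes by the scale 1/lambda
Y = rng.exponential(1/lam, n)
Z = np.abs(X - Y)

# If Z ~ Exp(lam), its mean is 1/lam and P[Z > c] = exp(-lam*c).
print(Z.mean(), "vs", 1/lam)
for c in (0.5, 1.0, 2.0):
    print((Z > c).mean(), "vs", np.exp(-lam*c))
```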
The absolute value of the Jacobian determinant of the forward mapping is | ∂ (u,v) | = ) 2u 0 vu = 2u2 = 2y . Thus, 1 fY ,Z (y , z ) = λ −λz y − 2 2y e 0 197 (y , z ) ∈ [0, 1] × [0, ∞) otherwise. 2.2. The limit of the pro duct is the pro duct of the limits (a) There exists n1 so large that |yn − y | ≤ 1 for n ≥ n1 . Thus, |yn | ≤ L for all n, where L = max{|y1 |, |y2 |, . . . , |yn1 −1 |, |y | + 1}.. (b) Given > 0, there exists n so large that |xn − x| ≤ 2L and |yn − y | ≤ 2(|x|+1) . Thus, for n ≥ n , |xn yn − xy | ≤ |(xn − x)yn | + |x(yn − y )| ≤ |xn − x|L + |x||yn − y | ≤ 2 + 2 ≤. So xn yn → xy as n → ∞. 2.4. Convergence of sequences of random variables (a) The distribution of Xn is the same for all n, so the sequence converges in distribution to any random variable with the distribution of X1 . To check for mean square convergence, use the fact cos(a) cos(b) = (cos(a + b) + cos(a − b))/2 to calculate that E [Xn Xm ] = 1 if n = m and E [Xn Xm ] = 0 2 if n = m. Therefore, limn,m→∞ E [Xn Xm ] does not exist, so the sequence (Xn ) does not satisfy the Cauchy criteria for m.s. convergence, so it doesn’t converge in the m.s. sense. Since it is a bounded sequence, it therefore does not converge in the p. sense either. (Because for bounded sequences, convergence p. implies convergence m.s.) Therefore the sequence doesn’t converge in the a.s. sense either. In summary, the sequence converges in distribution but not in the other three senses. (Another approach is to note that the distribution of Xn − X2n is the same for all n, so that the sequence doesn’t satisfy the Cauchy criteria for convergence in probability.) (b) If ω is such that 0 < Θ(ω ) < 2π , then |1 − Θ(ω) | < 1 so that limn→∞ Yn (ω ) = 0 for such ω . π Since P [0 < Θ(ω ) < 2π ] = 1, it follows that (Yn ) converges to zero in the a.s. sense, and hence also in the p. and d. senses. Since the sequence is bounded, it also converges to zero in the m.s. sense. 2.6. Convergence of a sequence of discrete random variables 1 (a) The CDF of Xn is shown in Figure 9.1. Since Fn (x) = FX x − n it follows that limn→∞ Fn (x) = 1F X n 0 0 1 2 34 56 Figure 9.1: Fx FX (x−) all x. So limn→∞ Fn (x) = FX (x) unless FX (x) = FX (x−) i.e., unless x = 1, 2, 3, 4, 5, or 6. (b) FX is continuous at x unless x ∈ {1, 2, 3, 4, 5, 6}. (c) Yes, limn→∞ Xn = X d. by definition. 2.8. Convergence of a minimum (a) The sequence (Xn ) converges to zero in all four senses. Here is one proof, and there are others. 198 For any with 0 < < 1, P [|Xn − 0| ≥ ] = P [U1 ≥ , . . . , Un ≥ ] = (1 − )n , which converges to zero as n → ∞. Thus, by definition, Xn → 0 p. Thus, the sequence converges to zero in d. sense and, since it is bounded, in the m.s. sense. For each ω , as a function of n, the sequence of numbers X1 (ω ), X2 (ω ), . . . is a nonincreasing sequence of numbers bounded below by zero. Thus, the sequence Xn converges in the a.s. sense to some limit random variable. If a limit of random variables exists in different senses, the limit random variable has to be the same, so the sequence (Xn ) converges a.s. to zero. (b) For n fixed, the variable Yn is distributed over the interval [0, nθ ], so let c be a number in that interval. Then P [Yn ≤ c] = P [Xn ≤ cn−θ ] = 1 − P [Xn > cn−θ ] = 1 − (1 − cn−θ )n . Thus, if θ = 1, c limn→∞ P [Yn ≤ c] = 1 − limn→∞ (1 − n )n = 1 − exp(−c) for any c ≥ 0. Therefore, if θ = 1, the sequence (Yn ) converges in distribution, and the limit distribution is the exponential distribution with parameter one. 2.10. 
Limits of functions of random variables (a) Yes. Since g is a continuous function, if a sequence of numbers an converges to a limit a, then g (an ) converges to g (a). Therefore, for any ω such that limn→∞ Xn (ω ) = X (ω ), it holds that limn→∞ g (Xn (ω )) = g (X (ω )). If Xn → X a.s., then the set of all such ω has probability one, so g (Xn ) → g (X ) a.s. (b) Yes. A direct proof is to first note that |g (b) − g (a)| ≤ |b − a| for any numbers a and b. So, if Xn → X m.s., then E [|g (Xn ) − g (X )|2 ] ≤ E [|X − Xn |2 ] → 0 as n → ∞. Therefore g (Xn ) → g (X ) m.s. A slightly more general proof would be to use the continuity of g (implying uniform continuity on bounded intervals) to show that g (Xn ) → g (X ) p., and then, since g is bounded, use the fact that convergence in probability for a bounded sequence implies convergence in the m.s. sense.) (c) No. For a counter example, let Xn = (−1)n /n. Then Xn → 0 deterministically, and hence in the a.s. sense. But h(Xn ) = (−1)n , which converges with probability zero, not with probability one. (d) No. For a counter example, let Xn = (−1)n /n. Then Xn → 0 deterministically, and hence in the m.s. sense. But h(Xn ) = (−1)n does not converge in the m.s. sense. (For a proof, note that E [h(Xm )h(Xn )] = (−1)m+n , which does not converge as m, n → ∞. Thus, h(Xn ) does not satisfy the necessary Cauchy criteria for m.s. convergence.) . 2.12. Sums of i.i.d. random variables, I I √ (a) ΦX1 (u) = 1 ej u + 1 e−j u = cos(u), so ΦSn (u) = ΦX1 (u)n = (cos(u))n , and ΦVn (u) = ΦSn (u/ n) = 2 √n2 cos(u/ n) . (b) if u is an even multiple of π 1 does not exist if u is an odd multiple of π lim ΦSn (u) = n→∞ 0 if u is not a multiple of π . lim ΦVn (u) = n→∞ lim n→∞ 1 1− 2 u √ n 2 +o u2 n n = e− u2 2 . (c) Sn does not converge in distribution, because, for example, limn→∞ ΦSn (π ) = limn→∞ (−1)n does not exist. So Sn does not converge in the m.s., a.s. or p. sense either. The limit of ΦVn is the characteristic function of the N (0, 1) distribution, so that (Vn ) converges in distribution and 199 the limit distribution is N (0, 1). It will next be proved that Vn does not converge in probability. The intuitive idea is that if m is much larger than n, then most of the random variables in the sum defining Vm are independent of the variables defining Vn . Hence, there is no reason for Vm to be close to Vn with high probability. The proof below looks at the case m = 2n. Note that X 1 + · · · + X 2n X 1 + · · · + X n √ √ − n 2n √ 2 − 2 X1 + · · · + Xn 1 Xn+1 + · · · + X2n √ √ +√ 2 n n 2 V2n − V n = = The two terms within the two pairs of braces are independent, and by the central limit theorem, each converges in distribution to the N (0, 1) distribution. Thus limn→∞ d. V2n − Vn = W, where √ 2 2 √ 2− 2 1 W is a normal random variable with mean 0 and Var(W ) = + √2 = 2 − 2. Thus, 2 limn→∞ P (|V2n − Vn | > ) = 0 so by the Cauchy criteria for convergence in probability, Vn does not converge in probability. Hence Vn does not converge in the a.s. sense or m.s. sense either. 2.14. Limit b ehavior of a sto chastic dynamical system Due to the persistent noise, just as for the example following Theorem 2.1.5 in the notes, the sequence does not converge to an ordinary random variables in the a.s., p., or m.s. senses. To gain some insight, imagine (or simulate on a computer) a typical sample path of the process. 
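Following that suggestion, here is a minimal simulation sketch. The recursion X_{k+1} = X_k^2 + W_k with i.i.d. Gaussian noise is inferred from the solution text (squaring is what produces 9, 81, 6561, ...); the unit noise variance used below is an arbitrary choice, so treat the exact model as an assumption.

```python
import numpy as np

# Sample paths of X_{k+1} = X_k^2 + W_k, W_k i.i.d. N(0, 1), started at zero.
# Each path hovers near zero until some X_k exceeds roughly 3, after which
# it blows up essentially immediately, as described in the discussion below.
rng = np.random.default_rng(4)
for path in range(5):
    x, n = 0.0, 0
    while abs(x) < 1e6 and n < 10_000:
        x = x**2 + rng.standard_normal()
        n += 1
    print(f"path {path}: stopped after {n} steps, X_n = {x:.3g}")
```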
A typical sample sequence hovers around zero for a while, but eventually, since the Gaussian variables can be arbitrarily large, some value of Xn will cross above any fixed threshold with probability one. After that, Xn would probably converge to infinity quickly. For example, if Xn = 3 for some n, and if the noise were ignored from that time forward, then X would go through the sequence 9, 81, 6561, 43046721, 1853020188851841, 2.43 × 1030 , . . ., and one suspects the noise terms would not stop the growth. This suggests that Xn → +∞ in the a.s. sense (and hence in the p. and d. senses as well. (Convergence to +∞ in the m.s. sense is not well defined.) Of course, then, Xn does not converge in any sense to an ordinary random variable. We shall follow the above intuition to prove that Xn → ∞ a.s. If Wn−1 ≥ 3 for some n, then Xn ≥ 3. Thus, the sequence Xn will eventually cross above the threshold 3. We say that X diverges nicely from time n if the event En = {Xn+k ≥ 3 · 2k for all k ≥ 0} is true. Note that if Xn+k ≥ 3 · 2k and Wn+k ≥ −3 · 2k , then Xn+k+1 ≥ (3 · 2k )2 − 3 · 2k = 3 · 2k (3 · 2k − 1) ≥ 3 · 2k+1 . Therefore, En ⊃ {Xn ≥ 3 and Wn+k ≥ −3 · 2k for all k ≥ 0}. Thus, using a union bound and the bound Q(u) ≤ 1 exp(−u2 /2) for u ≥ 0: 2 P [En |Xn ≥ 3] ≥ P {Wn+k ≥ −3 · 2k for all k ≥ 0} = 1 P ∪∞ {Wn+k ≤ −3 · 2k } k=0 ≥ 1− ≥ 1− ∞ P {Wn+k ≤ −3 · 2 } = 1 − k=0 ∞ 1 2 k=0 k exp(−(3 · 2k )2 ) ≥ 1 − 1 2 ∞ k=0 ∞ k=0 Q(3 · 2k · √ 2) (e−9 )k+1 = 1 − e−9 ≥ 0.9999. 2(1 − e−9 ) The pieces are put together as follows. Let N1 be the smallest time such that XN1 ≥ 3. Then N1 is finite with probability one, as explained above. Then X diverges nicely from time N1 with 200 probability at least 0.9999. However, if X does not diverge nicely from time N1 , then there is some first time of the form N1 + k such that XN1 +k < 3 · 2k . Note that the future of the process beyond that time has the same evolution as the original process. Let N2 be the first time after that such that XN2 ≥ 3. Then X again has chance at least 0.9999 to diverge nicely to infinity. And so on. Thus, X will have arbitrarily many chances to diverge nicely to infinity, with each chance having probability at least 0.9999. The number of chances needed until success is a.s. finite (in fact it has the geometric distribution), so that X diverges nicely to infinity from some time, with probability one. 2.16. Convergence analysis of successive averaging (b) The means µn of Xn for all n are determined by the recursion µ0 = 0, µ1 = 1, and, for n ≥ 1, n n µn+1 = (µn + µn−1 )/2. This second order recursion has a solution of the form µn = Aθ1 + B θ2 , 2 = (1 + θ )/2. This yields µ = 2 (1 − (− 1 )n ). where θ1 and θ2 are the solutions to the equation θ n 3 2 (c) It is first proved that limn→∞ Dn = 0 a.s.. Note that Dn = U1 · · · Un−1 . Since log Dn = 1 log(U1 ) + · · · log(Un−1 ) and E [log Ui ] = 0 log(u)du = (x log x − x)|1 = −1, the strong law of large 0 D numbers implies that limn→∞ log−1n = −1 a.s., which in turn implies limn→∞ log Dn = −∞ a.s., n or equivalently, limn→∞ Dn = 0 a.s., which was to be proved. By the hint, for each ω such that Dn (ω ) converges to zero, the sequence Xn (ω ) is a Cauchy sequence of numbers, and hence has a limit. The set of such ω has probability one, so Xn converges a.s. 2.18. Mean square convergence of a random series Let Yn = X1 + · · · + Xn . We are interested in determining whether limn→∞ Yn exists in the m.s. sense. By Proposition 2.2.2, the m.s. 
limit exists if and only if the limit limm,n→∞ E [Ym Yn ] exists ∞ n∧m 2 2 and is finite. But E [Ym Yn ] = k=1 σk as n, m → ∞. Thus, (Yn ) k=1 σk with converges to ∞ 2 < ∞. converges in the m.s. sense if and only if k=1 σk 2.20. A large deviation 2 e Since E [X1 ] = 2 > 1, Cram´r’s theorem implies that b = (2), which we now compute. Note, for 2 a > 0, ∞ −ax2 dx −∞ e = x ∞ − 2( 1 ) e 2a d x −∞ = π a. 2 M (θ) = log E [eθx ] = log So ∞ −∞ 1 1 21 √ e−x ( 2 −θ) dx = − log(1 − 2θ) 2 2π (a) = max θa + θ 1 log(1 − 2θ) 2 1 1 1− 2 a 1 1 θ∗ = (1 − ) 2 a 1 b = (2) = (1 − log 2) = 0.1534 2 −100b e = 2.18 × 10−7 . = 2.22. A rappro chement b etween the central limit theorem and large deviations (a) Differentiating with respect to θ yields M (θ) = ( dE [exp(θX )] )/E [exp(θX )]. Differentiating again dθ 201 d2 E [X exp(θX )] E [exp(θX )] − ( dE [exp(θX )] )2 /E [exp(θX )]2 . dθ (dθ)2 dk E [exp(θX )] k exp(θ X )]. Therefore, expectation yields = E [X (dθ)k yields M (θ) = Interchanging differen- tiation and M (θ) = E [X exp(θX )]/E [exp(θX )], which is the mean for the tilted distribution fθ , and M (θ) = E [X 2 exp(θX )]E [exp(θX )] − E [X exp(θX )]2 /E [exp(θX )]2 , which is the second moment, minus the first moment squared, or simply the variance, for the tilted density fθ . (b) In particular, M (0) = 0 and M (0) = Var(X ) = σ 2 , so the second order Taylor’s approximation 2 σ2 2 for M near zero is M (θ) = θ2 σ 2 /2. Therefore, (a) for small a satisfies (a) ≈ maxθ (aθ − θ 2 ) = 2a 2 , √ √σ so as n → ∞, the large deviations upper bound behaves as P [Sn ≥ b n] ≤ exp(−n (b/ n)) ≈ 2 b2 exp(n 2σ2 n ) = exp(− 2b 2 ). The exponent is the same as in the bound/approximation to the central σ limit approximation described in the problem statement. Thus, for moderately large b, the central limit theorem approximation and large deviations bound/approximation are consistent with each other. 2.24. Large deviations of a mixed sum Modifying the derivation for iid random variables, we find that for θ ≥ 0: P Sn ≥a n ≤ E [eθ(Sn −an) ] = E [eθX1 ]nf E [eθY1 ]n(1−f ) e−nθa = exp(−n[θa − f MX (θ) − (1 − f )MY (θ)]) where MX and MY are the log moment generating functions of X1 and Y1 respectively. Therefore, l(f , a) = max θa − f MX (θ) − (1 − f )MY (θ) θ where MX (θ) = − ln(1 − θ) θ < 1 +∞ θ≥1 MY (θ) = ln ∞ k=0 eθk e−1 θ = ln(ee −1 ) = eθ − 1, k! Note that l(a, 0) = a ln a + 1 − a (large deviations exponent for the P oi(1) distribution) and l(a, 1) = a − 1 − ln(a) (large deviations exponent for the E xp(1) distribution). For 0 < f < 1 we compute l(f , a) by numerical optimization. The result is 0 0+ 1 /3 2/3 1 f l(f , 4) 2.545 2.282 1.876 1.719 1.614 Note: l(4, f ) is discontinuous in f at f = 0. In fact, adding only one exponentially distributed random variable to a sum of Poisson random variables can change the large deviations exponent. 3.2. Linear approximation of the cosine function over [0, π ] 2 v( 1π E [Y |Θ] = E [Y ] + CoarΘ,Y ) (Θ − E [Θ]), where E [Y ] = π 0 cos(θ)dθ = 0, E [Θ] = π , Var(Θ) = π , 2 12 V (Θ) π π 2 2 E [ΘY ] = 0 θ cos(θ) dθ = θ sin(θ) |π − 0 sin(θ) dθ = − π , and Cov(Θ, Y ) = E [ΘY ] − E [Θ]E [Y ] = − π . 0 π π π 24 12 24 Therefore, E [Y |Θ] = − π3 (Θ − π ), so the optimal choice is a = π2 and b = − π3 . 2 202 3.4. Valid covariance matrix Set a = 1 to make K symmetric. Choose b so that the determinants of the following seven matrices are nonnegative: (2) (1) 21 11 (1) 2b b1 10 01 K itself The fifth matrix has determinant 2 − b2 and det(K ) = 2 − 1 − b2 = 1 − b2 . 
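A small numerical check of 3.4 (not part of the solution) follows; it assumes the matrix is K = [[2, 1, b], [1, 1, 0], [b, 0, 1]] with a = 1, which is consistent with the principal minors listed above. Positive semidefiniteness should hold exactly when -1 <= b <= 1.

```python
import numpy as np

for b in (-1.5, -1.0, 0.0, 0.5, 1.0, 1.5):
    K = np.array([[2.0, 1.0, b],
                  [1.0, 1.0, 0.0],
                  [b,   0.0, 1.0]])
    psd = np.linalg.eigvalsh(K).min() >= -1e-12   # all eigenvalues nonnegative?
    print(b, psd, np.linalg.det(K), "vs", 1 - b**2)
```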
Hence K is a valid covariance matrix (i.e. symmetric and positive semidefinite) if and only if a = 1 and −1 ≤ b ≤ 1. 3.6. Conditional probabilities with joint Gaussians I I 1 (a) P [|X − 1| ≥ 2] = P [X ≤ −1 or X ≥ 3] = P [ X ≤ − 1 ] + P [ X ≥ 3 ] = Φ(− 2 ) + 1 − Φ( 3 ). 2 2 2 2 2 C ov (X,Y ) (b) Given Y = 3, the conditional density of X is Gaussian with mean E [X ] + Var(Y ) (3 − E [Y ]) = 1 2 2 ov (X 6 and variance Var(X ) − CVar(,Y)) = 4 − 18 = 2. Y (c) The estimation error X − E [X |Y ] is Gaussian, has mean zero and variance 2, and is independent of Y . (The variance of the error was calculated to be 2 in part (b)). Thus the probability is 1 1 1 1 Φ(− √2 ) + 1 − Φ( √2 ), which can also be written as 2Φ(− √2 ) or 2(1 − Φ( √2 )). 3.8. An MMSE estimation problem 1 1+x 5 (a) E [X Y ] = 2 0 2x xy dxdy = 12 . The other moments can be found in a similar way. Alternatively, note that the marginal densities are given by 0≤y≤1 y 2(1 − x) 0 ≤ x ≤ 1 2−y 1≤y ≤2 fX (x) = fY (y ) = 0 else 0 else so that E X = 1 , Var(X ) = 3 E [X | Y ] = E [e2 ] = 1 18 , E Y = 1, Var(Y ) = 1 , Cov(X, Y ) = 6 1 11 + ( )−1 (Y − 1) = 3 12 6 1 11 1 − ( )( )−1 ( ) = 18 12 6 12 5 12 − 1 3 = 1 12 . So 1 Y −1 + 3 2 1 = the MMSE for E [X |Y ] 72 Inspection of Figure 9.2 shows that for 0 ≤ y ≤ 2, the conditional distribution of X given Y = y is the uniform distribution over the interval [0, y /2] if 0 ≤ y ≤ 1 and the over the interval [y − 1, y /2] if 1 ≤ y ≤ 2. The conditional mean of X given Y = y is thus the midpoint of that interval, yielding: E [X |Y ] = Y 4 3Y − 2 4 0≤Y ≤1 1≤Y ≤2 To find the corresponding MSE, note that given Y , the conditional distribution of X is uniform over some interval. Let L(Y ) denote the length of the interval. Then 1 L(Y )2 ]. 12 1 1 y 1 y ( )2 dy = 12 0 2 96 E [e2 ] = E [E [e2 |Y ]] = E [ =2 203 y 2 E[X|Y=y] 1 E[X|Y=y] x 0 1 Figure 9.2: Sketch of E [X |Y = y ] and E [X |Y = y ]. For this example, the MSE for the best estimator is 25% smaller than the MSE for the best linear estimator. (b) ∞ ∞ 12 y 2 1 2 √ e− 2 y dy = and E Y = 0, EX = |y | √ e−y /2 dy = 2 π 2π 2π 0 −∞ 20 2 + Y≡ π1 π That is, the best linear estimator is the constant E X . The corresponding MSE is Var(X ) = 2 2 E [X 2 ] − (E X )2 = E [Y 2 ] − π = 1 − π . Note that |Y | is a function of Y with mean square error E [(X − |Y |)2 ] = 0. Nothing can beat that, so |Y | is the MMSE estimator of X given Y . So |Y | = E [X |Y ]. The corresponding MSE is 0, or 100% smaller than the MSE for the best linear estimator. Var(Y ) = 1, Cov(X, Y ) = E [|Y |Y ] = 0 so E [X |Y ] = 3.10. An estimator of an estimator To show that E [X |Y ] is the LMMSE estimator of E [X |Y ], it suffices by the orthogonality principle to note that E [X |Y ] is linear in (1, Y ) and to prove that E [X |Y ] − E [X |Y ] is orthogonal to 1 and to Y . However E [X |Y ] − E [X |Y ] can be written as the difference of two random variables (X − E [X |Y ]) and (X − E [X |Y ]), which are each orthogonal to 1 and to Y . Thus, E [X |Y ] − E [X |Y ] is also orthogonal to 1 and to Y , and the result follows. Here is a generalization, which can be proved in the same way. Suppose V0 and V1 are two closed linear subspaces of random variables with finite second moments, such that V0 ⊃ V1 . Let X be a random variable with finite second moment, and let Xi∗ be the variable in Vi with the ∗ minimum mean square distance to X , for i = 0 or i = 1. Then X1 is the variable in V1 with the ∗. 
minimum mean square distance to X0 Another solution to the original problem can be obtained by the using the formula for E [Z |Y ] applied to Z = E [X |Y ]: E [E [X |Y ]] = E [E [X |Y ]] + Cov(Y , E [X |Y ])Var(Y )−1 (Y − E Y ) 204 which can be simplified using E [E [X |Y ]] = E X and Cov(Y , E [X |Y [) = E [Y (E [X |Y [−E X )] = E [Y E [X |Y ]] − E Y · E X = E [E [X Y |Y ]] − E Y · E X to yield the desired result. = E [X Y ] − E X · E Y = Cov(X, Y ) 3.12. Some identities for estimators (a) True. The random variable E [X |Y ] cos(Y ) has the following two properties: • It is a function of Y with finite second moments (because E [X |Y ] is a function of Y with finite second moment and cos(Y ) is a bounded function of Y ) • (X cos(Y ) − E [X |Y ] cos(Y )) ⊥ g (Y ) for any g with E [g (Y )2 ] < ∞ (because for any such g , E [(X cos(Y )−E [X |Y ] cos(Y ))g (Y )] = E [(X −E [X |Y ])g (Y )] = 0, where g (Y ) = g (Y ) cos(Y ).) Thus, by the orthogonality principle, E [X |Y ] cos(Y ) is equal to E [X cos(Y )|Y ]. (b) True. The left hand side is the pro jection of X onto the space {g (Y ) : E [g (Y )2 ] < ∞} and the right hand side is the pro jection of X onto the space {f (Y 3 ) : E [f (Y 3 )2 ] < ∞}. But these two spaces are the same, because for each function g there is the function f (u) = g (u1/3 ). The point is that the function y 3 is an invertible function, so any function of Y can also be written as a function of Y 3 . (c) False. For example, let X be uniform on the interval [0, 1] and let Y be identically zero. Then 1 1 E [X 3 |Y ] = E [X 3 ] = 4 and E [X |Y ]3 = E [X ]3 = 8 . (d) False. For example, let P {X = Y = 1} = P {X = Y = −1} = 0.5. Then E [X |Y ] = Y while E [X |Y 2 ] = 0. The point is that the function y 2 is not invertible, so that not every function of Y can be written as a function of Y 2 . Equivalently, Y 2 can give less information than Y . (e) False. For example, let X be uniformly distributed on [−1, 1], and let Y = X . Then E [X |Y ] = Y 4 v( 3 while E [X |Y 3 ] = E [X ] + CoarX,Y ) ) (Y 3 − E [Y 3 ]) = E [X 6 ] Y 3 = 7 Y 3 . 3 5 E [X ] V (Y 3.14. Estimating a quadratic (a) Recall the fact that E [Z 2 ] = E [Z ]2 + Var(Z ) for any second order random variable Z . The idea is to apply the fact to the conditional distribution of X given Y . Given Y , the conditional distribution of X is Gaussian with mean ρY and variance 1 − ρ2 . Thus, E [X 2 |Y ] = (ρY )2 + 1 − ρ2 . (b) The MSE=E [(X 2 )2 ]−E [(E [X 2 |Y ])2 ] = E [X 4 ]−ρ4 E [Y 4 ]−2ρ2 E [Y 2 ](1−ρ2 )−(1−ρ2 )2 = 2(1−ρ4 ) (c) Since Cov(X 2 , Y ) = E [X 2 Y ] = 0, it follows that E [X 2 |Y ] = E [X 2 ] = 1. That is, the best linear estimator in this case is just the constant estimator equal to 1. 3.16. An innovations sequence and its application 2 2 e 2 (a) Y1 = Y1 . (Note: E [Y1 ] = 1), Y2 = Y2 − E [YfY1 ] Y1 = Y2 − 0.5Y1 (Note: E [Y2 ] = 0.75.) 2 e E [Y3 Y1 ] Y f2 ] 1 E [Y1 e E [Y3 Y2 ] Y f2 ] 2 E [Y2 E [Y1 ] 1 1 Y3 = Y3 − − = Y3 − (0.5)Y1 − 1 Y2 = Y3 − 3 Y1 − 3 Y2 . Summarizing, 3 Y1 Y1 Y1 1 00 Y2 = A Y2 where A = − 1 1 0 Y2 . 2 −1 −1 1 Y3 Y3 Y3 3 3 205 Y1 1 0.5 0.5 Y1 (b) Since Cov Y2 = 0.5 1 0.5 and Cov(X, Y2 ) = (0 0.25 0.25), Y3 0.5 0.5 1 Y3 Y1 1 0.5 0.5 100 3 it follows that Cov Y2 = A 0.5 1 0.5 AT = 0 4 0 , 0.5 0.5 1 002 Y3 3 Y1 11 and that Cov(X, Y2 ) = (0 0.25 0.25)AT = (0 4 6 ). Y3 e e e Cov(X,Y1 ) = 0 b = Cov(X,Y2 ) = 1 c = Cov(X,Y3 ) = 1 . (c) a = e2 E [Y1 ] e2 E [Y2 ] e2 E [Y3 ] 3 3.18. 
A Kalman filtering example (a) 4 xk+1|k = f xk|k−1 + Kk (yk − xk|k−1 ) 2 2 22 2 σk+1 = f 2 (σk − σk (σk + 1)−1 σk ) + 1 = and 2 σk f 2 2 +1 1 + σk σ2 Kk = f ( 1+k 2 ). σ k 2 2 (b) Since σk ≤ 1 + f 2 for k ≥ 1, the sequence (σk ) is bounded for any value of f . 3.20. A variation of Kalman filtering Equations (3.14) and (3.13) are still true: xk+1|k = f xk|k−1 + Kk (yk − xk|k−1 ) and Kk = Cov(xk+1 − f xk|k−1 , yk )Cov(yk )−1 . The variable vk in (3.17) is replaced by wk to yield: ˜ ˜ Cov(xk+1 − f xk|k−1 , yk ) = Cov(f (xk − xk|k−1 ) + wk , xk − xk|k−1 ) + wk ˜ 2 = f σk + 1 2 As before, writing σk for Σk|k−1 , 2 Cov(yk ) = Cov((xk − xk|k−1 ) + wk ) = σk + 1 ˜ So now Kk = 2 f σk + 1 2 1 + σk 2 2 σk+1 = (f 2 σk + 1) − = and 2 (f σk + 1)2 2 1 + σk 2 (1 − f )2 σk 2 1 + σk 2 Note that if f = 1 then σk = 0 for all k ≥ 1, which makes sense because xk = yk−1 in that case. 3.22. An innovations problem 2 2 2 2 2 (a) E [Yn ] = E [U1 · · · Un ] = E [U1 ] · · · E [Un ] = 2−n and E [Yn ] = E [U1 · · · Un ] = E [U1 ] · · · E [Un ] = 206 3−n , so Var(Yn ) = 3−n − (2−n )2 = 3−n − 4−n . (b) E [Yn |Y0 , . . . , Yn−1 ] = E [Yn−1 Un |Y0 , . . . , Yn−1 ] = Yn−1 E [Un |Y0 , . . . , Yn−1 ] = Yn−1 E [Un ] = Yn−1 /2. (c) Since the conditional expectation found in (b) is linear, it follows that E [Yn |Y0 , . . . , Yn−1 ] = E [Yn |Y0 , . . . , Yn−1 ] = Yn−1 /2. (d) Y0 = Y0 = 1, and Yn = Yn − Yn−1 /2 (also equal to U1 · · · Un−1 (Un − 1 )) for n ≥ 1. 2 1 2 2 (e) For n ≥ 1, Var(Yn ) = E [(Yn )2 ] = E [U1 · · · Un−1 (Un − 2 )2 ] = 3−(n−1) /12 and 1 C ov (XM , Yn ) = E [(U1 + · · · + UM )Yn ] = E [(U1 + · · · + UM )U1 · · · Un−1 (Un − 2 )] = E [Un (U1 · · · Un−1 )(Un − 1 )] = 2−(n−1) Var(Un ) = 2−(n−1) /12. Since Y0 = 1 and all the other 2 innovations variables are mean zero, we have E [XM |Y0 , . . . , YM ] = M + 2 = M + 2 = M + 2 M n=1 M n=1 M n=1 C ov (XM , Yn )Yn Var(Yn ) 2−n+1 /12 Yn 3−n+1 /12 3 ( )n−1 Yn 2 4.2. Correlation function of a pro duct RX (s, t) = E [Ys Zs Yt Zt ] = E [Ys Yt Zs Zt ] = E [Ys Yt ]E [Zs Zt ] = RY (s, t)RZ (s, t) 4.4. Another sinusoidal random pro cess (a) Since E X1 = E X2 = 0, E Yt ≡ 0. The autocorrelation function is given by 2 2 RY (s, t) = E [X1 ] cos(2π s) cos(2π t) − 2E [X1 X2 ] cos(2π s) sin(2π t) + E [X2 ] sin(2π s) sin(2π t) = σ 2 (cos(2π s) cos(2π t) + sin(2π s) sin(2π t)] = σ 2 cos(2π (s − t)) (a function of s − t only) So (Yt : t ∈ R) is WSS. (b) If X1 and X2 are each Gaussian random variables and are independent then (Yt : t ∈ R) is a real-valued Gaussian WSS random process and is hence stationary. (c) A simple solution to this problem is to take X1 and X2 to be independent, mean zero, variance σ 2 random variables with different distributions. For example, X1 could be N (0, σ 2 ) and X2 could 1 be discrete with P (X1 = σ ) = P (X1 = −σ ) = 2 . Then Y0 = X1 and Y3/4 = X2 , so Y0 and Y3/4 do not have the same distribution, so that Y is not stationary. 4.6. MMSE prediction for a Gaussian pro cess based on two observations 5 5 0 −9 (a) Since RX (0) = 5, RX (1) = 0, and RX (2) = − 5 , the covariance matrix is 0 5 0 . 9 −5 0 5 9 (2) v (X (b) As the variables are all mean zero, E [X (4)|X (2)] = CoVar(4),X (2)) X (2) = − X9 . (X (2) (c) The variable X (3) is uncorrelated with (X (2), X (4))T . Since the variables are jointly Gaussian, (2 X (3) is also independent of (X (2), X (4))T . So E [X (4)|X (2)] = E [X (4)|X (2), X (3)] = − X9 ) . 207 4.8. Poisson pro cess probabilities (a) The numbers of arrivals in the disjoint intervals are independent, Poisson random variables with mean λ. 
Thus, the probability is (λe−λ )3 = λ3 e−3λ . (b) The event is the same as the event that the numbers of counts in the intervals [0,1], [1,2], and 2 2 2 [2,3] are 020, 111, or 202. The probability is thus e−λ ( λ e−λ )e−λ + (λe−λ )3 + ( λ e−λ )e−λ ( λ e−λ ) = 2 2 2 2 4 ( λ + λ3 + λ )e−3λ . 2 4 (c) This is the same as the probability the counts are 020, divided by the answer to part (b), or λ2 λ4 λ2 3 2 2 2 /( 2 + λ + 4 ) = λ /(2 + 4λ + λ ). 4.10. Adding jointly stationary Gaussian pro cesses X (t)+Y (t) (a) RZ (s, t) = E [ X (s)+Y (s) = 1 [RX (s − t) + RY (s − t) + RX Y (s − t) + RY X (s − t)]. 2 2 4 So RZ (s, t) is a function of s − t. Also, RY X (s, t) = RX Y (t, s). Thus, −|τ −3| −|τ +3| RZ (τ ) = 1 [2e−|τ | + e 2 + e 2 ]. 4 (b) Yes, the mean function of Z is constant (µZ ≡ 0) and RZ (s, t) is a function of s − t only, so Z is WSS. However, Z is obtained from the jointly Gaussian processes X and Y by linear operations, so Z is a Gaussian process. Since Z is Gaussian and WSS, it is stationary. 1 1 (c) P [X (1) < 5Y (2) + 1] = P [ X (1)−5Y (2) ≤ σ ] = Φ( σ ), where σ −4 σ 2 = Var(X (1) − 5Y (2)) = RX (0) − 10RX Y (1 − 2) + 25RY (0) = 1 − 10e + 25 = 26 − 5e−4 . 2 4.12. A linear evolution equation with random co efficients 2 2 2 2 (a) Pk+1 = E [(Ak Xk + Bk )2 ] = E [A2 Xk ] + 2E [Ak Xk ]E [Bk ] + E [Bk ] = σA Pk + σB . k (b) Yes. Think of n as the present time. The future values Xn+1 , Xn+2 , . . . are all functions of Xn and (Ak , Bk : k ≥ n). But the variables (Ak , Bk : k ≥ n) are independent of X0 , X1 , . . . Xn . Thus, the future is conditionally independent of the past, given the present. (c) No. For example, X1 − X0 = X1 = B1 , and X2 − X1 = A2 B1 + B2 , and clearly B1 and A2 B1 + B2 2 2 are not independent. (Given B1 = b, the conditional distribution of A2 B1 + B2 is N (0, σA b2 + σB ), which depends on b.) (d) Suppose s and t are integer times with s < t. Then RY (s, t) = E [Ys (At−1 Yt−1 + Bt−1 ] = Pk if s = t = k E [At−1 ]E [Ys Yt−1 ] + E [Ys ]E [Bt−1 ] = 0. Thus, RY (s, t) = 0 else. (e) The variables Y1 , Y2 , . . . are already orthogonal by part (d) (and the fact the variables have mean zero). Thus, Yk = Yk for all k ≥ 1. 4.14. A fly on a cub e (a)-(b) See the figures. For part (a), each two-headed line represents a directed edge in each direction and all directed edges have probability 1/3. (a) 100 000 010 110 (b) 2/3 1 001 101 111 0 1/3 1 1/3 2 2/3 3 1 011 (c) Let ai be the mean time for Y to first reach state zero starting in state i. Conditioning on the 208 1 first time step yields a1 = 1 + 2 a2 , a2 = 1 + 2 a1 + 3 a3 , a3 = 1 + a2 . Using the first and third 3 3 of these equations to eliminate a1 and a3 from the second equation yields a2 , and then a1 and a3 . The solution is (a1 , a2 , a3 ) = (7, 9, 10). Therefore, E [τ ] = 1 + a1 = 8. 4.16. An M/M/1/B queueing system (a) Q = −λ 1 0 0 0 λ 0 0 0 −(1 + λ) λ 0 0 1 −(1 + λ) λ 0 0 1 −(1 + λ) λ 0 0 1 −1 . (b) The equilibrium vector π = (π0 , π1 , . . . , πB ) solves π Q = 0. Thus, λπ0 = π1 . Also, λπ0 − (1 + λ)π1 + π2 = 0, which with the first equation yields λπ1 = π2 . Continuing this way yields that πn = λπn−1 for 1 ≤ n ≤ B . Thus, πn = λn π0 . Since the probabilities must sum to one, πn = λn /(1 + λ + · · · + λB ). 4.18. Identification of sp ecial prop erties of two discrete time pro cesses (version 2) (a) (yes, yes, no). The process is Markov by its description. Think of a time k as the present time. Given the number of cells alive at the present time k (i.e. given Xk ) the future evolution does not depend on the past. 
To check for the martingale property in discrete time, it suffices to check that E [Xk+1 |X1 , . . . , Xk ] = Xk . But this equality is true because for each cell alive at time k , the expected number of cells alive at time k + 1 is one (=0.5 × 0 + 0.5 × 2). The process does not have independent increments, because, for example, P [X2 − X1 = 0|X1 − X0 = −1] = 1 and P [X2 − X1 = 0|X1 − X0 = 1] = 1/2. So X2 − X1 is not independent of X1 − X0 . (b) (yes, yes, no). Let k be the present time. Given Yk , the future values are all determined by Yk , Uk+1 , Uk+2 , . . .. Since Uk+1 , Uk+2 , . . . is independent of Y0 , . . . , Yk , the future of Y is conditionally independent of the past, given the present value Yk . So Y is Markov. The process Y is a martingale because E [Yk+1 |Y1 , . . . , Yk ] = E [Uk+1 Yk |Y1 , . . . , Yk ] = Yk E [Uk+1 |Y1 , . . . , Yk ] = Yk E [Uk+1 ] = Yk . The process Y does not have independent increments, because, for example Y1 − Y0 = U1 − 1 is clearly not independent of Y2 − Y1 = U1 (U2 − 1). (To argue this further we could note that the conditional density of Y2 − Y1 given Y1 − Y0 = y − 1 is the uniform distribution over the interval [−y , y ], which depends on y .) 4.20. Identification of sp ecial prop erties of two continuous time pro cesses (version 2) (a) (yes,no,no) Z is Markov because W is Markov and the mapping from Wt to Zt is invertible. So Wt and Zt have the same information. To see if W 3 is a martingale we suppose s ≤ t and use the independent increment property of W to get: E [Wt3 |Wu , 0 ≤ u ≤ s] = E [Wt3 |Ws ] = E [(Wt − Ws + Ws )3 |Ws ] = 3 3 3 3E [(Wt − Ws )2 ]Ws + Ws = 3(t − s)Ws + Ws = Ws . 3 is not a martingale. If the increments were indep endent, then since W is the increTherefore, W s ment Ws − W0 , it would have to be that E [(Wt − Ws + Ws )3 |Ws ] doesn’t depend on Ws . But it does. So the increments are not independent. (b) (no, no, no) R is not Markov because knowing Rt for a fixed t doesn’t quite determines Θ to be one of two values. But for one of these values R has a positive derivative at t, and for the other R has a negative derivative at t. If the past of R just before t were also known, then θ could be completely determined, which would give more information about the future of R. So R is not Markov. (ii)R is not a martingale. For example, observing R on a finite interval total determines 209 R. So E [Rt |(Ru , 0 ≤ u ≤ s] = Rt , and if s − t is not an integer, Rs = Rt . (iii) R does not have independent increments. For example the increments R(0.5) − R(0) and R(1.5) − R(1) are identical random variables, not independent random variables. 4.22. Moving balls (a) The states of the “relative-position process” can be taken to be 111, 12, and 21. The state 111 means that the balls occupy three consecutive positions, the state 12 means that one ball is in the left most occupied position and the other two balls are one position to the right of it, and the state 21 means there are two balls in the leftmost occupied position and one ball one position to the right of them. With the states in the order 111, 12, 21, the one-step transition probability matrix 0.5 0.5 0 0 1 . is given by P = 0 0.5 0.5 0 (b) The equilibrium distribution π of the process is the probability vector satisfying π = π P , from which we find π = ( 1 , 1 , 1 ). That is, all three states are equally likely in equilibrium. (c) Over a 333 long period of time, we expect the process to be in each of the states about a third of the time. 
After each visit to states 111 or 12, the left-most position of the configuration advances one position to the right. After a visit to state 21, the next state will be 12, and the left-most position of the configuration does not advance. Thus, after 2/3 of the slots there will be an advance. So the long-term speed of the balls is 2/3. Another approach is to compute the mean distance the moved ball travels in each slot, and divide by three. (d) The same states can be used to k the relative positions of the balls as in discrete time. The trac −0.5 0.5 0 −1 1 . (Note that if the state is 111 and if the generator matrix is given by Q = 0 0.5 0.5 −1 leftmost ball is moved to the rightmost position, the state of the relative-position process is 111 the entire time. That is, the relative-position process misses such jumps in the actual configuration process.) The equilibrium distribution can be determined by solving the equation π Q = 0, and 11 the solution is found to be π = ( 1 , 3 , 3 ) as before. When the relative-position process is in states 3 111 or 12, the leftmost position of the actual configuration advances one position to the right at rate one, while when the relative-position process is in state is 21, the rightmost position of the actual configuration cannot directly move right. The long-term average speed is thus 2/3, as in the discrete-time case. 4.24. Mean hitting time for a continuous time, discrete space Markov pro cess −1 1 0 1 Q = 10 −11 0 5 −5 π= 50 5 1 ,, 56 56 56 Consider Xh to get a1 = h + (1 − h)a1 + ha2 + o(h) a2 = h + 10a1 + (1 − 11h)a2 + o(h) h h or equivalently 1 − a1 + a2 + o(h ) = 0 and 1 + 10a1 − 11a2 + o(h ) = 0. Let h → 0 to get 1 − a1 + a2 = 0 210 and 1 + 10a1 − 11a2 = 0, or a1 = 12 and a2 = 11. 4.26. Some orthogonal martingales based on Brownian motion Throughout the solution of this problem, let 0 < s < t, and let Y = Wt − Ws . Note that Y is independent of Ws and it has the N (0, t − s) distribution. 2 Mt Mt Mt Mt (a) E [Mt |Ws ] = Ms E [ Ms |Ws ]. Now Ms = exp(θY − θ (t−s) ). Therefore, E [ Ms |Ws ] = E [ Ms ] = 1. 2 Thus E [Mt |Ws ] = Ms , so by the hint, M is a martingale. 2 (b) Wt2 − t = (Ws + Y )2 − s − (t − s) = Ws − s + 2Ws Y + Y 2 − (t − s), but E [2Ws Y |Ws ] = 2Ws E [Y |Ws ] = 2Ws E [Y ] = 0, and E [Y 2 − (t − s)|Ws ] = E [Y 2 − (t − s)] = 0. It follows that E [2Ws Y + Y 2 − (t − s)|Ws ] = 0, so the martingale property follows from the hint. Similarly, 3 2 Wt3 − 3tWt = (Y + Ws )3 − 3(s + t − s)(Y + Ws ) = Ws − 3sWs + 3Ws Y + 3Ws (Y 2 − (t − s)) + Y 3 − 3tY . 2 − (t − s)] = E [Y 3 ] = 0, it follows that Because Y is independent of Ws and because E [Y ] = E [Y 2 E [3Ws Y + 3Ws (Y 2 − (t − s)) + Y 3 − 3tY |Ws ] = 0, so the martingale property follows from the hint. (c) Fix distinct nonnegative integers m and n. Then E [Mn (s)Mm (t)] = E [E [Mn (s)Mm (t)|Ws ]] = E [Mn (s)E [Mm (t)|Ws ]] property of cond. expectation property of cond. expectation = E [Mn (s)Mm (s)] martingale property = 0 orthogonality of variables at a fixed time 5.2. Lack of sample path continuity of a Poisson pro cess (a) The sample path of N is continuous over [0, T ] if and only if it has no jumps in the interval, equivalently, if and only if N (T ) = 0. So P [N is continuous over the interval [0,T] ] = exp(−λT ). Since {N is continuous over [0, +∞)} = ∩∞ {N is continuous over [0, n]}, it follows n=1 that P [N is continuous over [0, +∞)] = limn→∞ P [N is continuous over [0, n]] = limn→∞ e−λn = 0. (b) Since P [N is continuous over [0, +∞)] = 1, N is not a.s. sample continuous. 
However N is m.s. continuous. One proof is to simply note that the correlation function, given by RN (s, t) = λ(s ∧ t) + λ2 st, is continuous. A more direct proof is to note that for fixed t, E [|Ns − Nt |2 ] = λ|s − t| + λ2 |s − t|2 → 0 as s → t. 5.4. Some statements related to the basic calculus of random pro cesses t (a) False. limt→∞ 1 0 Xs ds = Z = E [Z ] (except in the degenerate case that Z has variance zero). t (b) False. One reason is that the function is continuous at zero, but not everywhere. For another, we would have Var(X1 − X0 − X2 ) = 3RX (0) − 4RX (1) + 2RX (2) = 3 − 4 + 0 = −1. (c) True. In general, RX X (τ ) = RX (τ ). Since RX is an even function, RX (0) = 0. Thus, for any t, E [Xt Xt ] = RX X (0) = RX (0) = 0. Since the process X has mean zero, it follows that Cov(Xt , Xt ) = 0 as well. Since X is a Gaussian process, and differentiation is a linear operation, Xt and Xt are jointly Gaussian. Summarizing, for t fixed, Xt and Xt are jointly Gaussian and uncorrelated, so they are independent. (Note: Xs is not necessarily independent of Xt if s = t. ) 5.6. Cross correlation b etween a pro cess and its m.s. derivative Fix t, u ∈ T . By assumption, lims→t Xs −Xt = Xt m.s. Therefore, by Corollary 2.2.3, E s− t 211 Xs −Xt s−t Xu → E [Xt Xu ] as s → t. Equivalently, RX (s, u) − RX (t, u) s−t Hence ∂1 RX (s, u) exists, and ∂1 RX (t, u) = RX → RX X (t, u) as s → t. X (t, u). 5.8. A windowed Poisson pro cess (a) The sample paths of X are piecewise constant, integer valued with initial value zero. They jump by +1 at each jump of N , and jump by -1 one time unit after each jump of N . (b) Method 1: If |s − t| ≥ 1 then Xs and Xt are increments of N over disjoint intervals, and are therefore independent, so CX (s, t) = 0. If |s − t| < 1, then there are three disjoint intervals, I0 , I1 , and I2 , with I0 = [s, s + 1] ∪ [t, t + 1], such that [s, s + 1] = I0 ∪ I1 and [t, t + 1] = I0 ∪ I2 . Thus, Xs = D0 + D1 and Xt = D0 + D2 , where Di is the increment of N over the interval Ii . The three increments D1 , D2 , and D3 are independent, and D0 is a Poisson random variable with mean and variance equal to λ times the length of I0 , which is 1 − |s − t|. Therefore, CX (s, t) = Cov(D0 + D1 , D0 + D2 ) = λ(1 − |s − t|) if |s − t| < 1 Cov(D0 , D0 ) = λ(1 − |s − t|). Summarizing, CX (s, t) = 0 else Method 2: CX (s, t) = Cov(Ns+1 − Ns , Nt+1 − Nt ) = λ[min(s + 1, t + 1) − min(s + 1, t) − min(s, t + 1) − min(s, t)]. This answer can be simplified to the one found by Method 1 by considering the cases |s − t| > 1, t < s < t + 1, and s < t < s + 1 separately. (c) No. X has a -1 jump one time unit after each +1 jump, so the value Xt for a “present” time t tells less about the future, (Xs : s ≥ t), than the past, (Xs : 0 ≤ s ≤ t), tells about the future . (d) Yes, recall that RX (s, t) = CX (s, t) − µX (s)µX (t). Since CX and µX are continuous functions, so is RX , so that X is m.s. continuous. (e) Yes. Using the facts that CX (s, t) is a function of s − t alone, and CX (s) → 0 as s → ∞, we t t find as in the section on ergodicity, Var( 1 0 Xs ds) = 2 0 (1 − s )CX (s)ds → 0 as t → ∞. t t t 5.10. A singular integral with a Brownian motion 1 wt (a) The integral t dt exists in the m.s. sense for any > 0 b ecause wt /t is m.s. continuous over [ , 1]. To see if the limit exists we apply the correlation form of the Cauchy criteria (Proposition 2.2.2). 
Using different letters as variables of integration and the fact Rw (s, t) = s ∧ t (the minimum of s and t), yields that as , → 0, 1 E ws ds s 1 wt dt t 1 1 s∧t dsdt st 1 1 s∧t dsdt → st 0 0 1 t 1 t s s∧t dsdt = 2 dsdt =2 st 0 0 st 0 0 1 t 1 1 =2 dsdt = 2 1dt = 2. 0 0t 0 = Thus the m.s. limit defining the integral exits. The integral has the N (0, 2) distribution. 212 (b) As a, b → ∞, a E 1 ws ds s b 1 a b s∧t dsdt st 1 1 ∞ ∞ s∧t → dsdt st 1 1 ∞ t ∞ t s∧t s =2 dsdt = 2 dsdt st 1 1 1 1 st ∞ ∞ t t−1 1 dsdt = 2 dt = ∞, =2 t 1 1 1t wt dt t = so that the m.s. limit does not exist, and the integral is not well defined. 5.12. Recognizing m.s. prop erties (a) Yes m.s. continuous since RX is continuous. No not m.s. differentiable since RX (0) doesn’t exist. Yes, m.s. integrable over finite intervals since m.s. continuous. Yes mean ergodic in m.s. since RX (T ) → 0 as |T | → ∞. (b) Yes, no, yes, for the same reasons as in part (a). Since X is mean zero, RX (T ) = CX (T ) for all T . Thus lim CX (T ) = |T |→∞ lim RX (T ) = 1 |T |→∞ Since the limit of CX exists and is net zero, X is not mean ergodic in the m.s. sense. (c) Yes, no, yes, yes, for the same reasons as in (a). (d) No, not m.s. continuous since RX is not continuous. No, not m.s. differentiable since X is not even m.s. continuous. Yes, m.s. integrable over finite intervals, because the Riemann integral bb a a RX (s, t)dsdt exists and is finite, for the region of integration is a simple b ounded region and the integrand is piece-wise constant. (e) Yes, m.s. continuous since RX is continuous. No, not m.s. differentiable. For example, E Xt − X0 t 2 = = 1 [RX (t, t) − RX (t, 0) − RX (0, t) + RX (0, 0)] t2 1√ t − 0 − 0 + 0 → +∞ as t → 0. t2 Yes, m.s. integrable over finite intervals since m.s. continuous. 5.14. Correlation ergo dicity of Gaussian pro cesses Fix h and let Yt = Xt+h Xt . Clearly Y is stationary with mean µY = RX (h). Observe that CY (τ ) = E [Yτ Y0 ] − µ2 Y = E [Xτ +h Xτ Xh X0 ] − RX (h)2 = RX (h)2 + RX (τ )RX (τ ) + RX (τ + h)RX (τ − h) − RX (h)2 Therefore, CY (τ ) → 0 as |τ | → ∞. Hence Y is mean ergodic, so X is correlation ergodic. 213 5.16. Gaussian review question (a) Since X is Markovian, the best estimator of X2 given (X0 , X1 ) is a function of X1 alone. Since X is Gaussian, such estimator is linear in X1 . Since X is mean zero, it is given by Cov(X2 , X1 )Var(X1 )−1 X1 = e−1 X1 . Thus E [X2 |X0 , X1 ] = e−1 X1 . No function of (X0 , X1 ) is a better estimator! But e−1 X1 is equal to p(X0 , X1 ) for the polynomial p(x0 , x1 ) = x1 /e. This is the optimal polynomial. The resulting mean square error is given by MMSE = Var(X2 ) − (Cov(X1 X2 )2 )/Var(X1 ) = 9(1 − e−2 ) (b) Given (X0 = π , X1 = 3), X2 is N 3e−1 , 9(1 − e−2 ) so P [X2 ≥ 4|X0 = π , X1 = 3] = P X2 − 3e−1 9(1 − e−2 ) ≥ 4 − 3e−1 9(1 − e−2 ) =Q 4 − 3e−1 9(1 − e−2 ) 5.18. Karhunen-Lo`ve expansion of a simple random pro cess e (a) Yes, because RX (τ ) is twice continuously differentiable. t t (b) No. limt→∞ 2 0 ( t−τ )CX (τ )dτ = 50 + limt→∞ 100 0 ( t−τ ) cos(20π τ )dτ = 50 = 0. Thus, the t t t t necessary and sufficient condition for mean ergodicity in the m.s. sense does not hold. (c) APPROACH ONE Since RX (0) = RX (1), the process X is periodic with period one (actually, with period 0.1). Thus, by the theory of WSS periodic processes, the eigen-functions can be taken to be φn (t) = e2πj nt for n ∈ Z. (Still have to identify the eigenvalues.) 
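Whichever approach is used, the nonzero eigenvalues (50, 25, and 25, as identified below) can also be confirmed numerically by discretizing R_X on [0, 1]. This is a rough added sketch with an arbitrary grid size, not part of the solution.

```python
import numpy as np

# Discretized covariance operator for R_X(s,t) = 50 + 50*cos(20*pi*(s-t)) on [0,1].
N = 500
t = (np.arange(N) + 0.5) / N                      # midpoints of a uniform grid
R = 50 + 50 * np.cos(20 * np.pi * (t[:, None] - t[None, :]))
eigvals = np.linalg.eigvalsh(R / N)               # integral operator ~ matrix times grid spacing
print(np.sort(eigvals)[::-1][:5])                 # approximately [50, 25, 25, 0, 0]
```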
APPROACH TWO The identity cos(θ) = 1 (ej θ + e−j θ ), yields 2 RX (s − t) = 50 + 25e20πj (s−t) + 25e−20πj (s−t) = 50 + 25e20πj s e−20πj t + 25e−20πj s e20πj t = 50φ0 (s)φ∗ (t) + 25φ1 (s)φ∗ (t) + 25φ2 (s)φ∗ (t) for the choice φ0 (t) ≡ 1, φ1 (t) = e20πj t and φ2 = 2 1 0 e−20πj t . The eigenvalues are thus 50, 25, and 25. The other eigenfunctions can be selected to fill out an orthonormal basis, and the other eigenvalues are zero. APPROACH THREE For s, t ∈ [0, 1] we have RX (s, t) = 50 + 50 cos(20π (s − t)) = 50 + 50 cos(20π s) cos(20π t) + 50 sin(20π s) sin(20π t) = 50φ0 (s)φ∗ (t) + 25φ1 (s)φ∗ (t) + 25φ2 (s)φ∗ (t) 2 1 0 √ √ for the choice φ0 (t) ≡ 1, φ1 (t) = 2 cos(20π t) and φ2 = 2 sin(20π t). The eigenvalues are thus 50, 25, and 25. The other eigenfunctions can be selected to fill out an orthonormal basis, and the other eigenvalues are zero. (Note: the eigenspace for eigenvalue 25 is two dimensional, so the choice of eigen functions spanning that space is not unique.) 5.20. Mean ergo dicity of a p erio dic WSS random pro cess 1 t t Xu du = 0 1 t where a0 = 1, and for n = 0, |an,t | = | 1 t not important as t → ∞. Indeed, E n∈Z,n=0 t 2 an,t Xn = n∈Z,n=0 t 0 Xn e2πj nu/T du = an,t Xn n∈Z n t 2π j nu/T du| 0e |an,t |2 pX (n) ≤ =| e2πj nt/T −1 T2 π 2 t2 2π j nt/T n∈Z,n=0 |≤ T π nt . The n = 0 terms are pX (n) → 0 as t → ∞ Therefore, 1 0 Xu du → X0 m.s. The limit has mean zero and variance pX (0). For mean ergodicity t (in the m.s. sense), the limit should be zero with probability one, which is true if and only if 214 pX (0) = 0. That is, the process should have no zero frequency, or DC, component. (Note: More generally, if X were not assumed to be mean zero, then X would be mean ergodic if and only if Var(X0 ) = 0, or equivalently, pX (0) = µ2 , or equivalently, X0 is a constant a.s.) X 6.2. On the cross sp ectral density Follow the hint. Let U be the output if X is filtered by H and V be the output if Y is filtered by H . The Schwarz inequality applied to random variables Ut and Vt for t fixed yields |RU V (0)|2 ≤ RU (0)RV (0), or equivalently, | SX Y (ω ) J dω 2 |≤ 2π SX (ω ) J dω 2π SY (ω ) J dω , 2π which implies that | SX Y (ωo ) + o( )|2 ≤ ( SX (ωo ) + o( ))( SY (ωo ) + o( )) Letting → 0 yields the desired conclusion. 6.4. Filtering a Gauss Markov pro cess (a) The process Y is the output when X is passed through the linear time-invariant system with impulse response function h(τ ) = e−τ I{τ ≥0} . Thus, X and Y are jointly WSS, and 1 −τ τ ≥0 ∞ ∞ 2e RX Y (τ ) = RX ∗ h(τ ) = t=−∞ RX (t)h(τ − t)dt = −∞ RX (t)h(t − τ )dt = 1 ( 2 − τ )eτ τ ≤ 0 (b) X5 and Y5 are jointly Gaussian, mean zero, with Var(X5 ) = RX (0) = 1, and Cov(Y5 , X5 ) = RX Y (0) = 1 , so E [Y5 |X5 = 3] = (Cov(Y5 , X5 )/Var(X5 ))3 = 3/2. 2 (c) Yes, Y is Gaussian, because X is a Gaussian process and Y is obtained from X by linear operations. (d) No, Y is not Markov. For example, we see that SY (ω ) = (1+2 2 )2 , which does not have the ω form required for a stationary mean zero Gaussian process to be Markov (namely α22Aω2 ). Another + explanation is that, if t is the present time, given Yt , the future of Y is determined by Yt and (Xs : s ≥ t). The future could be better predicted by knowing something more about Xt than Yt gives alone, which is provided by knowing the past of Y . (Note: the R2 -valued process ((Xt , Yt ) : t ∈ R) is Markov.) 6.6. A stationary two-state Markov pro cess 1 π P = π implies π = ( 2 , 1 ) is the equilibrium distribution so P [Xn = 1] = P [Xn = −1] = 2 n. Thus µX = 0. 
For n ≥ 1 1 2 for all RX (n) = P [Xn = 1, X0 = 1] + P [Xn = −1, X0 = −1] − P [Xn = −1, X0 = 1] − P [Xn = 1, X0 = −1] 11 1 11 1 11 1 11 1 [ + (1 − 2p)n ] + [ + (1 − 2p)n ] − [ − (1 − 2p)n ] − [ − (1 − 2p)n ] = 22 2 22 2 22 2 22 2 = (1 − 2p)n 215 So in general, RX (n) = (1 − 2p)|n| . The corresponding power spectral density is given by: ∞ SX (ω ) = ∞ (1 − 2p)n e−j ωn = n=−∞ n=0 ((1 − 2p)e−j ω )n + ∞ n=0 ((1 − 2p)ej ω )n − 1 1 1 + −1 1 − (1 − 2p)e−j ω 1 − (1 − 2p)ej ω 1 − (1 − 2p)2 1 − 2(1 − 2p) cos(ω ) + (1 − 2p)2 = = 6.8. A linear estimation problem E [|Xt − Zt |2 ] = E [(Xt − Zt )(Xt − Zt )∗ ] = RX (0) + RZ (0) − RX Z (0) − RZ X (0) = RX (0) + h ∗ h ∗ RY (0) − 2Re(h ∗ RX Y (0)) = ∞ −∞ SX (ω ) + |H (ω )|2 SY (ω ) − 2Re(H ∗ (ω )SX Y (ω )) The hint with σ 2 = SY (ω ), zo = S (X Y (ω ), and z = H (ω ) implies Hopt (ω ) = 6.10. The accuracy of approximate differentiation (a) SX (ω ) = SX (ω )|H (ω )|2 = ω 2 SX (ω ). ∞ (b) k (τ ) = 21 (δ (τ + a) − δ (τ − a)) and K (ω ) = −∞ k (τ )e−j ωt dτ = a s( lima→0 j ω co1 aω) 1 j ωa 2a (e dω 2π SX Y ( ω ) SY (ω ) . − e−j ωa ) = j sin(aω ) . a By = j ω. l’Hˆspital’s rule, lima→0 K (ω ) = o (c) D is the output of the linear system with input X and transfer function H (ω ) − K (ω ). The output thus has power spectral density SD (ω ) = SX (ω )|H (ω ) − K (ω )|2 = SX (ω )|ω − sin(aω) |2 . a (d) Or, SD (ω ) = SX (ω )|1 − sin(aω ) 2 aω | . Suppose 0 < a ≤ √ 0 .6 0.77 ωo (≈ ωo ). 2 sin(aω ) ≤ (aω) aω 6 Then by the bound given 2 in the problem statement, if |ω | ≤ ωo then 0 ≤ 1 − ≤ (aωo ) ≤ 0.1, so that 6 SD (ω ) ≤ (0.01)SX (ω ) for ω in the base band. Integrating this inequality over the band yields that E [|Dt |2 ] ≤ (0.01)E [|Xt |2 ]. 6.12. An approximation of white noise (a) Since E [Bk Bl∗ ] = I{k=l} , E [| 1 0 K Nt dt|2 ] = E [|AT T k=1 22 K Bk |2 ] = (AT T )2 E [ = (AT T ) σ K = K Bl∗ ] Bk k=1 l=1 A2 T σ 2 T 1 (b) The choice of scaling constant AT such that A2 T ≡ 1 is AT = √T . Under this scaling the T process N approximates white noise with power spectral density σ 2 as T → 0. 1 (c) If the constant scaling AT = 1 is used, then E [| 0 Nt dt|2 ] = T σ 2 → 0 as T → 0. 6.14. Filtering to maximize signal to noise ratio ∞ The problem is to select H to minimize σ 2 −∞ |H (ω )|2 dω , sub ject to the constraints (i) |H (ω )| ≤ 1 2π 216 ω o for all ω , and (ii) −ωo |ω | |H (ω )|2 dω ≥ (the power of X )/2. First, H should be zero outside of 2π the interval [−ωo , ωo ], for otherwise it would be contributing to the output noise power with no contribution to the output signal power. Furthermore, if |H (ω )|2 > 0 over a small interval of length 2π contained in [ωo , ωo ], then the contribution to the output signal power is |ω ||H (ω )|2 , whereas the contribution to the output noise is σ 2 |H (ω )|2 . The ratio of these contributions is |ω |/σ 2 , which we would like to be as large as possible. Thus, the optimal H has the form H (ω ) = I{a≤|ω|≤ωo } , where a is selected to meet the signal power constraint with equality: (power of X )= (power of √ X )/2. This yields a = ωo / 2. In summary, the optimal choice is |H (ω )|2 = I{ωo /√2≤|ω|≤ωo } . 6.16. Sampling a signal or pro cess that is not band limited (a) Evaluating the inverse transform of xo at nT , and using the fact that 2ωo T = 2π yields ωo ∞ ωo xo (nT ) = −∞ ej ωnT xo (ω ) dω = −ωo ej ωnT xo (ω ) dω = ∞=−∞ −ωo ej ωnT x(ω + 2mωo ) dω m dπ dπ dπ ∞ (2m+1)ω = ∞=−∞ (2m−1)ωoo ej ωnT x(ω ) dω = −∞ ej ωnT x(ω ) dω = x(nT ). 
6.16. Sampling a signal or process that is not band limited
(a) Evaluating the inverse transform of $\widehat{x}_o$ at $nT$, and using the fact that $2\omega_o T = 2\pi$, yields
$$x_o(nT) = \int_{-\infty}^{\infty}e^{j\omega nT}\widehat{x}_o(\omega)\,\frac{d\omega}{2\pi} = \int_{-\omega_o}^{\omega_o}e^{j\omega nT}\widehat{x}_o(\omega)\,\frac{d\omega}{2\pi} = \sum_{m=-\infty}^{\infty}\int_{-\omega_o}^{\omega_o}e^{j\omega nT}\widehat{x}(\omega + 2m\omega_o)\,\frac{d\omega}{2\pi}$$
$$= \sum_{m=-\infty}^{\infty}\int_{(2m-1)\omega_o}^{(2m+1)\omega_o}e^{j\omega nT}\widehat{x}(\omega)\,\frac{d\omega}{2\pi} = \int_{-\infty}^{\infty}e^{j\omega nT}\widehat{x}(\omega)\,\frac{d\omega}{2\pi} = x(nT).$$
(b) The observation follows from the sampling theorem for the narrowband signal $x_o$ and part (a).
(c) The fact $R_X^o(nT) = R_X(nT)$ follows from part (a) with $x(t) = R_X(t)$.
(d) Let $X^o$ be a WSS baseband random process with autocorrelation function $R_X^o$. Then by the sampling theorem for baseband random processes, $X_t^o = \sum_{n=-\infty}^{\infty}X_{nT}^o\,\mathrm{sinc}\!\left(\frac{t-nT}{T}\right)$. But the discrete-time processes $(X_{nT}^o : n\in\mathbb{Z})$ and $(X_{nT} : n\in\mathbb{Z})$ have the same autocorrelation function by part (c). Thus $X^o$ and $Y$ have the same autocorrelation function. Also, $Y$ has mean zero. So $Y$ is WSS with mean zero and autocorrelation function $R_X^o$.
(e) For $0 \le \omega \le \omega_o$,
$$S_X^o(\omega) = \sum_{n=-\infty}^{\infty}\exp(-\alpha|\omega + 2n\omega_o|) = \sum_{n=0}^{\infty}\exp(-\alpha(\omega+2n\omega_o)) + \sum_{n=-\infty}^{-1}\exp(\alpha(\omega+2n\omega_o))$$
$$= \frac{\exp(-\alpha\omega)+\exp(\alpha(\omega-2\omega_o))}{1-\exp(-2\alpha\omega_o)} = \frac{\exp(-\alpha(\omega-\omega_o))+\exp(\alpha(\omega-\omega_o))}{\exp(\alpha\omega_o)-\exp(-\alpha\omega_o)} = \frac{\cosh(\alpha(\omega_o-\omega))}{\sinh(\alpha\omega_o)}.$$
Thus, for any $\omega$,
$$S_X^o(\omega) = I_{\{|\omega|\le\omega_o\}}\,\frac{\cosh(\alpha(\omega_o-|\omega|))}{\sinh(\alpha\omega_o)}.$$

6.18. Another narrowband Gaussian process
(a) $\mu_X = \mu_R\int_{-\infty}^{\infty}h(t)\,dt = \mu_R H(0) = 0$.
$$S_X(2\pi f) = |H(2\pi f)|^2 S_R(2\pi f) = 10^{-2}e^{-|f|/10^4}I_{\{5000\le|f|\le6000\}}$$
(b) $R_X(0) = \int_{-\infty}^{\infty}S_X(2\pi f)\,df = \frac{2}{10^2}\int_{5000}^{6000}e^{-f/10^4}\,df = (200)(e^{-0.5}-e^{-0.6}) = 11.54$.
$X_{25}\sim N(0, 11.54)$, so $P[X_{25} > 6] = Q\!\left(\frac{6}{\sqrt{11.54}}\right) \approx Q(1.76) \approx 0.039$.
(c) For the narrowband representation about $f_c = 5500$ (see Figure 9.3),
$$S_U(2\pi f) = S_V(2\pi f) = 10^{-2}\left[e^{-(f+5500)/10^4} + e^{-(-f+5500)/10^4}\right]I_{\{|f|\le500\}} = \frac{e^{-0.55}}{50}\cosh(f/10^4)\,I_{\{|f|\le500\}}$$
$$S_{UV}(2\pi f) = 10^{-2}\left[je^{-(f+5500)/10^4} - je^{-(-f+5500)/10^4}\right]I_{\{|f|\le500\}} = -j\,\frac{e^{-0.55}}{50}\sinh(f/10^4)\,I_{\{|f|\le500\}}$$

Figure 9.3: Spectra of baseband equivalent signals for $f_c = 5500$ and $f_c = 5000$.

(d) For the narrowband representation about $f_c = 5000$ (see Figure 9.3),
$$S_U(2\pi f) = S_V(2\pi f) = 10^{-2}e^{-0.5}e^{-|f|/10^4}I_{\{|f|\le1000\}}, \qquad S_{UV}(2\pi f) = j\,\mathrm{sgn}(f)\,S_U(2\pi f)$$

6.20. Declaring the center frequency for a given random process
(a) $S_U(\omega) = g(\omega+\omega_c) + g(-\omega+\omega_c)$ and $S_{UV}(\omega) = j\left(g(\omega+\omega_c) - g(-\omega+\omega_c)\right)$.
(b) The integral becomes:
$$\int_{-\infty}^{\infty}\left(g(\omega+\omega_c)+g(-\omega+\omega_c)\right)^2\frac{d\omega}{2\pi} = 2\int_{-\infty}^{\infty}g(\omega)^2\,\frac{d\omega}{2\pi} + 2\int_{-\infty}^{\infty}g(\omega+\omega_c)\,g(-\omega+\omega_c)\,\frac{d\omega}{2\pi} = 2\|g\|^2 + g*g\,(2\omega_c)$$
Thus, select $\omega_c$ to maximize $g*g\,(2\omega_c)$.

7.2. A smoothing problem
Write $\widehat{X}_5 = \int_0^3 g(s)Y_s\,ds + \int_7^{10}g(s)Y_s\,ds$. The mean square error is minimized over all linear estimators if and only if $(X_5 - \widehat{X}_5) \perp Y_u$ for $u \in [0,3]\cup[7,10]$, or equivalently
$$R_{XY}(5,u) = \int_0^3 g(s)R_Y(s,u)\,ds + \int_7^{10}g(s)R_Y(s,u)\,ds \quad\text{for } u\in[0,3]\cup[7,10].$$

7.4. Interpolating a Gauss Markov process
(a) The constants must be selected so that $X_0 - \widehat{X}_0 \perp X_a$ and $X_0 - \widehat{X}_0 \perp X_{-a}$, or equivalently $e^{-a} - [c_1e^{-2a} + c_2] = 0$ and $e^{-a} - [c_1 + c_2e^{-2a}] = 0$. Solving for $c_1$ and $c_2$ (one could begin by subtracting the two equations) yields $c_1 = c_2 = c$ where $c = \frac{e^{-a}}{1+e^{-2a}} = \frac{1}{e^a+e^{-a}} = \frac{1}{2\cosh(a)}$. The corresponding minimum MSE is given by
$$E[X_0^2] - E[\widehat{X}_0^2] = 1 - c^2E[(X_{-a}+X_a)^2] = 1 - c^2(2 + 2e^{-2a}) = \frac{e^{2a}-e^{-2a}}{(e^a+e^{-a})^2} = \frac{(e^a-e^{-a})(e^a+e^{-a})}{(e^a+e^{-a})^2} = \tanh(a).$$
(b) The claim is true if $(X_0 - \widehat{X}_0)\perp X_u$ whenever $|u|\ge a$. If $u \ge a$ then
$$E[(X_0 - c(X_{-a}+X_a))X_u] = e^{-u} - \frac{1}{e^a+e^{-a}}\left(e^{-a-u} + e^{a-u}\right) = 0.$$
Similarly, if $u \le -a$ then
$$E[(X_0 - c(X_{-a}+X_a))X_u] = e^{u} - \frac{1}{e^a+e^{-a}}\left(e^{a+u} + e^{-a+u}\right) = 0.$$
The orthogonality condition is thus true whenever $|u|\ge a$, as required.
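The closed forms in problem 7.4(a) above can be confirmed with a small linear-algebra computation. The sketch below is not part of the original solution; the test value of $a$ is an arbitrary assumption.

```python
import numpy as np

# Check of problem 7.4(a): for R_X(tau) = exp(-|tau|), the best linear estimate of
# X_0 from X_{-a}, X_a is c*(X_{-a}+X_a) with c = 1/(2 cosh a), and the MSE is tanh(a).
a = 0.7                                        # arbitrary test value (assumption)
R = lambda tau: np.exp(-abs(tau))

Sigma = np.array([[R(0.0), R(2 * a)],          # covariance of (X_{-a}, X_a)
                  [R(2 * a), R(0.0)]])
r = np.array([R(a), R(a)])                     # covariance of X_0 with (X_{-a}, X_a)

coef = np.linalg.solve(Sigma, r)               # optimal linear coefficients
mse = R(0.0) - r @ coef                        # MSE from the orthogonality principle

print("coefficients:", coef, " vs  c =", 1 / (2 * np.cosh(a)))
print("MSE         :", mse, " vs  tanh(a) =", np.tanh(a))
```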
7.6. Proportional noise
(a) In order that $\kappa Y_t$ be the optimal estimator, by the orthogonality principle, it suffices to check two things:
1. $\kappa Y_t$ must be in the linear span of $(Y_u : a\le u\le b)$. This is true since $t\in[a,b]$ is assumed.
2. Orthogonality condition: $(X_t - \kappa Y_t)\perp Y_u$ for $u\in[a,b]$.
It remains to show that $\kappa$ can be chosen so that the orthogonality condition is true. The condition is equivalent to $E[(X_t - \kappa Y_t)Y_u^*] = 0$ for $u\in[a,b]$, or equivalently $R_{XY}(t,u) = \kappa R_Y(t,u)$ for $u\in[a,b]$. The assumptions imply that $R_Y = R_X + R_N = (1+\gamma^2)R_X$ and $R_{XY} = R_X$, so the orthogonality condition becomes $R_X(t,u) = \kappa(1+\gamma^2)R_X(t,u)$ for $u\in[a,b]$, which is true for $\kappa = 1/(1+\gamma^2)$. The form of the estimator is proved. The MSE is given by $E[|X_t - \widehat{X}_t|^2] = E[|X_t|^2] - E[|\widehat{X}_t|^2] = \frac{\gamma^2}{1+\gamma^2}R_X(t,t)$.
(b) Since $S_Y$ is proportional to $S_X$, the factors in the spectral factorization of $S_Y$ are proportional to the factors in the spectral factorization of $S_X$:
$$S_Y = (1+\gamma^2)S_X, \qquad S_Y^+ = \sqrt{1+\gamma^2}\,S_X^+, \qquad S_Y^- = \sqrt{1+\gamma^2}\,S_X^-.$$
That and the fact $S_{XY} = S_X$ imply that
$$H(\omega) = \frac{1}{S_Y^+}\left[\frac{e^{j\omega T}S_{XY}}{S_Y^-}\right]_+ = \frac{1}{\sqrt{1+\gamma^2}\,S_X^+}\left[\frac{e^{j\omega T}S_X^+}{\sqrt{1+\gamma^2}}\right]_+ = \frac{\kappa}{S_X^+(\omega)}\left[e^{j\omega T}S_X^+(\omega)\right]_+$$
Therefore $H$ is simply $\kappa$ times the optimal filter for predicting $X_{t+T}$ from $(X_s : s\le t)$. In particular, if $T < 0$ then $H(\omega) = \kappa e^{j\omega T}$, and the estimator of $X_{t+T}$ is simply $\widehat{X}_{t+T|t} = \kappa Y_{t+T}$, which agrees with part (a).
(c) As already observed, if $T > 0$ then the optimal filter is $\kappa$ times the prediction filter for $X_{t+T}$ given $(X_s : s\le t)$.

7.8. Short answer filtering questions
(a) The convolution of a causal function $h$ with itself is causal, and $H^2$ is the transform of $h*h$. So if $H$ is a positive type function then $H^2$ is positive type.
(b) Since the intervals of support of $S_X$ and $S_Y$ do not intersect, $S_X(2\pi f)S_Y(2\pi f)\equiv 0$. Since $|S_{XY}(2\pi f)|^2 \le S_X(2\pi f)S_Y(2\pi f)$ (by the first problem in Chapter 6) it follows that $S_{XY}\equiv 0$. Hence the assertion is true.
(c) Since $\mathrm{sinc}(f)$ is the Fourier transform of $I_{[-\frac{1}{2},\frac{1}{2}]}$, it follows that
$$[H]_+(2\pi f) = \int_0^{\frac{1}{2}}e^{-2\pi fjt}\,dt = \frac{1}{2}e^{-\pi jf/2}\,\mathrm{sinc}\!\left(\frac{f}{2}\right)$$

7.10. A singular estimation problem
(a) $E[X_t] = E[A]e^{j2\pi f_o t} = 0$, which does not depend on $t$.
$R_X(s,t) = E[Ae^{j2\pi f_o s}(Ae^{j2\pi f_o t})^*] = \sigma_A^2e^{j2\pi f_o(s-t)}$ is a function of $s-t$.
Thus, $X$ is WSS with $\mu_X = 0$ and $R_X(\tau) = \sigma_A^2e^{j2\pi f_o\tau}$. Therefore, $S_X(2\pi f) = \sigma_A^2\delta(f-f_o)$, or equivalently, $S_X(\omega) = 2\pi\sigma_A^2\delta(\omega-\omega_o)$. (This makes $R_X(\tau) = \int_{-\infty}^{\infty}S_X(2\pi f)e^{j2\pi f\tau}\,df = \int_{-\infty}^{\infty}S_X(\omega)e^{j\omega\tau}\frac{d\omega}{2\pi}$.)
(b) $(h*X)_t = \int_{-\infty}^{\infty}h(\tau)X_{t-\tau}\,d\tau = \int_0^{\infty}\alpha e^{-(\alpha - j2\pi f_o)\tau}Ae^{j2\pi f_o(t-\tau)}\,d\tau = \int_0^{\infty}\alpha e^{-\alpha\tau}\,d\tau\;Ae^{j2\pi f_o t} = X_t$. Another way to see this is to note that $X$ is a pure tone sinusoid at frequency $f_o$, and $H(2\pi f_o) = 1$.
(c) In view of part (b), the mean square error is the power of the output due to the noise, or
$$\mathrm{MSE} = (h*\widetilde{h}*R_N)(0) = \int_{-\infty}^{\infty}(h*\widetilde{h})(t)\,R_N(0-t)\,dt = \sigma_N^2\,(h*\widetilde{h})(0) = \sigma_N^2\|h\|^2 = \sigma_N^2\int_0^{\infty}\alpha^2e^{-2\alpha t}\,dt = \frac{\sigma_N^2\alpha}{2}.$$
The MSE can be made arbitrarily small by taking $\alpha$ small enough. That is, the minimum mean square error for estimation of $X_t$ from $(Y_s : s\le t)$ is zero. Intuitively, the power of the signal $X$ is concentrated at a single frequency, while the noise power in a small interval around that frequency is small, so that perfect estimation is possible.
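The conclusion of problem 7.6(a) above, that the optimal weights collapse to $\kappa$ at the estimation time, can be checked in a finite-dimensional analogue. The sketch below is not from the original solution; the Gauss-Markov covariance, the observation grid, and $\gamma$ are illustrative assumptions.

```python
import numpy as np

# Finite-dimensional check of problem 7.6(a): if R_N = gamma^2 * R_X and Y = X + N,
# the best linear estimate of X_t from samples of Y is kappa*Y_t, kappa = 1/(1+gamma^2).
gamma = 0.5
t_obs = np.linspace(0.0, 2.0, 9)               # observation times a <= u <= b (assumption)
t_idx = 4                                      # estimate X at one of the observation times

RX = np.exp(-np.abs(t_obs[:, None] - t_obs[None, :]))   # illustrative R_X
RY = (1 + gamma**2) * RX                       # R_Y = R_X + R_N = (1+gamma^2) R_X
RXY = RX                                       # X and N uncorrelated

w = np.linalg.solve(RY, RXY[:, t_idx])         # weights of the optimal linear estimator
print("weights:", np.round(w, 6))              # ~ kappa at position t_idx, ~0 elsewhere
print("kappa  :", 1 / (1 + gamma**2))
mse = RX[t_idx, t_idx] - RXY[:, t_idx] @ w
print("MSE    :", mse, " vs ", gamma**2 / (1 + gamma**2) * RX[t_idx, t_idx])
```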
7.12. A prediction problem
The optimal prediction filter is given by $\frac{1}{S_X^+}\left[e^{j\omega T}S_X^+\right]_+$. Since $R_X(\tau) = e^{-|\tau|}$, the spectral factorization of $S_X$ is given by
$$S_X(\omega) = \underbrace{\left(\frac{\sqrt{2}}{j\omega+1}\right)}_{S_X^+}\;\underbrace{\left(\frac{\sqrt{2}}{-j\omega+1}\right)}_{S_X^-}$$
so $[e^{j\omega T}S_X^+]_+ = e^{-T}S_X^+$ (see Figure 9.4). Thus the optimal prediction filter is $H(\omega)\equiv e^{-T}$, or in the time domain it is $h(t) = e^{-T}\delta(t)$, so that $\widehat{X}_{T+t|t} = e^{-T}X_t$. This simple form can be explained and derived another way. Since linear estimation is being considered, only the means (assumed zero) and correlation functions of the processes matter. We can therefore assume without loss of generality that $X$ is a real valued Gaussian process. By the form of $R_X$ we recognize that $X$ is Markov, so the best estimate of $X_{T+t}$ given $(X_s : s\le t)$ is a function of $X_t$ alone. Since $X$ is Gaussian with mean zero, the optimal estimator of $X_{t+T}$ given $X_t$ is $E[X_{t+T}\mid X_t] = \frac{\mathrm{Cov}(X_{t+T},X_t)}{\mathrm{Var}(X_t)}X_t = e^{-T}X_t$.

Figure 9.4: $e^{j\omega T}S_X^+$ in the time domain: the function $\sqrt{2}\,e^{-(t+T)}$ for $t\ge -T$.

7.14. Spectral decomposition and factorization
(a) Building up transform pairs by steps yields:
$$\mathrm{sinc}(f) \leftrightarrow I_{\{-\frac{1}{2}\le t\le\frac{1}{2}\}}$$
$$\mathrm{sinc}(100f) \leftrightarrow 10^{-2}I_{\{-\frac{1}{2}\le\frac{t}{100}\le\frac{1}{2}\}}$$
$$\mathrm{sinc}(100f)\,e^{2\pi jfT} \leftrightarrow 10^{-2}I_{\{-\frac{1}{2}\le\frac{t+T}{100}\le\frac{1}{2}\}}$$
$$\left[\mathrm{sinc}(100f)\,e^{j2\pi fT}\right]_+ \leftrightarrow 10^{-2}I_{\{-50-T\le t\le 50-T\}\cap\{t\ge0\}}$$
so
$$\|x\|^2 = 10^{-4}\cdot\big(\text{length of }[-50-T,\,50-T]\cap[0,+\infty)\big) = \begin{cases}10^{-2} & T\le-50\\ 10^{-4}(50-T) & -50\le T\le 50\\ 0 & T\ge 50\end{cases}$$
(b) By the hint, $1+3j$ is a pole of $S$. (Without the hint, the poles can be found by first solving for values of $\omega^2$ for which the denominator of $S$ is zero.) Since $S$ is real valued, $1-3j$ must also be a pole of $S$. Since $S$ is an even function, i.e. $S(\omega) = S(-\omega)$, $-(1+3j)$ and $-(1-3j)$ must also be poles. Indeed, we find
$$S(\omega) = \frac{1}{(\omega-(1+3j))(\omega-(1-3j))(\omega+1+3j)(\omega+1-3j)}$$
or, multiplying each term by $j$ (and using $j^4 = 1$) and rearranging terms:
$$S(\omega) = \underbrace{\frac{1}{(j\omega+3+j)(j\omega+3-j)}}_{S^+(\omega)}\;\underbrace{\frac{1}{(-j\omega+3+j)(-j\omega+3-j)}}_{S^-(\omega)}$$
or $S^+(\omega) = \frac{1}{(j\omega)^2+6j\omega+10}$. The choice of $S^+$ is unique up to multiplication by a unit magnitude constant.

7.16. Estimation of a random signal, using the Karhunen-Loève expansion
Note that $(Y,\phi_j) = (X,\phi_j) + (N,\phi_j)$ for all $j$, where the variables $(X,\phi_j), j\ge1$ and $(N,\phi_j), j\ge1$ are all mutually orthogonal, with $E[|(X,\phi_j)|^2] = \lambda_j$ and $E[|(N,\phi_j)|^2] = \sigma^2$. Observation of the process $Y$ is linearly equivalent to observation of $((Y,\phi_j) : j\ge1)$. Since these random variables are orthogonal and all random variables are mean zero, the MMSE estimator is the sum of the projections onto the individual observations $(Y,\phi_j)$. But for fixed $i$, only the $i$th observation, $(Y,\phi_i) = (X,\phi_i) + (N,\phi_i)$, is not orthogonal to $(X,\phi_i)$. Thus, the optimal linear estimator of $(X,\phi_i)$ given $Y$ is
$$\frac{\mathrm{Cov}((X,\phi_i),(Y,\phi_i))}{\mathrm{Var}((Y,\phi_i))}\,(Y,\phi_i) = \frac{\lambda_i\,(Y,\phi_i)}{\lambda_i+\sigma^2}.$$
The mean square error is (using the orthogonality principle):
$$E[|(X,\phi_i)|^2] - E\left[\left|\frac{\lambda_i(Y,\phi_i)}{\lambda_i+\sigma^2}\right|^2\right] = \lambda_i - \frac{\lambda_i^2(\lambda_i+\sigma^2)}{(\lambda_i+\sigma^2)^2} = \frac{\lambda_i\sigma^2}{\lambda_i+\sigma^2}.$$
(b) Since $f(t) = \sum_j(f,\phi_j)\phi_j(t)$, we have $(X,f) = \sum_j(f,\phi_j)(X,\phi_j)$. That is, the random variable to be estimated is the sum of the random variables of the form treated in part (a). Thus, the best linear estimator of $(X,f)$ given $Y$ can be written as the corresponding weighted sum of linear estimators:
$$(\text{MMSE estimator of }(X,f)\text{ given }Y) = \sum_i\frac{\lambda_i\,(Y,\phi_i)(f,\phi_i)}{\lambda_i+\sigma^2}.$$
The error in estimating $(X,f)$ is the sum of the errors for estimating the terms $(f,\phi_j)(X,\phi_j)$, and those errors are orthogonal. Thus, the mean square error for $(X,f)$ is the sum of the mean square errors of the individual terms:
$$(\mathrm{MSE}) = \sum_i\frac{\lambda_i\sigma^2|(f,\phi_i)|^2}{\lambda_i+\sigma^2}.$$
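The Markov-property argument in problem 7.12 above (only the most recent observation matters, with weight $e^{-T}$) can be checked numerically. This sketch is not part of the original solution; the past sample times and the lead time $T$ are arbitrary assumptions.

```python
import numpy as np

# Check of problem 7.12: for R_X(tau) = exp(-|tau|), the optimal linear predictor of
# X_{t+T} from past samples puts weight exp(-T) on the most recent sample and ~0 elsewhere.
T = 0.4
past = np.array([-3.0, -2.0, -1.0, -0.3, 0.0])      # observation times s <= t = 0 (assumption)
R = lambda s, u: np.exp(-np.abs(s - u))

Sigma = R(past[:, None], past[None, :])             # covariance of the observations
r = R(past, T)                                      # covariance with X_T

w = np.linalg.solve(Sigma, r)
print("predictor weights:", np.round(w, 6))         # ~ [0, 0, 0, 0, exp(-T)]
print("exp(-T)          :", np.exp(-T))
print("MSE              :", 1.0 - r @ w, " vs  1 - exp(-2T) =", 1 - np.exp(-2 * T))
```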
7.18. Linear innovations and spectral factorization
First approach: The first approach is motivated by the fact that $\frac{1}{S_X^+}$ is a whitening filter. Let $\mathcal{H}(z) = \frac{\beta}{S_X^+(z)}$ and let $Y$ be the output when $X$ is passed through a linear time-invariant system with $z$-transform $\mathcal{H}(z)$. We prove that $Y$ is the innovations process for $X$. Since $\mathcal{H}$ is positive type and $\lim_{|z|\to\infty}\mathcal{H}(z) = 1$, it follows that $Y_k = X_k + h(1)X_{k-1} + h(2)X_{k-2} + \cdots$. Since $S_Y(z) = \mathcal{H}(z)\mathcal{H}^*(1/z^*)S_X(z)\equiv\beta^2$, it follows that $R_Y(k) = \beta^2I_{\{k=0\}}$. In particular,
$$Y_k \perp \text{linear span of }\{Y_{k-1}, Y_{k-2},\cdots\}.$$
Since $\mathcal{H}$ and $1/\mathcal{H}$ both correspond to causal filters, the linear span of $\{Y_{k-1}, Y_{k-2},\cdots\}$ is the same as the linear span of $\{X_{k-1}, X_{k-2},\cdots\}$. Thus, the above orthogonality condition becomes
$$X_k - \left(-h(1)X_{k-1} - h(2)X_{k-2} - \cdots\right) \perp \text{linear span of }\{X_{k-1}, X_{k-2},\cdots\}.$$
Therefore $-h(1)X_{k-1} - h(2)X_{k-2} - \cdots$ must equal $\widehat{X}_{k|k-1}$, the one-step predictor for $X_k$. Thus, $(Y_k)$ is the innovations sequence for $(X_k)$. The one-step prediction error is $E[|Y_k|^2] = R_Y(0) = \beta^2$.

Second approach: The filter $\mathcal{K}$ for the optimal one-step linear predictor $(\widehat{X}_{k+1|k})$ is given by (take $T = 1$ in the general formula):
$$\mathcal{K} = \frac{1}{S_X^+}\left[zS_X^+\right]_+.$$
The $z$-transform $zS_X^+$ corresponds to a function in the time domain with value $\beta$ at time $-1$, and value zero at all other negative times, so $[zS_X^+]_+ = zS_X^+ - z\beta$. Hence $\mathcal{K}(z) = z - \frac{\beta z}{S_X^+(z)}$. If $X$ is filtered using $\mathcal{K}$, the output at time $k$ is $\widehat{X}_{k+1|k}$. So if $X$ is filtered using $1 - \frac{\beta}{S_X^+(z)}$, the output at time $k$ is $\widehat{X}_{k|k-1}$. So if $X$ is filtered using $\mathcal{H}(z) = 1 - \left(1 - \frac{\beta}{S_X^+(z)}\right) = \frac{\beta}{S_X^+(z)}$, then the output at time $k$ is $X_k - \widehat{X}_{k|k-1} = \widetilde{X}_k$, the innovations sequence. The output $\widetilde{X}$ has $S_{\widetilde{X}}(z)\equiv\beta^2$, so the prediction error is $R_{\widetilde{X}}(0) = \beta^2$.

7.20. A discrete-time Wiener filtering problem
To begin,
$$\frac{z^TS_{XY}(z)}{S_Y^-(z)} = \frac{z^T}{\beta(1-\rho/z)(1-z_o\rho)} + \frac{z^{T+1}}{\beta\left(\frac{1}{z_o}-\rho\right)(1-z_oz)}$$
The right hand side corresponds in the time domain to the sum of an exponential function supported on $-T, -T+1, -T+2, \ldots$ and an exponential function supported on $-T-1, -T-2, \ldots$. If $T\ge0$ then only the first term contributes to the positive part, yielding
$$\left[\frac{z^TS_{XY}}{S_Y^-}\right]_+ = \frac{z_o^T}{\beta(1-\rho/z)(1-z_o\rho)}, \qquad \mathcal{H}(z) = \frac{z_o^T}{\beta^2(1-z_o\rho)(1-z_o/z)} \quad\text{and}\quad h(n) = \frac{z_o^T}{\beta^2(1-z_o\rho)}\,z_o^n\,I_{\{n\ge0\}}.$$
On the other hand, if $T\le0$ then
$$\frac{z^TS_{XY}}{S_Y^-} = \frac{z_o^T}{\beta(1-\rho/z)(1-z_o\rho)} + \frac{z(z^T-z_o^T)}{\beta\left(\frac{1}{z_o}-\rho\right)(1-z_oz)}$$
so
$$\mathcal{H}(z) = \frac{z_o^T}{\beta^2(1-z_o\rho)(1-z_o/z)} + \frac{z(z^T-z_o^T)(1-\rho/z)}{\beta^2\left(\frac{1}{z_o}-\rho\right)(1-z_oz)(1-z_o/z)}.$$
Inverting the $z$-transforms and arranging terms yields that the impulse response function for the optimal filter is given by
$$h(n) = \frac{1}{\beta^2(1-z_o^2)}\left[z_o^{|n+T|} - \left(\frac{z_o-\rho}{\frac{1}{z_o}-\rho}\right)z_o^{n+T}\right]I_{\{n\ge0\}}. \tag{9.1}$$
Graphically, $h$ is the sum of a two-sided symmetric exponential function, slid to the right by $-T$ and set to zero for negative times, minus a one-sided exponential function on the nonnegative integers. (This structure can be deduced by considering that the optimal causal estimator of $X_{t+T}$ is the optimal causal estimator of the optimal noncausal estimator of $X_{t+T}$.) Going back to the $z$-transform domain, we find that $\mathcal{H}$ can be written as
$$\mathcal{H}(z) = \frac{z^T}{\beta^2(1-z_o/z)(1-z_oz)} - \frac{z_o^T(z_o-\rho)}{\beta^2(1-z_o^2)\left(\frac{1}{z_o}-\rho\right)(1-z_o/z)}. \tag{9.2}$$
Although it is helpful to think of the cases $T\ge0$ and $T\le0$ separately, interestingly enough, the expressions (9.1) and (9.2) for the optimal $h$ and $\mathcal{H}$ hold for any integer value of $T$.
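The whitening argument of problem 7.18 above is easy to see on a concrete AR(1) example. The example below is an assumption for illustration (it is not the process in the problem): if $X_k = \rho X_{k-1} + W_k$ with white $W$ of variance $\sigma^2$, then $S_X^+(z) = \sigma/(1-\rho/z)$, so $\mathcal{H}(z) = \beta/S_X^+(z) = 1-\rho z^{-1}$ with $\beta = \sigma$, and filtering $X$ with $\mathcal{H}$ should produce a white sequence of variance $\beta^2$.

```python
import numpy as np

# Innovations of an AR(1) process via the whitening filter H(z) = 1 - rho/z (beta = sigma).
rng = np.random.default_rng(1)
rho, sigma, N = 0.8, 1.0, 100_000          # illustrative parameters (assumptions)

w = sigma * rng.standard_normal(N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = rho * x[k - 1] + w[k]

y = x[1:] - rho * x[:-1]                   # output of H(z) = 1 - rho z^{-1}: the innovations
print("E|Y_k|^2 ~", y.var(), " vs  beta^2 =", sigma**2)
for n in (1, 2, 3):
    print("R_Y(%d) ~" % n, np.mean(y[:-n] * y[n:]))   # ~ 0: the innovations are white
```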
7.22. Estimation given a strongly correlated process
(a) $R_X = g*\widetilde{g} \leftrightarrow S_X(z) = \mathcal{G}(z)\mathcal{G}^*(1/z^*)$, $\quad R_Y = k*\widetilde{k} \leftrightarrow S_Y(z) = \mathcal{K}(z)\mathcal{K}^*(1/z^*)$, $\quad R_{XY} = g*\widetilde{k} \leftrightarrow S_{XY}(z) = \mathcal{G}(z)\mathcal{K}^*(1/z^*)$.
(b) Note that $S_Y^+(z) = \mathcal{K}(z)$ and $S_Y^-(z) = \mathcal{K}^*(1/z^*)$. By the formula for the optimal causal estimator,
$$\mathcal{H}(z) = \frac{1}{S_Y^+}\left[\frac{S_{XY}}{S_Y^-}\right]_+ = \frac{1}{\mathcal{K}(z)}\left[\frac{\mathcal{G}(z)\mathcal{K}^*(1/z^*)}{\mathcal{K}^*(1/z^*)}\right]_+ = \frac{[\mathcal{G}]_+}{\mathcal{K}} = \frac{\mathcal{G}(z)}{\mathcal{K}(z)}.$$
(c) The power spectral density of the estimator process $\widehat{X}$ is given by $\mathcal{H}(z)\mathcal{H}^*(1/z^*)S_Y(z) = S_X(z)$. Therefore,
$$\mathrm{MSE} = R_X(0) - R_{\widehat{X}}(0) = \int_{-\pi}^{\pi}S_X(e^{j\omega})\,\frac{d\omega}{2\pi} - \int_{-\pi}^{\pi}S_{\widehat{X}}(e^{j\omega})\,\frac{d\omega}{2\pi} = 0.$$
A simple explanation for why the MSE is zero is the following. Using $1/\mathcal{K}$ inverts $\mathcal{K}$, so that filtering $Y$ with $1/\mathcal{K}$ produces the process $W$. Filtering that with $\mathcal{G}$ then yields $X$. That is, filtering $Y$ with $\mathcal{H}$ produces $X$, so the estimation error is zero.
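The zero-error explanation in problem 7.22(c) above can be illustrated with a short finite-impulse-response example. The particular $g$ and $k$ below are arbitrary choices (with $k$ minimum phase so that $1/\mathcal{K}$ is a stable causal filter); they are not taken from the problem.

```python
import numpy as np

# Illustration of problem 7.22: with X = g*W and Y = k*W driven by a common white W,
# filtering Y with 1/K recovers W, and filtering that with G recovers X, so H = G/K
# achieves zero estimation error.
rng = np.random.default_rng(2)
g = np.array([1.0, 0.5, 0.25])                # illustrative causal g
k = np.array([1.0, -0.4])                     # zero of K(z) inside the unit circle: stable inverse
N = 5000

w = rng.standard_normal(N)
x = np.convolve(g, w)[:N]                     # X = g * W
y = np.convolve(k, w)[:N]                     # Y = k * W

w_hat = np.zeros(N)                           # filter Y with 1/K(z) by the obvious recursion
for n in range(N):
    acc = y[n]
    for m in range(1, len(k)):
        if n - m >= 0:
            acc -= k[m] * w_hat[n - m]
    w_hat[n] = acc / k[0]

x_hat = np.convolve(g, w_hat)[:N]             # then filter with G(z)
print("max |X - X_hat| =", np.max(np.abs(x - x_hat)))   # ~ 0 up to round-off
```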