Lecture 27 05/12/14
1. X is AR(1), obtained from the white noise sequence Z via
   X_n = \sum_{i \ge 0} \alpha^i Z_{n-i},
or equivalently, using the given difference equation. In this case, Z is (specifically) IID
Gaussian.
Since (joint) Gaussianity is preserved under linear transformations …
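The AR(1) construction above can be checked by simulation (a minimal sketch; the coefficient a = 0.5 and unit noise variance are illustrative choices, not values from the notes):

```python
import numpy as np

# Simulate X_n = a*X_{n-1} + Z_n driven by IID Gaussian white noise, and
# compare the sample variance to the stationary value sigma^2 / (1 - a^2).
# a = 0.5 and sigma = 1 are assumptions for illustration.
rng = np.random.default_rng(0)
a, sigma, N = 0.5, 1.0, 200_000
Z = rng.normal(0.0, sigma, N)
X = np.empty(N)
X[0] = Z[0]
for n in range(1, N):
    X[n] = a * X[n - 1] + Z[n]

sample_var = X[1000:].var()           # discard burn-in
theory_var = sigma**2 / (1 - a**2)    # stationary variance, 4/3 for a = 0.5
```

Since X is a linear map of the Gaussian vector Z, the simulated sequence is itself jointly Gaussian, which is the point the solution goes on to use.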

Lecture 20 04/14/14
1. The pdf of Z is given by
   f_Z(z) = f_{Z_1}(z_1) \cdots f_{Z_n}(z_n)
          = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} e^{-z_i^2/2}
          = \frac{1}{(2\pi)^{n/2}} e^{-(z_1^2 + \cdots + z_n^2)/2}
          = \frac{1}{(2\pi)^{n/2}} \exp\left\{ -\frac{z^T z}{2} \right\}
The transformation x = g(z) = Az + \mu is invertible (since A is nonsingular). Since \partial x_i / \partial z_j = a_{ij}, …
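The conclusion of this change-of-variables argument is that X = AZ + μ is Gaussian with mean μ and covariance AAᵀ; a quick empirical check (A and μ below are arbitrary illustrative choices):

```python
import numpy as np

# X = A Z + mu with Z ~ N(0, I): the transformed vector should have mean mu
# and covariance A A^T.  Check the covariance from a large sample.
rng = np.random.default_rng(1)
A = np.array([[2.0, 0.0], [1.0, 1.0]])   # nonsingular, illustrative
mu = np.array([1.0, -1.0])
Z = rng.standard_normal((500_000, 2))
X = Z @ A.T + mu

C_theory = A @ A.T                        # [[4, 2], [2, 2]]
C_sample = np.cov(X, rowvar=False)
err = np.abs(C_sample - C_theory).max()
```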

Lecture 18 - Mon 04/07
Convergence in distribution; significance of characteristic
functions
Convergence in distribution: examples
Implications between modes of convergence: special cases
(One last) binary example
Self-study quizzes on convergence

Convergence …

Lecture 19 - Wed 04/09
Autocovariance and cross-covariance matrices; identities
Positive-semidefinite property of (auto-)covariance; LU
factorization, whitening transformation of a random
vector
Singularity of (auto-)covariance; linear dependence in
components …
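The whitening transformation in this outline can be sketched numerically (using a Cholesky factor, the symmetric positive-definite specialization of the LU factorization mentioned; the covariance C below is an arbitrary example):

```python
import numpy as np

# If X has covariance C = L L^T, then Y = L^{-1} X has identity covariance.
rng = np.random.default_rng(2)
C = np.array([[3.0, 1.0], [1.0, 2.0]])        # illustrative PSD matrix
L = np.linalg.cholesky(C)                      # C = L L^T
X = rng.standard_normal((400_000, 2)) @ L.T    # samples with Cov(X) = C
Y = np.linalg.solve(L, X.T).T                  # whitened samples
I_err = np.abs(np.cov(Y, rowvar=False) - np.eye(2)).max()
```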

Lecture 18 04/07/14
1. We have
   F_{Z_n}(x) = \begin{cases} e^{nx}/2, & x < 0 \\ 1 - e^{-nx}/2, & x \ge 0 \end{cases}
Thus
   \lim_{n \to \infty} F_{Z_n}(x) = \begin{cases} 0, & x < 0 \\ 1/2, & x = 0 \\ 1, & x > 0 \end{cases}
and F_{Z_n}(x) \to u(x) at every continuity point of u(\cdot). The unit step function is the cdf of
Z = 0 w.p.1.
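The pointwise limit is easy to confirm numerically; evaluating the cdf at a large n shows the step behavior, with the value 1/2 surviving only at the discontinuity point x = 0:

```python
import numpy as np

# CDF of Z_n from part 1 (a Laplace-type cdf with rate n); evaluate at
# x < 0, x = 0, x > 0 for large n to see the limiting unit step.
def F(n, x):
    return 0.5 * np.exp(n * x) if x < 0 else 1.0 - 0.5 * np.exp(-n * x)

vals = {x: F(10_000, x) for x in (-0.1, 0.0, 0.1)}
```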
2. The Laplace(λ) distribution has all odd moments …

Lecture 17 04/02/14
1. We have
   X^2 = ((X - X_n) + X_n)^2 \le 2(X - X_n)^2 + 2X_n^2,
and thus
   E[X^2] \le 2E[(X_n - X)^2] + 2E[X_n^2] < \infty
2. The Chebyshev inequality is obtained from the Markov inequality by taking Z = Y^2:
   P[|Y| \ge \epsilon] = P[Y^2 \ge \epsilon^2] \le \frac{E[Y^2]}{\epsilon^2}
[ Another useful form …
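The Chebyshev bound above can be compared against an exact tail probability; here Y is uniform on (-1, 1) (an illustrative choice), where P[|Y| ≥ 0.9] = 0.1 while the bound gives about 0.41:

```python
import numpy as np

# Empirical check of P[|Y| >= eps] <= E[Y^2] / eps^2 for Y ~ Uniform(-1, 1),
# where E[Y^2] = 1/3 and the true tail probability at eps = 0.9 is 0.1.
rng = np.random.default_rng(3)
Y = rng.uniform(-1, 1, 1_000_000)
eps = 0.9
p_emp = np.mean(np.abs(Y) >= eps)     # close to 0.1
bound = (Y**2).mean() / eps**2        # close to (1/3) / 0.81
```

The bound holds but is loose here, which is typical of Chebyshev.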

Lecture 16 - Mon 03/31
Central Limit Theorem
Modes of convergence of sequences of random variables:
definitions
Basic convergence examples

Central Limit Theorem
In essence: a random quantity formed by summing
together many independent components of similar …
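The statement sketched above can be illustrated numerically (a minimal sketch; uniform components and n = 50 are illustrative choices): the standardized sum of IID components is close to N(0, 1).

```python
import numpy as np
from math import erf, sqrt

# Standardized sums of n IID Uniform(0,1) variables; compare P[S <= 1]
# against the standard normal value Phi(1).
rng = np.random.default_rng(4)
n, trials = 50, 200_000
U = rng.uniform(0, 1, (trials, n))
S = (U.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)   # mean 0, variance 1
p_emp = np.mean(S <= 1.0)
phi1 = 0.5 * (1 + erf(1 / sqrt(2)))                 # Phi(1), about 0.8413
```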

Lecture 17 - Wed 04/02
Relationships between modes of convergence of sequences
of random variables
Uncorrelated sequences and the weak law of large
numbers
IID sequences and the strong law of large numbers

Modes of X_n \to X: Key Differences
a.s., in probability (P), in quadratic mean (q.m.) …

Lecture 15 - Wed 03/26
Markov inequality
Sample mean of an IID sequence has exponentially
decaying probability of deviating from true mean
(Chernoff bound)
Example on Chernoff bound
Characteristic function: general properties
Moments and derivatives of characteristic functions …

Lecture 16 03/31/14
1.
   Var[X_1 + \cdots + X_n] = \sum_{i=1}^{n} \sum_{j=1}^{n} Cov(X_i, X_j) = n\sigma^2
Since X_i and X_j are independent (for i \ne j), it follows that
   E[X_i X_j] = E[X_i]E[X_j]
i.e.,
   Cov(X_i, X_j) = 0
(This holds regardless of the value of \mu_X = E[X_i], here taken as zero for …
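The identity Var[X_1 + ⋯ + X_n] = nσ² is easy to verify by simulation (standard normals, so σ² = 1, are an illustrative choice):

```python
import numpy as np

# Variance of a sum of n IID standard normals: cross-covariances vanish,
# so the variance of the sum should be close to n * 1 = 10.
rng = np.random.default_rng(5)
n, trials = 10, 500_000
sums = rng.standard_normal((trials, n)).sum(axis=1)
sample_var = sums.var()
```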

Lecture 14 03/24/14
1. The triangle inequality
   |z_1 + z_2| \le |z_1| + |z_2|
extends to any finite number of summands, i.e.,
   \left| \sum_{k=1}^{n} z_k \right| \le \sum_{k=1}^{n} |z_k|
and thus also to integrals of complex-valued functions on the real line (since any such integral
can be approximated …
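The finite-sum form of the inequality can be spot-checked on random complex numbers:

```python
import numpy as np

# |z_1 + ... + z_n| <= |z_1| + ... + |z_n| for complex summands.
rng = np.random.default_rng(6)
z = rng.standard_normal(100) + 1j * rng.standard_normal(100)
lhs = abs(z.sum())       # modulus of the sum
rhs = np.abs(z).sum()    # sum of the moduli
```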

Lecture 21 04/16/14
1. Note that Y_n(\omega) is nondecreasing in n, thus Y(\omega) = \lim_{n \to \infty} Y_n(\omega) is well defined (and possibly
infinite).
By linearity of expectation (finite sum of variables with finite means),
   E[Y_n] = \sum_{|k| \le n} a_k E[X_k]
2. We know (from the definition of variance …

Lecture 22 04/21/14
1. Since E[Z_n^2] = 1 (for all n) and \sum_i |h_i| < \infty, the conditions in the hypothesis of the SVBSM
theorem (Lecture 21) are satisfied. Thus the convolution sum
   X_n = \sum_{i=-\infty}^{\infty} h_i Z_{n-i}
converges almost surely and in quadratic mean; and both first and second …
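For white noise input, the second moment of the filtered output is Σᵢ hᵢ². A quick check with an absolutely summable impulse response (hᵢ = (1/2)ⁱ for i ≥ 0 is an illustrative choice, truncated for computation):

```python
import numpy as np

# White noise with E[Z_n^2] = 1 through the causal filter h_i = (1/2)^i:
# the output variance should equal sum_i h_i^2 = 1 / (1 - 1/4) = 4/3.
rng = np.random.default_rng(7)
N, taps = 300_000, 40
h = 0.5 ** np.arange(taps)                 # truncated impulse response
Z = rng.standard_normal(N)
X = np.convolve(Z, h, mode="valid")        # X_n = sum_i h_i Z_{n-i}
sample_var = X.var()
theory_var = (h**2).sum()                  # close to 4/3
```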

Lecture 20 - Mon 04/14
Multivariate Gaussian density
Gaussian vectors
Independence and uncorrelatedness in the Gaussian case
Conditional Gaussian distributions and MMSE estimation

The Multivariate Gaussian PDF
   f(x) = \frac{1}{(2\pi)^{n/2} \sqrt{\det(C)}} \exp\left( -\frac{1}{2} (x - \mu)^T C^{-1} (x - \mu) \right)
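The density formula can be coded directly; as a sanity check, for a diagonal C it must factor into a product of univariate normal densities:

```python
import numpy as np

# Multivariate Gaussian pdf as written above; verified against the product
# of 1-D normal densities for a diagonal covariance (illustrative values).
def mvn_pdf(x, mu, C):
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(C, d)          # (x-mu)^T C^{-1} (x-mu)
    return np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(C)))

mu = np.zeros(2)
C = np.diag([1.0, 4.0])
x = np.array([0.5, -1.0])
joint = mvn_pdf(x, mu, C)
prod1d = (np.exp(-0.5 * 0.5**2 / 1.0) / np.sqrt(2 * np.pi * 1.0)) * (
    np.exp(-0.5 * (-1.0) ** 2 / 4.0) / np.sqrt(2 * np.pi * 4.0)
)
```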

Lecture 26 - Wed 05/07
Linear MMSE prediction: Wiener filter
Kolmogorov's formula for the prediction variance
Sample averages of WSS sequences
Sample paths of WSS sequences in the frequency domain;
relationship to power spectral density

Explicit Form of Linear …

Lecture 26 05/07/14
1. The expansion
   \frac{|1 - (e^{j\omega}/2)|^2}{|1 + (e^{j\omega}/3)|^2} = \frac{9}{8} \cdot \frac{5 - 4\cos\omega}{5 + 3\cos\omega}
is easily verified but not really needed in what follows. Comparing S_X(\omega) to the general
rational (ARMA) model, we observe that the scaling constant \sigma^2 equals unity iff …
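"Easily verified" can be taken literally; evaluating both sides on a frequency grid confirms the expansion to machine precision:

```python
import numpy as np

# Check |1 - e^{jw}/2|^2 / |1 + e^{jw}/3|^2 = (9/8)(5 - 4cos w)/(5 + 3cos w)
# on a dense grid over (-pi, pi].
w = np.linspace(-np.pi, np.pi, 1001)
lhs = np.abs(1 - np.exp(1j * w) / 2) ** 2 / np.abs(1 + np.exp(1j * w) / 3) ** 2
rhs = (9 / 8) * (5 - 4 * np.cos(w)) / (5 + 3 * np.cos(w))
max_err = np.abs(lhs - rhs).max()
```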

Lecture 23 - Mon 04/28
Power spectral distribution and density of a wide-sense
stationary sequence
Earlier examples revisited in the spectral domain
Rational power spectral densities; generating ARMA
sequences from white noise

Positive-Semidefinite Sequences …

Lecture 25 - Mon 05/05
Linear prediction of WSS sequences: problem
statement
Basic examples
Deterministic and non-deterministic sequences
Prediction of non-deterministic sequences with absolutely
continuous power spectrum

Linear MMSE Prediction …

Lecture 25 05/05/14
1. Consider the sequence (Y_i) defined by
   Y_i = E[X_{n+m} \mid X_{n-i}^{n}]
Since \mathcal{L}(X_{n-i}^{n}) \subseteq \mathcal{L}(X_{n-i-1}^{n}), it follows that
   E[(X_{n+m} - Y_{i+1})^2] \le E[(X_{n+m} - Y_i)^2]
Thus the MMSEs form a decreasing, hence convergent (since bounded below), sequence.
The limit \sigma^2 …

Lecture 24 - Wed 04/30
L^2 as a complete inner product space
Linear MMSE estimation: infinite-dimensional case
Conditional expectation and linear MMSE estimation
Projection onto orthogonal subspaces

The Inner Product Space L^2(\Omega, \mathcal{F}, P)
Let X and Y be random variables …

Lecture 23 04/28/14
1. We can write
   r_k = \int_{(-\pi,\pi)} e^{j\omega k} \, dF(\omega) + e^{j\pi k} \{F(\pi) - F(\pi^-)\}
       = 2 \int_{(0,\pi)} \cos(\omega k) \, dF(\omega) + \{F(0) - F(0^-)\} + e^{j\pi k} \{F(\pi) - F(\pi^-)\}
where the second equality is due to symmetry of dF about the origin. Since \cos(\omega k) =
\cos(-\omega k) and e^{j\pi k} = e^{-j\pi k}, it follows that r_{-k} = r_k.
For any a \in \mathbb{R} …
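The symmetry r_{-k} = r_k can be seen numerically from the spectral representation; here dF(ω) = S(ω) dω/(2π) with the even density S(ω) = 5 − 4 cos ω (an illustrative MA(1) spectrum, not from the notes):

```python
import numpy as np

# r_k = (1/2pi) * integral of S(w) e^{jwk} dw over (-pi, pi) for the even
# density S(w) = 5 - 4 cos(w): the result is real, with r_0 = 5, r_{±1} = -2.
w = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
dw = w[1] - w[0]
S = 5 - 4 * np.cos(w)

def r(k):
    return (S * np.exp(1j * w * k)).sum() * dw / (2 * np.pi)

r0, r1, rm1 = r(0), r(1), r(-1)
```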

Lecture 24 04/30/14
1.
Parallelogram law: \|X + Y\|^2 + \|X - Y\|^2 = 2(\|X\|^2 + \|Y\|^2)
Proof is straightforward: express each l.h.s. term as \langle X \pm Y, X \pm Y \rangle and expand using
distributivity (i.e., additivity of the inner product).
Triangle inequality: \|X + Y\| \le \|X\| + \|Y\|
This is …
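With the L² inner product ⟨X, Y⟩ = E[XY] approximated by a sample average, the parallelogram law holds to rounding error, since it is an exact pointwise identity:

```python
import numpy as np

# Parallelogram law with ||V||^2 = E[V^2] estimated by a sample mean:
# ||X+Y||^2 + ||X-Y||^2 = 2(||X||^2 + ||Y||^2) holds for every sample.
rng = np.random.default_rng(8)
X = rng.standard_normal(10_000)
Y = rng.uniform(-1, 1, 10_000)
sq = lambda V: (V**2).mean()        # squared L2 norm estimate
lhs = sq(X + Y) + sq(X - Y)
rhs = 2 * (sq(X) + sq(Y))
```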

Lecture 21 - Wed 04/16
Characteristic function of a Gaussian vector
Cauchy-Schwarz inequality
Dominated and monotone convergence theorems
Weighted sums of random variables with bounded second
moments

Multivariate Characteristic Function
The characteristic …

Lecture 22 - Mon 04/21
White noise through linear filters
Definition of a wide-sense stationary sequence;
autocorrelation and autocovariance functions
Examples
Wide-sense stationary sequences through linear filters

White Noise Through Linear Filters
Definition. …

Lecture 15 03/26/14
1. If s > 0, the function g(x) = e^{sx} is strictly increasing in x, thus
   x \ge a \iff e^{sx} \ge e^{sa}
2. M_X(s) is given by
   E[e^{sX}] = \sum_{k \ge 0} e^{sk} e^{-\lambda} \frac{\lambda^k}{k!} = e^{-\lambda} \sum_{k \ge 0} \frac{(\lambda e^s)^k}{k!} = e^{\lambda(e^s - 1)},
where the standard Taylor series expansion of g(t) = e^t was used for the last …
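The Poisson MGF derived above can be checked against a sample average of e^{sX} (λ = 2 and s = 0.3 are illustrative values):

```python
import numpy as np

# Compare the empirical MGF of Poisson(lam) at s with exp(lam * (e^s - 1)).
rng = np.random.default_rng(9)
lam, s = 2.0, 0.3
X = rng.poisson(lam, 1_000_000)
mgf_emp = np.exp(s * X).mean()
mgf_theory = np.exp(lam * (np.exp(s) - 1))
```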

Lecture 14 - Mon 03/24
Probability distributions in the transform domain:
moment generating function, characteristic function
Analogies to Laplace and Fourier transforms; examples
Sums of i.i.d. random variables; outline of results
Derivatives of moment generating functions …

Lecture 5 - Mon 02/10
Extending a probability measure from a field to a σ-field
M-ary fractional expansions on the unit interval; or, how
to model independent M-ary trials with uniformly
distributed outcomes
Iterative construction of a sequence of dependent …

Lecture 4 - Wed 02/05
Borel-Cantelli lemmas: 0-1 laws for P[\limsup_n A_n]
Discrete random variables and indicator functions
Lebesgue measure on the unit interval [0, 1)
Constructing multiple random variables, with a given joint
distribution, on the unit interval …