EE562a_Lecture_Part_2


2.0 Random Vectors

2.1 Definitions of Correlation Matrices

Definition: Let $X_1, X_2, \ldots, X_n$ be $n$ random variables defined on $(U, \mathcal{F}, P)$. Then

$$\mathbf{X}(u) = \begin{bmatrix} X_1(u) \\ X_2(u) \\ \vdots \\ X_n(u) \end{bmatrix}$$

is a random vector. To fully characterize $\mathbf{X}(u)$ we need an $n$-dimensional joint pdf, which often leads to very cumbersome calculations. Instead, we usually work with first- and second-order statistics in our study of random vectors; this is sufficient for most needs. We often write $\mathbf{X}$ for $\mathbf{X}(u)$.

Definition: $\boldsymbol{\mu}_X = E[\mathbf{X}]$ is called the mean vector. Note $\boldsymbol{\mu}_X = \left[E[X_1(u)], \ldots, E[X_n(u)]\right]^t$, where $t$ denotes the transpose.

Definition: $R_X = E\!\left[\mathbf{X}\mathbf{X}^\dagger\right]$ is called the correlation matrix. Here $\dagger$ denotes the conjugate transpose, i.e., $\mathbf{X}^\dagger(u) = \left(X_1^*(u), \ldots, X_n^*(u)\right)$. Thus

$$R_X = \begin{bmatrix} E[X_1(u)X_1^*(u)] & \cdots & E[X_1(u)X_n^*(u)] \\ \vdots & \ddots & \vdots \\ E[X_n(u)X_1^*(u)] & \cdots & E[X_n(u)X_n^*(u)] \end{bmatrix}.$$

Definition: $K_X = E\!\left[(\mathbf{X} - \boldsymbol{\mu}_X)(\mathbf{X} - \boldsymbol{\mu}_X)^\dagger\right]$ is called the covariance matrix.

Note: $K_X = R_X - \boldsymbol{\mu}_X \boldsymbol{\mu}_X^\dagger$.

Definition: If $\mathbf{X}(u)$ and $\mathbf{Y}(u)$ are two random vectors, then $R_{XY} = E\!\left[\mathbf{X}\mathbf{Y}^\dagger\right]$ is called the cross-correlation matrix.

Note: $R_{XY} = R_{YX}^\dagger$.

Definition: $K_{XY} = E\!\left[(\mathbf{X} - \boldsymbol{\mu}_X)(\mathbf{Y} - \boldsymbol{\mu}_Y)^\dagger\right]$ is called the cross-covariance matrix.

Note: $K_{XY} = R_{XY} - \boldsymbol{\mu}_X \boldsymbol{\mu}_Y^\dagger$.

Let $\mathbf{Z} = \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}$ be the stacked vector. Then the correlation matrix for $\mathbf{Z}$ is

$$R_Z = E\!\left[\mathbf{Z}\mathbf{Z}^\dagger\right] = \begin{bmatrix} R_X & R_{XY} \\ R_{YX} & R_Y \end{bmatrix}.$$

2.2 Properties of Correlation Matrices

Definition: A matrix $M$ is said to be Hermitian symmetric if $M = M^\dagger$.

Note:

$$R_X^\dagger = \left(E\!\left[\mathbf{X}\mathbf{X}^\dagger\right]\right)^\dagger = E\!\left[\left(\mathbf{X}\mathbf{X}^\dagger\right)^\dagger\right] = E\!\left[\left(\mathbf{X}^\dagger\right)^\dagger \mathbf{X}^\dagger\right] = E\!\left[\mathbf{X}\mathbf{X}^\dagger\right] = R_X,$$

so correlation matrices are Hermitian symmetric.

Definition: A Hermitian symmetric matrix $M$ is said to be non-negative definite if for any complex vector $\mathbf{a}$,

$$\mathbf{a}^\dagger M \mathbf{a} \geq 0.$$

Claim: Correlation matrices are non-negative definite.

Proof: We just need to show $\mathbf{a}^\dagger R_X \mathbf{a} \geq 0$:

$$\mathbf{a}^\dagger R_X \mathbf{a} = (a_1^* \cdots a_n^*)\; E\!\left[\begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix} (X_1^* \cdots X_n^*)\right] \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$$

$$= E\!\left[\left(\sum_{i=1}^n a_i^* X_i\right)\left(\sum_{j=1}^n a_j X_j^*\right)\right] = E\!\left[\left(\sum_{i=1}^n a_i^* X_i\right)\left(\sum_{j=1}^n a_j^* X_j\right)^{\!*}\right] = E\!\left[\,\left|\sum_{i=1}^n a_i^* X_i\right|^2\right] \geq 0.$$

Also, $\mathbf{a}^\dagger K_X \mathbf{a} \geq 0$.

2.3 Linear Transformations of Random Vectors

Suppose $\mathbf{Y}(u)$ is formed by a linear transformation of $\mathbf{X}(u)$, where $\mathbf{X}(u) \in \mathbb{R}^n$ and $\mathbf{Y}(u) \in \mathbb{R}^m$:

$$Y_i(u) = \sum_{j=1}^n h_{ij} X_j(u), \quad i = 1, 2, \ldots, m,$$

or $\mathbf{Y}(u) = H\mathbf{X}(u)$, where

$$H = \begin{bmatrix} h_{11} & \cdots & h_{1n} \\ \vdots & \ddots & \vdots \\ h_{m1} & \cdots & h_{mn} \end{bmatrix}.$$

Let us now look at the first and second moments:

$$\boldsymbol{\mu}_Y = E[\mathbf{Y}(u)] = E[H\mathbf{X}(u)] = H\,E[\mathbf{X}(u)] = H\boldsymbol{\mu}_X,$$

$$R_Y = E\!\left[\mathbf{Y}(u)\mathbf{Y}^\dagger(u)\right] = E\!\left[H\mathbf{X}(u)\left(H\mathbf{X}(u)\right)^\dagger\right] = H\,E\!\left[\mathbf{X}(u)\mathbf{X}^\dagger(u)\right] H^\dagger = H R_X H^\dagger.$$

Also, $K_Y = H K_X H^\dagger$.

Question: Given a vector $\mathbf{X}$ of $n$ uncorrelated random variables with zero mean and unit variance, how do we transform this vector into a vector $\mathbf{Y}$ with mean $\mathbf{c}$ and covariance $K_Y$?

Let $\tilde{\mathbf{Y}}(u) = H\mathbf{X}(u)$. Then

$$\boldsymbol{\mu}_{\tilde{Y}} = H\boldsymbol{\mu}_X = \mathbf{0} \quad \text{and} \quad K_{\tilde{Y}} = H K_X H^\dagger.$$

Now $K_X$ is the $n \times n$ identity matrix $I_n$, so

$$K_{\tilde{Y}} = H I_n H^\dagger = H H^\dagger.$$

We now let $\mathbf{Y}(u) = \tilde{\mathbf{Y}}(u) + \mathbf{c}$, i.e., $\mathbf{Y}(u) = H\mathbf{X}(u) + \mathbf{c}$. Hence

$$\boldsymbol{\mu}_Y = \mathbf{c} \quad \text{and} \quad K_Y = K_{\tilde{Y}} = H H^\dagger.$$

Problem: We need to find $H$ given some $K_Y$. This is a matrix factorization problem that we will deal with later in the course.
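To make the definitions in Section 2.1 and the properties in Section 2.2 concrete, here is a minimal numerical sketch in Python with NumPy. The language, the variable names, and the specific mean values are illustrative assumptions, not part of the original notes. It estimates $\boldsymbol{\mu}_X$, $R_X$, and $K_X$ from samples of a complex random vector and checks the identity $K_X = R_X - \boldsymbol{\mu}_X\boldsymbol{\mu}_X^\dagger$, Hermitian symmetry, and non-negative definiteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 200_000  # vector dimension, number of samples

# N realizations of a complex random vector X (one per column),
# shifted to have a nonzero mean (values chosen arbitrarily)
X = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))
X += np.array([[1.0], [2.0], [-1.0]])

mu = X.mean(axis=1, keepdims=True)        # sample mean vector mu_X
R = (X @ X.conj().T) / N                  # sample correlation matrix R_X
K = ((X - mu) @ (X - mu).conj().T) / N    # sample covariance matrix K_X

# K_X = R_X - mu_X mu_X^dagger (exact for sample moments)
print(np.allclose(K, R - mu @ mu.conj().T))   # True
# Hermitian symmetry: R_X = R_X^dagger
print(np.allclose(R, R.conj().T))             # True
# Non-negative definiteness: a^dagger R_X a >= 0 for a random complex a
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print((a.conj() @ R @ a).real >= 0)           # True
```

Note that the first check holds exactly (not just approximately) because the same identity that relates the true moments also relates the sample moments when $\boldsymbol{\mu}_X$ is estimated by the sample mean.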
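In the same spirit, a short sketch of the moment relations of Section 2.3 under $\mathbf{Y} = H\mathbf{X}$. Here $H$ is just a randomly chosen example matrix, and the vectors are real, so $H^\dagger = H^t$; all names are again illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 3, 2, 200_000

H = rng.standard_normal((m, n))   # arbitrary m x n transformation matrix
X = rng.standard_normal((n, N)) + np.array([[1.0], [0.5], [-2.0]])
Y = H @ X                         # applies Y(u) = H X(u) to every sample

mu_X = X.mean(axis=1, keepdims=True)
mu_Y = Y.mean(axis=1, keepdims=True)
K_X = np.cov(X)                   # sample covariance of X (rows = variables)
K_Y = np.cov(Y)                   # sample covariance of Y

print(np.allclose(mu_Y, H @ mu_X))        # mu_Y = H mu_X
print(np.allclose(K_Y, H @ K_X @ H.T))    # K_Y = H K_X H^t
```

Both checks pass exactly (up to floating-point error) because the sample mean and sample covariance transform under $H$ the same way the true moments do.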
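Finally, the Question at the end of Section 2.3 can be illustrated numerically. One standard way to obtain an $H$ with $HH^\dagger = K_Y$ is the Cholesky factorization, which is one candidate for the matrix factorization the notes defer to later in the course; the specific target $K_Y$ and mean $\mathbf{c}$ below are made-up examples:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 2, 500_000

# Assumed example target covariance (must be non-negative definite) and mean
K_Y = np.array([[2.0, 0.6],
                [0.6, 1.0]])
c = np.array([[5.0], [-3.0]])

H = np.linalg.cholesky(K_Y)       # lower-triangular H with H H^t = K_Y
X = rng.standard_normal((n, N))   # uncorrelated, zero mean, unit variance
Y = H @ X + c                     # Y = H X + c, as in the notes

print(Y.mean(axis=1))             # ~ [5, -3], i.e., mu_Y = c
print(np.cov(Y))                  # ~ K_Y, i.e., K_Y = H H^t
```

The sample statistics match the targets up to sampling error of order $1/\sqrt{N}$, confirming the construction $\mathbf{Y} = H\mathbf{X} + \mathbf{c}$ derived above.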