# Lecture 12


**Example 4:** Let $S$ be a 2-dimensional space of real vectors. Consider the following inner product with a parameter $\beta$ such that $0 \le \beta \le 1$:

$$\langle x, y \rangle = [x_1 \;\; x_2] \begin{bmatrix} 1 & \beta \\ \beta & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = x_1 y_1 + \beta x_2 y_1 + \beta x_1 y_2 + x_2 y_2.$$

**Question:** Is this a valid inner product?

**Answer:** Conditions 1, 2 and 3 are easy to check, so let's check condition 4:

$$\langle x, x \rangle = [x_1 \;\; x_2] \begin{bmatrix} 1 & \beta \\ \beta & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1^2 + 2\beta x_1 x_2 + x_2^2 = \beta (x_1 + x_2)^2 + (1-\beta) x_1^2 + (1-\beta) x_2^2 \ge 0,$$

and (for $\beta < 1$) equality holds if and only if $x_1 = 0$ and $x_2 = 0$. (When $\beta = 1$ the form is only positive semidefinite: $x = (1, -1)$ gives $\langle x, x \rangle = 0$.)

We would like to use just one function, given by a valid inner product, to obtain the notion of:

1. An angle between two vectors.
2. A length of a vector.
3. A distance between two vectors.

To get a general notion of the length of a vector, we will require a function that assigns a non-negative number to every vector. The technical name for the length of a vector is "norm".

**Norm:** We would like this norm to satisfy the following properties.

Notation: $\|\cdot\|$.

Properties: For all $x$, $y$ and scalar $\alpha$,

1. $\|x\| \ge 0$, with equality if and only if $x = 0$.
2. $\|\alpha x\| = |\alpha| \, \|x\|$.
3. Triangle inequality: $\|x + y\| \le \|x\| + \|y\|$.

The first two properties are easy to interpret. The third captures the idea that the length of the sum of two vectors should be no more than the sum of the lengths of the individual vectors. In other words, a side of a triangle must be no longer than the sum of the other two sides (Fig. 54): the sum of two short vectors cannot be too long.

[Figure 54: Triangle inequality — a triangle with sides $x$, $y$, and $z = x + y$.]

**Cauchy-Schwarz Inequality:** For any two vectors $x$ and $y$, the inner product satisfies

$$|\langle x, y \rangle|^2 \le \langle x, x \rangle \, \langle y, y \rangle.$$

*Proof:* Consider the inner product of $x - \alpha y$ with itself, for an arbitrary scalar $\alpha$:

$$
\begin{aligned}
0 \le \langle x - \alpha y, x - \alpha y \rangle
&= \langle x, x \rangle + \langle x, -\alpha y \rangle + \langle -\alpha y, x \rangle + \langle -\alpha y, -\alpha y \rangle \\
&= \langle x, x \rangle - \alpha^* \langle x, y \rangle - \alpha \langle y, x \rangle + (-\alpha)(-\alpha^*) \langle y, y \rangle \\
&= \langle x, x \rangle - 2\,\mathrm{Re}\big(\alpha \langle y, x \rangle\big) + |\alpha|^2 \langle y, y \rangle.
\end{aligned}
$$

Now choose $\alpha = \dfrac{\langle x, y \rangle}{\langle y, y \rangle}$. Then $\alpha \langle y, x \rangle = \dfrac{|\langle x, y \rangle|^2}{\langle y, y \rangle}$, which is real, so

$$0 \le \langle x, x \rangle - 2\frac{|\langle x, y \rangle|^2}{\langle y, y \rangle} + \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle} = \langle x, x \rangle - \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle}
\;\;\Rightarrow\;\; |\langle x, y \rangle|^2 \le \langle x, x \rangle \, \langle y, y \rangle.$$
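Both claims above — positive definiteness of the $\beta$-parameterized form and the Cauchy-Schwarz inequality — can be spot-checked numerically. This is a minimal sketch assuming NumPy; the helper names `gram` and `inner` are ours, not from the notes.

```python
import numpy as np

def gram(beta):
    # Gram matrix G = [[1, beta], [beta, 1]] from Example 4
    return np.array([[1.0, beta], [beta, 1.0]])

def inner(x, y, beta):
    # <x, y> = x^T G y
    return x @ gram(beta) @ y

beta = 0.5
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    y = rng.standard_normal(2)
    # Condition 4: <x, x> >= 0 (G is positive definite for 0 <= beta < 1)
    assert inner(x, x, beta) >= 0
    # Cauchy-Schwarz: |<x, y>|^2 <= <x, x> <y, y> (small tolerance for rounding)
    assert inner(x, y, beta) ** 2 <= inner(x, x, beta) * inner(y, y, beta) + 1e-12

# The eigenvalues of G are 1 - beta and 1 + beta, so G > 0 exactly when beta < 1.
eigs = np.linalg.eigvalsh(gram(beta))
```

The eigenvalue check makes the $\beta = 1$ caveat concrete: at $\beta = 1$ the smaller eigenvalue hits zero and the form is only positive semidefinite.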
**Norm induced from the inner product:** Consider the function $S \to \mathbb{R}^+$, where $\mathbb{R}^+$ denotes the set of non-negative real numbers, given by

$$x \mapsto \sqrt{\langle x, x \rangle}.$$

It turns out that this function satisfies all the required properties of a norm.

**Claim:** $\sqrt{\langle x, x \rangle}$ is a valid norm.

1. Property 1: $\langle x, x \rangle \ge 0$, with equality if and only if $x = 0$.

2. Property 2:
$$\langle \alpha x, \alpha x \rangle = \alpha \langle x, \alpha x \rangle = \alpha \langle \alpha x, x \rangle^* = \alpha \left[ \alpha \langle x, x \rangle \right]^* = \alpha \alpha^* \langle x, x \rangle = |\alpha|^2 \langle x, x \rangle,$$
so $\sqrt{\langle \alpha x, \alpha x \rangle} = |\alpha| \sqrt{\langle x, x \rangle}$.

3. Property 3:
$$
\begin{aligned}
\langle x + y, x + y \rangle
&= \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle \\
&= \langle x, x \rangle + \langle x, y \rangle + \langle x, y \rangle^* + \langle y, y \rangle \\
&= \langle x, x \rangle + 2\,\mathrm{Re}(\langle x, y \rangle) + \langle y, y \rangle \\
&\le \langle x, x \rangle + 2\,|\langle x, y \rangle| + \langle y, y \rangle \\
&\le \langle x, x \rangle + 2\sqrt{\langle x, x \rangle \langle y, y \rangle} + \langle y, y \rangle \\
&= \left( \sqrt{\langle x, x \rangle} + \sqrt{\langle y, y \rangle} \right)^2,
\end{aligned}
$$
where the first inequality follows from the fact that the real part of a complex number is less than or equal to its absolute value, and the second follows from the Cauchy-Schwarz inequality.

Hence we can use $\sqrt{\langle x, x \rangle}$ as a norm (length). From now on we will write $\|x\| = \sqrt{\langle x, x \rangle}$. The notion of length is induced from the inner product.

Now we can interpret the Cauchy-Schwarz inequality in the following way. Since $|\langle x, y \rangle| \le \|x\| \, \|y\|$, we can write

$$\frac{|\langle x, y \rangle|}{\|x\| \, \|y\|} = \cos(\theta).$$

In other words, one can define a notion of an angle $\theta$ between $x$ and $y$, characterized by

$$\cos(\theta) = \frac{|\langle x, y \rangle|}{\sqrt{\langle x, x \rangle \langle y, y \rangle}}.$$

**Orthogonality:** The above argument implies that two vectors are at right angles to each other if and only if

$$\cos(\theta) = 0 \iff |\langle x, y \rangle| = 0 \iff \langle x, y \rangle = 0.$$

**Remark:** If $x$ and $y$ are orthogonal, i.e., $\langle x, y \rangle = 0$, then

$$\|x + y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle y, y \rangle = \|x\|^2 + \|y\|^2.$$

**Intuition for the Cauchy-Schwarz inequality:** Let us try to find $\beta$ such that the length of the error between $x$ and $\beta y$ is minimized:

$$e = x - \beta y.$$

Observe that $e' = x - \dfrac{\langle x, y \rangle}{\|y\|^2}\, y$ is orthogonal to $y$:

$$\left\langle x - \frac{\langle x, y \rangle}{\|y\|^2}\, y, \; y \right\rangle = \langle x, y \rangle - \frac{\langle x, y \rangle}{\|y\|^2} \langle y, y \rangle = 0.$$

In other words, $x - \alpha y$ is orthogonal to $y$ for $\alpha = \dfrac{\langle x, y \rangle}{\langle y, y \rangle}$.

Hence $e = \tilde{e} + e'$, where $\tilde{e} = (\alpha - \beta)\, y$, and by orthogonality

$$\|e\|^2 = \|\tilde{e}\|^2 + \|e'\|^2.$$

Since $e'$ does not depend on $\beta$, it suffices to minimize the length of $\tilde{e}$, which is achieved by choosing $\beta = \alpha = \dfrac{\langle x, y \rangle}{\|y\|^2}$. This is illustrated in Fig. 55.
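The orthogonal-decomposition argument above can be verified numerically. This is a minimal sketch using the standard Euclidean inner product on $\mathbb{R}^3$ with NumPy; the particular vectors $x$ and $y$ are our own choice of example.

```python
import numpy as np

x = np.array([3.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 1.0])

alpha = np.dot(x, y) / np.dot(y, y)   # alpha = <x, y> / <y, y>
e_perp = x - alpha * y                # e' = x - alpha*y

# e' is orthogonal to y:
assert abs(np.dot(e_perp, y)) < 1e-12

# For any beta, e = x - beta*y splits as e = (alpha - beta)*y + e', so
# ||e||^2 = (alpha - beta)^2 ||y||^2 + ||e'||^2, minimized at beta = alpha.
for beta in np.linspace(-2.0, 2.0, 9):
    e = x - beta * y
    lhs = np.dot(e, e)
    rhs = (alpha - beta) ** 2 * np.dot(y, y) + np.dot(e_perp, e_perp)
    assert abs(lhs - rhs) < 1e-9
```

The loop confirms the Pythagorean split for several values of $\beta$, which is exactly why the minimizing choice is $\beta = \alpha$.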
[Figure 55: Intuition for the Cauchy-Schwarz inequality — $x$ decomposed into the projection $\alpha y$ along $y$ and the orthogonal error $e'$; a general choice $\beta y$ leaves the error $e$.]

This gives a new interpretation of the inner product: $\alpha y$ is the projection of the vector $x$ onto the direction of the vector $y$, where $\alpha = \dfrac{\langle x, y \rangle}{\|y\|^2}$.

Now let us go back to the 2-dimensional Hilbert space with inner product

$$\langle x, y \rangle = [x_1 \;\; x_2] \begin{bmatrix} 1 & \beta \\ \beta & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = x_1 y_1 + \beta x_2 y_1 + \beta x_1 y_2 + x_2 y_2.$$

Let us take $\beta = \frac{1}{\sqrt{2}}$. A picture of this Hilbert space is given in Figure 56.

[Figure 56: An example of a two-dimensional Hilbert space, showing the vectors $(0,1)$, $(1,0)$, $(0.707, 0.707)$ and $(-0.707, 0.707)$.]

Note that this Hilbert space has the same collection of vectors and scalars as the standard two-dimensional Euclidean space; the geometries of the two spaces are determined solely by their inner products. In the figure, we can see that the angle between the vectors $(0, 1)$ and $(1, 0)$ is $\pi/4$. Let us confirm this using the corresponding inner product. Note that $\|(0, 1)\| = \|(1, 0)\| = 1$, so these are unit-norm vectors, and

$$\langle (1, 0), (0, 1) \rangle = \frac{1}{\sqrt{2}}.$$

This implies that the angle $\theta$ between these two vectors is given by

$$\cos \theta = \frac{|\langle (1, 0), (0, 1) \rangle|}{\|(0, 1)\| \, \|(1, 0)\|} = \frac{1}{\sqrt{2}},$$

which is the confirmation. We can also note that

$$\left\| \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) \right\|^2 = 1 + \tfrac{1}{\sqrt{2}} > 1, \qquad \left\| \left( -\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) \right\|^2 = 1 - \tfrac{1}{\sqrt{2}} < 1.$$

This space can be thought of as a stretched version of the standard 2-dimensional Euclidean space along the direction $\left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right)$, by a factor determined by $\beta$.

The notion of length (norm) can be used to obtain the notion of distance between two vectors. The technical name for this is a metric. This is a function that assigns a non-negative number to every pair of vectors in the space.

Notation: $d(x, y)$.

Properties:

1. $d(x, y) = d(y, x)$.
2. $d(x, y) \ge 0$, with equality if and only if $x = y$.
3. $d(x, z) \le d(x, y) + d(y, z)$ for all $x$, $y$ and $z$.
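The numbers in the $\beta = 1/\sqrt{2}$ example above can be checked numerically. This is a minimal sketch assuming NumPy; the Gram-matrix formulation and the helper names `inner` and `norm` are ours.

```python
import numpy as np

beta = 1 / np.sqrt(2)
G = np.array([[1.0, beta], [beta, 1.0]])   # Gram matrix of the inner product

def inner(x, y):
    return np.asarray(x) @ G @ np.asarray(y)

def norm(x):
    return np.sqrt(inner(x, x))

e1, e2 = [1.0, 0.0], [0.0, 1.0]
assert np.isclose(norm(e1), 1.0) and np.isclose(norm(e2), 1.0)

# cos(theta) = |<e1, e2>| / (||e1|| ||e2||) = 1/sqrt(2)  =>  theta = pi/4
theta = np.arccos(abs(inner(e1, e2)) / (norm(e1) * norm(e2)))

# The diagonal direction is stretched, the anti-diagonal shrunk:
assert norm([beta, beta]) > 1    # ||(1/sqrt2, 1/sqrt2)||^2 = 1 + 1/sqrt2
assert norm([-beta, beta]) < 1   # ||(-1/sqrt2, 1/sqrt2)||^2 = 1 - 1/sqrt2
```

The last two assertions make the "stretched Euclidean space" picture quantitative.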
The last inequality (also called the triangle inequality) says that if vectors $x$ and $y$ are close to each other, and vectors $y$ and $z$ are close to each other, then $x$ and $z$ cannot be too far apart.

**Metric induced by the norm:** Consider the function $S \times S \to \mathbb{R}$ given by

$$d(x, y) = \|x - y\|.$$

It turns out that this function is a valid metric. How do we see this? We have to check the three axioms of a metric. The first two are easy; let us check the third:

$$d(x, z) = \|x - z\| = \|x - y + y - z\| \le \|x - y\| + \|y - z\| = d(x, y) + d(y, z),$$

where the inequality follows from the triangle inequality of the norm. Hence this function is a valid metric.

So, by defining one function, the inner product, we get the notions of (a) the angle between two vectors, (b) the length of a vector, and (c) the distance between two vectors.

**Note:** From now on, we will drop the vector notation and simply use lower-case English letters to denote vectors and Greek letters to denote scalars.

Let us consider an example. Let $S$ denote the space of all complex-valued DT signals with

$$\langle x, y \rangle = \sum_{n=-\infty}^{\infty} x[n] \, y^*[n].$$

Then the induced norm is

$$\|x\| = \sqrt{\sum_n |x[n]|^2},$$

and the induced metric is

$$d(x, y) = \sqrt{\sum_n |x[n] - y[n]|^2}.$$

**Hilbert space of random variables:** The framework of Hilbert spaces is very general. It can also be used to solve practical problems where the signals are modeled as random vectors or random processes. In the standard model of probability, we have three objects:

(a) Sample space $\Omega$: the set of all observable outcomes of a random experiment.

(b) Event space $\mathcal{B}$: a collection of events satisfying certain properties, called a sigma-algebra of events. If you are not familiar with sigma-algebras, it is not a problem for this course; at this point, just treat it as an interesting collection of events.

(c) Probability assignment $P$: an assignment of numbers between 0 and 1 to the events in the event space.
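The DT-signal inner product and its induced norm and metric can be sketched numerically. This is a minimal sketch assuming NumPy and signals of finite support stored as arrays over a common index range; the helper names `inner`, `norm`, and `dist` are ours.

```python
import numpy as np

def inner(x, y):
    return np.sum(x * np.conj(y))       # <x, y> = sum_n x[n] y*[n]

def norm(x):
    return np.sqrt(inner(x, x).real)    # ||x|| = sqrt(sum_n |x[n]|^2)

def dist(x, y):
    return norm(x - y)                  # d(x, y) = ||x - y||

# Three example complex-valued signals on the same finite support:
x = np.array([1 + 1j, 0.0, 2.0])
y = np.array([1.0, 1j, 0.0])
z = np.array([0.0, 1.0, 1.0])

# Triangle inequality for the induced metric: d(x, z) <= d(x, y) + d(y, z)
assert dist(x, z) <= dist(x, y) + dist(y, z)
```

Note that $\langle x, x \rangle$ is real and non-negative even for complex signals, which is why taking `.real` before the square root is safe.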
A random variable $X$ is a function that maps elements of $\Omega$ to the set of real numbers, $X : \Omega \to \mathbb{R}$. This function induces an assignment of probability to subsets of $\mathbb{R}$, called the probability mass function of $X$.

Let us consider an example. Consider the random experiment of rolling three fair dice. There are $216$ outcomes, each equally likely. The event space in this simple example is the collection of all subsets of the sample space. How many events are there in the event space? There are $2^{216}$ events. The probability assignment is

$$P(E) = \frac{|E|}{216}$$

for every event $E$, where $|E|$ denotes the size of $E$. Consider the following random variable: for every $w$ in $\Omega$,

$$X(w) = \text{number of 6's in } w.$$

For example, if $w = (4, 5, 6)$, denoting that 4, 5 and 6 are observed on the first, second and third rolls, then $X(w) = 1$.

Now let us create the Hilbert space. For this random experiment, we associate a Hilbert space $S$ as follows. A random variable $X$ is a vector in this space, so $S$ is composed of all functions with domain $\Omega$ and range $\mathbb{R}$. For the time being, we will use $\mathbb{R}$ as the set of scalars. Every random variable defined on this random experiment is a vector in this Hilbert space; in other words, a function from $\Omega$ to $\mathbb{R}$ is treated as a vector. Clearly, we can add two vectors in this space to get a third vector, and scale a vector by a scalar, so vector addition and multiplication by a scalar are well-defined. The all-zero vector is the random variable that assigns $0$ to every outcome. To specify a vector in this space, we need $216$ numbers, i.e., the "dimension" of this Hilbert space is $216$.

This Hilbert space is endowed with the following inner product:

$$\langle x, y \rangle = E(XY).$$

Let us confirm that this is a valid inner product:

- $\langle x, y \rangle = E(XY) = E(YX) = \langle y, x \rangle$.
- $\langle \alpha x, y \rangle = E(\alpha XY) = \alpha E(XY) = \alpha \langle x, y \rangle$.
- $\langle x + y, z \rangle = E((X + Y)Z) = E(XZ) + E(YZ) = \langle x, z \rangle + \langle y, z \rangle$.
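The dice example above is small enough to enumerate exhaustively. This is a minimal sketch using only the standard library; the helper names `outcomes`, `X`, and `expect` are ours, not from the notes.

```python
import itertools

# All 216 equally likely outcomes of rolling three fair dice
outcomes = list(itertools.product(range(1, 7), repeat=3))
assert len(outcomes) == 216

def X(w):
    return sum(1 for roll in w if roll == 6)   # number of 6's in the outcome

def expect(f, g=lambda w: 1):
    # E(FG) under the uniform assignment P({w}) = 1/216
    return sum(f(w) * g(w) for w in outcomes) / 216

# Inner product of X with the constant random variable 1 is E(X) = 3 * (1/6)
assert abs(expect(X) - 0.5) < 1e-12
```

Here `expect(X, X)` computes the squared norm $\langle x, x \rangle = E(X^2)$ of $X$ as a vector in this 216-dimensional space.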
- $\langle x, x \rangle = E(X^2) \ge 0$, with equality if and only if $X(w) = 0$ for all $w \in \Omega$, i.e., $x = 0$.

Hence it is a valid inner product, and we now have a valid Hilbert space of random variables. For complex-valued random variables, the inner product is given by $E(XY^*)$.

Now let us see what the Cauchy-Schwarz inequality implies here:

$$|\langle x, y \rangle|^2 \le \langle x, x \rangle \, \langle y, y \rangle \implies |E(XY)| \le \sqrt{E(X^2)} \, \sqrt{E(Y^2)},$$

or in other words,

$$\frac{|E(XY)|}{\sqrt{E(X^2)\, E(Y^2)}} \le 1.$$

The induced norm is $\sqrt{E(X^2)}$, and the induced metric is $\sqrt{E[(X - Y)^2]}$ — the root-mean-squared error between the two random variables. So, what is the angle $\theta$ between two random variables $X$ and $Y$? The answer is

$$\cos \theta = \frac{|E(XY)|}{\sqrt{E(X^2)\, E(Y^2)}}.$$
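The "angle between random variables" can be computed concretely on the dice experiment. This is a minimal sketch using only the standard library; the choice of $Y$ (the value of the first die) and the helper names are ours.

```python
import itertools
import math

outcomes = list(itertools.product(range(1, 7), repeat=3))

def E(f):
    # Expectation under the uniform assignment over the 216 outcomes
    return sum(f(w) for w in outcomes) / len(outcomes)

X = lambda w: sum(1 for r in w if r == 6)   # number of 6's
Y = lambda w: w[0]                          # value of the first die

exy = E(lambda w: X(w) * Y(w))
ex2 = E(lambda w: X(w) ** 2)
ey2 = E(lambda w: Y(w) ** 2)

# Cauchy-Schwarz for random variables: E(XY)^2 <= E(X^2) E(Y^2)
assert exy ** 2 <= ex2 * ey2

# cos(theta) = |E(XY)| / sqrt(E(X^2) E(Y^2)) is always in [0, 1]
cos_theta = abs(exy) / math.sqrt(ex2 * ey2)
assert 0 <= cos_theta <= 1
```

Since $X$ and $Y$ are dependent through the first die, $\cos\theta$ is strictly between 0 and 1: the two vectors are neither orthogonal nor collinear.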