Lecture 2. Orthogonal Vectors and Matrices

Since the 1960s, many of the best algorithms of numerical linear algebra have been based in one way or another on orthogonality. In this lecture we present the ingredients: orthogonal vectors and orthogonal (unitary) matrices.

Adjoint

The complex conjugate of a scalar $z$, written $\bar{z}$ or $z^*$, is obtained by negating its imaginary part. For real $z$, $\bar{z} = z$. The hermitian conjugate or adjoint of an $m \times n$ matrix $A$, written $A^*$, is the $n \times m$ matrix whose $i,j$ entry is the complex conjugate of the $j,i$ entry of $A$. For example,

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} \quad\Longrightarrow\quad A^* = \begin{bmatrix} \bar{a}_{11} & \bar{a}_{21} & \bar{a}_{31} \\ \bar{a}_{12} & \bar{a}_{22} & \bar{a}_{32} \end{bmatrix}.$$

If $A = A^*$, $A$ is hermitian. By definition, a hermitian matrix must be square.
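As a quick numerical illustration (not part of the text), the adjoint and the hermitian property can be checked with NumPy; the matrices below are arbitrary examples:

```python
import numpy as np

# An arbitrary 3x2 complex matrix (example values, not from the text)
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j],
              [4 - 3j, 5 + 2j]])

# The adjoint A* is the conjugate transpose: an n x m matrix
A_star = A.conj().T
assert A_star.shape == (2, 3)
assert A_star[0, 2] == np.conj(A[2, 0])   # (A*)_{ij} = conj(A_{ji})

# A hermitian matrix satisfies H = H*; note that it must be square
H = np.array([[2 + 0j, 1 - 1j],
              [1 + 1j, 3 + 0j]])
assert np.allclose(H, H.conj().T)
```

For a real matrix, `.conj()` has no effect and the adjoint reduces to the ordinary transpose `A.T`.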
For real $A$, the adjoint simply interchanges the rows and columns of $A$. In this case, the adjoint is also known as the transpose, and is written $A^T$. If a real matrix is hermitian, that is, $A = A^T$, then it is also said to be symmetric.

Most textbooks of numerical linear algebra assume that the matrices under discussion are real and thus use principally $T$ instead of $*$. Since most of the ideas to be dealt with are not intrinsically restricted to the reals, however, we have followed the other course. Thus, for example, in this book a row vector will usually be denoted by, say, $a^*$ rather than $a^T$. The reader who prefers to imagine that all quantities are real and that $*$ is a synonym for $T$ will rarely get into trouble.

Inner Product

The inner product of two column vectors $x, y \in \mathbb{C}^m$ is the product of the adjoint of $x$ by $y$:

$$x^*y = \sum_{i=1}^{m} \bar{x}_i y_i. \tag{2.1}$$

The Euclidean length of $x$ may be written $\|x\|$ (vector norms such as this are discussed systematically in the next lecture), and can be defined as the square root of the inner product of $x$ with itself:

$$\|x\| = \sqrt{x^*x} = \left(\sum_{i=1}^{m} |x_i|^2\right)^{1/2}. \tag{2.2}$$

The cosine of the angle $\alpha$ between $x$ and $y$ can also be expressed in terms of
the inner product:

$$\cos\alpha = \frac{x^*y}{\|x\|\,\|y\|}. \tag{2.3}$$

At various points of this book, as here, we mention geometric interpretations of algebraic formulas. For these geometric interpretations, the reader should think of the vectors as real rather than complex, although usually the interpretations can be carried over in one way or another to the complex case too.

The inner product is bilinear, which means that it is linear in each vector separately:

$$(x_1 + x_2)^*y = x_1^*y + x_2^*y,$$
$$x^*(y_1 + y_2) = x^*y_1 + x^*y_2,$$
$$(\alpha x)^*(\beta y) = \bar{\alpha}\beta\, x^*y.$$

We shall also frequently use the easily proved property that for any matrices
or vectors $A$ and $B$ of compatible dimensions,

$$(AB)^* = B^*A^*. \tag{2.4}$$

This is analogous to the equally important formula for products of invertible square matrices,

$$(AB)^{-1} = B^{-1}A^{-1}. \tag{2.5}$$

The notation $A^{-*}$ is a shorthand for $(A^*)^{-1}$ or $(A^{-1})^*$; these two are equal, as can be verified by applying (2.4) with $B = A^{-1}$.

Orthogonal Vectors

A pair of vectors $x$ and $y$ are said to be orthogonal if $x^*y = 0$. If $x$ and $y$
are real, this means they lie at right angles to each other in $\mathbb{R}^m$. Two sets of vectors $X$ and $Y$ are orthogonal (also stated "$X$ is orthogonal to $Y$") if every $x \in X$ is orthogonal to every $y \in Y$.

A set of nonzero vectors $S$ is orthogonal if its elements are pairwise orthogonal, i.e., if for $x, y \in S$, $x \neq y \Rightarrow x^*y = 0$. A set of vectors is orthonormal if it is orthogonal and, in addition, every $x \in S$ has $\|x\| = 1$.

Theorem 2.1. The vectors in an orthogonal set $S$ are linearly independent.

Proof. If the vectors in $S$ are not independent, then some $v_k \in S$ can be expressed as a linear combination of other members $v_1, \ldots, v_n \in S$,
$$v_k = \sum_{\substack{i=1 \\ i \neq k}}^{n} c_i v_i.$$

Since $v_k \neq 0$, $v_k^* v_k = \|v_k\|^2 > 0$. Using the bilinearity of inner products and the orthogonality of $S$, we calculate

$$v_k^* v_k = \sum_{\substack{i=1 \\ i \neq k}}^{n} c_i\, v_k^* v_i = 0,$$

which contradicts the assumption that the vectors in $S$ are nonzero. $\square$

As a corollary of Theorem 2.1 it follows that if an orthogonal set $S \subseteq \mathbb{C}^m$
contains $m$ vectors, then it is a basis for $\mathbb{C}^m$.

Components of a Vector

The most important idea to draw from the concepts of inner products and orthogonality is this: inner products can be used to decompose arbitrary vectors into orthogonal components.

For example, suppose that $\{q_1, q_2, \ldots, q_n\}$ is an orthonormal set, and let $v$ be an arbitrary vector. The quantity $q_j^* v$ is a scalar. Utilizing these scalars as coordinates in an expansion, we find that the vector

$$r = v - (q_1^* v)\, q_1 - (q_2^* v)\, q_2 - \cdots - (q_n^* v)\, q_n \tag{2.6}$$

is orthogonal to $\{q_1, q_2, \ldots, q_n\}$. This can be verified by computing $q_j^* r$:
$$q_j^* r = q_j^* v - (q_1^* v)(q_j^* q_1) - \cdots - (q_n^* v)(q_j^* q_n).$$

This sum collapses, since $q_j^* q_i = 0$ for $i \neq j$:

$$q_j^* r = q_j^* v - (q_j^* v)(q_j^* q_j) = 0.$$

Thus we see that $v$ can be decomposed into $n+1$ orthogonal components:

$$v = r + \sum_{i=1}^{n} (q_i^* v)\, q_i = r + \sum_{i=1}^{n} (q_i q_i^*)\, v. \tag{2.7}$$

In this decomposition, $r$ is the part of $v$ orthogonal to the set of vectors $\{q_1, q_2, \ldots, q_n\}$, or equivalently to the subspace spanned by this set of vectors, and $(q_i^* v)\, q_i$ is the part of $v$ in the direction of $q_i$. If $\{q_i\}$ is a basis for $\mathbb{C}^m$, then $n$ must be equal to $m$ and $r$ must be the zero vector, so $v$ is completely decomposed into $m$ orthogonal components in the directions of the $q_i$:

$$v = \sum_{i=1}^{m} (q_i^* v)\, q_i = \sum_{i=1}^{m} (q_i q_i^*)\, v. \tag{2.8}$$

In both (2.7) and (2.8) we have written the formula in two different ways,
once with $(q_i^* v)\, q_i$ and again with $(q_i q_i^*)\, v$. These expressions are equal, but they have different interpretations. In the first case, we view $v$ as a sum of coefficients $q_i^* v$ times vectors $q_i$. In the second, we view $v$ as a sum of orthogonal projections of $v$ onto the various directions $q_i$. The $i$th projection operation is achieved by the very special rank-one matrix $q_i q_i^*$. We shall discuss this and other projection processes in Lecture 6.

Unitary Matrices

A square matrix $Q \in \mathbb{C}^{m \times m}$ is unitary (in the real case we also say orthogonal) if $Q^* = Q^{-1}$, i.e., if $Q^*Q = I$. In terms of the columns of $Q$, this product can be written

$$\begin{bmatrix} q_1^* \\ q_2^* \\ \vdots \\ q_m^* \end{bmatrix} \begin{bmatrix} q_1 & q_2 & \cdots & q_m \end{bmatrix} = \begin{bmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{bmatrix}.$$

In other words, $q_i^* q_j = \delta_{ij}$, and the columns of a unitary matrix $Q$ form an orthonormal basis of $\mathbb{C}^m$. The symbol $\delta_{ij}$ is the Kronecker delta, equal to 1 if $i = j$ and 0 if $i \neq j$.

Multiplication by a Unitary Matrix

In the last lecture we discussed the interpretation of matrix-vector products
$Ax$ and $A^{-1}b$. If $A$ is a unitary matrix $Q$, these products become $Qx$ and $Q^*b$, and the same interpretations are of course still valid. As before, $Qx$ is the linear combination of the columns of $Q$ with coefficients $x_i$. Conversely, $Q^*b$ is the vector of coefficients of the expansion of $b$ in the basis of columns of $Q$. Schematically, the situation looks like this:

    b   :  coefficients of the expansion of b in {e_1, ..., e_m}

           -- multiplication by Q* -->    <-- multiplication by Q --

    Q*b :  coefficients of the expansion of b in {q_1, ..., q_m}

These processes of multiplication by a unitary matrix or its adjoint preserve geometric structure in the Euclidean sense, because inner products are preserved. That is, for unitary $Q$,

$$(Qx)^*(Qy) = x^*y, \tag{2.9}$$

as is readily verified by (2.4). The invariance of inner products means that angles between vectors are preserved, and so are their lengths:

$$\|Qx\| = \|x\|. \tag{2.10}$$

In the real case, multiplication by an orthogonal matrix $Q$ corresponds to a rigid rotation (if $\det Q = 1$) or reflection (if $\det Q = -1$) of the vector space.

Exercises
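As a numerical warm-up for these exercises (an illustration assuming NumPy, not part of the text), the preservation properties (2.9) and (2.10) can be checked on a random unitary matrix obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary Q: the QR factorization of a complex matrix
# yields a factor Q with orthonormal columns, so Q*Q = I
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)
assert np.allclose(Q.conj().T @ Q, np.eye(4))

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# (2.9): inner products are preserved, (Qx)*(Qy) = x*y
assert np.isclose((Q @ x).conj() @ (Q @ y), x.conj() @ y)

# (2.10): lengths are preserved, ||Qx|| = ||x||
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```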
Exercise 2.1. Show that if a matrix $A$ is both triangular and unitary, then it is diagonal.

Exercise 2.2. The Pythagorean theorem asserts that for a set of $n$ orthogonal vectors $\{x_i\}$,

$$\left\| \sum_{i=1}^{n} x_i \right\|^2 = \sum_{i=1}^{n} \|x_i\|^2.$$

(a) Prove this in the case $n = 2$ by an explicit computation of $\|x_1 + x_2\|^2$.

(b) Show that this computation also establishes the general case, by induction.

Exercise 2.3. Let $A \in \mathbb{C}^{m \times m}$ be hermitian. An eigenvector of $A$ is a nonzero vector $x \in \mathbb{C}^m$
such that $Ax = \lambda x$ for some $\lambda \in \mathbb{C}$, the corresponding eigenvalue.

(a) Prove that all eigenvalues of $A$ are real.

(b) Prove that if $x$ and $y$ are eigenvectors corresponding to distinct eigenvalues, then $x$ and $y$ are orthogonal.

Exercise 2.4. What can be said about the eigenvalues of a unitary matrix?

Exercise 2.5. Let $S \in \mathbb{C}^{m \times m}$ be skew-hermitian, i.e., $S^* = -S$.
(a) Show by using Exercise 2.3 that the eigenvalues of $S$ are pure imaginary.

(b) Show that $I - S$ is nonsingular.

(c) Show that the matrix $Q = (I-S)^{-1}(I+S)$, known as the Cayley transform of $S$, is unitary. (This is a matrix analogue of a linear fractional transformation $(1+s)/(1-s)$, which maps the left half of the complex $s$-plane conformally onto the unit disk.)

Exercise 2.6. If $u$ and $v$ are $m$-vectors, the matrix $A = I + uv^*$ is known as a rank-one perturbation of the identity. Show that if $A$ is nonsingular, then its inverse has the form $A^{-1} = I + \alpha uv^*$ for some scalar $\alpha$, and give an expression for $\alpha$.
For what $u$ and $v$ is $A$ singular? If it is singular, what is $\mathrm{null}(A)$?

Exercise 2.7. A Hadamard matrix is a matrix whose entries are all $\pm 1$ and whose transpose is equal to its inverse times a constant factor. It is known that if $A$ is a Hadamard matrix of dimension $m > 2$, then $m$ is a multiple of 4, but it is an unsolved problem whether there is a Hadamard matrix for every such $m$, though examples are known for all cases $m \leq 424$. Show that the following recursive description provides a Hadamard matrix of each dimension $m = 2^k$, $k = 0, 1, 2, \ldots$:

$$H_0 = \begin{bmatrix} 1 \end{bmatrix}, \qquad H_{k+1} = \begin{bmatrix} H_k & H_k \\ H_k & -H_k \end{bmatrix}.$$
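The recursion of Exercise 2.7 is easy to carry out numerically. The sketch below (an illustration assuming NumPy, not part of the text) builds $H_k$ and checks the defining property that $H^T$ equals the inverse times a constant, i.e., $H^T H = mI$ with $m = 2^k$:

```python
import numpy as np

def hadamard(k):
    """Return the 2^k x 2^k matrix H_k from the recursion
    H_0 = [1], H_{k+1} = [[H_k, H_k], [H_k, -H_k]]."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H3 = hadamard(3)                       # 8 x 8, entries all +-1
assert set(np.unique(H3)) == {-1, 1}
# H^T H = m I, so H^T is the inverse of H scaled by the constant m = 8
assert np.array_equal(H3.T @ H3, 8 * np.eye(8, dtype=int))
```

The columns of $H_k$ are orthogonal with squared length $2^k$, which is exactly the statement $H_k^T H_k = 2^k I$.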