5 EIGENVALUES AND EIGENVECTORS

5.1 INTRODUCTION

This chapter begins the “second half” of matrix theory. The first part was almost
completely involved with linear systems Ax = b, and the fundamental technique
was elimination. From now on row operations will play only a minor role. The
new problems will still be solved by simplifying a matrix—making it diagonal or
upper triangular—but the basic step is no longer to subtract a multiple of one row
from another. We are not interested any more in preserving the row space of a
matrix, but in preserving its eigenvalues. Those are changed by elimination. The chapter on determinants was really a transition from the old problem
Ax = b to the new problem of eigenvalues. In both cases the determinant leads to a “formal solution”: to Cramer’s rule for x = A^{-1}b, and to the polynomial det(A − λI), whose roots will be the eigenvalues. (We emphasize that all matrices are now square; the eigenvalues of a rectangular matrix make no more sense than its determinant.) As always, the determinant can actually be used to solve the problem, if n = 2 or 3. For large n the computation of eigenvalues is a longer and more difficult task than solving Ax = b, and even Gauss himself did not help much. But that can wait. The first step is to understand what eigenvalues are and how they can be useful.
One of their applications, the one by which we want to introduce them, is to the
solution of ordinary differential equations. We shall not assume that the reader
is an expert on differential equations! If you can differentiate the usual functions
like x^n, sin x, and e^x, you know enough. As a specific example, consider the coupled pair of equations

    dv/dt = 4v − 5w,    v = 8 at t = 0,
    dw/dt = 2v − 3w,    w = 5 at t = 0.        (1)

This is an initial-value problem. The unknown is specified at time t = 0, and not at both endpoints of an interval; we are interested in a transient rather than a steady state. The system evolves in time from the given initial values 8 and 5, and the problem is to follow this evolution.
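As a numerical aside (not in the original text), the evolution can be followed directly. The sketch below is a forward-Euler illustration of my own; the function name and step count are assumptions. Later in the section the exact solution v(t) = 3e^{−t} + 5e^{2t}, w(t) = 3e^{−t} + 2e^{2t} is found, and the numbers here agree with it closely.

```python
def follow(t_end=1.0, steps=200_000):
    """Forward Euler for the coupled system (1):
       dv/dt = 4v - 5w,  dw/dt = 2v - 3w,  v = 8 and w = 5 at t = 0."""
    v, w = 8.0, 5.0
    dt = t_end / steps
    for _ in range(steps):
        # both right-hand sides use the old (v, w); the tuple is built first
        v, w = v + dt * (4*v - 5*w), w + dt * (2*v - 3*w)
    return v, w
```

At t = 1 this returns roughly (38.05, 15.88), matching the exact values 3e^{−1} + 5e² and 3e^{−1} + 2e².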
It is easy to write the system in matrix form. Let the unknown vector be u, its initial value be u₀, and the coefficient matrix be A:

    u = [v],    u₀ = [8],    A = [4  −5]
        [w]          [5]         [2  −3].

In this notation, the system becomes a vector equation

    du/dt = Au,    u = u₀ at t = 0.        (2)

This is the basic statement of the problem. Note that it is a first-order equation—
no higher derivatives appear—and it is linear in the unknowns. It also has constant
coefficients; the matrix A is independent of time.

How do we find the solution? If there were only one unknown instead of two,
that question would be easy to answer. We would have a scalar instead of a vector
differential equation. If it is again homogeneous with constant coefﬁcients, it can
only be

    du/dt = au,    u = u₀ at t = 0.        (3)

The solution is the one thing you need to know:

    u(t) = e^{at}u₀.        (4)

At the initial time t = 0, u equals u₀ because e^0 = 1. The derivative of e^{at} has the required factor a, so that du/dt = au. Thus the initial condition and the equation
are both satisfied. Notice the behavior of u for large times. The equation is unstable if a > 0, neutrally stable if a = 0, or stable if a < 0; the solution approaches infinity, remains bounded, or goes to zero. If a were a complex number, a = α + iβ, then the same tests would be applied to the real part α. The complex part produces oscillations e^{iβt} = cos βt + i sin βt; but stability is governed by the factor e^{αt}.

So much for a single equation. We shall take a direct approach to systems, and
look for solutions with the same exponential dependence on t just found in the
scalar case. In other words, we look for solutions of the form

    v(t) = e^{λt}y,
    w(t) = e^{λt}z,        (5)

or in vector notation

    u(t) = e^{λt}x.        (6)

This is the whole key to differential equations du/dt = Au: Look for pure exponential solutions. Substituting v = e^{λt}y and w = e^{λt}z into the equation we find

    λe^{λt}y = 4e^{λt}y − 5e^{λt}z
    λe^{λt}z = 2e^{λt}y − 3e^{λt}z.

The factor e^{λt} is common to every term, and can be removed. This cancellation is the reason for assuming the same exponent λ for both unknowns; it leaves

    4y − 5z = λy
    2y − 3z = λz.

That is the basic equation; in matrix form it is Ax = λx. You can see it again if we use the vector solution u = e^{λt}x—a number e^{λt} that grows or decays times a fixed vector x. The substitution of u = e^{λt}x into du/dt = Au gives

    λe^{λt}x = Ae^{λt}x,        (7)

and the cancellation produces

    Ax = λx.        (8)

Now we have the fundamental equation of this chapter. It involves two unknowns λ and x, and it is an algebra problem. The differential equations can be forgotten! The number λ (lambda) is called an eigenvalue of the matrix A, and the vector x is the associated eigenvector. Our goal is to find the eigenvalues and eigenvectors, and to use them.

The Solutions of Ax = λx

Notice that Ax = λx is a nonlinear equation; λ multiplies x. If we could discover λ, then the equation for x would be linear. In fact we could write λIx in place of λx,† and bring this term over to the left side:

    (A − λI)x = 0.        (9)

† The identity matrix is needed to keep matrices and vectors and scalars straight; the equation (A − λ)x = 0 is shorter, but mixed up.

This is the key to the problem: The vector x is in the nullspace of A − λI.
The number λ is chosen so that A − λI has a nullspace. Of course every matrix has a nullspace. It was ridiculous to suggest otherwise, but you see the point. We want a nonzero eigenvector x. The vector x = 0 always satisfies Ax = λx, and it is always in the nullspace, but it is useless in solving differential equations. The goal is to build u(t) out of exponentials e^{λt}x, and we are interested only in those particular values λ for which there is a nonzero eigenvector x. To be of any use, the nullspace of A − λI must contain vectors other than zero. In short, A − λI must be singular.
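That singularity test is easy to run by machine as well. The following sketch (the function name is my own) evaluates det(A − λI) for the example matrix A = [4 −5; 2 −3]:

```python
def char_poly(lam):
    """det(A - lam*I) for the example matrix A = [[4, -5], [2, -3]]."""
    # expand the 2-by-2 determinant: (4 - lam)(-3 - lam) - (-5)(2)
    return (4 - lam) * (-3 - lam) + 10
```

This vanishes exactly at the values of λ where A − λI is singular, and is nonzero elsewhere—for instance char_poly(0) = det A = −2.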
For this, the determinant gives a conclusive test. In our example, shifting A by λI gives

    A − λI = [4 − λ     −5   ]
             [  2     −3 − λ ].

Note that λ is subtracted only from the main diagonal (because it multiplies I). The determinant of A − λI is

    (4 − λ)(−3 − λ) + 10    or    λ² − λ − 2.

This is the “characteristic polynomial.” Its roots, where the determinant is zero, are the eigenvalues. They come from the general formula for the roots of a quadratic, or from factoring into λ² − λ − 2 = (λ + 1)(λ − 2). That is zero if λ = −1 or λ = 2, as the general formula confirms:

    λ = (−b ± √(b² − 4ac)) / 2a = (1 ± √9) / 2 = −1 or 2.

There are two eigenvalues, because a quadratic has two roots. Every 2 by 2 matrix A − λI has λ² (and no higher power) in its determinant.

Each of these special values, λ = −1 and λ = 2, leads to a solution of Ax = λx or (A − λI)x = 0. A matrix with zero determinant is singular, so there must be a nonzero vector x in its nullspace.† In fact the nullspace contains a whole line of eigenvectors; it is a subspace!

    (A − λ₁I)x₁ = [5  −5][y] = [0]
                  [2  −2][z]   [0].

The solution (the first eigenvector) is any multiple of

    x₁ = [1]
         [1].

The computation for λ₂ is done separately:

    (A − λ₂I)x₂ = [2  −5][y] = [0]
                  [2  −5][z]   [0].

The second eigenvector is any multiple of

    x₂ = [5]
         [2].

Note on computing eigenvectors: In the 2 by 2 case, both rows of A − λI will be
multiples of the same vector (a, b). Then the eigenvector is any multiple of (— b, a).
The rows of A − λ₂I were (2, −5) and the eigenvector was (5, 2). In the 3 by 3 case,
I often set a component of x equal to one and solve (A — lI)x = 0 for the other
components. Of course if x is an eigenvector then so is 7x and so is —x. All vectors
in the nullspace of A − λI (which we call the eigenspace) will satisfy Ax = λx. In this case the eigenspaces are the lines through x₁ = (1, 1) and x₂ = (5, 2).

Before going back to the application (the differential equation), we emphasize
the steps in solving the eigenvalue problem:

1. Compute the determinant of A − λI. With λ subtracted along the diagonal, this determinant is a polynomial of degree n.

2. Find the roots of this polynomial. The n roots are the eigenvalues.

3. For each eigenvalue solve the equation (A − λI)x = 0. Since the determinant is zero, there are solutions other than x = 0. Those are the eigenvectors.

† If solving (A − λI)x = 0 leads you to x = 0, then λ is not an eigenvalue.

In the differential equation, this produces the special solutions u = e^{λt}x. They are the pure exponential solutions

    u₁ = e^{λ₁t}x₁ = e^{−t}[1]    and    u₂ = e^{λ₂t}x₂ = e^{2t}[5]
                           [1]                                  [2].

More than that, these two special solutions give the complete solution. They can
be multiplied by any numbers c₁ and c₂, and they can be added together. When two functions u₁ and u₂ satisfy the linear equation du/dt = Au, so does their sum u₁ + u₂. Thus any combination

    u = c₁e^{λ₁t}x₁ + c₂e^{λ₂t}x₂        (12)

is again a solution. This is superposition, and it applies to differential equations (homogeneous and linear) just as it applied to algebraic equations Ax = 0. The nullspace is always a subspace, and combinations of solutions are still solutions.

Now we have two free parameters c₁ and c₂, and it is reasonable to hope that they can be chosen to satisfy the initial condition u = u₀ at t = 0:

    c₁x₁ + c₂x₂ = u₀    or    [1  5][c₁] = [8]
                              [1  2][c₂]   [5].        (13)

The constants are c₁ = 3 and c₂ = 1, and the solution to the original equation is

    u(t) = 3e^{−t}[1] + e^{2t}[5]
                  [1]         [2].        (14)

Writing the two components separately, this means that

    v(t) = 3e^{−t} + 5e^{2t},    w(t) = 3e^{−t} + 2e^{2t}.

The initial conditions v₀ = 8 and w₀ = 5 are easily checked.

The message seems to be that the key to an equation is in its eigenvalues and
eigenvectors. But what the example does not show is their physical signiﬁcance;
they are important in themselves, and not just part of a trick for ﬁnding u. Probably
the homeliest example† is that of soldiers going over a bridge. Traditionally, they
stop marching and just walk across. The reason is that they might happen to
march at a frequency equal to one of the eigenvalues of the bridge, and it would
begin to oscillate. (Just as a child’s swing does; you soon notice the natural fre
quency of a swing, and by matching it you make the swing go higher.) An engineer
tries to keep the natural frequencies of his bridge or rocket away from those of
the wind or the sloshing of fuel. And at the other extreme, a stockbroker spends his life trying to get in line with the natural frequencies of the market. The eigenvalues are the most important feature of practically any dynamical system.

† One which I never really believed. But a bridge did crash this way in 1831.

Summary and Examples

We stop now to summarize what has been done, and what there remains to
do. This introduction has shown how the eigenvalues and eigenvectors of A appear
naturally and automatically when solving du/dt = Au. Such an equation has pure exponential solutions u = e^{λt}x; the eigenvalue λ gives the rate of growth or decay, and
the eigenvector x develops at this rate. The other solutions will be mixtures of
these pure solutions, and the mixture is adjusted to fit the initial conditions.

The key equation was Ax = λx. Most vectors x will not satisfy such an equation.
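A two-line test makes this concrete (a sketch of my own; the helper name is not from the text). For a 2 by 2 matrix, Ax is parallel to x exactly when a small "cross" determinant vanishes:

```python
def keeps_direction(A, x):
    """True when Ax is a multiple of x, i.e. x is an eigenvector of the 2-by-2 A."""
    ax0 = A[0][0]*x[0] + A[0][1]*x[1]
    ax1 = A[1][0]*x[0] + A[1][1]*x[1]
    # Ax is parallel to x exactly when this 2-by-2 determinant is zero
    return ax0*x[1] - ax1*x[0] == 0
```

For the example matrix A = [[4, -5], [2, -3]], a typical vector such as (1, 0) changes direction, while the eigenvectors (1, 1) and (5, 2) found earlier do not.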
A typical x changes direction when multiplied by A, so that Ax is not a multiple
of x. This means that only certain special numbers λ are eigenvalues, and only
certain special vectors x are eigenvectors. Of course, if A were a multiple of the
identity matrix, then no vector would change direction, and all vectors would be
eigenvectors. But in the usual case, eigenvectors are few and far between. They
are the “normal modes” of the system, and they act independently. We can watch
the behavior of each eigenvector, and then combine these normal modes to ﬁnd
the solution. To say the same thing in another way, the underlying matrix can be
diagonalized. We plan to devote Section 5.2 to the theory of diagonalization, and the following
sections to its applications: ﬁrst to difference equations and Fibonacci numbers
and Markov processes, and afterward to differential equations. In every example,
we start by computing the eigenvalues and eigenvectors; there is no shortcut to
avoid that. But then the examples go in so many directions that a quick summary
is impossible, except to emphasize that symmetric matrices are especially easy and
certain other “defective matrices” are especially hard. They lack a full set of
eigenvectors, they are not diagonalizable, and they produce a breakdown in the
technique of normal modes. Certainly they have to be discussed, but we do not
intend to allow them to take over the book.

We start with examples of particularly good matrices.

EXAMPLE 1 Everything is clear when A is diagonal:

    A = [3  0]    has λ₁ = 3 with x₁ = [1],    λ₂ = 2 with x₂ = [0]
        [0  2]                         [0]                      [1].

On each eigenvector A acts like a multiple of the identity: Ax₁ = 3x₁ and Ax₂ = 2x₂. Other vectors like x = (1, 5) are mixtures x₁ + 5x₂ of the two eigenvectors, and when A multiplies x it gives

    Ax = λ₁x₁ + 5λ₂x₂ = [ 3]
                        [10].

This was a typical vector x—not an eigenvector—but the action of A was still
determined by its eigenvectors and eigenvalues.

EXAMPLE 2 The situation is also good for a projection:

    P = [½  ½]    has λ₁ = 1 with x₁ = [1],    λ₂ = 0 with x₂ = [ 1]
        [½  ½]                         [1]                      [−1].

The eigenvalues of a projection are one or zero! We have λ = 1 when the vector projects to itself, and λ = 0 when it projects to the zero vector. The column space of P is filled with eigenvectors and so is the nullspace. If those spaces have dimension r and n − r, then λ = 1 is repeated r times and λ = 0 is repeated n − r times:

    P = [1  0  0  0]
        [0  0  0  0]
        [0  0  0  0]
        [0  0  0  1]    has λ = 1, 1, 0, 0.

There are still four eigenvalues, even if not distinct, when P is 4 by 4.

Notice that there is nothing exceptional about λ = 0. Like every other number,
zero might be an eigenvalue and it might not. If it is, then its eigenvectors satisfy Ax = 0x. Thus x is in the nullspace of A. A zero eigenvalue signals that A has linearly dependent columns and rows; its determinant is zero. Invertible matrices have all λ ≠ 0, whereas singular matrices include zero among their eigenvalues.

EXAMPLE 3 The eigenvalues are still obvious when A is triangular:

    det(A − λI) = | 1 − λ    4      5   |
                  |   0    ¾ − λ    6   |    = (1 − λ)(¾ − λ)(½ − λ).
                  |   0      0    ½ − λ |

The determinant is just the product of the diagonal entries. It is zero if λ = 1, or λ = ¾, or λ = ½; the eigenvalues were already sitting along the main diagonal.

This example, in which the eigenvalues can be found by inspection, points to
one main theme of the whole chapter: To transform A into a diagonal or triangular
matrix without changing its eigenvalues. We emphasize once more that the Gaussian
factorization A = LU is not suited to this purpose. The eigenvalues of U may be
visible on the diagonal, but they are not the eigenvalues of A.

For most matrices, there is no doubt that the eigenvalue problem is computationally more difficult than Ax = b. With linear systems, a finite number of elimination steps produced the exact answer in a finite time. (Or equivalently, Cramer's rule gave an exact formula for the solution.) In the case of eigenvalues, no such steps and no such formula can exist, or Galois would turn in his grave. The characteristic polynomial of a 5 by 5 matrix is a quintic, and he proved that there can be no algebraic formula for the roots of a fifth-degree polynomial. All he will allow is a few simple checks on the eigenvalues, after they have been computed, and we mention two of them: the sum of the eigenvalues equals the trace of A, and their product equals the determinant.
and we mention two of them: The projection matrix P had diagonal entries %,% and eigenvalues l, O—and
%+ % agrees with l + 0 as it should. So does the determinant, which is 0 l = 0.
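Both checks are easy to automate for a 2 by 2 matrix. The sketch below (function name my own, real eigenvalues assumed) recovers the eigenvalues from the trace and determinant, so the two tests can be applied:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from
       lambda = (trace +/- sqrt(trace^2 - 4*det)) / 2  (real case only)."""
    tr = a + d
    det = a*d - b*c
    disc = math.sqrt(tr*tr - 4*det)  # assumes trace^2 >= 4*det
    return (tr + disc) / 2, (tr - disc) / 2
```

For the chapter's example A = [4 −5; 2 −3] this gives 2 and −1: their sum 1 is the trace, and their product −2 is the determinant.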
We see again that a singular matrix, with zero determinant, has one or more of its
eigenvalues equal to zero. There should be no confusion between the diagonal entries and the eigenvalues.
For a triangular matrix they are the same—but that is exceptional. Normally the pivots and diagonal entries and eigenvalues are completely different. And for a 2 by 2 matrix, we know everything:

    A = [a  b]    has trace a + d, determinant ad − bc,
        [c  d]

    det [a − λ    b  ]  =  λ² − (trace)λ + determinant,
        [  c    d − λ]

    λ = ( trace ± [(trace)² − 4 det]^{1/2} ) / 2.

Those two λ's add up to the trace; Exercise 5.1.9 gives Σλ = trace for all matrices.

EXERCISES

5.1.1 Find the eigenvalues and eigenvectors of the matrix A. Verify that the
trace equals the sum of the eigenvalues, and the determinant equals their
product.

5.1.2 With the same matrix A, solve the differential equation du/dt = Au with the given initial value u₀. What
are the two pure exponential solutions? ...
This note was uploaded on 11/08/2011 for the course MATH 601 taught by Professor Osu during the Fall '11 term at Cornell University (Engineering School).