Writing the four real equations (22) and (23) in matrix form, with
P = [X11 |Y11 |X12 |Y12 ],
then P is non-singular and
AP = PB, where B is the corresponding real Jordan canonical form of A.
The numerical determination of P is left as a tutorial problem.
The Smith Canonical Form
Equivalence of Polynomial Matrices
A matrix P ∈ Mn×n(F[x]) is called a unit in Mn×n(F[x]) if ∃ Q ∈ Mn×n(F[x]) such that
P Q = In.
Clearly if P and Q are units, so is PQ.
A matrix P ∈ Mn×n(F[x]) is a unit in Mn×n(F[x]) if and only if det P is a nonzero element of F.
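As a small concrete illustration (a sketch using sympy, which is an assumption of this example and not part of the notes), the polynomial matrix P = [[1, x], [0, 1]] has det P = 1, a nonzero constant, and its inverse is again a polynomial matrix:

```python
from sympy import Matrix, eye, symbols

x = symbols('x')

# P is a unit in M_2x2(Q[x]): its determinant is the nonzero constant 1.
P = Matrix([[1, x], [0, 1]])
assert P.det() == 1

# Its inverse Q has polynomial (not merely rational-function) entries,
# and P*Q = I_2, as the definition of a unit requires.
Q = Matrix([[1, -x], [0, 1]])
assert P * Q == eye(2)
print("P is a unit in M_2x2(Q[x])")
```

By contrast, a matrix such as diag(x, 1) has determinant x, which is not a unit of F[x], so its inverse has the non-polynomial entry 1/x.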
βij : vij, T(vij), . . . , T^(nij − 1)(vij),
where nij = deg pi^eij and mT,vij = pi^eij. Then we have the direct sum decomposition
V = ⊕i,j CT,vij.
Also if we write bi = ei1, we have
Ker pi^bi(T) = ⊕j CT,vij.
The Real Jordan Form
If A is a real n × n matrix, the characteristic polynomial of A will in general
have real roots and complex roots, the latter occurring in complex conjugate pairs.
In this section we show how to derive a canonical form B for A which is itself a real matrix.
and we label the conjugate partition by
Finally, note that the total number of dots in the dot diagram is ν(p^b(T)),
by Theorem 4.2.
∃ vi1, . . . , viγi ∈ V,
and we get the secondary decomposition
Ker pi^bi(T) = CT,vi1 ⊕ · · · ⊕ CT,viγi.
Further, if βij is the elementary Jordan basis
for CT,vij, then
β = β11 ∪ · · · ∪ βtγt
is the elementary Jordan basis for V, and [T]ββ is the direct sum of the elementary Jordan matrices
Jeij(ci), 1 ≤ i ≤ t, 1 ≤ j ≤ γi,
the blocks for ci beginning with the largest, Jbi(ci).
Calculating A^m, where A ∈ Mn×n(C)
where d1, . . . , ds are nonconstant monic polynomials in F[x] such that dk
divides dk+1 for 1 ≤ k ≤ s − 1, then d1, . . . , ds are called the invariant factors
of A. So the invariant factors of A are the invariant factors of TA.
The invariant factors of A are uniquely determined by A.
Uniqueness of the Jordan form
be a basis for V for which [T]ββ is in Jordan canonical form
J = Je1(c1) ⊕ · · · ⊕ Jes(cs).
If we change the order of the basis vectors in β, we produce a corresponding
change in the order of the elementary Jordan matrices. In this sense the Jordan
form of T is unique up to the order of its elementary Jordan blocks.
We now present some interesting applications of the Jordan canonical form.
Nonderogatory matrices and transformations
If chA = mA , we say that the matrix A is non-derogatory.
Suppose that chT splits completely in F[x]. Then chT = mT ⟺ ∃ v ∈ V such that V = CT,v.
For brevity, let c = ci, v = vij, e = eij. Let
P1 = v, P2 = (A − cIn)P1, . . . , Pe = (A − cIn)Pe−1,
and write
P1 = X1 + iY1, P2 = X2 + iY2, . . . , Pe = Xe + iYe; c = a + ib.
Then we have the following equations, posed in two different ways:
AP1 = cP1 + P2,      AX1 = aX1 − bY1 + X2,   AY1 = bX1 + aY1 + Y2,
. . .
APe = cPe,           AXe = aXe − bYe,        AYe = bXe + aYe.
Let A ∈ M6×6(F) have the property that chA = x^6, mA = x^3 and
ν(A) = 3, ν(A^2) = 5, ν(A^3) = 6.
Next, with νh,x = dimF Nh,x, we have
ν1,x = ν(A) = 3;
ν2,x = ν(A^2) − ν(A) = 5 − 3 = 2;
ν3,x = ν(A^3) − ν(A^2) = 6 − 5 = 1.
Hence the dot diagram consists of rows of 3, 2 and 1 dots:
• • •
• •
•
so the elementary divisors of A are x^3, x^2 and x.
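The nullity data above can be checked numerically. The sketch below (using numpy, an assumption of this illustration) builds the nilpotent matrix with elementary divisors x^3, x^2, x and recovers ν(A) = 3, ν(A^2) = 5, ν(A^3) = 6:

```python
import numpy as np

def nullity(M):
    """Nullity = number of columns minus rank."""
    return M.shape[1] - np.linalg.matrix_rank(M)

def jordan_block(size):
    """Elementary Jordan block J_size(0): 1's on the subdiagonal."""
    return np.eye(size, k=-1)

# Direct sum J3(0) + J2(0) + J1(0): a 6x6 nilpotent matrix
# with chA = x^6 and mA = x^3.
A = np.zeros((6, 6))
A[0:3, 0:3] = jordan_block(3)
A[3:5, 3:5] = jordan_block(2)
# The 1x1 block J1(0) is the zero entry in position (5, 5).

print([nullity(np.linalg.matrix_power(A, h)) for h in (1, 2, 3)])
# → [3, 5, 6]
```

The successive differences 3, 5 − 3 = 2, 6 − 5 = 1 give exactly the rows of the dot diagram.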
Systems of differential equations
If X = X(t) satisfies the system of differential equations
Ẋ = AX, t ≥ t0,
where A is a constant matrix, then
X = e^((t − t0)A) X(t0).
PROOF. Suppose Ẋ = AX for t ≥ t0. Then
d/dt (e^(−(t − t0)A) X) = (−A e^(−(t − t0)A))X + e^(−(t − t0)A) Ẋ = e^(−(t − t0)A)(AX − AX) = 0,
so e^(−(t − t0)A) X is constant, with value X(t0) at t = t0; hence X = e^((t − t0)A) X(t0).
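A quick numerical check of this solution formula (a sketch assuming numpy; the matrix A is chosen nilpotent so that the matrix exponential can be written down exactly):

```python
import numpy as np

# A is nilpotent (A^2 = 0), so e^(tA) = I + tA exactly.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
X0 = np.array([1.0, 2.0])
t0 = 0.0

def X(t):
    """Solution X(t) = e^((t - t0) A) X(t0)."""
    expm = np.eye(2) + (t - t0) * A   # exact, since A^2 = 0
    return expm @ X0

# Verify X' = A X at t = 1.5 by a central finite difference.
t, h = 1.5, 1e-6
dX = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(dX, A @ X(t))
print(X(t))  # → [4. 2.]
```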
(vi) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD);
(vii) (B ⊕ C) ⊗ D = (B ⊗ D) ⊕ (C ⊗ D);
(viii) P(A ⊗ (B ⊕ C))P^(−1) = (A ⊗ B) ⊕ (A ⊗ C) for a suitable row permutation
matrix P;
(ix) det (A ⊗ B) = (det A)^n (det B)^m if A is m × m and B is n × n;
(x) Let f(x, y) = Σ cij x^i y^j ∈ F[x, y] and define
f(A; B) = Σ cij (A^i ⊗ B^j).
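Properties (vi) and (ix) are easy to confirm numerically; a sketch with numpy (numpy.kron implements A ⊗ B):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A, C = rng.random((m, m)), rng.random((m, m))
B, D = rng.random((n, n)), rng.random((n, n))

# (vi) the mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)

# (ix) det(A ⊗ B) = (det A)^n (det B)^m for A m x m, B n x n
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** n * np.linalg.det(B) ** m)
print("properties (vi) and (ix) verified")
```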
(iii) f̄(ḡv) = f(T)(g(T)v) = (fg)(T)v = (f̄ḡ)v;
(iv) 1̄v = 1v = v.
Remark: An F-basis for Nh,p will be an F̄p-spanning family for Nh,p, but
will not, in general, be an F̄p-basis for Nh,p. The precise connection between
F-independence and F̄p-independence is given by the following result.
Then place the linearly independent family A^2 v11, Av12 at the head of this spanning family:
N1,x = ⟨A^2 v11, Av12, Z1, Z2, Z3⟩.
The LRA is then applied to the above spanning family and selects a basis of the
form A^2 v11, Av12, v13, where v13 is one of Z1, Z2, Z3.
is a polynomial in m of degree k and
|m^j c^(m−k)| = |m^j e^((m−k) log c)| = m^j e^((m−k) log |c|) → 0
as m → ∞,
as log c = log |c| + i arg c and log |c| < 0.
The last corollary gives a more general result:
Let A ∈ Mn×n(C) and suppose that all the eigenvalues of A have modulus less than 1. Then A^m → 0 as m → ∞.
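This convergence is easy to observe numerically, even for a defective matrix; a sketch assuming numpy, with both eigenvalues equal to 0.5:

```python
import numpy as np

# Defective: eigenvalue 0.5 repeated, a single 2x2 Jordan block.
A = np.array([[0.5, 1.0],
              [0.0, 0.5]])

# The entries of A^m are 0.5^m and m * 0.5^(m-1): the polynomial
# factor in m is eventually crushed by the geometric factor.
for m in (5, 20, 100):
    print(m, np.abs(np.linalg.matrix_power(A, m)).max())

assert np.abs(np.linalg.matrix_power(A, 100)).max() < 1e-25
```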
Suppose B = [T]ββ = C(d1) ⊕ · · · ⊕ C(ds) is the canonical form corresponding to
the invariant factors d1, . . . , ds of T. Then
mT = mB = lcm (mC(d1), . . . , mC(ds)) = lcm (d1, . . . , ds) = ds,
chT = chB = chC(d1) · · · chC(ds) = d1 · · · ds.
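For instance (a sketch using sympy, an assumption of this illustration; the invariant factors d1 = x − 1, d2 = (x − 1)(x − 2) are chosen only for the example), with B = C(d1) ⊕ C(d2) one can check chB = d1 d2, and that d2(B) = 0 while d1(B) ≠ 0, so mB = d2:

```python
from sympy import Matrix, eye, expand, symbols, zeros

x = symbols('x')
d1 = x - 1
d2 = expand((x - 1) * (x - 2))   # x**2 - 3*x + 2; note d1 | d2

# B = C(d1) + C(d2), with C(d1) = [1] and C(d2) = [[0, -2], [1, 3]].
B = Matrix([[1, 0, 0],
            [0, 0, -2],
            [0, 1, 3]])

# chB = d1 * d2
assert expand(B.charpoly(x).as_expr()) == expand(d1 * d2)

# d2(B) = 0, so mB | d2; but d1(B) != 0, so mB = d2, the last invariant factor.
assert B**2 - 3*B + 2*eye(3) == zeros(3, 3)
assert B - eye(3) != zeros(3, 3)
print("mB = d2 and chB = d1*d2")
```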
Σk,l {min(mk, ml) + min(nk, nl)} ≥ 2 Σk,l min(mk, nl).
Further, equality occurs iff the sequences are identical.
Case 1: k = l.
The terms to consider here are of the form
mk + nk − 2 min(mk, nk),
which is obviously ≥ 0. Also, the term
and more generally
(Σ cij A^i ⊗ B^j)(P ⊗ Q) = (P ⊗ Q)(Σ cij J1^i ⊗ J2^j).
The matrix on the right-hand side is lower triangular and has diagonal elements
f(λk, μl), 1 ≤ k ≤ m, 1 ≤ l ≤ n.
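The special case f(x, y) = xy says that the eigenvalues of A ⊗ B are the products λk μl; a numpy sketch:

```python
import numpy as np

# Lower-triangular A and B make the eigenvalues visible on the diagonals.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])          # eigenvalues 2, 3
B = np.array([[1.0, 0.0, 0.0],
              [4.0, 5.0, 0.0],
              [0.0, 2.0, 7.0]])     # eigenvalues 1, 5, 7

# f(x, y) = x*y, so f(A; B) = A ⊗ B has eigenvalues
# lambda_k * mu_l over all pairs (k, l).
eigs = np.sort(np.linalg.eigvals(np.kron(A, B)).real)
expected = np.sort([lam * mu for lam in (2, 3) for mu in (1, 5, 7)])
assert np.allclose(eigs, expected)
print(expected)  # → [ 2  3 10 14 15 21]
```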
Let β be the standard basis E11, E12, . . . , Emn for Mm×n(F), i.e. Eij is the matrix with 1 in position (i, j) and 0 elsewhere.
5. Let A be a real n × n matrix and c ∈ C. Then
W = N((A − cIn)^h) ⟹ W̄ = N((A − c̄In)^h);
W = W1 ⊕ · · · ⊕ Wr ⟹ W̄ = W̄1 ⊕ · · · ⊕ W̄r;
W = CTA,v ⟹ W̄ = CTA,v̄;
W = ⊕i CTA,vi ⟹ W̄ = ⊕i CTA,v̄i;
mTA,v = (x − c)^e ⟹ mTA,v̄ = (x − c̄)^e.
Let A ∈ Mn×n(F).
(with n = deg pi) which reduces to the Jordan basis when pi = x − ci, it is
not difficult to verify that we get a corresponding matrix H(pi^eij), called a
hypercompanion matrix, which reduces to the elementary Jordan matrix
Jeij(ci) when pi = x − ci:
Then V6(Z3) = N(p^2(A)) = CTA,v11 ⊕ CTA,v12.
Then joining hypercompanion bases for CTA,v11 and CTA,v12:
v11, Av11, p(A)v11, Ap(A)v11
and
v12, Av12,
gives a basis v11, Av11, p(A)v11, Ap(A)v11; v12, Av12 for V6(Z3). Finally, if
P is the 6 × 6 matrix whose columns are these basis vectors, then P^(−1)AP = H(p^2) ⊕ H(p).
A positive Markov matrix is one with all positive elements (i.e.
strictly greater than zero). For such a matrix A we may write A > 0.
If A is a positive Markov matrix, then 1 is the only eigenvalue of modulus
1. Moreover nullity(A − In) = 1.
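A numpy sketch of both claims for one positive Markov matrix (the particular matrix is an arbitrary illustrative choice):

```python
import numpy as np

# A positive Markov matrix: all entries > 0, each row sums to 1.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
assert (A > 0).all() and np.allclose(A.sum(axis=1), 1)

eigs = np.linalg.eigvals(A)
# Exactly one eigenvalue of modulus 1 (namely 1 itself) ...
on_circle = np.isclose(np.abs(eigs), 1)
assert on_circle.sum() == 1 and np.isclose(eigs[on_circle][0], 1)

# ... and nullity(A - I) = 1, i.e. rank(A - I) = n - 1.
assert np.linalg.matrix_rank(A - np.eye(3)) == 2
print("1 is the unique modulus-1 eigenvalue; nullity(A - I) = 1")
```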
proof First note that dim PL = deg mL, as
IV, L, . . . , L^(deg mL − 1)
form a basis for PL. So, since PL ⊆ ZL,L, we have
PL = ZL,L ⟺ dim PL = dim ZL,L
⟺ deg mL = Σk=1..s (2s − 2k + 1) deg dk.
As deg mL = deg ds is the k = s term of this sum and each dk is nonconstant, this holds
⟺ s = 1 ⟺ chL = mL.
proof (a sketch) of the Cecioni–Frobenius theorem.
Uniqueness of the Smith Canonical Form
Every matrix A ∈ Mm×n(F[x]) is equivalent to precisely one matrix in
Smith canonical form.
proof Suppose A is equivalent to a matrix B in Smith canonical form.
A real n × n matrix A = [aij] is called a Markov matrix, or row-stochastic
matrix, if
(i) aij ≥ 0 for 1 ≤ i, j ≤ n;
(ii) Σj=1..n aij = 1 for 1 ≤ i ≤ n.
Remark: (ii) is equivalent to AJn = Jn, where Jn = [1, . . . , 1]^t. So 1 is
always an eigenvalue of a Markov matrix.
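In numpy terms, the remark is just the observation that the all-ones vector is fixed by A; a small sketch (the matrix is an illustrative choice):

```python
import numpy as np

# A row-stochastic (Markov) matrix: nonnegative, rows sum to 1.
A = np.array([[0.2, 0.8],
              [0.7, 0.3]])
ones = np.ones(2)     # the vector J_n = [1, ..., 1]^t

# (ii) in matrix form: A J_n = J_n, so (A - I)J_n = 0 and
# det(A - I) = 0, i.e. 1 is an eigenvalue of A.
assert np.allclose(A @ ones, ones)
assert np.isclose(np.linalg.det(A - np.eye(2)), 0)
print("1 is an eigenvalue of every Markov matrix")
```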
The Smith canonical form of xIn − C(d), where d is a monic polynomial
of degree n, is
diag (1, . . . , 1, d).
proof Let d = x^n + a(n−1) x^(n−1) + · · · + a0 ∈ F[x], so

        [ 0  0  · · ·  0  −a0     ]
        [ 1  0  · · ·  0  −a1     ]
C(d) =  [ 0  1  · · ·  0  −a2     ]
        [ .  .         .   .      ]
        [ 0  0  · · ·  1  −a(n−1) ]
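The theorem can be checked symbolically for a small case; a sympy sketch (sympy and the choice d = x^3 − 2x^2 + 3x − 4 are assumptions of this illustration) verifying the two facts that force the Smith form diag(1, 1, d): det(xI3 − C(d)) = d, while some 2 × 2 minor of xI3 − C(d) is a nonzero constant, so the lower determinantal divisors are 1:

```python
from sympy import Matrix, eye, expand, symbols

x = symbols('x')
# d = x^3 - 2x^2 + 3x - 4, i.e. a2 = -2, a1 = 3, a0 = -4
d = x**3 - 2*x**2 + 3*x - 4
C = Matrix([[0, 0, 4],
            [1, 0, -3],
            [0, 1, 2]])          # companion matrix C(d)

M = x * eye(3) - C

# The n-th determinantal divisor is det M = d.
assert expand(M.det()) == d

# Deleting row 0 and column 2 leaves an upper triangular minor
# with determinant 1, so the lower determinantal divisors are 1.
minor = M[1:, :2]
assert minor.det() == 1

print("Smith form of xI - C(d) is diag(1, 1, d)")
```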
A real algorithm for finding the real Jordan form
Referring to the last example, if we write Z =
Then the vectors
If A is a primitive Markov matrix, then A satisfies the same properties
enunciated in the last two theorems for positive Markov matrices.
PROOF Suppose A^k > 0. Then (x − 1) | chA^k, and hence if
chA = (x − c1)^a1 · · · (x − ct)^at,
then
chA^k = (x − c1^k)^a1 · · · (x − ct^k)^at.
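A primitive Markov matrix need not itself be positive; a numpy sketch (the matrix is an illustrative choice with a zero entry but A^2 > 0):

```python
import numpy as np

# A is Markov with a zero entry, but A^2 > 0, so A is primitive.
A = np.array([[0.0, 1.0],
              [0.5, 0.5]])
assert np.allclose(A.sum(axis=1), 1) and not (A > 0).all()
assert (np.linalg.matrix_power(A, 2) > 0).all()

# As for positive Markov matrices, 1 is the only eigenvalue of modulus 1.
eigs = np.linalg.eigvals(A)
assert np.isclose(np.abs(eigs), 1).sum() == 1
print(np.sort(eigs.real))  # eigenvalues -0.5 and 1
```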