\[
L = \begin{pmatrix}
1      &        &        &           &   \\
l_{21} & 1      &        &           &   \\
\vdots & \vdots & \ddots &           &   \\
l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1
\end{pmatrix},
\tag{10.39}
\]
where the $l_{ij}$, $j = 1, \dots, n-1$, $i = j+1, \dots, n$, are the multipliers (computed after all the rows have been rearranged), we arrive at the anticipated factorization $PA = LU$. Incidentally, up to sign, Gaussian elimination also produces the determinant of $A$ because
\[
\det(PA) = \pm \det(A) = \det(LU) = \det(U) = a^{(1)}_{11} a^{(2)}_{22} \cdots a^{(n)}_{nn}
\tag{10.40}
\]
and so $\det(A)$ is plus or minus the product of all the pivots in the elimination process.
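As an illustration of (10.40), the following sketch (our own code, with an illustrative function name and test matrix) performs the elimination with partial pivoting while tracking the sign of the row permutation, and returns the signed product of the pivots:

```python
# Sketch: det(A) = sign * (product of pivots), as in (10.40).
# Each row exchange flips the sign of the permutation.
import numpy as np

def det_by_elimination(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n - 1):
        m = j + np.argmax(np.abs(U[j:, j]))   # pivot row (partial pivoting)
        if U[m, j] == 0.0:
            return 0.0                        # whole column is zero: singular
        if m != j:
            U[[j, m]] = U[[m, j]]             # row swap flips the sign
            sign = -sign
        # eliminate the entries below the pivot
        U[j+1:, j:] -= (U[j+1:, j:j+1] / U[j, j]) * U[j, j:]
    return sign * np.prod(np.diag(U))

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
print(det_by_elimination(A))   # agrees with np.linalg.det(A)
```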
In the implementation of Gaussian elimination the array storing the augmented matrix $[A \; b]$ is overwritten to save memory. The pseudocode with partial pivoting (assuming $a_{i,n+1} = b_i$, $i = 1, \dots, n$) is presented in Algorithm 3.
10.2.1 The Cost of Gaussian Elimination
We now do an operation count of Gaussian elimination to solve an $n \times n$ linear system $Ax = b$.

We focus on the elimination, as we already know that the work for the step of backward substitution is $O(n^2)$. For each round of elimination, $j = 1, \dots, n-1$, we need one division to compute each of the $n-j$ multipliers, and $(n-j)(n-j+1)$ multiplications and $(n-j)(n-j+1)$ sums (subtractions) to perform the eliminations. Thus, the total number of operations is
\[
W(n) = \sum_{j=1}^{n-1} \left[ 2(n-j)(n-j+1) + (n-j) \right]
     = \sum_{j=1}^{n-1} \left[ 2(n-j)^2 + 3(n-j) \right]
\tag{10.41}
\]
and using (10.10) and
\[
\sum_{i=1}^{m} i^2 = \frac{m(m+1)(2m+1)}{6},
\tag{10.42}
\]
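The identity (10.42) is easy to confirm numerically; a quick check (our own, purely illustrative):

```python
# Verify sum_{i=1}^m i^2 = m(m+1)(2m+1)/6 for a few values of m.
for m in (1, 10, 1000):
    assert sum(i * i for i in range(1, m + 1)) == m * (m + 1) * (2 * m + 1) // 6
print("identity (10.42) checked")
```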
CHAPTER 10. LINEAR SYSTEMS OF EQUATIONS I
Algorithm 3 Gaussian Elimination with Partial Pivoting
 1: for j = 1, ..., n-1 do
 2:     Find m such that |a_{mj}| = max_{j <= i <= n} |a_{ij}|
 3:     if |a_{mj}| = 0 then
 4:         stop                                          ▷ Matrix is singular
 5:     end if
 6:     a_{jk} <-> a_{mk},  k = j, ..., n+1               ▷ Exchange rows
 7:     for i = j+1, ..., n do
 8:         m <- a_{ij} / a_{jj}                          ▷ Compute multiplier
 9:         a_{ik} <- a_{ik} - m * a_{jk},  k = j+1, ..., n+1   ▷ Elimination
10:         a_{ij} <- m                                   ▷ Store multiplier
11:     end for
12: end for
13: for i = n, n-1, ..., 1 do                             ▷ Backward Substitution
14:     x_i <- ( a_{i,n+1} - sum_{j=i+1}^{n} a_{ij} x_j ) / a_{ii}
15: end for
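Algorithm 3 translates almost line for line into code. The following is a sketch (the function name is ours) that overwrites the augmented array $[A \; b]$ in place, stores the multipliers below the diagonal, and finishes with backward substitution:

```python
# Gaussian elimination with partial pivoting on the augmented matrix,
# following Algorithm 3: a[:, n] holds the right-hand side b.
import numpy as np

def gauss_solve(A, b):
    n = len(b)
    a = np.column_stack([np.asarray(A, dtype=float),
                         np.asarray(b, dtype=float)])
    for j in range(n - 1):
        m = j + np.argmax(np.abs(a[j:, j]))     # find the pivot row
        if a[m, j] == 0.0:
            raise ValueError("matrix is singular")
        a[[j, m]] = a[[m, j]]                   # exchange rows
        for i in range(j + 1, n):
            mult = a[i, j] / a[j, j]            # compute multiplier
            a[i, j+1:] -= mult * a[j, j+1:]     # elimination (incl. b column)
            a[i, j] = mult                      # store multiplier
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # backward substitution
        x[i] = (a[i, n] - a[i, i+1:n] @ x[i+1:]) / a[i, i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
print(gauss_solve(A, b))   # the exact solution here is (1, 1, 1)
```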
we get
\[
W(n) = \frac{2}{3} n^3 + O(n^2).
\tag{10.43}
\]
Thus, Gaussian elimination is computationally rather expensive for large systems of equations.
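One can check the leading term $\tfrac{2}{3}n^3$ empirically by tallying the operations counted in (10.41); this is our own instrumentation, not from the text:

```python
# Tally divisions, multiplications, and subtractions per round j,
# exactly as counted in (10.41), and compare with 2n^3/3 from (10.43).
def elimination_ops(n):
    ops = 0
    for j in range(1, n):                  # rounds j = 1, ..., n-1
        ops += n - j                       # divisions for the multipliers
        ops += 2 * (n - j) * (n - j + 1)   # multiplications and subtractions
    return ops

for n in (10, 100, 1000):
    ratio = elimination_ops(n) / (2 * n**3 / 3)
    print(n, elimination_ops(n), round(ratio, 4))   # ratio tends to 1
```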
10.3 LU and Choleski Factorizations
If Gaussian elimination can be performed without row interchanges, then we obtain an $LU$ factorization of $A$, i.e. $A = LU$. This factorization can be advantageous when solving many linear systems with the same $n \times n$ matrix $A$ but different right hand sides because we can turn the problem $Ax = b$ into two triangular linear systems, which can be solved much more economically in $O(n^2)$ operations. Indeed, from $LUx = b$ and setting $y = Ux$ we have
\[
Ly = b,
\tag{10.44}
\]
\[
Ux = y.
\tag{10.45}
\]
Given $b$, we can solve the first system for $y$ with forward substitution and then solve the second system for $x$ with backward substitution.
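Written out, the two triangular solves look as follows; this is a sketch with our own function names, using a small made-up $L$ and $U$:

```python
# Forward substitution for L y = b, then backward substitution for U x = y.
# Each solve costs O(n^2) operations.
import numpy as np

def forward_sub(L, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                    # rows top to bottom
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_sub(U, y):
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # rows bottom to top
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0], [2.0, 1.0]])    # unit lower triangular
U = np.array([[2.0, 1.0], [0.0, 3.0]])    # upper triangular
b = np.array([3.0, 9.0])
x = backward_sub(U, forward_sub(L, b))    # solves (LU) x = b
print(x)
```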
Thus, while the $LU$ factorization of $A$ has an $O(n^3)$ cost, subsequent solutions to the linear system with the same matrix $A$ but different right hand sides can be done in $O(n^2)$ operations.
When can we obtain the factorization $A = LU$? The following result provides a useful sufficient condition.