We can use Taylor’s theorem to approximate the derivatives of a function, given only the function’s values on a uniform grid. For example, consider the following corollary.
School of Mathematics, University of Manchester
MATH20401: 7 NUMERICAL SOLUTION OF PDES, S.L. COTTER

Corollary 7.3 (Forward Difference). Suppose that $u(x) : [a, b] \to \mathbb{R}$ is a smooth (infinitely differentiable) function, and we have approximated this on a uniform grid by $\{u_j \mid j = 0, 1, \ldots, N\}$ at the points $\{x_j = a + jh \mid j = 0, 1, \ldots, N\}$, where $h = (b - a)/N$. Then if $x = a + Jh$,
\[
\frac{u_{J+1} - u_J}{h} = u'(x) + O(h).
\]
Proof. By Taylor’s theorem (with $n = 2$), we have that there exists $\xi \in [a, b]$ such that
\[
u(x + h) = u(x) + h u'(x) + \frac{h^2}{2} u''(\xi) = u(x) + h u'(x) + O(h^2).
\]
Since $u(x + h) = u(x_{J+1}) = u_{J+1}$ and $u(x) = u(x_J) = u_J$, we therefore have that:
\begin{align*}
\frac{u_{J+1} - u_J}{h} &= \frac{h u'(x) + O(h^2)}{h}, \\
&= u'(x) + O(h).
\end{align*}
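As a quick numerical sanity check (not part of the notes), the following Python sketch verifies the behaviour of the Taylor remainder used in the proof: $u(x+h) - u(x) - h u'(x)$ should shrink like $h^2$. The choice $u(x) = \sin(x)$ and the evaluation point are illustrative assumptions only.

```python
import math

def taylor_remainder(u, du, x, h):
    """Remainder of the first-order Taylor expansion: u(x+h) - u(x) - h*u'(x)."""
    return u(x + h) - u(x) - h * du(x)

# Illustrative choice (not from the notes): u(x) = sin(x), so u'(x) = cos(x).
x = 1.0
r1 = taylor_remainder(math.sin, math.cos, x, 1e-3)
r2 = taylor_remainder(math.sin, math.cos, x, 5e-4)  # halve h

# An O(h^2) remainder should shrink by a factor of about 4 when h is halved.
print(r1 / r2)
```

The printed ratio being close to 4 is consistent with the $O(h^2)$ remainder term in the expansion above.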
Note here that if we divide an $O(h^n)$ function by $h^m$, where $m < n$, then we get an $O(h^{n-m})$ function. Proof: Exercise. This result means that if $h$ is small enough, then we can use $(u_{J+1} - u_J)/h$ as an approximation for $u'(x)$. This particular method is a first order approximation of the first derivative.
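The first order behaviour is easy to observe numerically. In the Python sketch below, the function $u(x) = e^x$ and the point $x = 0.5$ are illustrative choices, not taken from the notes; halving $h$ should roughly halve the error, as expected for an $O(h)$ method.

```python
import math

def forward_difference(u, x, h):
    """Approximate u'(x) by the forward difference (u(x+h) - u(x)) / h."""
    return (u(x + h) - u(x)) / h

# Illustrative test: u(x) = exp(x), whose exact derivative is exp(x).
x = 0.5
exact = math.exp(x)

err_h = abs(forward_difference(math.exp, x, 1e-3) - exact)
err_half = abs(forward_difference(math.exp, x, 5e-4) - exact)

# A first order (O(h)) method: halving h should roughly halve the error.
print(err_h / err_half)
```

The ratio printed is close to 2, matching the $O(h)$ remainder of Corollary 7.3.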
Definition 7.4. A finite difference method is said to be an $n$-th order approximation if the remainder term is $O(h^n)$.
The remainder term describes how big an error we are making in our approximation of the function that we want to understand, and since $h \ll 1$, $h^n$ becomes increasingly small as $n$ gets larger. Therefore higher order methods are more accurate (the error in the approximation is smaller). However, higher order methods are usually more computationally expensive, i.e. they require the computer to calculate more operations. In all of numerical analysis and scientific computing, there is always a trade-off between computational cost (the number of operations that the computer must make) and the accuracy of the algorithm (the size of the error in the approximation of the function). Let us now look at another finite difference approximation of the first derivative.
7.1.1 Centered Differencing
Corollary 7.5 (Centered Differencing). Suppose that $u(x) : [a, b] \to \mathbb{R}$ is a smooth function, and we have approximated $u(x)$ on a uniform grid by $N + 1$ points. Let us define
\[
\delta u(x) = u(x + h) - u(x - h) = u_{J+1} - u_{J-1}
\]
(assuming that $x = a + Jh$ for some $J$). Then
\[
\frac{\delta u(x)}{2h}
\]
is a second order approximation for $u'(x)$, i.e.
\[
\frac{\delta u(x)}{2h} = u'(x) + O(h^2).
\]
Proof. We can look again at the Taylor expansions for both $u(x + h)$ (as in the proof of the last corollary) and $u(x - h)$. Note that the Taylor expansion for $u(x - h)$ is found by replacing $h$ with $-h$ in the formula:
\[
u(x - h) = u(x) - h u'(x) + \frac{h^2}{2} u''(x) + O(h^3).
\]
Therefore
\begin{align*}
\frac{\delta u(x)}{2h} &= \frac{u(x + h) - u(x - h)}{2h}, \\
&= \frac{\bigl(u(x) + h u'(x) + h^2 u''(x)/2\bigr) - \bigl(u(x) - h u'(x) + h^2 u''(x)/2\bigr) + O(h^3)}{2h}, \\
&= \frac{2 h u'(x) + O(h^3)}{2h}, \\
&= u'(x) + O(h^2).
\end{align*}
Therefore $\delta u(x)/(2h)$ is a second order approximation of the first derivative of $u$.
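To see the gap between first and second order in practice, the Python sketch below compares the forward and centered differences side by side. The choice $u(x) = \sin(x)$ and the point $x = 1$ are illustrative assumptions; halving $h$ should roughly halve the forward-difference error but quarter the centered-difference error.

```python
import math

def forward_difference(u, x, h):
    """First order: (u(x+h) - u(x)) / h = u'(x) + O(h)."""
    return (u(x + h) - u(x)) / h

def centered_difference(u, x, h):
    """Second order: (u(x+h) - u(x-h)) / (2h) = u'(x) + O(h^2)."""
    return (u(x + h) - u(x - h)) / (2 * h)

u, du = math.sin, math.cos  # illustrative smooth function and its derivative
x = 1.0
for h in (1e-2, 5e-3, 2.5e-3):
    e_fwd = abs(forward_difference(u, x, h) - du(x))
    e_cen = abs(centered_difference(u, x, h) - du(x))
    print(f"h = {h:.4g}: forward error = {e_fwd:.3e}, centered error = {e_cen:.3e}")
```

For the same $h$, the centered error is far smaller, and it decays faster as $h$ shrinks: this is the accuracy side of the cost/accuracy trade-off discussed above (the centered formula needs two function evaluations per point rather than one new one).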