Roughly speaking, the error can be estimated by comparing two approximations obtained with different values of $h$. Consider (1.27). If we halve $h$ we get
\[
I[f] = T_{h/2}[f] + \frac{1}{4}C_2h^2 + R(h/2). \tag{1.29}
\]
Subtracting (1.29) from (1.27) we get
\[
C_2h^2 = \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right) + \frac{4}{3}\left(R(h/2) - R(h)\right). \tag{1.30}
\]
The last term on the right hand side is $o(h^2)$. Hence, for $h$ sufficiently small, we have
\[
C_2h^2 \approx \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right) \tag{1.31}
\]
and this could provide a good, computable estimate of the error, i.e.
\[
E_h[f] \approx \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right). \tag{1.32}
\]
The key here is that $h$ has to be sufficiently small for the asymptotic approximation (1.31) to be valid. We can check this by working backwards. If $h$ is sufficiently small, then evaluating (1.31) at $h/2$ we get
\[
C_2\left(\frac{h}{2}\right)^2 \approx \frac{4}{3}\left(T_{h/4}[f] - T_{h/2}[f]\right) \tag{1.33}
\]
and consequently the ratio
\[
q(h) = \frac{T_{h/2}[f] - T_h[f]}{T_{h/4}[f] - T_{h/2}[f]} \tag{1.34}
\]
should be approximately 4. Thus, $q(h)$ offers a reliable, computable indicator of whether or not $h$ is sufficiently small for (1.32) to be an accurate estimate of the error.
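Both quantities are cheap to compute in practice. The following sketch (function names are ours) evaluates $q(h)$ and the error estimate (1.32) for the text's running example $f(x) = e^x$ on $[0,1]$:

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoidal rule T_h[f] with N subintervals, h = (b - a)/N.
    h = (b - a) / N
    return h * (0.5 * f(a) + sum(f(a + j * h) for j in range(1, N)) + 0.5 * f(b))

def q_ratio(f, a, b, N):
    # The ratio q(h) of (1.34), built from three successive halvings h, h/2, h/4.
    T_h, T_h2, T_h4 = (trapezoid(f, a, b, m * N) for m in (1, 2, 4))
    return (T_h2 - T_h) / (T_h4 - T_h2)

def error_estimate(f, a, b, N):
    # Computable error estimate (1.32): E_h[f] ~ (4/3)(T_{h/2}[f] - T_h[f]).
    return (4.0 / 3.0) * (trapezoid(f, a, b, 2 * N) - trapezoid(f, a, b, N))

f = math.exp  # f(x) = e^x on [0, 1], as in the text's running example
print(q_ratio(f, 0.0, 1.0, 16))         # close to 4 when h is small enough
print(error_estimate(f, 0.0, 1.0, 16))  # approximates I[f] - T_h[f]
```

If $q(h)$ is far from 4, the estimate (1.32) should not be trusted and $h$ must be reduced.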
We can now use (1.31) and the idea of error correction to improve the accuracy of $T_h[f]$ with the following approximation$^2$
\[
S_h[f] := T_h[f] + \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right). \tag{1.35}
\]
1.2.5 Richardson Extrapolation
We can view the error correction procedure as a way to eliminate the leading order (in $h$) contribution to the error. Multiplying (1.29) by 4 and subtracting (1.27) from the result we get
\[
I[f] = \frac{4T_{h/2}[f] - T_h[f]}{3} + \frac{4R(h/2) - R(h)}{3}. \tag{1.36}
\]
Note that $S_h[f]$ is exactly the first term on the right hand side of (1.36) and that the last term converges to zero faster than $h^2$. This very useful and general procedure, in which the leading order component of the asymptotic form of the error is eliminated by combining two computations performed with two different values of $h$, is called Richardson's Extrapolation.
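The combination in (1.36) generalizes to any method whose leading error term is $Ch^p$: from $A(h) = I + Ch^p + o(h^p)$ and $A(h/2)$, the weighted difference $(2^p A(h/2) - A(h))/(2^p - 1)$ cancels the $h^p$ term. A minimal sketch (the function name and the general $2^p$ form are ours; $p = 2$ recovers (1.36)):

```python
def richardson(A_h, A_h2, p=2):
    # Eliminate the leading O(h^p) error term from two approximations
    # A(h) and A(h/2): (2^p A(h/2) - A(h)) / (2^p - 1).
    # For p = 2 this is (4 A(h/2) - A(h)) / 3, as in (1.36).
    return (2**p * A_h2 - A_h) / (2**p - 1)
```

Applying `richardson` to two trapezoidal approximations with steps $h$ and $h/2$ produces exactly $S_h[f]$ of (1.35).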
Example 2. Consider again $f(x) = e^x$ in $[0,1]$. With $h = 1/16$ we get
\[
q\!\left(\tfrac{1}{16}\right) = \frac{T_{1/32}[e^x] - T_{1/16}[e^x]}{T_{1/64}[e^x] - T_{1/32}[e^x]} \approx 3.9998 \tag{1.37}
\]
and the improved approximation is
\[
S_{1/16}[e^x] = T_{1/16}[e^x] + \frac{4}{3}\left(T_{1/32}[e^x] - T_{1/16}[e^x]\right) = 1.718281837561771 \tag{1.38}
\]
which gives us nearly 8 digits of accuracy (error $\approx 9.1 \times 10^{-9}$). $S_{1/32}$ gives us an error $\approx 5.7 \times 10^{-10}$. It decreased by approximately a factor of $1/16$. This would correspond to a fourth order rate of convergence. We will see in Chapter 8 that indeed this is the case.
$^2$The symbol := means equal by definition.
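The numbers in Example 2 can be reproduced with a short script (helper names are ours), comparing against the exact value $I[e^x] = e - 1$:

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoidal rule T_h[f] with N subintervals, h = (b - a)/N.
    h = (b - a) / N
    return h * (0.5 * f(a) + sum(f(a + j * h) for j in range(1, N)) + 0.5 * f(b))

def error_corrected(f, a, b, N):
    # S_h[f] = T_h[f] + (4/3)(T_{h/2}[f] - T_h[f]), eq. (1.35).
    T_h, T_h2 = trapezoid(f, a, b, N), trapezoid(f, a, b, 2 * N)
    return T_h + (4.0 / 3.0) * (T_h2 - T_h)

I_exact = math.e - 1.0                           # I[e^x] on [0, 1]
S16 = error_corrected(math.exp, 0.0, 1.0, 16)    # h = 1/16, cf. (1.38)
S32 = error_corrected(math.exp, 0.0, 1.0, 32)    # h = 1/32
print(S16, abs(S16 - I_exact))                   # error ~ 9.1e-9
print(abs(S32 - I_exact) / abs(S16 - I_exact))   # ~ 1/16: fourth order
```

Halving $h$ again shrinks the error by roughly $2^4 = 16$, consistent with a fourth order method.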
It appears that $S_h[f]$ gives us superior accuracy to that of $T_h[f]$ but at roughly twice the computational cost. If we group together the common terms in $T_h[f]$ and $T_{h/2}[f]$ we can compute $S_h[f]$ at about the same computational cost as that of $T_{h/2}[f]$:
\[
\begin{aligned}
4T_{h/2}[f] - T_h[f] &= 4\,\frac{h}{2}\left[\frac{1}{2}f(a) + \sum_{j=1}^{2N-1} f(a + jh/2) + \frac{1}{2}f(b)\right] \\
&\quad - h\left[\frac{1}{2}f(a) + \sum_{j=1}^{N-1} f(a + jh) + \frac{1}{2}f(b)\right] \\
&= \frac{h}{2}\left[f(a) + f(b) + 2\sum_{k=1}^{N-1} f(a + kh) + 4\sum_{k=1}^{N} f\!\left(a + (2k-1)h/2\right)\right].
\end{aligned}
\]
Therefore
\[
S_h[f] = \frac{h}{6}\left[f(a) + 2\sum_{k=1}^{N-1} f(a + kh) + 4\sum_{k=1}^{N} f\!\left(a + (2k-1)h/2\right) + f(b)\right]. \tag{1.39}
\]
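As a sanity check, a direct implementation of (1.39) should agree, up to rounding, with the combination $(4T_{h/2}[f] - T_h[f])/3$. The sketch below (helper names are ours) makes that comparison on the running example:

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoidal rule T_h[f] with N subintervals, h = (b - a)/N.
    h = (b - a) / N
    return h * (0.5 * f(a) + sum(f(a + j * h) for j in range(1, N)) + 0.5 * f(b))

def simpson(f, a, b, N):
    # Composite Simpson's rule S_h[f], eq. (1.39), with h = (b - a)/N:
    # endpoints, interior nodes a + kh, and midpoints a + (2k - 1)h/2.
    h = (b - a) / N
    nodes = sum(f(a + k * h) for k in range(1, N))
    mids = sum(f(a + (2 * k - 1) * h / 2) for k in range(1, N + 1))
    return (h / 6.0) * (f(a) + 2.0 * nodes + 4.0 * mids + f(b))

f, a, b, N = math.exp, 0.0, 1.0, 16
direct = simpson(f, a, b, N)
extrap = (4.0 * trapezoid(f, a, b, 2 * N) - trapezoid(f, a, b, N)) / 3.0
print(abs(direct - extrap))  # agreement up to rounding error
```

Note that `simpson` evaluates $f$ at the same $2N + 1$ points as $T_{h/2}[f]$, which is the cost claim made above.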
The resulting quadrature formula $S_h[f]$ is known as the Composite Simpson's Rule and, as we will see in Chapter 8, can be derived by approximating the integrand by quadratic polynomials. Thus, based on cost and accuracy, the Composite Simpson's Rule would be preferable to the Composite Trapezoidal