Chapter 1. Introduction to Vectors
1.3 Matrices
1  A = [1 2; 3 4; 5 6] is a 3 by 2 matrix: m = 3 rows and n = 2 columns.

2  Ax = [1 2; 3 4; 5 6] [x1; x2] is a combination of the columns: Ax = x1 [1; 3; 5] + x2 [2; 4; 6].

3  The 3 components of Ax are dot products of the rows of A with x.
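Both pictures of the product can be checked directly; here is a small sketch (my own illustration, not from the text) that computes Ax both ways for the 3 by 2 example above, with a hypothetical input vector x:

```python
A = [[1, 2], [3, 4], [5, 6]]
x = [-1, 2]  # hypothetical input vector

# Row picture: each component of Ax is a dot product of a row of A with x.
row_view = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Column picture: Ax = x1 * (column 1) + x2 * (column 2).
col_view = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(3)]

print(row_view)  # [3, 5, 7]
print(col_view)  # [3, 5, 7] -- the two pictures agree
```

The two views give the same vector, which is the point of the passage above.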
4
Nonlinear Equations
lecture 27: Introduction to Nonlinear Equations
lecture 28: Bracketing Algorithms
The solution of nonlinear equations has been a motivating
challenge throughout the history of numerical analysis. We opened
these notes with mention of
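Bisection, the prototypical bracketing algorithm of lecture 28, can be sketched in a few lines (my own illustration, not the notes' code): repeatedly halve an interval [a, b] on which f changes sign.

```python
def bisect(f, a, b, tol=1e-12):
    """Find a zero of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:   # the sign change, hence a root, lies in [a, m]
            b, fb = m, fm
        else:              # otherwise the root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)  # ~ 1.41421356..., i.e., sqrt(2)
```

Each step halves the bracket, so convergence is guaranteed but only linear, which is the trade-off the notes go on to discuss.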
lecture 33: Local Analysis of One-Step Integrators
What can be said of the error between the computed solution xk at time tk = t0 + kh and the exact solution x(tk)? In this lecture and the next, we analyze this error, as a function of k, h, and prop
lecture 30: Convergence of Newton's Method via Direct Iteration
4.3
Direct iteration
We have already performed a simple analysis of Newton's method to gain an appreciation for the quadratic convergence rate. For a broader perspective, we shall now put N
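To illustrate direct iteration in general (a sketch of my own, not the notes' example), iterate x_{k+1} = Phi(x_k) for a contractive Phi; with Phi = cos, the iterates converge linearly to the unique fixed point of cos.

```python
import math

def fixed_point(phi, x0, steps):
    """Direct iteration x_{k+1} = phi(x_k), returning the final iterate."""
    x = x0
    for _ in range(steps):
        x = phi(x)
    return x

x_star = fixed_point(math.cos, 1.0, 100)
print(x_star)  # ~ 0.7390851..., the fixed point where cos(x) = x
```

Since |cos'(x_star)| < 1, the error contracts by a fixed factor each step: linear convergence, in contrast to Newton's quadratic rate.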
lecture 22: Newton-Cotes quadrature
3.2
Newton-Cotes quadrature
[Figure: plot of f(x).]
You encountered the most basic method for approximating an integral when you learned calculus: the Riemann integral is motivated by approximating the area under a curve by the
lecture 26: Richardson extrapolation
3.5
Richardson extrapolation, Romberg integration
Throughout numerical analysis, one encounters procedures that
apply some simple approximation (e.g., linear interpolation) to construct some equally simple algorith
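The idea can be sketched with the composite trapezoid rule (my own illustration, not the notes' code): combining the approximations at spacings h and h/2 as (4 T(h/2) - T(h)) / 3 cancels the leading O(h^2) error term, one Richardson/Romberg step.

```python
import math

def trap(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

a, b = 0.0, math.pi          # integral of sin over [0, pi] is exactly 2
t1 = trap(math.sin, a, b, 8)     # spacing h
t2 = trap(math.sin, a, b, 16)    # spacing h/2
richardson = (4.0 * t2 - t1) / 3.0   # one extrapolation step
print(abs(t2 - 2.0), abs(richardson - 2.0))  # the extrapolated error is far smaller
```

One cheap combination of two crude answers beats either answer by orders of magnitude, which is the theme of this section.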
lecture 24: Gaussian quadrature rules: fundamentals
3.4
Gaussian quadrature
It is clear that the trapezoid rule,

    ((b - a)/2) (f(a) + f(b)),

exactly integrates linear polynomials, but not all quadratics. In fact, one can show that no quadrature rule of the
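This exactness claim is easy to check numerically; a minimal sketch (mine, not the notes'):

```python
def trap_rule(f, a, b):
    """The (simple) trapezoid rule on [a, b]."""
    return (b - a) / 2.0 * (f(a) + f(b))

# Linear f(x) = 3x + 1 on [0, 2]: the exact integral is 8, and the rule matches.
print(trap_rule(lambda x: 3 * x + 1, 0.0, 2.0))  # 8.0, exact
# Quadratic f(x) = x^2 on [0, 2]: the exact integral is 8/3, but the rule gives 4.
print(trap_rule(lambda x: x * x, 0.0, 2.0))      # 4.0, not 8/3
```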
lecture 19: Orthogonal Polynomials, Part I
lecture 20: Orthogonal Polynomials, Part II
2.5
Systems of orthogonal polynomials
Given a basis for Pn, we can obtain the least squares approximation to f ∈ C[a, b] by solving the linear system Gc = b as d
3
Quadrature
lecture 21: Interpolatory Quadrature Rules
The past two chapters have developed a suite of tools for polynomial interpolation and approximation. We shall now apply these
tools toward the approximation of definite integrals.
To compute the lea
lecture 15: Chebyshev Polynomials for Optimal Interpolation
2.3
Optimal Interpolation Points via Chebyshev Polynomials
As an application of the minimax approximation procedure, we consider how best to choose interpolation points {x_j}_{j=0}^n to minimi
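One standard choice (a sketch under the usual conventions; the notes' exact definition may differ) is the Chebyshev points x_j = cos(j pi / n) on [-1, 1], the extrema of the Chebyshev polynomial T_n, which cluster toward the endpoints.

```python
import math

def chebyshev_points(n):
    """Chebyshev (extreme) points x_j = cos(j*pi/n), j = 0, ..., n, on [-1, 1]."""
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

pts = chebyshev_points(4)
print(pts)  # runs from 1 down to -1, clustered near the endpoints
```

The clustering near +-1 is what tames the interpolation error that equally spaced points suffer from.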
lecture 17: Fundamentals of Least Squares Approximation, Part I
lecture 18: Fundamentals of Least Squares Approximation, Part II
This proof is very general: we are thinking of f0, . . . , fn being a basis for Pn (and hence linearly independent), but
lecture 16: Introduction to Least Squares Approximation
2.4
Least squares approximation
The minimax criterion is an intuitive objective for approximating
a function. However, in many cases it is more appealing (for both
computation and for the given ap
lecture 14: Equioscillation, Part 2
A direct proof that an optimal minimax approximation p ∈ Pn must give an equioscillating error is rather tedious, requiring one to chase down the oscillation points one at a time. The following approach is a bit mor
2
Approximation Theory
lecture 12: Introduction to Approximation Theory
Interpolation is an invaluable tool in numerical analysis: it
provides an easy way to replace a complicated function by a polynomial (or piecewise polynomial), and, at least as import
lecture 29: Newton's Method
We have studied two bracketing methods for finding zeros of a function, bisection and regula falsi. These methods have certain virtues
(most importantly, they always converge), but some exploratory
evaluations of f might be
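Newton's method itself is short; a minimal sketch (my own, not the notes' code) for f(x) = x^2 - 2, whose positive root is sqrt(2):

```python
def newton(f, fprime, x0, steps):
    """Newton's method: x <- x - f(x) / f'(x), starting from x0."""
    x = x0
    for _ in range(steps):
        x -= f(x) / fprime(x)
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 6)
print(root)  # ~ 1.41421356..., machine precision in only a few steps
```

Unlike bisection, there is no guaranteed bracket here: the speed comes at the cost of the convergence guarantees the bracketing methods enjoy.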
lecture 13: Equioscillation, Part 1
2.2
Oscillation Theorem
The previous example hints that the points at which the error f - p attains its maximum magnitude play a central role in the theory of minimax approximation. The Theorem of de la Vallée Poussin i
lecture 9: Introduction to Splines
1.11
Splines
Spline fitting, our next topic in interpolation theory, is an essential
tool for engineering design. As in the last lecture, we strive to interpolate data using low-degree polynomials between consecutive
lecture 10: B-Splines
1.11.2
B-Splines: a basis for splines
Throughout our discussion of standard polynomial interpolation, we
viewed Pn as a linear space of dimension n + 1, and then expressed
the unique interpolating polynomial in several different b
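The simplest case is degree 1, where the B-splines are the "hat" functions; a sketch of my own (not the notes' construction) showing that, away from the boundary knots, the interior hats sum to 1, the partition-of-unity property:

```python
def hat(i, x, knots):
    """Degree-1 B-spline (hat) centered at knots[i], supported on [knots[i-1], knots[i+1]]."""
    if knots[i - 1] <= x <= knots[i]:
        return (x - knots[i - 1]) / (knots[i] - knots[i - 1])
    if knots[i] <= x <= knots[i + 1]:
        return (knots[i + 1] - x) / (knots[i + 1] - knots[i])
    return 0.0

knots = [0.0, 0.25, 0.5, 0.75, 1.0]
x = 0.4  # a point away from the boundary knots
total = sum(hat(i, x, knots) for i in range(1, len(knots) - 1))
print(total)  # ~ 1.0: the interior hats form a partition of unity here
```

Each hat is a genuine basis element: it is piecewise linear, continuous, and supported on just two adjacent subintervals.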
lecture 8: Piecewise interpolation
1.10
Piecewise polynomial interpolation
We have seen, through Runge's example, that high degree polynomial interpolation can lead to large errors when the (n + 1)st derivative of f is large in magnitude. In other cases
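A quick numerical sketch (mine, not the notes') of why the piecewise approach helps: for piecewise-linear interpolation, the maximum error scales like h^2 in the piece width h, so halving h cuts the error by about 4.

```python
import math

def pl_interp_error(f, a, b, n):
    """Max error (sampled) of piecewise-linear interpolation of f on n equal pieces."""
    h = (b - a) / n
    worst = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        for t in (0.25, 0.5, 0.75):           # sample inside each piece
            x = x0 + t * h
            p = f(x0) + (f(x1) - f(x0)) * t   # the linear interpolant on [x0, x1]
            worst = max(worst, abs(f(x) - p))
    return worst

e1 = pl_interp_error(math.sin, 0.0, math.pi, 8)
e2 = pl_interp_error(math.sin, 0.0, math.pi, 16)
print(e1 / e2)  # ~ 4: second-order accuracy in the piece width
```

Refining the mesh, rather than raising the degree, sidesteps the Runge phenomenon entirely.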
lecture 6: Interpolating Derivatives
1.8
Hermite Interpolation and Generalizations
Example 1.1 demonstrated that polynomial interpolants to sin(x) attain arbitrary accuracy for x ∈ [-5, 5] as the polynomial degree increases, even if the interpolation
lecture 7: Trigonometric Interpolation
1.9
Trigonometric interpolation for periodic functions
Thus far all our interpolation schemes have been based on polynomials. However, if the function f is periodic, one might naturally prefer
to interpolate f wit
lecture 4: Constructing Finite Difference Formulas
1.7
Application: Interpolants for Finite Difference Formulas
The most obvious use of interpolants is to construct polynomial models of more complicated functions. However, numerical analysts rely
on in
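For example (a sketch of my own, not the notes' derivation), differentiating the interpolant through x - h, x, x + h yields the second-order central difference f'(x) ~ (f(x + h) - f(x - h)) / (2h), whose error shrinks like h^2:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Test on f = exp at x = 1, where f'(1) = e.
e1 = abs(central_diff(math.exp, 1.0, 1e-2) - math.e)
e2 = abs(central_diff(math.exp, 1.0, 5e-3) - math.e)
print(e1 / e2)  # ~ 4: halving h cuts the error by about 4 (O(h^2))
```

This is the sense in which finite difference formulas are just interpolants in disguise.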
lecture 5: Finite Difference Methods for Differential Equations
1.7.3 Application: Boundary Value Problems
Example 1.6 (Dirichlet boundary conditions). Suppose we want to
solve the differential equation
u''(x) = g(x),    x ∈ [0, 1]
for the unknown fu
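A minimal sketch of the resulting scheme (my own illustration; the notes' example may differ in signs and boundary data): with u(0) = u(1) = 0, replace u'' by the second difference (u[i-1] - 2u[i] + u[i+1]) / h^2 = g(x_i) at the interior grid points, giving a tridiagonal linear system solved here by the Thomas algorithm.

```python
def solve_bvp(g, n):
    """Solve u''(x) = g(x) on [0, 1], u(0) = u(1) = 0, on n interior grid points."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]       # interior grid points
    rhs = [h * h * g(xi) for xi in x]
    # Tridiagonal matrix with diagonal -2 and off-diagonals 1: forward elimination.
    diag = [-2.0] * n
    for i in range(1, n):
        m = 1.0 / diag[i - 1]
        diag[i] -= m
        rhs[i] -= m * rhs[i - 1]
    # Back substitution.
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - u[i + 1]) / diag[i]
    return x, u

# For g(x) = 2 the exact solution is u(x) = x^2 - x, and the second difference
# is exact on quadratics, so the grid values match to rounding error.
x, u = solve_bvp(lambda x: 2.0, 9)
print(max(abs(ui - (xi * xi - xi)) for xi, ui in zip(x, u)))  # ~ 0
```

The tridiagonal solve costs O(n), which is why this discretization scales to fine grids.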
lecture 3: Interpolation Error Bounds
1.6
Convergence Theory for Polynomial Interpolation
Interpolation can be used to generate low-degree polynomials that
approximate a complicated function over the interval [ a, b]. One
might assume that the more dat
lecture 2: Superior Bases for Polynomial Interpolants
1.3
Polynomial interpolants in a general basis
The monomial basis may seem like the most natural way to write
down the interpolating polynomial, but it can lead to numerical
problems, as seen in the
Lecture Notes on Numerical Analysis
Virginia Tech MATH/CS 5466 Spring 2016
We model our world with continuous mathematics. Whether our
interest is natural science, engineering, even finance and economics,
the models we most often employ are functions of
THE CATENARY AND HYPERBOLIC FUNCTIONS
MICHAEL RAUGH
Abstract. A uniform perfectly flexible chain hangs freely from
two fixed points in a catenary, a curve characterized by a hyperbolic cosine function. This can be shown using elementary calculus,
but it a
Embree
draft
23 February 2012
1 Basic Spectral Theory
Matrices prove so useful in applications because of the insight one gains from
eigenvalues and eigenvectors. A first course in matrix theory should thus
be devoted to basic spectral theory: the deve