MATRICES
Matrices are rectangular arrays of real or complex
numbers. With them, we define arithmetic operations
that are generalizations of those for real and complex
numbers. The general form of a matrix of order m × n
is

        [ a1,1  ···  a1,n ]
    A = [   ⋮          ⋮  ]
        [ am,1  ···  am,n ]
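As a small illustration (my own sketch, not from the notes), an m × n matrix can be stored as a NumPy array, with the entry ai,j accessed by its row and column indices:

```python
import numpy as np

# A 2 x 3 matrix: order m x n with m = 2, n = 3
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

m, n = A.shape
print(m, n)      # 2 3
print(A[0, 1])   # the entry a_{1,2} = 2.0 (NumPy indices are 0-based)
```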
We say
EXAMPLE OF ONE-STEP METHOD
Consider solving
    y' = y cos x,    y(0) = 1

Imagine writing a Taylor series for the solution Y (x),
say initially about x = 0. Then

    Y(h) = Y(0) + h Y'(0) + (h²/2) Y''(0) + (h³/6) Y'''(0) + ···

We can calculate Y'(0) = Y(0) cos(0) = 1.
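Differentiating the equation further gives Y''(0) = 1 and Y'''(0) = 0, and the exact solution of this problem is Y(x) = exp(sin x). A quick numerical check of the truncated Taylor series (a sketch of mine, with h chosen for illustration):

```python
import math

def taylor3(h):
    # Y(h) ~ Y(0) + h Y'(0) + (h^2/2) Y''(0) + (h^3/6) Y'''(0)
    # for y' = y cos x, y(0) = 1: Y'(0) = 1, Y''(0) = 1, Y'''(0) = 0
    return 1.0 + h + h**2 / 2.0 + 0.0 * h**3 / 6.0

h = 0.1
exact = math.exp(math.sin(h))          # exact solution Y(h) = e^{sin h}
print(abs(taylor3(h) - exact))         # small; the truncation error is O(h^4) here
```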
ITERATION METHODS
These are methods which compute a sequence of progressively more accurate iterates approximating the solution of Ax = b.
We need such methods for solving many large linear systems. Sometimes the matrix is too large to
be stored in the computer's memory.
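As a sketch of one classical iteration method (my own example, not taken from the notes), the Jacobi method splits A into its diagonal D and off-diagonal part R and iterates x ← D⁻¹(b − R x):

```python
import numpy as np

def jacobi(A, b, x0, n_iter):
    D = np.diag(A)            # vector of diagonal entries of A
    R = A - np.diag(D)        # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(n_iter):
        x = (b - R @ x) / D   # x_{k+1} = D^{-1} (b - R x_k)
    return x

# A small diagonally dominant system, so Jacobi converges
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([6.0, 9.0])
x = jacobi(A, b, np.zeros(2), 50)
print(x)   # approaches the exact solution [7/6, 4/3]
```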
SOLVING LINEAR SYSTEMS
We want to solve the linear system
    a1,1 x1 + ··· + a1,n xn = b1
                 ⋮
    an,1 x1 + ··· + an,n xn = bn
This will be done by the method used in beginning
algebra, by successively eliminating unknowns from
equations, until eventually we have only one equation
in one unknown.
NUMERICAL METHODS FOR ODEs
Consider the initial value problem
    y' = f(x, y),    x0 ≤ x ≤ b,    y(x0) = Y0

and denote its solution by Y (x). Most numerical
methods solve this by finding values at a set of node
points:

    x0 < x1 < ··· < xN ≤ b

The approximating values are yn ≈ Y(xn), n = 0, 1, ..., N.
ESTIMATION OF ERROR
Let x̂ denote an approximate solution for Ax = b;
perhaps x̂ is obtained by Gaussian elimination. Let x
denote the exact solution. Then introduce

    r = b − A x̂

a quantity called the residual for x̂. Then

    r = b − A x̂ = Ax − A x̂ = A(x − x̂)

and therefore

    x − x̂ = A⁻¹ r
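The identity x − x̂ = A⁻¹r can be checked numerically; here is a sketch (my own example matrix and perturbation):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])

x = np.linalg.solve(A, b)             # exact solution (up to rounding): [1, 2]
x_hat = x + np.array([1e-3, -1e-3])   # a perturbed "approximate" solution
r = b - A @ x_hat                     # residual of x_hat
err = np.linalg.solve(A, r)           # A^{-1} r
print(np.allclose(err, x - x_hat))    # True: A^{-1} r recovers the error x - x_hat
```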
GENERAL ERROR FORMULA
In general,
    yn+1 = yn + h f(xn, yn),    n = 0, 1, ..., N − 1

    Y(xn+1) = Y(xn) + h Y'(xn) + (h²/2) Y''(ξn)
            = Y(xn) + h f(xn, Y(xn)) + (h²/2) Y''(ξn)

with some xn ≤ ξn ≤ xn+1.
We will use this as the starting point of our discussion
of the error.
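The formula above says Euler's method drops an O(h²) term at each step, which leads to O(h) error overall. A sketch (mine) checking this on y' = y cos x, y(0) = 1, whose exact solution is Y(x) = exp(sin x):

```python
import math

def euler(f, x0, y0, b, n):
    # Euler's method: y_{n+1} = y_n + h f(x_n, y_n)
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: y * math.cos(x)
exact = math.exp(math.sin(1.0))                 # Y(1) for the test problem
e1 = abs(euler(f, 0.0, 1.0, 1.0, 100) - exact)  # error with h = 1/100
e2 = abs(euler(f, 0.0, 1.0, 1.0, 200) - exact)  # error with h = 1/200
print(e1 / e2)   # roughly 2: halving h halves the error, i.e. O(h) convergence
```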
LEAST SQUARES DATA FITTING
Experiments generally have error or uncertainty in measuring their outcome. Error can be human error, but it is
more usually due to inherent limitations in the equipment
being used to make measurements. Uncertainty can be
due to
LINEAR SYSTEMS
Consider the following example of a linear system:
     x1 + 2x2 + 3x3 = 5
    −x1       +  x3 = 3
    3x1 +  x2 + 3x3 = 3

Its unique solution is

    x1 = −1,    x2 = 0,    x3 = 2
In general we want to solve n equations in n unknowns. For this, we need some simplifying notation.
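The stated solution can be verified numerically; a sketch using NumPy's built-in solver:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
A = np.array([[ 1.0, 2.0, 3.0],
              [-1.0, 0.0, 1.0],
              [ 3.0, 1.0, 3.0]])
b = np.array([5.0, 3.0, 3.0])

x = np.linalg.solve(A, b)
print(x)   # approximately [-1, 0, 2]
```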
NUMERICAL STABILITY;
IMPLICIT METHODS
When solving the initial value problem
    Y'(x) = f(x, Y(x)),    x0 ≤ x ≤ b,    Y(x0) = Y0

we know that small changes in the initial data Y0 will
result in small changes in the solution of the differential equation. More precisely,
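A standard implicit scheme (my own sketch here, not necessarily the one these notes develop) is the backward Euler method, y_{n+1} = y_n + h f(x_{n+1}, y_{n+1}). On the stiff test problem y' = λy with λ = −50, the implicit update can be solved in closed form:

```python
# Backward Euler on y' = lam * y, y(0) = 1:
#   y_{n+1} = y_n + h * lam * y_{n+1}  =>  y_{n+1} = y_n / (1 - h*lam)
lam = -50.0
h = 0.1      # far too large for explicit Euler: |1 + h*lam| = 4 > 1 would blow up
y = 1.0
for _ in range(20):
    y = y / (1.0 - h * lam)   # one backward Euler step
print(abs(y))   # decays toward 0, like the true solution e^{lam x}
```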
SYSTEMS OF ODES
Consider the pendulum shown below. Assume the rod
is of negligible mass, that the pendulum is of mass m,
and that the rod is of length ℓ. Assume the pendulum
moves in the plane shown, and assume there is no
friction in the motion about its pivot.
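This frictionless pendulum is commonly modeled by the second-order equation θ'' = −(g/ℓ) sin θ (a standard result, stated here without the derivation). Written as a first-order system u1' = u2, u2' = −(g/ℓ) sin u1, it can be integrated numerically; a minimal Euler sketch, with step size and initial data as my own illustrative choices:

```python
import math

g, l = 9.81, 1.0         # gravity (m/s^2) and rod length (m)
u1, u2 = 0.1, 0.0        # initial angle (rad) and angular velocity (rad/s)
h, n = 0.001, 1000       # step size and number of steps (integrate to t = 1 s)
for _ in range(n):
    # one Euler step for the system u1' = u2, u2' = -(g/l) sin(u1)
    u1, u2 = u1 + h * u2, u2 + h * (-(g / l) * math.sin(u1))
print(u1)   # angle after 1 second; close to -0.1 (about half a period has passed)
```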
DIFFERENTIAL EQUATIONS
A principal model of physical phenomena.
The equation:
    y' = f(x, y)

The initial value:

    y(x0) = Y0

Find the solution Y (x) on some interval x0 ≤ x ≤ b. Together these two conditions constitute an initial value
problem.
We will study methods
MULTISTEP METHODS
All of the methods we have studied until now are examples of one-step methods. These methods use one
past value of the numerical solution, call it yn, in order
to compute a new value yn+1. Multistep methods use
more than one past value,
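As a sketch of one such method (my own example), the two-step Adams-Bashforth formula y_{n+1} = y_n + (h/2)(3 f(x_n, y_n) − f(x_{n−1}, y_{n−1})) uses the two past values y_n and y_{n−1}; here it is applied to the test problem y' = y cos x, y(0) = 1, with the exact solution exp(sin x) supplying the start-up value y_1:

```python
import math

f = lambda x, y: y * math.cos(x)   # test problem y' = y cos x, y(0) = 1
h, n = 0.01, 100                   # step size and number of steps on [0, 1]

y_prev = 1.0                       # y_0 = Y(0)
y = math.exp(math.sin(h))          # y_1: start-up value, taken exact here
x = h
for _ in range(n - 1):
    # y_{n+1} = y_n + (h/2) * (3 f(x_n, y_n) - f(x_{n-1}, y_{n-1}))
    y, y_prev = y + (h / 2) * (3 * f(x, y) - f(x - h, y_prev)), y
    x += h

print(abs(y - math.exp(math.sin(1.0))))   # small: global error is O(h^2)
```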
GAUSSIAN ELIMINATION - REVISITED
Consider solving the linear system
     2x1 +   x2 −  x3 + 2x4 = 5
     4x1 +  5x2 − 3x3 + 6x4 = 9
    −2x1 +  5x2 − 2x3 + 6x4 = 4
     4x1 + 11x2 − 4x3 + 8x4 = 2
by Gaussian elimination without pivoting. We denote
this linear system by Ax = b. The augmented
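A sketch of elimination without pivoting in NumPy (the minus signs in A are restored where the transcription dropped them; the implementation is my own illustration):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0, 2.0],
              [ 4.0,  5.0, -3.0, 6.0],
              [-2.0,  5.0, -2.0, 6.0],
              [ 4.0, 11.0, -4.0, 8.0]])
b = np.array([5.0, 9.0, 4.0, 2.0])

n = len(b)
U = A.copy()
c = b.copy()
for k in range(n - 1):                 # eliminate unknown x_{k+1} from later rows
    for i in range(k + 1, n):
        m = U[i, k] / U[k, k]          # multiplier m_{ik} (no pivoting: U[k,k] != 0 assumed)
        U[i, k:] -= m * U[k, k:]
        c[i] -= m * c[k]

x = np.zeros(n)                        # back substitution on the triangular system
for i in range(n - 1, -1, -1):
    x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
print(x)   # the solution of the system above
```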
TWO-POINT BVP
Consider the two-point boundary value problem of a
second-order linear equation:
    Y''(x) = p(x) Y'(x) + q(x) Y(x) + r(x),    a ≤ x ≤ b
    Y(a) = g1,    Y(b) = g2

Assume the given functions p, q and r are continuous
on [a, b]. Unlike the initial value problem,
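One common approach to such problems (a sketch of mine, not necessarily the method these notes go on to develop) replaces Y'' by a centered difference; for the special case p = 0, i.e. Y'' = q(x)Y + r(x), this gives a tridiagonal linear system for the interior values. Tested on Y'' = Y, Y(0) = 0, Y(1) = sinh(1), whose exact solution is sinh(x):

```python
import numpy as np

def bvp_fd(q, r, a, b, g1, g2, n):
    # Solve Y'' = q(x) Y + r(x), Y(a) = g1, Y(b) = g2 on n subintervals by
    # the centered difference (Y[i-1] - 2 Y[i] + Y[i+1]) / h^2 ~ Y''(x_i).
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    A = np.zeros((n - 1, n - 1))       # equations at interior nodes x_1 .. x_{n-1}
    rhs = np.zeros(n - 1)
    for i in range(n - 1):
        xi = x[i + 1]
        A[i, i] = -2.0 / h**2 - q(xi)
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < n - 2:
            A[i, i + 1] = 1.0 / h**2
        rhs[i] = r(xi)
    rhs[0] -= g1 / h**2                # move the known boundary values to the
    rhs[-1] -= g2 / h**2               # right-hand side
    y = np.linalg.solve(A, rhs)
    return x, np.concatenate([[g1], y, [g2]])

x, y = bvp_fd(lambda x: 1.0, lambda x: 0.0, 0.0, 1.0, 0.0, np.sinh(1.0), 50)
print(np.max(np.abs(y - np.sinh(x))))  # small: the scheme is O(h^2) accurate
```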