CONCORDIA UNIVERSITY
Department of Mathematics & Statistics

Course: Mathematics   Number: 208/2   Section(s): All
Examination: Midterm   Date: October 2011   Time: 1 Hour 30 minutes   Pages: 2
Instructors / Course Examiner: B. Rhodes, C. Santana, D. Sen, E. Smith, F. Romanel
The vectors

    (1/√2)(1, −1, 0, 0),  (1/√6)(1, 1, −2, 0),  (√3/6)(1, 1, 1, −3)

form an orthonormal basis for L⊥. Moreover, we have

    R⁴ = L ⊕ L⊥ = {(c, c, c, c) : c ∈ R} ⊕ {(x, y, z, w) ∈ R⁴ : x + y + z + w = 0},

a decomposition of R⁴ into a line and its three-dimensional orthogonal complement. Notice that fo…
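The decomposition of R⁴ into the line L spanned by (1, 1, 1, 1) and its orthogonal complement can be checked numerically. The sketch below (an illustration, with an arbitrary test vector not taken from the text) splits a vector into its component along L and its component in L⊥:

```python
import numpy as np

# Sketch of the decomposition R^4 = L (+) L-perp, where L is the line
# spanned by (1, 1, 1, 1). The test vector w is illustrative.
d = np.ones(4) / 2.0                  # unit vector along L: (1,1,1,1)/sqrt(4)
w = np.array([3.0, 1.0, 0.0, 2.0])    # arbitrary test vector

w_L = (w @ d) * d                     # component along the line L
w_perp = w - w_L                      # component in L-perp

# w_perp satisfies x + y + z + w = 0, i.e. it lies in L-perp
print(abs(w_perp.sum()) < 1e-12)      # True
print(np.allclose(w_L + w_perp, w))   # True: the two pieces recover w
```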
= {0_W}, U + V = U ⊕ V. When U ∩ V ≠ {0_W}, U + V ≠ U ⊕ V. This distinction is important because the direct sum has a very nice property:

Theorem 14.6.1. If w ∈ U ⊕ V, then there is only one way to write w as the sum of a vector in U and a vector in V.

Proof…
eigenvector, we can pick a value for x. Setting x = 1 is convenient, and gives the eigenvector

    v_1 = (1, 4).

λ = 2: We solve the linear system

    [  4   2 ] [x]   [0]
    [ 16   8 ] [y] = [0].

Here again both equations agree, because we chose λ to make the system singular. We see that y = −2…
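Taking the system's entries exactly as printed (the extraction may have dropped signs in the original), both equations reduce to the same line, so the matrix is singular and x = 1, y = −2 solves it. A quick numerical sketch:

```python
import numpy as np

# Sketch: the singular system (M - lambda*I) v = 0, with entries as
# printed in the text; signs in the original source may differ.
A = np.array([[4.0, 2.0],
              [16.0, 8.0]])

print(np.linalg.matrix_rank(A))   # 1: the two rows agree, system is singular
v = np.array([1.0, -2.0])         # x = 1 gives y = -2
print(A @ v)                      # [0. 0.]: v solves the system
```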
Example 14.5 (QR Decomposition). Chapter 7, Section 7.7 teaches you how to solve linear systems by decomposing a matrix M into a product of lower and upper triangular matrices, M = LU. The Gram–Schmidt procedure suggests another matrix decomposition, M = QR…
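A minimal sketch of the QR decomposition using numpy's built-in routine (the matrix here is illustrative, not the example from the text): Q has orthonormal columns and R is upper triangular, with M = QR.

```python
import numpy as np

# Sketch: QR decomposition of an illustrative 2x2 matrix.
M = np.array([[2.0, -1.0],
              [1.0,  3.0]])
Q, R = np.linalg.qr(M)

print(np.allclose(Q @ R, M))             # True: M = QR
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q are orthonormal
print(np.allclose(R, np.triu(R)))        # True: R is upper triangular
```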
eigenvalue (see Review Problem 3). Let P be the square matrix of orthonormal column vectors

    P = (x_1 x_2 ··· x_n).

While x_1 is an eigenvector for M, the others are not necessarily eigenvectors for M. Then

    MP = (λ_1 x_1  Mx_2 ··· Mx_n).
the matrix M − λI is singular, and so we require that

    det(M − λI) = 0.¹

¹To save writing many minus signs, one can instead compute det(λI − M), which is equivalent if you only need the roots.

Figure 12.2: Don't forget the characteristic polynomial; you will ne…
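The condition det(M − λI) = 0 can be sketched numerically: the eigenvalues are exactly the roots of the characteristic polynomial. The matrix below is illustrative, not from the text.

```python
import numpy as np

# Sketch: eigenvalues as roots of the characteristic polynomial.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(M)                   # coefficients of det(lambda*I - M)
eigs = np.sort(np.roots(coeffs))      # roots of the polynomial

print(coeffs)                         # [1, -4, 3]: lambda^2 - 4*lambda + 3
print(eigs)                           # [1. 3.]
print(np.sort(np.linalg.eigvals(M)))  # [1. 3.]: the same eigenvalues
```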
of this book, not matrices; matrices are merely a convenient way of doing computations.

Change of Basis Example. Let's now calculate how the matrix of a linear transformation changes when changing basis. To wit, let L: V → W with matrix M = (m^i_j) in the or…
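The standard change-of-basis rule can be sketched numerically: if the columns of P express a new input basis in the old one, and Q does the same for the output basis, then the matrix of L in the new bases is M′ = Q⁻¹MP. All matrices below are illustrative, not the example from the text.

```python
import numpy as np

# Sketch: change of basis for the matrix of L: V -> W.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # matrix of L in the old bases (illustrative)
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # new input basis, columns in old coordinates
Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])    # new output basis, columns in old coordinates

M_new = np.linalg.inv(Q) @ M @ P

# Check on one vector: applying L and converting coordinates agree.
v_new = np.array([1.0, 2.0])                 # coordinates in new input basis
lhs = M_new @ v_new                          # result in new output coordinates
rhs = np.linalg.inv(Q) @ (M @ (P @ v_new))   # same computation, step by step
print(np.allclose(lhs, rhs))                 # True
```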
answered. How do we find eigenvectors and
their eigenvalues? How many eigenvalues
and (independent) eigenvectors does a given
linear transformation have? When can a
linear transformation be diagonalized? We will
start by trying to find the eigenvectors fo
0 is a homogeneous linear equation, linear combinations of solutions are solutions; in other words, the kernel ker(W) is a vector space. Given the linear function W, some vectors are now more special than others. We can use musical intuition to do more! If…
determining the string's shape. The vector v is called an eigenvector and λ its corresponding eigenvalue. The solution set for each λ is called V_λ. For any λ, the set V_λ is a vector space, since elements of this set are solutions to the homogeneous equation (L − λ)v = 0.
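The fact that each eigenspace is a vector space can be sketched numerically: any linear combination of eigenvectors with the same eigenvalue is again an eigenvector (or zero). The matrix below is illustrative, chosen to have a two-dimensional eigenspace.

```python
import numpy as np

# Sketch: V_lambda is closed under linear combinations. L is an
# illustrative matrix with a 2-dimensional eigenspace for lambda = 2.
L = np.diag([2.0, 2.0, 5.0])
lam = 2.0
v = np.array([1.0, 0.0, 0.0])       # eigenvector for lambda = 2
w = np.array([0.0, 1.0, 0.0])       # another eigenvector for lambda = 2

u = 3.0 * v - 4.0 * w               # an arbitrary linear combination
print(np.allclose(L @ u, lam * u))  # True: u still satisfies (L - lam I)u = 0
```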
w_i = Σ_j u_j (u_j · w_i). Thus the matrix for the change of basis from T to R is given by P = (p^j_i) = (u_j · w_i).

14.3 Relating Orthonormal Bases

We would like to calculate the product PPᵀ. For that, we first develop a dirty trick for products of dot…
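The result this computation is heading toward — that the change-of-basis matrix between two orthonormal bases satisfies PPᵀ = I — can be sketched numerically. The two bases below are illustrative (the standard basis and a rotated copy of it).

```python
import numpy as np

# Sketch: P[j, i] = u_j . w_i for two orthonormal bases {u_j}, {w_i}.
U = np.eye(2)                                  # standard orthonormal basis
t = np.pi / 6
W = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])        # rotated orthonormal basis

P = U.T @ W                                    # matrix of dot products u_j . w_i
print(np.allclose(P @ P.T, np.eye(2)))         # True: P is orthogonal
```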
by

    L(x, y, z) = (x + y, x + z, y + z).

Let e_i be the vector with a one in the ith position and zeros in all other positions. (a) Find Le_i for each i = 1, 2, 3. (b) Given a matrix

    M = [ m^1_1  m^1_2  m^1_3
          m^2_1  m^2_2  m^2_3
          m^3_1  m^3_2  m^3_3 ],

what can you say about Me_i for…
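As a hint toward part (b), the pattern is easy to see numerically: multiplying any matrix by the standard basis vector e_i picks out one column. The matrix below is illustrative, not from the problem.

```python
import numpy as np

# Sketch: M e_i equals the i-th column of M.
M = np.arange(1.0, 10.0).reshape(3, 3)   # illustrative 3x3 matrix
for i in range(3):
    e = np.zeros(3)
    e[i] = 1.0                           # standard basis vector e_i
    print(np.allclose(M @ e, M[:, i]))   # True for each i
```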
domain; ran f = f(S). For this reason, the range of f is also sometimes called the image of f and is sometimes denoted im(f) or f(S). We have seen that the range of a matrix is always a span of vectors, and hence a vector space. Note that we prefer the phr…
any matrix (or vector) N, we can compute conj(N) by applying complex conjugation to each entry of N. Compute (conj(x))ᵀ. Then compute (conj(x)ᵀ M x)ᵀ. Note that for matrices, conj(AB + C) = conj(A) conj(B) + conj(C). (h) Show that λ = conj(λ). Using the result of a previous part of this problem, what doe…
CONCORDIA UNIVERSITY
Department of Mathematics & Statistics

Course: Mathematics   Number: 208/2   Section(s): All except EC
Examination: Final   Date: December 2011   Time: 3 Hours   Pages: 3
Instructors / Course Examiner: B. Rhodes, C. Santana, D. Sen, E. Smith, F. R…
MATH 208 Fundamental Mathematics I, Fall 2014

Points to note:

This course deals with some basic mathematical notions, terminology, and techniques that crop up in the mathematical modelling of business/commercial situations, the social sciences, and the life sciences…
Department of Mathematics & Statistics
Concordia University
MATH 208
Fundamental Mathematics I
Fall 2014
Instructor*: Dr. S. I. Zaman
Office/Tel No.: LB 916, Tel. extension 3260
Office Hours: Wednesdays 10–11:30 am
*Students should get the above informati
CONCORDIA UNIVERSITY
Department of Mathematics & Statistics

Course: Mathematics   Number: 208/2   Section(s): All
Examination: Midterm   Date: October 2014   Time: 1 Hour 30 minutes   Pages: 2
Instructors / Course Examiner: A. Bellahnid, C. Santana, D. Barrera, F. Romanell…
matrix M′ of L we could simply calculate what L does to the new input basis vectors in terms of the new output basis vectors:

    L(1 + t), L(t + t²), L(1 + t²) = (1; 2) + (2; 1), (2; 1) + (3; 3), (1; 2) + (3; 3)
                                  = (w′_1 + w′_2, w′_1 + 2w′_2, 2w′_1 + w′_2) = (w…
the second entry of the first row, whose effect upon multiplying the two matrices precisely undoes what we did to the second column of the first matrix. For the third column of M we use Gram–Schmidt to deduce the third orthogonal vector…
be diagonalized? Either
diagonalize it or explain why this is impossible.
Note: It turns out that every matrix is similar
to a block matrix whose diagonal blocks look
like diagonal matrices or the ones above and
whose off-diagonal b
orthogonal basis for the latter vector space.
Note that the set of vectors you start out with
needs to be ordered to uniquely specify the
algorithm; changing the order of the vectors
will give a different orthogonal basis. You
might need to be the one to
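The ordered procedure described above can be sketched as a short function: each vector has its components along the earlier (already orthogonalized) vectors subtracted off, so reordering the input list changes the output basis.

```python
import numpy as np

# A minimal Gram-Schmidt sketch. As noted above, the input list must be
# ordered: permuting it produces a different orthogonal basis.
def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:                   # subtract components along earlier vectors
            w -= (w @ b) / (b @ b) * b
        if not np.allclose(w, 0):         # skip linearly dependent vectors
            basis.append(w)
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
b1, b2 = gram_schmidt(vs)
print(abs(b1 @ b2) < 1e-12)   # True: the output vectors are orthogonal
```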
a subspace of W. 13. Let S_n and A_n denote the spaces of n × n symmetric and antisymmetric matrices, respectively. These are subspaces of the vector space M_{n×n} of all n × n matrices. What are dim M_{n×n}, dim S_n, and dim A_n? Show that M_{n×n} = S_n + A_n. Define an inn…
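The splitting asked for in the exercise can be sketched numerically: every square matrix is the sum of a symmetric and an antisymmetric piece. The matrix below is illustrative, and the sketch shows the split without giving away the dimension counts.

```python
import numpy as np

# Sketch: M = S + A with S symmetric and A antisymmetric.
M = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])    # illustrative matrix
S = (M + M.T) / 2                  # symmetric part
A = (M - M.T) / 2                  # antisymmetric part

print(np.allclose(S, S.T))         # True: S is symmetric
print(np.allclose(A, -A.T))        # True: A is antisymmetric
print(np.allclose(S + A, M))       # True: they sum to M
```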
w · u = 0 for all u ∈ U is the orthogonal complement U⊥ of U in W.

Remark. The symbols U⊥ are often read as "U-perp". This is the set of all vectors in W orthogonal to every vector in U. Notice also that in the above definition we have implicitly assumed that the in…
that S is called the domain of f and T is called the codomain or target of f. We now formally introduce a term that should be familiar to you from many previous courses.

16.1 Range

Definition. The range of a function f : S →…
where we use the assumption that W is finite-dimensional.) Let e_1, . . . , e_n be an orthonormal basis for U. Set:

    u = (w · e_1)e_1 + ··· + (w · e_n)e_n ∈ U,    u⊥ = w − u.

It is easy to check that u⊥ ∈ U⊥ (see the Gram–Schmidt procedure). Then w = u + u⊥, so w ∈ U + U⊥, and we are…
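The construction in this proof can be sketched directly: project w onto an orthonormal basis of U, and check that the leftover piece is orthogonal to U. The basis and vector below are illustrative.

```python
import numpy as np

# Sketch of the proof's construction: u is the projection of w onto U,
# and u_perp = w - u lies in U-perp. Basis and w are illustrative.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])       # orthonormal basis for a plane U in R^3
w = np.array([2.0, -1.0, 3.0])

u = (w @ e1) * e1 + (w @ e2) * e2    # u in U
u_perp = w - u                       # the leftover piece

print(np.allclose(u + u_perp, w))    # True: w = u + u_perp
print(u_perp @ e1, u_perp @ e2)      # 0.0 0.0: u_perp is orthogonal to U
```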