# 18.06 Spring 2009 Exam 2 Practice

## General comments

Exam 2 covers the first 18 lectures of 18.06. It does not cover determinants (lectures 19 and 20). There will also be no questions on graphs and networks. The topics covered are (very briefly summarized):

1. All of the topics from exam 1.
2. Linear independence [key point: the columns of a matrix A are independent if N(A) = {0}], bases (an independent set of vectors that spans a space), and the dimension of subspaces (the number of vectors in any basis).
3. The four fundamental subspaces (key points: their dimensions for a given rank r and m × n matrix A, their relationship to the solutions [if any] of Ax = b, their orthogonal complements, and how/why we can find bases for them via the elimination process).
4. What happens to the four subspaces as we do matrix operations, especially elimination steps, and more generally how the subspaces of AB compare to those of A and B. The fact (important for projection and least-squares!) that A^T A has the same rank as A, the same null space as A, and the same column space as A^T, and why (we proved this in class, and another way in homework).
5. Orthogonal complements S⊥ for subspaces S, especially (but not only) the four fundamental subspaces.
6. Orthogonal projections: given a matrix A, the projection of b onto C(A) is p = Ax̂, where x̂ solves the normal equations A^T A x̂ = A^T b [always solvable, since C(A^T A) = C(A^T)]. If A has full column rank, then A^T A is invertible and we can write the projection matrix P = A (A^T A)^(-1) A^T (so that Ax̂ = Pb, but it is much quicker to solve A^T A x̂ = A^T b by elimination than to compute P in general). The error e = b - Ax̂ lies in C(A)⊥ = N(A^T), and I - P is the projection matrix onto N(A^T).
7. Least-squares: x̂ minimizes ||Ax - b||^2 over all x, and is the least-squares solution. That is, p = Ax̂ is the closest point to b in C(A).
   Application: least-squares curve fitting, minimizing the sum of the squares of the errors.
8. Orthonormal bases, forming the columns of a matrix Q with Q^T Q = I. The projection matrix onto C(Q) is just QQ^T, and x̂ = Q^T b. Obtaining Q from A (i.e., an orthonormal basis from any basis) by Gram-Schmidt, and the correspondence of this process to the factorization A = QR, where R = Q^T A is invertible and upper triangular. Using A = QR to solve equations (either Ax = b or A^T A x̂ = A^T b). Q is an orthogonal matrix only if it is square, in which case Q^T = Q^(-1).
9. Dot products of functions, and hence Gram-Schmidt, orthonormal bases (e.g. Fourier series or orthogonal polynomials), orthogonal projection, and least-squares for functions.

As usual, the exam questions may turn these concepts around a bit, e.g. giving the answer and asking you to work backwards toward the question, or asking about the same concept in a slightly changed context. We want to know that you have really internalized these concepts, not just memorizing an algorithm but knowing why...
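The claim in topic 4, that A^T A has the same rank and null space as A, is easy to check numerically. A minimal NumPy sketch, using a hypothetical rank-deficient matrix (not one from the handout):

```python
import numpy as np

# Hypothetical example matrix of rank 2 (its third column is 2*col2 - col1).
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
AtA = A.T @ A

# Same rank...
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(AtA))  # 2 2

# ...and a null-space vector of A, x = (1, -2, 1), also satisfies A^T A x = 0.
x = np.array([1., -2., 1.])
print(np.allclose(A @ x, 0), np.allclose(AtA @ x, 0))  # True True
```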
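The projection and least-squares machinery of topics 6 and 7 can be sketched in a few lines of NumPy; the 3 × 2 matrix and vector below are a hypothetical example, not data from the handout:

```python
import numpy as np

# Hypothetical 3x2 matrix with full column rank, and a vector to project.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])

# Solve the normal equations A^T A xhat = A^T b (always solvable).
xhat = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ xhat                     # projection of b onto C(A)
e = b - p                        # error vector, in C(A)-perp = N(A^T)
print(np.allclose(A.T @ e, 0))   # True: e is orthogonal to every column of A

# Since A has full column rank, the projection matrix exists explicitly,
# P = A (A^T A)^(-1) A^T, though forming it is slower than elimination.
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P @ b, p))     # True
```

Here xhat comes out to (5, -3) and the projection p to (5, 2, -1), so e = (1, -2, 1) is visibly orthogonal to both columns of A.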
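Topic 8's Gram-Schmidt / A = QR correspondence can also be sketched with `numpy.linalg.qr` on a hypothetical matrix (NumPy uses Householder reflections rather than literal Gram-Schmidt, but the resulting Q and R have exactly the properties listed above):

```python
import numpy as np

# Hypothetical full-column-rank matrix and right-hand side.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])

Q, R = np.linalg.qr(A)   # A = QR, Q with orthonormal columns

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T Q = I
print(np.allclose(R, np.triu(R)))        # True: R is upper triangular
print(np.allclose(Q.T @ A, R))           # True: R = Q^T A
print(np.allclose(Q @ R, A))             # True: A = QR

# Least squares via QR: A^T A xhat = A^T b reduces to R xhat = Q^T b.
xhat = np.linalg.solve(R, Q.T @ b)
print(np.allclose(A @ xhat, Q @ (Q.T @ b)))  # True: p = Q Q^T b
```

Note that this Q is 3 × 2, so it has orthonormal columns but is not an "orthogonal matrix" in the square sense; Q^T = Q^(-1) only holds when Q is square.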