Lecture 12
August 11, 2015
First wrap up some stuff from last lecture.
Conditional density/joint density/conditional expectation.
Joint distribution: so far we have considered only the distribution of a single random variable; now we are interested in the distribution of two variables
Lecture 11
August 10, 2015
Review of last time: Lagrange multipliers.
Problem: minimize f(x) such that g(x) = 0. Solve it by setting up the Lagrangian L = f(x) - λg(x) and setting all partial derivatives equal to zero. We must solve ∇f = λ∇g, g(x) = 0.
Intuition: imagine we are travelling
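A minimal numerical sketch of this recipe, with an objective f(x) = x1^2 + x2^2 and constraint g(x) = x1 + x2 - 1 = 0 invented here for illustration. Because f is quadratic and g is linear, the conditions ∇f = λ∇g and g(x) = 0 form one linear system in (x1, x2, λ):

```python
import numpy as np

# Minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
# Stationarity: grad f = lambda * grad g  ->  2*x = lambda * (1, 1).
# Together with x1 + x2 = 1 this is a 3x3 linear system in (x1, x2, lambda).
K = np.array([[2.0, 0.0, -1.0],   # 2*x1 - lambda = 0
              [0.0, 2.0, -1.0],   # 2*x2 - lambda = 0
              [1.0, 1.0,  0.0]])  # x1 + x2 = 1
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)
print(x1, x2, lam)  # minimizer (0.5, 0.5) with multiplier 1
```

The minimizer lands at the point of the constraint line closest to the origin, as the geometry suggests.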
Lecture 10
August 5, 2015
Introduction to matrix calculus: differentiating linear and quadratic matrix functions.
Suppose y = f(x) where y is a vector in R^m and x is a vector in R^n. Define the first derivative dy/dx by the matrix D where D_ij = ∂y_i/∂x_j. D is called the Jacobian matrix.
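A quick numerical check of this definition: a finite-difference estimate of D_ij for a linear map y = Ax should recover A itself, since the Jacobian of a linear map is its matrix. The test function and matrix below are chosen purely for illustration.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference estimate of D_ij = d y_i / d x_j."""
    y0 = f(x)
    D = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps            # perturb one coordinate at a time
        D[:, j] = (f(xp) - y0) / eps
    return D

# For a linear map y = Ax, the Jacobian is A itself.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([0.5, -1.0, 2.0])
D = jacobian_fd(lambda v: A @ v, x)
print(np.round(D, 4))           # should match A
```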
Lecture 8
July 27, 2015
SVD continued: recall that the SVD factorization of a matrix A is given by A = UΣV^T, where U contains the eigenvectors of AA^T in its columns, V contains the eigenvectors of A^T A in its columns, and Σ contains the square roots of the nonzero eigenvalues of both.
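This relationship is easy to verify numerically: the singular values squared should equal the eigenvalues of A^T A, and UΣV^T should reassemble A. The random matrix below is just an example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Thin SVD: U is 4x3, s holds the singular values, Vt is 3x3.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values squared = eigenvalues of A^T A (eigh returns them ascending).
w = np.linalg.eigvalsh(A.T @ A)
print(np.sort(s**2), w)
```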
Lecture 9
August 2, 2015
Linear programming: simplex method, duality, application to zero-sum games.
A linear program is an optimization problem that has a linear objective function and linear constraints:
    min_x  c^T x
    s.t.   Ax ≥ b,
           x ≥ 0.
The set of all points that satisfy the constraints
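For a small example (problem data invented here, with the constraints read as Ax ≥ b, x ≥ 0), the optimum can be found by brute force rather than the simplex method: in two variables, every vertex of the feasible set is the intersection of two constraint lines, so we can enumerate them all and keep the feasible one with the smallest objective.

```python
import itertools
import numpy as np

# min 2*x1 + 3*x2  s.t.  x1 + x2 >= 4,  x1 + 3*x2 >= 6,  x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# All constraint lines, including x1 >= 0 and x2 >= 0.
lines = np.vstack([A, np.eye(2)])
rhs = np.concatenate([b, np.zeros(2)])

best_x, best_val = None, np.inf
for i, j in itertools.combinations(range(len(rhs)), 2):
    M = lines[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # parallel lines: no vertex
    x = np.linalg.solve(M, rhs[[i, j]])
    if np.all(lines @ x >= rhs - 1e-9) and c @ x < best_val:
        best_x, best_val = x, c @ x
print(best_x, best_val)                 # vertex (3, 1), value 9
```

The simplex method does the same search far more cleverly, walking from vertex to vertex instead of enumerating all of them.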
Lecture 7
July 21, 2015
Consider finding the minimum point of a quadratic form f(x, y) = ax^2 + 2bxy + cy^2.
If the quadratic form has a minimum point, it is called positive definite.
Note that clearly solving f_x = 0, f_y = 0 gives (x, y) = (0, 0). We want to see if all the surrounding
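The standard test (stated here as a sketch): f is positive definite exactly when a > 0 and ac - b^2 > 0, which is the same as both eigenvalues of the symmetric matrix [[a, b], [b, c]] being positive. The sample coefficients below are illustrative.

```python
import numpy as np

# f(x, y) = a*x^2 + 2*b*x*y + c*y^2 corresponds to the symmetric
# matrix [[a, b], [b, c]]. Positive definite iff a > 0 and ac - b^2 > 0.
def is_positive_definite(a, b, c):
    return a > 0 and a * c - b * b > 0

for a, b, c in [(1.0, 0.0, 1.0), (1.0, 2.0, 1.0), (2.0, 1.0, 2.0)]:
    eigs = np.linalg.eigvalsh(np.array([[a, b], [b, c]]))
    # The 2x2 test must agree with the eigenvalue test.
    assert is_positive_definite(a, b, c) == bool(np.all(eigs > 0))
```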
Lecture 6
July 20, 2015
Recall the differential equation we solved earlier: du/dt = Au where
    A = [ -2   1 ]
        [  1  -2 ].
The first step is to find the eigenvalues (-1 and -3) and the eigenvectors (1, 1)^T and (1, -1)^T.
The solution was a combination of the eigenvalues and eigenvectors:
    u(t) = c1 e^{-t} (1, 1)^T + c2 e^{-3t} (1, -1)^T.
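A numerical sanity check of this eigenvector solution, taking A = [[-2, 1], [1, -2]] as one plausible reading of the example and picking arbitrary coefficients c1, c2: the combination u(t) = Σ_k c_k e^{λ_k t} v_k should satisfy du/dt = Au.

```python
import numpy as np

A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])
lams, V = np.linalg.eigh(A)        # eigenvalues ascending: -3, -1
c = np.array([0.7, -1.3])          # arbitrary coefficients

def u(t):
    # u(t) = sum_k c_k * exp(lambda_k * t) * v_k  (v_k = columns of V)
    return V @ (c * np.exp(lams * t))

# Check du/dt = A u with a centered finite difference.
t, h = 0.4, 1e-6
dudt = (u(t + h) - u(t - h)) / (2 * h)
print(dudt, A @ u(t))              # should agree
```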
Lecture 5
July 7, 2015
Diagonalization: form a matrix S with its columns as the eigenvectors of A. Form a matrix Λ with diagonal entries equal to the eigenvalues and 0 everywhere else. Then if all the eigenvectors are independent, S^{-1}AS = Λ. To see why, just rearrange into AS = SΛ.
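Both forms of the identity can be checked directly (example matrix chosen to have distinct eigenvalues, so its eigenvectors are automatically independent):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, S = np.linalg.eig(A)     # columns of S are the eigenvectors of A
Lam = np.diag(lams)            # Lambda: eigenvalues on the diagonal

# S^{-1} A S = Lambda, equivalently A S = S Lambda.
print(np.round(np.linalg.inv(S) @ A @ S, 6))
```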
Lecture 4
July 8, 2015
First talk about how to calculate a determinant before showing what it is good for.
The determinant of a square matrix A is a number. For a 2 × 2 matrix
    A = [ a  b ]
        [ c  d ],
det(A) = ad - bc.
The determinant for an n × n matrix A is given by t
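The 2 × 2 formula is small enough to test against a library routine (example entries invented here):

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

A = np.array([[3.0, 8.0],
              [4.0, 6.0]])
print(det2(*A.ravel()))        # 3*6 - 8*4 = -14
```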
The length (or the norm) of a vector x, denoted by ||x||, is given by the Pythagorean theorem: ||x|| = sqrt(Σ_{i=1}^n x_i^2), where n is the dimension of vector x. Note that ||x||^2 = x · x. The only vector with length 0 is the 0 vector.
Given vectors x and y, how to test if they
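These definitions translate directly into code, using only the standard library:

```python
import math

def norm(x):
    """Length of x from the Pythagorean theorem: sqrt(sum of x_i^2)."""
    return math.sqrt(sum(xi * xi for xi in x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x = [3.0, 4.0]
print(norm(x))                             # 5.0 (the 3-4-5 triangle)
print(math.isclose(norm(x)**2, dot(x, x))) # ||x||^2 = x . x
```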
Lecture 1
June 28, 2015
Matrices are a bunch of numbers put into an array:
Ex:
    A = [ 6  5  4 ]
        [ 3  2  1 ].
A is a matrix. It has 2 rows and 3 columns, i.e. A is a 2 × 3 matrix.
Notation: Ajk is the entry in the jth row and kth column.
A23 =? A32 =?
Special case: A
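The indexing convention can be made concrete (note that NumPy itself is 0-indexed, so a small wrapper handles the 1-indexed A_jk notation):

```python
import numpy as np

A = np.array([[6, 5, 4],
              [3, 2, 1]])
print(A.shape)                  # (2, 3): 2 rows, 3 columns

def entry(M, j, k):
    """A_jk: the entry in the jth row and kth column (1-indexed)."""
    return M[j - 1, k - 1]

print(entry(A, 2, 3))           # A23 = 1
# A32 does not exist: A has only 2 rows.
```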
Homework 2
July 9, 2015
1. Let
       A = [ 1  2  0  3 ]
           [ 0  2  2  2 ]
           [ 0  0  0  0 ]
           [ 0  0  0  4 ].
   Find the basis and dimension of the four fundamental subspaces (the column space and null space of A and A^T).
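The dimensions (though not the bases) can be checked numerically, using the matrix as reconstructed above: for an m × n matrix of rank r, the four subspaces have dimensions r, n - r, r, and m - r.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 3.0],
              [0.0, 2.0, 2.0, 2.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 4.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)
# dim C(A) = r, dim N(A) = n - r, dim C(A^T) = r, dim N(A^T) = m - r
print(r, n - r, r, m - r)
```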
2. What do you know about C(A) when the number of solutions to Ax = b
is (a
HW1
July 1, 2015
1. Let
       A = [ 1  0  1 ]        [ 4 ]
           [ 1  1  0 ],   b = [ 3 ].
           [ 1  1  1 ]        [ 6 ]
   Calculate Ab, and solve the equation Ax = b by Gauss-Jordan elimination, LU decomposition, and by calculating A^{-1}b.
2. The Fibonacci sequence is defined by F_1 = 1, F_2 = 1, F_{n+2} = F_{n+1} + F_n.

Solutions:
1. Ab = (10, 7, 13)^T. Ax = b has solution x = (1, 2, 3)^T.
2. Observe that:
       [ F_{n+2} ]   [ 1  1 ] [ F_{n+1} ]
       [ F_{n+1} ] = [ 1  0 ] [ F_n     ].
   Note that
       [ 1  1 ]^n   [ F_{n+1}  F_n     ]
       [ 1  0 ]   = [ F_n      F_{n-1} ].
   This can be proven by induction. Then our expression for F_{m+n+1} is given by:
       [ F_{m+n+1} ]   [ 1  1 ]^m [ F_{n+1} ]
       [ F_{m+n}   ] = [ 1  0 ]   [ F_n     ].
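Both solutions can be verified in a few lines (using the matrix and vector as reconstructed here for problem 1):

```python
import numpy as np

# Problem 1: Ab and the solution of Ax = b.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([4.0, 3.0, 6.0])
print(A @ b)                     # (10, 7, 13)
print(np.linalg.solve(A, b))     # (1, 2, 3)

# Problem 2: powers of [[1, 1], [1, 0]] contain Fibonacci numbers.
F = [1, 1]                       # F[k] holds F_{k+1}
while len(F) < 12:
    F.append(F[-1] + F[-2])
M = np.array([[1, 1], [1, 0]])
P = np.linalg.matrix_power(M, 10)
print(P)                         # [[F_11, F_10], [F_10, F_9]]
```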
Today we'll be taking a much more rigorous look at probability.
Recall that the sample space (denoted by Ω) is the set of all possible outcomes. Consider for example Ω = [0, 1]; then the probability of any particular outcome happening is zero. So we cannot
An important result is that for aperiodic, irreducible Markov chains, there is a unique stationary distribution π that solves π = πP. In this case the invariant distribution and the limiting distribution are the same.
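A small illustration of both facts (the two-state transition matrix below is invented for the example): iterating π ← πP from any starting distribution converges to the same π, and that limit satisfies π = πP.

```python
import numpy as np

# Two-state chain; rows sum to 1, all entries positive, so the chain is
# aperiodic and irreducible.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])      # any starting distribution works
for _ in range(200):
    pi = pi @ P

print(pi)                      # limiting distribution, approx (5/6, 1/6)
```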
Clearly there are different questions that
MSM502: Linear Algebra
Date and location TBA
Instructor: Leon Cui ([email protected]) Phone: 908-240-5288
Course Objectives: The goal of this course is to give an introduction to linear algebra. A thorough
understanding of this subject is fundamental f
1. (1pt) If A is a 64 × 17 matrix of rank 11, how many independent vectors x satisfy Ax = 0? How many independent vectors satisfy A^T x = 0?
Proof. Since the rank is 11, there are 11 pivots and so there are 17 - 11 = 6 free variables, meaning that 6 independent vectors
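The counts can be confirmed numerically, assuming the 64 × 17 reading of the problem: building a rank-11 matrix as a product of random full-rank factors, the nullity of A is 17 - 11 = 6 and the nullity of A^T is 64 - 11 = 53.

```python
import numpy as np

# A 64 x 17 matrix of rank 11, as a product of two random full-rank factors.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 11)) @ rng.standard_normal((11, 17))

r = np.linalg.matrix_rank(A)
print(r)                       # 11
print(A.shape[1] - r)          # independent solutions of Ax = 0
print(A.shape[0] - r)          # independent solutions of A^T x = 0
```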