Lecture 12
August 11, 2015
First wrap up some stuff from last lecture.
Conditional density/joint density/conditional expectation.
Joint distribution: so far we have considered only the distribution of a single random variable; ...
Lecture 11
August 10, 2015
Review of last time: Lagrange multipliers.
Problem: minimize f(x) such that g(x) = 0. Solve it by setting up the Lagrangian L = f(x) - λg(x) and setting all partial derivatives equal to zero. We must solve ...
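A minimal sketch of the method, using a made-up problem (not from the lecture): minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0. Setting the partials of L = f - λg to zero gives 2x = λ, 2y = λ, plus the constraint.

```python
# Hypothetical example: minimize f(x, y) = x^2 + y^2 subject to x + y = 1,
# via the Lagrangian L = f - lam * g. Stationarity: 2x = lam, 2y = lam.

def solve_example():
    # From 2x = lam and 2y = lam we get x = y; with x + y = 1, x = y = 1/2.
    lam = 1.0
    x = y = lam / 2.0
    return x, y, lam

x, y, lam = solve_example()
# Check the first-order conditions dL/dx = dL/dy = 0 and the constraint.
assert abs(2 * x - lam) < 1e-12
assert abs(2 * y - lam) < 1e-12
assert abs(x + y - 1) < 1e-12
```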
Lecture 10
August 5, 2015
Introduction to matrix calculus: differentiating linear and quadratic matrix functions.
Suppose y = f(x) where y is a vector in R^m and x is a vector in R^n. Define the first derivative dy/dx by the matrix ...
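The derivative dy/dx here is the m-by-n Jacobian matrix with entries dy_i/dx_j. A small numerical sketch (the function f and test point are made up for illustration):

```python
# Approximate the Jacobian J[i][j] = dy_i/dx_j by central finite differences.

def jacobian(f, x, h=1e-6):
    m, n = len(f(x)), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        yp, ym = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (yp[i] - ym[i]) / (2 * h)
    return J

# Example: y1 = x1 + 2*x2, y2 = x1*x2; the Jacobian is [[1, 2], [x2, x1]].
f = lambda x: [x[0] + 2 * x[1], x[0] * x[1]]
J = jacobian(f, [3.0, 4.0])
assert abs(J[0][0] - 1) < 1e-4 and abs(J[0][1] - 2) < 1e-4
assert abs(J[1][0] - 4) < 1e-4 and abs(J[1][1] - 3) < 1e-4
```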
Lecture 8
July 27, 2015
SVD continued: recall that the SVD factorization of a matrix A is given by A = UΣV^T, where U contains the eigenvectors of AA^T in its columns, V contains the eigenvectors of A^T A in its columns, and Σ contains ...
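A tiny 2x2 illustration of the connection (the matrix is chosen here, not from the lecture): the singular values of A are the square roots of the eigenvalues of A^T A.

```python
import math

# Singular values of A via the eigenvalues of A^T A (2x2 case, by hand).
A = [[3.0, 1.0], [1.0, 3.0]]

# Form A^T A explicitly for the 2x2 case.
AtA = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
       [A[0][1]*A[0][0] + A[1][1]*A[1][0], A[0][1]**2 + A[1][1]**2]]

# Eigenvalues of a 2x2 matrix from lam^2 - tr*lam + det = 0.
tr = AtA[0][0] + AtA[1][1]
det = AtA[0][0]*AtA[1][1] - AtA[0][1]*AtA[1][0]
disc = math.sqrt(tr*tr - 4*det)
eigs = sorted([(tr + disc) / 2, (tr - disc) / 2], reverse=True)
sigmas = [math.sqrt(e) for e in eigs]
# For this A the singular values come out to 4 and 2.
assert abs(sigmas[0] - 4.0) < 1e-9 and abs(sigmas[1] - 2.0) < 1e-9
```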
Lecture 9
August 2, 2015
Linear programming: simplex method, duality, application to zero-sum games.
A linear program is an optimization problem that has a linear objective function and linear constraints:
min_x  c^T x,
s.t.  Ax ...
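Since the optimum of an LP (when it exists) is attained at a vertex of the feasible region, a brute-force sketch in two variables is to enumerate all intersections of constraint pairs; the simplex method walks these vertices far more cleverly. The data below is made up for illustration.

```python
from itertools import combinations

# Toy LP: min c.x subject to A x <= b and x >= 0, solved by vertex enumeration.
c = [-1.0, -1.0]                     # minimize -x - y, i.e. maximize x + y
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 3.0, 4.0]

# Add x >= 0 as -x <= 0 so every vertex is an intersection of two rows.
rows = A + [[-1.0, 0.0], [0.0, -1.0]]
rhs = b + [0.0, 0.0]

def intersect(r1, r2, b1, b2):
    # Solve the 2x2 system r1.v = b1, r2.v = b2 by Cramer's rule.
    det = r1[0]*r2[1] - r1[1]*r2[0]
    if abs(det) < 1e-12:
        return None
    return [(b1*r2[1] - b2*r1[1]) / det, (r1[0]*b2 - r2[0]*b1) / det]

best = None
for (i, j) in combinations(range(len(rows)), 2):
    v = intersect(rows[i], rows[j], rhs[i], rhs[j])
    if v and all(rows[k][0]*v[0] + rows[k][1]*v[1] <= rhs[k] + 1e-9
                 for k in range(len(rows))):
        val = c[0]*v[0] + c[1]*v[1]
        if best is None or val < best[0]:
            best = (val, v)

# Optimal value is -4, attained at vertex (2, 2) (and also at (1, 3)).
assert abs(best[0] + 4.0) < 1e-9
```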
Lecture 7
July 21, 2015
Consider finding the minimum point of a quadratic form f(x, y) = ax^2 + 2bxy + cy^2.
If the quadratic form has a minimum point, it is called positive definite.
Note that clearly solving f_x = 0, f_y = 0 ...
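The standard test (equivalent to the setup above) is that f(x, y) = ax^2 + 2bxy + cy^2 is positive definite iff a > 0 and ac - b^2 > 0. A one-function sketch with made-up coefficients:

```python
# f(x, y) = a*x^2 + 2*b*x*y + c*y^2 is positive definite iff a > 0 and ac - b^2 > 0.

def is_positive_definite(a, b, c):
    return a > 0 and a * c - b * b > 0

assert is_positive_definite(2, 1, 2)       # 2x^2 + 2xy + 2y^2 > 0 away from origin
assert not is_positive_definite(1, 2, 1)   # ac - b^2 = -3: a saddle, not definite
```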
Lecture 6
July 20, 2015
Recall the differential equation we solved earlier: du/dt = Au where
A = [-2  1]
    [ 1 -2].
The first step is to find the eigenvalues (-1 and -3) and the eigenvectors
(1, 1) and (1, -1).
The solution was a combination of the ...
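A quick numerical check of the eigen-solution, assuming the matrix A = [[-2, 1], [1, -2]] as reconstructed above: u(t) = c1 e^{-t} (1, 1) + c2 e^{-3t} (1, -1) should satisfy du/dt = Au for any c1, c2.

```python
import math

# Verify that the eigenvector combination solves du/dt = A u.
A = [[-2.0, 1.0], [1.0, -2.0]]
c1, c2 = 1.0, 2.0                    # arbitrary coefficients for the check

def u(t):
    return [c1*math.exp(-t) + c2*math.exp(-3*t),
            c1*math.exp(-t) - c2*math.exp(-3*t)]

def du(t):                           # exact derivative of u
    return [-c1*math.exp(-t) - 3*c2*math.exp(-3*t),
            -c1*math.exp(-t) + 3*c2*math.exp(-3*t)]

t = 0.7
Au = [A[0][0]*u(t)[0] + A[0][1]*u(t)[1],
      A[1][0]*u(t)[0] + A[1][1]*u(t)[1]]
assert all(abs(Au[i] - du(t)[i]) < 1e-9 for i in range(2))
```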
Lecture 5
July 7, 2015
Diagonalization: Form a matrix S with its columns as the eigenvectors of A. Form a matrix Λ with diagonal entries equal to its eigenvalues and 0 everywhere else. Then if all the eigenvectors are independent ...
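When the eigenvectors are independent, S is invertible and A = S Λ S^{-1}. A 2x2 check with an example matrix chosen here (not from the lecture):

```python
# Verify A = S Lambda S^{-1} for a small symmetric matrix.
A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 3 and 1
S = [[1.0, 1.0], [1.0, -1.0]]         # eigenvectors (1, 1) and (1, -1)
Lam = [[3.0, 0.0], [0.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

R = matmul(matmul(S, Lam), inv2(S))
assert all(abs(R[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```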
Lecture 4
July 8, 2015
First talk about how to calculate a determinant before showing what
it is good for.
The determinant of a square matrix A is a number. For a 2 x 2 matrix
A = [a b]
    [c d],
det(A) = ad - bc.
The length (or the norm) of a vector x, denoted by ||x||, is given by the Pythagorean theorem: ||x|| = sqrt(x_1^2 + ... + x_n^2), where n is the dimension of vector x. Note that ||x||^2 = x^T x. The only vector with norm 0 is the 0 vector.
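Both definitions translate directly into code; the sample matrix and vectors below are made up:

```python
import math

def det2(a, b, c, d):          # determinant of [[a, b], [c, d]]
    return a * d - b * c

def norm(x):                   # ||x|| = sqrt(sum of x_i^2)
    return math.sqrt(sum(xi * xi for xi in x))

assert det2(6, 5, 3, 2) == -3          # 6*2 - 5*3
assert norm([3, 4]) == 5.0             # classic 3-4-5 right triangle
assert norm([0, 0, 0]) == 0.0          # only the zero vector has norm 0
```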
Lecture 2
June 30, 2015
Ex. Solve this system by using LU decomposition:
2x1 + x2 + x3 = 5,
4x1 - 6x2 = -2,
-2x1 + 7x2 + 2x3 = 9.
Let's concentrate on the LU decomposition of
A = [ 2  1  1]
    [ 4 -6  0]
    [-2  7  2].
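A sketch of Doolittle-style LU factorization (no row exchanges, which this matrix happens not to need) applied to the system above, followed by forward and back substitution:

```python
# LU factorization of A (no pivoting), then solve L y = b and U x = y.
A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
n = 3

L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
U = [row[:] for row in A]
for k in range(n):
    for i in range(k + 1, n):
        L[i][k] = U[i][k] / U[k][k]          # elimination multiplier
        for j in range(k, n):
            U[i][j] -= L[i][k] * U[k][j]     # zero out below the pivot

# Forward substitution for L y = b, then back substitution for U x = y.
y = [0.0] * n
for i in range(n):
    y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]

# The system is satisfied by x1 = 1, x2 = 1, x3 = 2.
assert [round(v, 9) for v in x] == [1.0, 1.0, 2.0]
```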
Lecture 1
June 28, 2015
Matrices are a bunch of numbers put into an array:
Ex:
A = [6 5 4]
    [3 2 1].
A is a matrix. It has 2 rows and 3 columns, i.e. A is a 2 x 3 matrix.
Notation: A_jk is the entry in the jth row and kth column.
Homework 2
July 9, 2015
1. Let
A = [1 2 0 3]
    [0 2 2 2]
    [0 0 0 0]
    [0 0 0 4].
Find the basis and dimension of the four fundamental subspaces (the column space and null space of A and A^T).
2. What do you know ...
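For problem 1, once the rank r of an m-by-n matrix is known, the four dimensions follow from rank-nullity: dim C(A) = r, dim N(A) = n - r, dim C(A^T) = r, dim N(A^T) = m - r. A sketch that computes the rank of the matrix as read above (exact arithmetic via fractions):

```python
from fractions import Fraction

def rank(M):
    # Plain Gaussian elimination; returns the number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    m, n = len(M), len(M[0])
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(m):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 0, 3], [0, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 4]]
r, m, n = rank(A), 4, 4
# Dimensions of C(A), N(A), C(A^T), N(A^T) respectively:
assert (r, n - r, r, m - r) == (3, 1, 3, 1)
```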
HW1
July 1, 2015
1. Let
A = [1 0 1]      b = [4]
    [1 1 0],         [3]
    [1 1 1]          [6].
Calculate Ab, and solve the equation Ax = b by Gauss-Jordan elimination, LU decomposition, and by calculating A^{-1} b.
2. The Fibonacci ...
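Problem 1 can be checked with a short Gauss-Jordan sketch: reduce the augmented matrix [A | b] to [I | x], using exact fraction arithmetic.

```python
from fractions import Fraction

A = [[1, 0, 1], [1, 1, 0], [1, 1, 1]]
b = [4, 3, 6]
n = 3
M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]

for col in range(n):
    piv = next(i for i in range(col, n) if M[i][col] != 0)
    M[col], M[piv] = M[piv], M[col]          # bring a nonzero pivot up
    M[col] = [v / M[col][col] for v in M[col]]
    for i in range(n):
        if i != col:
            f = M[i][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[col])]

x = [M[i][n] for i in range(n)]
assert x == [1, 2, 3]

# Matrix-vector product for the first part of the problem:
Ab = [sum(A[i][j] * b[j] for j in range(n)) for i in range(n)]
assert Ab == [10, 7, 13]
```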
1. Ab = (10, 7, 13)^T. Ax = b has solution x = (1, 2, 3)^T.
2. Observe that:
[F_{n+2}]   [1 1] [F_{n+1}]
[F_{n+1}] = [1 0] [F_n    ].
Note that
[1 1]^n   [F_{n+1}  F_n    ]
[1 0]   = [F_n      F_{n-1}].
This can be proven by induction. Then our expression ...
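The identity is easy to spot-check numerically for a particular n:

```python
# Check [[1,1],[1,0]]^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]] for n = 10.

def matmul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def matpow(M, n):
    R = [[1, 0], [0, 1]]                 # 2x2 identity
    for _ in range(n):
        R = matmul(R, M)
    return R

def fib(n):                              # F_1 = F_2 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 10
P = matpow([[1, 1], [1, 0]], n)
assert P == [[fib(n + 1), fib(n)], [fib(n), fib(n - 1)]]
```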
Today we'll be taking a much more rigorous look at probability.
Recall that the sample space (denoted by Ω) is the set of all possible outcomes. Consider for example Ω = [0, 1]; then the probability of ...
An important result is that for aperiodic, irreducible Markov chains, there is a unique stationary distribution π that solves π = πP. In this case the invariant distribution and the limiting distribution coincide.
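A small sketch of both facts, using a two-state chain made up here: repeatedly applying π ← πP from any starting distribution converges to the π that solves π = πP.

```python
# Power iteration on a small aperiodic, irreducible transition matrix P.
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [1.0, 0.0]                  # any starting distribution works
for _ in range(200):
    pi = [pi[0]*P[0][0] + pi[1]*P[1][0],
          pi[0]*P[0][1] + pi[1]*P[1][1]]

# The stationary distribution solving pi = pi P is (5/6, 1/6) for this chain.
assert abs(pi[0] - 5/6) < 1e-9 and abs(pi[1] - 1/6) < 1e-9
assert abs(sum(pi) - 1.0) < 1e-9
```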
MSM502: Linear Algebra
Date and location TBA
Instructor: Leon Cui ([email protected]) Phone: 908-240-5288
Course Objectives: The goal of this course is to give an introduction to linear algebra. A ...
1. (1pt) If A is a 64 x 17 matrix of rank 11, how many independent vectors x satisfy Ax = 0? How many independent vectors satisfy A^T x = 0?
Proof. Since the rank is 11, there are 11 pivots and so there are 17 - 11 = 6 free ...
Lecture 1
June 29, 2014
Matrices are a bunch of numbers put into an array:
Ex:
A = [6 5 4]
    [3 2 1].
A is a matrix. It has 2 rows and 3 columns, i.e. A is a 2 x 3 matrix.
Notation: A_jk is the entry in the jth row and kth column.