is the case, $\{\vec{u}_1, \dots, \vec{u}_m\}$ is said to be \emph{linearly dependent}.

We want to focus on sets $\{\vec{u}_1, \dots, \vec{u}_m\}$ that have no such redundancy. Such a set of vectors is called \emph{linearly independent}; formally, this means that the only set of scalars $\alpha_i$, $i = 1:m$, such that
$$\sum_{n=1}^{m} \alpha_n \vec{u}_n = \vec{0}$$
is $\alpha_i = 0$ for all $i$.
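In matrix terms, this condition can be checked computationally: stack the vectors as the columns of a matrix, and the set is independent iff the rank equals the number of vectors, i.e. iff the homogeneous system has only the trivial solution. A minimal sketch in Python (assuming NumPy; the vectors here are illustrative choices, not from the notes):

    import numpy as np

    # Stack the candidate vectors as the columns of a matrix M.
    u1 = np.array([1, 1, 0])
    u2 = np.array([1, -1, 0])
    u3 = np.array([2, 0, 0])          # note u3 = u1 + u2
    M = np.column_stack([u1, u2, u3])

    # Independent iff rank(M) equals the number of vectors, i.e. the
    # system sum_n alpha_n u_n = 0 forces every alpha_n = 0.
    print(np.linalg.matrix_rank(M) == M.shape[1])   # False: this set is dependent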
Definition 5. A set $\{\vec{u}_1, \dots, \vec{u}_m\}$ of vectors in $W$ forms a \emph{basis for $W$} iff these two properties are true:

1. $\operatorname{span}(\vec{u}_1, \dots, \vec{u}_m) = W$;

2. $\{\vec{u}_1, \dots, \vec{u}_m\}$ is linearly independent.
In other words, “$\{\vec{u}_1, \dots, \vec{u}_m\}$ is a basis for $W$” means that $\vec{x} = \sum_{n=1}^{m} \alpha_n \vec{u}_n$ is uniquely solvable for any $\vec{x} \in W$.
Examples 3. 1. \emph{Standard basis} for both $\mathbb{R}^3$ and $\mathbb{C}^3$:
$$e_1 = (1, 0, 0)^T, \quad e_2 = (0, 1, 0)^T, \quad e_3 = (0, 0, 1)^T.$$
Check this is a basis: for any $X = (x_1, x_2, x_3)^T$, the system $X = \sum_i \alpha_i e_i$ is uniquely solved by $\alpha_i = x_i$, $i = 1, 2, 3$.
Note that in case $V = \mathbb{C}^3$ (respectively $\mathbb{R}^3$), the scalars are complex (real).
2. $u_1 = (1, 1, 0)^T$, $u_2 = (1, -1, 0)^T$, $u_3 = (1, 1, 1)^T$ is another basis for $\mathbb{R}^3$ and $\mathbb{C}^3$. Why?
Answer by row reduction: for any $X$, to find $A = (\alpha_1, \alpha_2, \alpha_3)^T$ such that
$$X = (x_1, x_2, x_3)^T = \alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3,$$
we use the same trick as before. We write $U = [u_1, u_2, u_3]$ and solve $X = UA$ for the $3$ unknown components of $A$. By row reducing the augmented matrix $[U \mid X]$, we can show the system is uniquely solvable for any $X$.
More succinctly, for any $X \in \mathbb{C}^3$, its coordinates $A \in \mathbb{C}^3$ are computed by $A = U^{-1} X$, where $U^{-1}$ is the inverse of $U$.
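A minimal numerical sketch of this computation (assuming NumPy; the test vector $X$ is an illustrative choice): in practice one solves the system $UA = X$ directly, which is equivalent to applying $U^{-1}$ but avoids forming the inverse explicitly.

    import numpy as np

    # Columns of U are the basis vectors u1, u2, u3 from this example.
    U = np.array([[1.0,  1.0, 1.0],
                  [1.0, -1.0, 1.0],
                  [0.0,  0.0, 1.0]])
    X = np.array([2.0, 0.0, 1.0])     # an arbitrary test vector

    A = np.linalg.solve(U, X)         # solves U A = X, i.e. A = U^{-1} X
    assert np.allclose(U @ A, X)      # the coordinates reproduce X
    print(A)                          # [0. 1. 1.]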
Algorithm 1. (Computing $U^{-1}$): use Gauss-Jordan elimination on the fully augmented $3 \times 6$ matrix:
$$[U \mid I] \to [I \mid U^{-1}].$$
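A minimal sketch of Algorithm 1 in Python (assuming NumPy; the helper name gauss_jordan_inverse and the partial pivoting step are choices of this sketch, not part of the algorithm as stated):

    import numpy as np

    def gauss_jordan_inverse(U):
        """Row reduce the fully augmented matrix [U | I] to [I | U^{-1}]."""
        n = U.shape[0]
        M = np.hstack([U.astype(float), np.eye(n)])   # the n x 2n matrix [U | I]
        for j in range(n):
            # Partial pivoting: bring the largest entry of column j to the pivot row.
            p = j + np.argmax(np.abs(M[j:, j]))
            if np.isclose(M[p, j], 0.0):
                raise ValueError("U is singular")
            M[[j, p]] = M[[p, j]]             # swap rows j and p
            M[j] /= M[j, j]                   # normalize the pivot row
            for i in range(n):                # clear column j in all other rows
                if i != j:
                    M[i] -= M[i, j] * M[j]
        return M[:, n:]                       # right block is U^{-1}

Applying this to the $3 \times 3$ matrix in the exercise below gives a quick check on the hand computation; the result should agree with np.linalg.inv(U).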
Exercise: Review Algorithm 1 and compute the inverse of
$$U = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

3. $V = C[0, 1]$: The three functions $\{\cos x, \sin x, 1\}$ are linearly independent, which means they form a basis for their span.
On the contrary, the three functions $\{\cos^2 x, \sin^2 x, 1\}$ are linearly dependent because of the identity
$$1 \cdot \cos^2 x + 1 \cdot \sin^2 x + (-1) \cdot 1 = 0. \tag{1.5}$$
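The dependence (1.5) can also be observed numerically (a sketch assuming NumPy; the grid is an illustrative choice): sampling the three functions at many points gives a matrix whose rank is only $2$, since the third column equals the sum of the first two.

    import numpy as np

    # Sample cos^2 x, sin^2 x, and 1 at 100 points of [0, 1].
    x = np.linspace(0.0, 1.0, 100)
    F = np.column_stack([np.cos(x)**2, np.sin(x)**2, np.ones_like(x)])

    # Rank 2 (not 3) reflects the identity cos^2 x + sin^2 x - 1 = 0.
    print(np.linalg.matrix_rank(F))   # 2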
4. The functions $v_n(x) = e^{inx}$ for $n = -N : N$ form a basis for the vector space $W_N$.
5. The pair of functions $J_n$, $Y_n$ form a basis for the solution space of (1.1). We sometimes say they form a \emph{fundamental set of solutions}.
Definition 6. 1. $V$ has dimension $N$ means it has a set of $N$ linearly independent vectors, but no set of $N + 1$ linearly independent vectors.

2. $V$ is \emph{infinite dimensional} means it has a set of $N$ linearly independent vectors for every $N$.
Theorem 1. (“Dimension theorem”) Any two bases $\{\vec{u}_1, \dots, \vec{u}_m\}$ and $\{\vec{v}_1, \dots, \vec{v}_n\}$ for a vector space $V$ with finite dimension $N$ have the same number of vectors: $m = n = N$.
Examples 4. 1. $\mathbb{R}^3$ has dimension $3$. So does $\mathbb{C}^3$.

2. $V = C[0, 1]$ is infinite dimensional because one can show that $\{1, x, x^2, \dots, x^m\}$ is linearly independent for all $m$. (Homework! A numerical illustration follows this list.)
3. The solution space of (1.1) is two-dimensional.
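For item 2, here is a numerical illustration only, not a proof (a sketch assuming NumPy; the degree $m = 5$ and the grid are illustrative choices): sampling the monomials $1, x, \dots, x^m$ at distinct points yields a Vandermonde-type matrix of full column rank $m + 1$.

    import numpy as np

    # Sample 1, x, x^2, ..., x^m at 50 distinct points of [0, 1].
    m = 5
    x = np.linspace(0.0, 1.0, 50)
    P = np.column_stack([x**k for k in range(m + 1)])

    # Full column rank: no nontrivial combination of the monomials vanishes.
    print(np.linalg.matrix_rank(P) == m + 1)   # True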
Given a specific basis $\{\vec{u}_n\}_{n=1}^{N}$ of an $N$-dimensional vector space $V$, every $\vec{x} \in V$ can be uniquely expressed in the form
$$\vec{x} = \sum_{n=1}^{N} \alpha_n \vec{u}_n.$$
The coefficients $\alpha_1, \dots, \alpha_N$ form a column $N$-vector $a = (\alpha_1, \dots, \alpha_N)^T \in \mathbb{C}^N$, called the \emph{coordinate $N$-vector} of $\vec{x}$ relative to the basis $\{\vec{u}\}$, or the $u$…
