Lecture 3
3.1 Method of moments.
Consider a family of distributions $\{\mathbb{P}_\theta : \theta \in \Theta\}$ and consider a sample $X = (X_1, \ldots, X_n)$ of i.i.d. random variables with distribution $\mathbb{P}_{\theta_0}$, where $\theta_0 \in \Theta$.
We assume that $\theta_0$ is unknown and we want to construct an estimate $\hat{\theta} = \hat{\theta}_n(X_1, \ldots, X_n)$ of $\theta_0$ based on the sample $X$.
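As a concrete instance of this setup (a hypothetical example, not one from the lecture), consider the family $\{\text{Bernoulli}(\theta) : \theta \in [0,1]\}$. Since $\mathbb{E} X_1 = \theta$ for a Bernoulli$(\theta)$ variable, the sample mean is a natural estimate of $\theta_0$:

```python
# Sketch: estimating the parameter of a Bernoulli family from an i.i.d. sample.
# theta0 is unknown in practice; here we fix it only to simulate data.
import random

random.seed(0)

theta0 = 0.3
n = 10000
sample = [1 if random.random() < theta0 else 0 for _ in range(n)]

# Estimate: the sample mean, since E[X1] = theta for Bernoulli(theta).
theta_hat = sum(sample) / n
print(theta_hat)  # close to theta0 = 0.3
```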
Let us recall some standard facts from probability that will often be used throughout this course.
• Law of Large Numbers (LLN):

If the distribution of the i.i.d. sample $X_1, \ldots, X_n$ is such that $X_1$ has finite expectation, i.e. $\mathbb{E}|X_1| < \infty$, then the sample average
$$\bar{X}_n = \frac{X_1 + \ldots + X_n}{n} \to \mathbb{E} X_1$$
converges to the expectation in some sense, for example, for any arbitrarily small $\varepsilon > 0$,
$$\mathbb{P}\bigl(|\bar{X}_n - \mathbb{E} X_1| > \varepsilon\bigr) \to 0 \text{ as } n \to \infty.$$
Convergence in the above sense is called convergence in probability.
Note. Whenever we use the LLN below we will simply say that the average converges to the expectation and will not mention in what sense. More mathematically inclined students are welcome to carry out these steps more rigorously, especially when we use the LLN in combination with the Central Limit Theorem.
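A minimal simulation sketch (not part of the lecture) can make the LLN concrete. For i.i.d. Exponential$(1)$ variables we have $\mathbb{E} X_1 = 1$, so the sample average should approach $1$ as $n$ grows:

```python
# Sketch: LLN for i.i.d. Exponential(1) draws, whose expectation is 1.
import random

random.seed(0)

def sample_average(n):
    """Average of n i.i.d. Exponential(1) draws."""
    return sum(random.expovariate(1.0) for _ in range(n)) / n

# As n grows, the averages cluster ever more tightly around E[X1] = 1.
for n in (10, 1000, 100000):
    print(n, sample_average(n))
```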
• Central Limit Theorem (CLT):

If the distribution of the i.i.d. sample $X_1, \ldots, X_n$ is such that $X_1$ has finite expectation and variance, i.e. $\mathbb{E}|X_1| < \infty$ and $\mathrm{Var}(X_1) < \infty$, then
$$\sqrt{n}\,(\bar{X}_n - \mathbb{E} X_1) \xrightarrow{d} N(0, \sigma^2),$$
where $\sigma^2 = \mathrm{Var}(X_1)$ and $\xrightarrow{d}$ denotes convergence in distribution.
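The CLT can likewise be checked by simulation (a sketch of my own, not from the lecture). For i.i.d. Uniform$(0,1)$ variables, $\mathbb{E} X_1 = 1/2$ and $\mathrm{Var}(X_1) = 1/12$, so the empirical variance of $\sqrt{n}(\bar{X}_n - 1/2)$ over many replications should be close to $1/12 \approx 0.0833$:

```python
# Sketch: CLT for i.i.d. Uniform(0,1) samples.
import math
import random

random.seed(0)

def standardized_mean(n):
    """Compute sqrt(n) * (sample mean - true mean) for Uniform(0,1)."""
    xs = [random.random() for _ in range(n)]
    return math.sqrt(n) * (sum(xs) / n - 0.5)

# Over many replications, the standardized means should look roughly
# N(0, 1/12): mean near 0, variance near Var(X1) = 1/12.
reps = [standardized_mean(500) for _ in range(2000)]
mean = sum(reps) / len(reps)
var = sum((r - mean) ** 2 for r in reps) / len(reps)
print(mean, var)
```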