So far: we have shown that for the score function $U(\theta) = \partial \ell / \partial\theta$ we have $E_\theta\{U(\theta)\} = 0$.

Next: compute $\operatorname{Var}_\theta\{U(\theta)\}$ for an iid sample $X_1,\dots,X_n$ from $f(x;\theta)$. First differentiate the identity $\int U(\theta) f(x;\theta)\,dx = 0$ once more with respect to $\theta$:
$$\int \frac{\partial U}{\partial\theta}\, f(x;\theta)\,dx + \int U(\theta)\,\frac{\partial \log f(x;\theta)}{\partial\theta}\, f(x;\theta)\,dx = 0 .$$
Take expected values to get
$$E_\theta\!\left\{\frac{\partial U}{\partial\theta}\right\} + E_\theta\{U^2(\theta)\} = 0, \qquad\text{so}\qquad \operatorname{Var}_\theta\{U(\theta)\} = -E_\theta\!\left\{\frac{\partial U}{\partial\theta}\right\} .$$
Definition: The Fisher information is
$$I(\theta) = \operatorname{Var}_\theta\{U(\theta)\} = -E_\theta\!\left\{\frac{\partial U}{\partial\theta}\right\} .$$
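These identities are easy to check by simulation. The following sketch is my own illustration (not part of the notes): for the $N(\theta,1)$ family the score of a sample is $U(\theta) = \sum_i (X_i - \theta)$, the score has mean 0, and its variance is the Fisher information $I(\theta) = n$.

```python
import random
import statistics

random.seed(0)

theta, n, reps = 2.0, 50, 5000
scores = []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    # Score for N(theta, 1): U(theta) = sum_i (X_i - theta).
    scores.append(sum(x - theta for x in sample))

mean_u = statistics.fmean(scores)   # should be near 0
var_u = statistics.pvariance(scores)  # should be near I(theta) = n = 50
print(mean_u, var_u)
```

The simulated mean of $U$ is near 0 and its variance is near $n$, matching the definition.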
General properties of likelihoods

In general, if $f(x;\theta)$ denotes the joint density of all the data then
$$\int f(x;\theta)\,dx = 1 \quad\text{for all } \theta .$$
Differentiating this identity with respect to $\theta$ (assuming we may differentiate under the integral sign) we have
$$\int \frac{\partial f(x;\theta)}{\partial\theta}\,dx = \int U(\theta)\, f(x;\theta)\,dx = E_\theta\{U(\theta)\} = 0 .$$
This holds for any model with a smooth enough likelihood, not just for iid samples.
Cramér-Rao Inequality

Suppose $T$ is an unbiased estimate of $\phi(\theta)$ (last time we did only $\phi(\theta) = \theta$). We can derive some information from the identity
$$E_\theta(T) = \int T(x)\, f(x;\theta)\,dx = \phi(\theta) .$$
Differentiate both sides to get
$$\phi'(\theta) = \int T(x)\,\frac{\partial f(x;\theta)}{\partial\theta}\,dx = \int T(x)\, U(\theta)\, f(x;\theta)\,dx = E_\theta\{T\,U(\theta)\},$$
where $U$ is the score function. Since we already know that the score has mean 0 we see that
$$\phi'(\theta) = \operatorname{Cov}_\theta\{T, U(\theta)\},$$
and the Cauchy-Schwarz inequality then gives the Cramér-Rao lower bound
$$\operatorname{Var}_\theta(T) \ge \frac{[\phi'(\theta)]^2}{I(\theta)} .$$
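A concrete check of the bound (an illustration I am adding, using a family where the bound is attained): for $X_1,\dots,X_n$ iid exponential with mean $\mu$, the per-observation information is $I_1(\mu) = 1/\mu^2$, so the CRLB for unbiased estimates of $\mu$ is $\mu^2/n$. The unbiased estimator $\bar X$ has variance exactly $\mu^2/n$, i.e. it attains the bound.

```python
import random
import statistics

random.seed(1)

# Exponential with mean mu: the CRLB for unbiased estimates of mu is mu^2 / n,
# and X-bar attains it.
mu, n, reps = 3.0, 40, 4000
xbars = [statistics.fmean(random.expovariate(1.0 / mu) for _ in range(n))
         for _ in range(reps)]

var_xbar = statistics.pvariance(xbars)
crlb = mu ** 2 / n  # = 0.225
print(var_xbar, crlb)
```

The simulated variance of $\bar X$ matches the bound closely.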
Maximum Likelihood Estimation
To find an MLE we maximize L. This is a typical function maximization problem
which we approach by setting the gradient of L equal to 0 and then checking to see
that the root is a maximum, not a minimum or saddle point.
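This root-finding step can be sketched numerically. The logistic location family is my own choice of example (it has no closed-form mle): its score is $U(\theta) = \sum_i \{2\sigma(X_i - \theta) - 1\}$ with $\sigma$ the sigmoid function, and $U'(\theta) < 0$ everywhere, so the log likelihood is strictly concave and the root of the score equation is guaranteed to be a maximum.

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mle_logistic_location(xs, theta0):
    """Newton's method for the likelihood equation of the logistic location
    family f(x; theta) = e^{-(x-theta)} / (1 + e^{-(x-theta)})^2."""
    theta = theta0
    for _ in range(100):
        s = [sigmoid(x - theta) for x in xs]
        u = sum(2.0 * si - 1.0 for si in s)           # score U(theta)
        du = -2.0 * sum(si * (1.0 - si) for si in s)  # dU/dtheta, negative everywhere
        step = u / du
        theta -= step
        if abs(step) < 1e-10:
            break
    return theta

# Made-up data centred near theta = 1 (normal noise just to get numbers).
xs = [1.0 + random.gauss(0.0, 1.0) for _ in range(200)]
theta_hat = mle_logistic_location(xs, theta0=sum(xs) / len(xs))
print(theta_hat)  # near 1
```

Starting Newton from the sample mean converges in a handful of iterations here because the score is monotone decreasing.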
Estimating Equations
An equation of the form
$$g(X_1,\dots,X_n;\theta) = 0$$
is called an estimating equation; we find an estimate of $\theta$ by solving the equation.

Examples:

1. The likelihood equations:
$$U(\theta) = \frac{\partial \ell(\theta)}{\partial\theta} = 0 .$$

2. The normal equations in a linear regression model with design matrix $X$ and response $Y$:
$$X^T(Y - X\beta) = 0$$
are of the form above with $g(Y;\beta) = X^T(Y - X\beta)$.

3. The method of moments equations (discussed below), such as $\bar X - E_\theta(X_1) = 0$.
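Example 2 can be made concrete. A minimal sketch (the data values are made up) solving the normal equations for simple linear regression $y = \beta_0 + \beta_1 x$ by hand:

```python
# Hypothetical data for y = b0 + b1*x + error.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]
n = len(xs)

# The normal equations X^T(Y - X beta) = 0 reduce to the 2x2 linear system
#   sum(y)   = b0*n      + b1*sum(x)
#   sum(x*y) = b0*sum(x) + b1*sum(x^2)
sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
det = n * sxx - sx * sx
b0 = (sy * sxx - sx * sxy) / det
b1 = (n * sxy - sx * sy) / det

# The solution makes the residuals orthogonal to both columns of the design matrix.
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
print(b0, b1, sum(resid), sum(x * r for x, r in zip(xs, resid)))
```

The two orthogonality sums printed at the end are zero (up to rounding): that is exactly the estimating equation $X^T(Y - X\beta) = 0$.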
Expectation, moments
We give two definitions of expected values:
Def'n: If $X$ has density $f$ then
$$E\{g(X)\} = \int g(x)\, f(x)\,dx .$$
Def'n: If $X$ has discrete density $f$ then
$$E\{g(X)\} = \sum_x g(x)\, f(x) .$$
Now if $Y = g(X)$ for smooth $g$ and $X$ has density $f_X$ then $Y$ has a density $f_Y$, and
$$E(Y) = \int y\, f_Y(y)\,dy = \int g(x)\, f_X(x)\,dx$$
by the change of variables formula for integration. This is good because the two ways of computing $E\{g(X)\}$ agree, so the definition is consistent and we never actually need to find $f_Y$.
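This agreement is easy to verify numerically. A sketch of my own, with $X \sim N(0,1)$ and $g(x) = x^2$ (so $E\{g(X)\} = 1$): integrating $g \cdot f_X$ and simulating $Y = g(X)$ directly give the same answer, with no need for $f_Y$.

```python
import math
import random

random.seed(3)

def f_x(x):
    # Standard normal density.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def g(x):
    return x * x

# Numerical integral of g(x) f_X(x) over [-8, 8] (Riemann sum; tails negligible).
N = 20000
h = 16.0 / N
integral = sum(g(-8.0 + i * h) * f_x(-8.0 + i * h) for i in range(N + 1)) * h

# Monte Carlo estimate of E[Y] by simulating Y = g(X) directly.
draws = 100000
mc = sum(g(random.gauss(0.0, 1.0)) for _ in range(draws)) / draws

print(integral, mc)  # both close to E[X^2] = 1
```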
Large Sample Theory
Our goal is to study the behaviour of the mle $\hat\theta$ and to develop approximate distribution theory for $\hat\theta$. Here is a summary of the conclusions of the theory:
1. The log likelihood is probably bigger at $\theta_0$, the true value of $\theta$, than at any other single value of $\theta$.

2. The mle $\hat\theta$, which maximizes the log likelihood, is therefore probably close to $\theta_0$.
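Conclusion 1 can be illustrated by simulation (my own sketch): for $N(\theta, 1)$ data with true value $\theta_0$, the log likelihood at $\theta_0$ usually exceeds its value at any other fixed point $\theta_1$ (here $\theta_1 = 0.5$, an arbitrary choice).

```python
import random

random.seed(8)

def loglik(xs, theta):
    # N(theta, 1) log likelihood, up to a constant.
    return sum(-0.5 * (x - theta) ** 2 for x in xs)

theta0, theta1, n, reps = 0.0, 0.5, 50, 2000
wins = 0
for _ in range(reps):
    xs = [random.gauss(theta0, 1.0) for _ in range(n)]
    if loglik(xs, theta0) > loglik(xs, theta1):
        wins += 1

print(wins / reps)  # a large proportion of samples
```

The proportion is high but not 1: the statement is probabilistic, and it sharpens as $n$ grows.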
The Fisher information matrix is
$$I(\theta) = \operatorname{Var}_\theta\{U(\theta)\} = \left[-E_\theta\!\left\{\frac{\partial^2 \ell(\theta)}{\partial\theta_i\,\partial\theta_j}\right\}\right]_{ij} .$$
Theorem: In iid sampling, $I(\theta) = n\,I_1(\theta)$, where $I_1$ is the information in a single observation.
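A simulation sketch of the matrix version (my illustration): for $N(\mu, \sigma^2)$ with $\theta = (\mu, \sigma)$, the per-observation information is $I_1 = \operatorname{diag}(1/\sigma^2,\ 2/\sigma^2)$, so the score vector of a sample of size $n$ has covariance $n\,I_1$ with zero off-diagonal entries.

```python
import random
import statistics

random.seed(4)

mu, sigma, n, reps = 0.0, 2.0, 30, 4000
u_mu, u_sig = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    # Score components for theta = (mu, sigma):
    u_mu.append(sum((x - mu) / sigma ** 2 for x in xs))
    u_sig.append(sum(((x - mu) ** 2 - sigma ** 2) / sigma ** 3 for x in xs))

v_mu = statistics.pvariance(u_mu)    # near n/sigma^2   = 7.5
v_sig = statistics.pvariance(u_sig)  # near 2n/sigma^2  = 15
cross = statistics.fmean(a * b for a, b in zip(u_mu, u_sig))  # near 0
print(v_mu, v_sig, cross)
```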
Examples, then uses

Example: Uniform$[0,\theta]$. We have $X_1,\dots,X_n$ iid with density
$$f(x;\theta) = \frac{1}{\theta}\,\mathbf{1}(0 \le x \le \theta) .$$
We find $\hat\theta = X_{(n)} = \max_i X_i$. This family has the feature that the support of the density, namely $[0,\theta]$, depends on $\theta$. In such families it is common for the standard mle theory, based on setting the score to 0, to fail.
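A simulation sketch (my own) of the non-regular behaviour: the mle $\hat\theta = \max_i X_i$ is not a root of a score equation, and $n(\theta - \hat\theta)$ has an approximate Exponential limit with mean $\theta$, instead of the usual $\sqrt{n}$-normal behaviour.

```python
import random
import statistics

random.seed(5)

theta, n, reps = 1.0, 100, 5000
errs = []
for _ in range(reps):
    m = max(random.uniform(0.0, theta) for _ in range(n))  # the mle
    errs.append(n * (theta - m))  # scaled error, always >= 0

mean_err = statistics.fmean(errs)
print(mean_err)  # near theta = 1, the mean of the Exponential limit
```

Note the error is scaled by $n$, not $\sqrt{n}$: the mle converges at a faster rate in this family.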
Other methods of estimation: Method of Moments
Basic strategy: set sample moments equal to population moments and solve for the
parameters.
Definition: The $k$-th sample moment (about the origin) is
$$\hat\mu_k' = \frac{1}{n}\sum_{i=1}^n X_i^k .$$
The $k$-th population moment is
$$\mu_k' = E_\theta(X^k) .$$
Central moments are
$$\hat\mu_k = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^k \quad\text{and}\quad \mu_k = E_\theta\{(X - \mu)^k\} .$$
If we have $p$ parameters, we set the first $p$ sample moments equal to the corresponding population moments and solve the resulting $p$ equations for the parameters.
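A worked sketch of my own with $p = 2$: for the Gamma$(\alpha, \beta)$ family (shape $\alpha$, scale $\beta$) we have $E X = \alpha\beta$ and $\operatorname{Var} X = \alpha\beta^2$. Setting $\bar X = \alpha\beta$ and $s^2 = \alpha\beta^2$ and solving gives $\hat\alpha = \bar X^2 / s^2$ and $\hat\beta = s^2 / \bar X$.

```python
import random
import statistics

random.seed(6)

# Simulate from Gamma(alpha=3, beta=2) and recover the parameters by
# matching the first two moments.
alpha, beta = 3.0, 2.0
xs = [random.gammavariate(alpha, beta) for _ in range(20000)]

xbar = statistics.fmean(xs)
s2 = statistics.pvariance(xs)
alpha_hat = xbar ** 2 / s2
beta_hat = s2 / xbar
print(alpha_hat, beta_hat)  # near (3, 2)
```

Method of moments estimators are typically easy to compute but less efficient than the mle; they often serve as starting values for Newton's method on the likelihood equations.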
Motivate sufficiency and completeness
What can we do to find UMVUEs when the CRLB is a strict inequality?
Example: Suppose $X$ has a Binomial$(n,p)$ distribution. The score function is
$$U(p) = \frac{X - np}{p(1-p)} .$$
Equality in the Cauchy-Schwarz step requires $T$ to be an affine function of $U(p)$, that is of $X$. Thus the CRLB will be strict unless $T = cX + d$ for some constants $c, d$. If we are trying to estimate $p$ itself, unbiasedness forces $T = X/n$, which attains the bound; for other functions of $p$ no unbiased estimate can attain it.
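The score identities can be checked exactly here (no simulation needed, since the Binomial sample space is finite). The following sketch sums over the Binomial$(n,p)$ pmf to verify $E_p\,U = 0$ and $\operatorname{Var}_p U = n/\{p(1-p)\} = I(p)$:

```python
from math import comb

# Binomial(n, p) score: U(p) = (X - n p) / (p (1 - p)).
n, p = 10, 0.3
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
u = [(k - n * p) / (p * (1 - p)) for k in range(n + 1)]

mean_u = sum(pk * uk for pk, uk in zip(pmf, u))       # exactly 0
var_u = sum(pk * uk ** 2 for pk, uk in zip(pmf, u))   # exactly n / (p(1-p))
print(mean_u, var_u, n / (p * (1 - p)))
```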
Normal samples: Distribution Theory
Theorem 1 Suppose $X_1,\dots,X_n$ are independent $N(\mu,\sigma^2)$ random variables. (That is, each satisfies my definition above in 1 dimension.) Then

1. The sample mean $\bar X$ and the sample variance $s^2$ are independent.

2. $\bar X \sim N(\mu, \sigma^2/n)$.

3. $(n-1)s^2/\sigma^2 = \sum_i (X_i - \bar X)^2/\sigma^2 \sim \chi^2_{n-1}$.

4. $\sqrt{n}(\bar X - \mu)/s \sim t_{n-1}$.

Proof: Let $Z_i = (X_i - \mu)/\sigma$, so $X_i = \mu + \sigma Z_i$. Then $Z_1,\dots,Z_n$ are independent standard normals.
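Parts 1 and 3 can be illustrated by simulation (my own sketch): for $N(0,1)$ samples the pairs $(\bar X, s^2)$ should be uncorrelated, and $E(s^2) = \sigma^2 = 1$ since $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$.

```python
import random
import statistics

random.seed(7)

n, reps = 10, 5000
means, variances = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    means.append(statistics.fmean(xs))
    variances.append(statistics.variance(xs))  # sample variance, divisor n-1

mx = statistics.fmean(means)
mv = statistics.fmean(variances)
# Independence implies zero covariance between xbar and s^2.
cov = statistics.fmean((a - mx) * (b - mv) for a, b in zip(means, variances))
print(cov, mv)  # near 0 and near 1
```

Zero covariance is of course weaker than independence; the full proof rests on the rotation argument the theorem's proof begins above.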
It displays a time series plot of cad from the earliest date to the most recent date. Overall, it shows a decreasing pattern. In detail, the plot can be described in four parts. From September 2015 to February 2016, cad fluctuates between 2.0 and