3.
a. The log likelihood for sampling from the normal distribution is

log L = −(1/2)[n log 2π + n log σ² + (1/σ²) Σ_i (x_i − μ)²].
Write the summation in the last term as Σ_i x_i² + nμ² − 2μ Σ_i x_i.
Thus, it is clear that the log likelihood has the exponential-family form, and the sufficient statistics are the sum and the sum of squares of the observations.
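As a quick numerical check (my own sketch, not part of the original solution; data and names are mine), the log likelihood evaluated from the raw data should equal the value computed from n, Σ_i x_i, and Σ_i x_i² alone:

```python
import math
import random

def loglik_raw(x, mu, sigma2):
    # log L evaluated directly from the data
    n = len(x)
    return -0.5 * (n * math.log(2 * math.pi) + n * math.log(sigma2)
                   + sum((xi - mu) ** 2 for xi in x) / sigma2)

def loglik_suff(n, s1, s2, mu, sigma2):
    # the same log L from the sufficient statistics only, using
    # sum_i (x_i - mu)^2 = s2 + n*mu^2 - 2*mu*s1
    return -0.5 * (n * math.log(2 * math.pi) + n * math.log(sigma2)
                   + (s2 + n * mu ** 2 - 2 * mu * s1) / sigma2)

random.seed(0)
x = [random.gauss(5.0, 2.0) for _ in range(100)]
a = loglik_raw(x, 4.8, 3.5)
b = loglik_suff(len(x), sum(x), sum(xi ** 2 for xi in x), 4.8, 3.5)
print(a, b)  # the two evaluations coincide
```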
b. The log of the density for the Weibull distribution is

log f(x_i) = log α + log β + (β − 1) log x_i − α x_i^β.
The log likelihood is found by summing these functions over the observations. The last term, −α Σ_i x_i^β, does not factor in the fashion needed to produce an exponential family, because x_i^β depends on both the data and the parameter β. There is therefore no sufficient statistic of fixed dimension for this distribution.
c. The log of the density for the mixture distribution is

log f(x_i, y_i) = log θ − (β + θ)y_i + x_i log β + x_i log y_i − log(x_i!).

This is an exponential family; the sufficient statistics are Σ_i y_i and Σ_i x_i.
4.
The question is (deliberately) misleading. We showed in Chapter 8 and in this chapter that in the
classical regression model with heteroscedasticity, the OLS estimator is the GMM estimator.
The
asymptotic covariance matrix of the OLS estimator is given in Section 8.2. The estimators of the asymptotic covariance matrix are s²(X′X)⁻¹ for OLS and the White estimator for GMM.
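A minimal simulated sketch of the contrast (data, seed, and helper names are mine, not the text's): the classical s²(X′X)⁻¹ versus the White sandwich estimator, for a one-regressor model with heteroscedastic errors.

```python
import random

def inv2(A):
    # inverse of a 2x2 matrix
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mmul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(1)
n = 200
X = [[1.0, random.uniform(0.0, 5.0)] for _ in range(n)]
# heteroscedastic errors: the standard deviation grows with the regressor
y = [1.0 + 0.5 * xi[1] + random.gauss(0.0, 0.5 + 0.3 * xi[1]) for xi in X]

XtX = [[sum(xi[i] * xi[j] for xi in X) for j in range(2)] for i in range(2)]
Xty = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(2)]
XtX_inv = inv2(XtX)
b = [XtX_inv[i][0] * Xty[0] + XtX_inv[i][1] * Xty[1] for i in range(2)]

# residuals and the classical estimator s^2 (X'X)^{-1}
e = [yi - b[0] * xi[0] - b[1] * xi[1] for xi, yi in zip(X, y)]
s2 = sum(ei ** 2 for ei in e) / (n - 2)
V_ols = [[s2 * XtX_inv[i][j] for j in range(2)] for i in range(2)]

# White estimator: (X'X)^{-1} [sum_i e_i^2 x_i x_i'] (X'X)^{-1}
meat = [[sum(ei ** 2 * xi[i] * xi[j] for xi, ei in zip(X, e))
         for j in range(2)] for i in range(2)]
V_white = mmul(mmul(XtX_inv, meat), XtX_inv)

print(V_ols[1][1], V_white[1][1])  # the two slope-variance estimates
```

Both are computed from the same OLS residuals; under heteroscedasticity only the White form is consistent for the true sampling variance.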
5. The GMM estimator would be chosen to minimize the criterion

q = n m′Wm,

where W is the weighting matrix and m is the empirical moment,

m = (1/n) Σ_i (y_i − Φ(x_i′β)) x_i.
For the first pass, we'll use W = I and just minimize the sum of squares. This provides an initial set of estimates that can be used to compute the optimal weighting matrix.
With this first-round estimate, we compute

W = [(1/n²) Σ_i (y_i − Φ(x_i′β))² x_i x_i′]⁻¹,
then return to the optimization problem to find the optimal estimator.
The asymptotic covariance matrix is
computed from the first order conditions for the optimization.
The matrix of derivatives is

G = ∂m/∂β′ = −(1/n) Σ_i φ(x_i′β) x_i x_i′.
The estimator of the asymptotic covariance matrix will be

V = (1/n)[G′WG]⁻¹.
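The two-step recipe above can be sketched with a single slope coefficient, so every matrix collapses to a scalar (data are simulated and all names are assumptions of mine). With one moment per parameter the model is just identified, so the estimator solves m(β) = 0 for any W, and the weighting matrix enters only through the covariance estimate:

```python
import random
from statistics import NormalDist

nd = NormalDist()  # standard normal: nd.cdf is Phi, nd.pdf is phi
random.seed(2)
n = 500
x = [random.uniform(-2.0, 2.0) for _ in range(n)]
beta_true = 0.8
# probit data-generating process
y = [1.0 if random.random() < nd.cdf(xi * beta_true) else 0.0 for xi in x]

def m(beta):
    # empirical moment (1/n) sum_i (y_i - Phi(x_i beta)) x_i
    return sum((yi - nd.cdf(xi * beta)) * xi for xi, yi in zip(x, y)) / n

# m is strictly decreasing in beta, so solve m(beta) = 0 by bisection
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if m(mid) > 0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)

# optimal weight W and derivative G evaluated at the estimate
W = 1.0 / (sum(((yi - nd.cdf(xi * beta_hat)) * xi) ** 2
               for xi, yi in zip(x, y)) / n ** 2)
G = -sum(nd.pdf(xi * beta_hat) * xi ** 2 for xi in x) / n
V = (1.0 / n) / (G * W * G)  # scalar version of (1/n)[G'WG]^{-1}
print(beta_hat, V)
```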
6. This is the comparison between (15-12) and (15-11). The proof can be done by comparing the inverses of the two covariance matrices. Thus, if the claim is correct, the matrix in (15-11) is larger than that in (15-12), or its inverse is smaller. We can ignore the (1/n) as well. We require, then, that

G′Φ⁻¹G > G′WG [G′WΦWG]⁻¹ G′WG.

This holds because, writing D = Φ^(−1/2)G and C = Φ^(1/2)WG, the difference between the two sides is D′[I − C(C′C)⁻¹C′]D, a quadratic form in an idempotent (projection) matrix, which is nonnegative definite.
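A hypothetical numerical spot check of the inequality (all values are mine): with G reduced to a 2×1 vector g (one parameter, two moments), the right-hand side becomes (g′Wg)²/(g′WΦWg), and equality should occur at the optimal weight W = Φ⁻¹.

```python
def inv2(A):
    # inverse of a 2x2 matrix
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

g = [1.0, 2.0]
Phi = [[2.0, 0.5], [0.5, 1.0]]   # positive definite moment covariance
W = [[1.0, 0.2], [0.2, 3.0]]     # an arbitrary symmetric weight

lhs = dot(g, matvec(inv2(Phi), g))                # g' Phi^{-1} g
Wg = matvec(W, g)
rhs = dot(g, Wg) ** 2 / dot(Wg, matvec(Phi, Wg))  # sandwich form
print(lhs, rhs)

# equality at the optimal weight W = Phi^{-1}
Wg_opt = matvec(inv2(Phi), g)
rhs_opt = dot(g, Wg_opt) ** 2 / dot(Wg_opt, matvec(Phi, Wg_opt))
print(lhs, rhs_opt)
```

The inequality here is a Cauchy-Schwarz bound in the Φ inner product, which is the scalar-parameter shadow of the projection argument above.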
7. Suppose that in a sample of 500 observations from a normal distribution with mean μ and standard deviation σ, you are told that 35% of the observations are less than 2.1 and 55% of the observations are less than 3.6.
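The preview ends before the solution to this problem. As a hedged sketch (mine, not the original's), the two percentile statements identify μ and σ through Φ((2.1 − μ)/σ) = 0.35 and Φ((3.6 − μ)/σ) = 0.55, two linear equations in (μ, σ) once the standard normal quantiles are plugged in:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal
# quantile conditions: mu + z_{0.35} * sigma = 2.1, mu + z_{0.55} * sigma = 3.6
z1 = nd.inv_cdf(0.35)
z2 = nd.inv_cdf(0.55)
sigma = (3.6 - 2.1) / (z2 - z1)
mu = 2.1 - z1 * sigma
print(mu, sigma)
```

The sample size of 500 would matter only for the standard errors of these estimates, which this sketch does not compute.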
This note was uploaded on 11/13/2011 for the course ECE 4105 taught by Professor Dr.fang during the Spring '10 term at University of Florida.