5. For the model in Example 5.10, derive the LM statistic for the test of the hypothesis that μ = 0.
The derivatives of the log-likelihood with μ = 0 imposed are g1 = n x̄/σ² and
g2 = −n/(2σ²) + Σi xi²/(2σ⁴). The
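A numerical sketch of the resulting statistic, assuming the model is an iid sample x ~ N(μ, σ²), so that with μ = 0 imposed the restricted MLE is σ̂² = (1/n)Σi xi² and the LM statistic reduces to g1²/I(μ) = n x̄²/σ̂² (the function name and seed are illustrative):

```python
import numpy as np

def lm_stat_mu0(x):
    # Restricted MLE with mu = 0 imposed: sigma^2_hat = (1/n) sum of x_i^2
    n = len(x)
    s2 = np.mean(x**2)
    g1 = n * np.mean(x) / s2     # score w.r.t. mu, evaluated at the restricted MLE
    info_mu = n / s2             # Fisher information for mu
    return g1**2 / info_mu       # LM = n * xbar^2 / sigma^2_hat

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=200)
lm = lm_stat_mu0(x)
# Under H0: mu = 0, LM is asymptotically chi-squared with 1 degree of freedom.
```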
Appendix D
Large Sample Distribution Theory
There are no exercises for Appendix D.
Appendix E
Computation and Optimization
1. Show how to maximize the function
f(θ) = (1/√(2π)) e^(−(θ − c)²/2)
with respect to θ.
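Because ln f(θ) = constant − (θ − c)²/2 is exactly quadratic in θ, Newton's method applied to the log of the function reaches the maximizer θ = c in a single step. A minimal sketch (the value c = 1.5 and the function name are illustrative):

```python
def newton_max(c, theta0=0.0, tol=1e-10, max_iter=50):
    # Maximize f(theta) = exp(-(theta - c)^2 / 2) / sqrt(2*pi) by applying
    # Newton's method to ln f(theta) = const - (theta - c)^2 / 2.
    theta = theta0
    for _ in range(max_iter):
        grad = -(theta - c)      # d ln f / d theta
        hess = -1.0              # d^2 ln f / d theta^2 (constant)
        step = grad / hess
        theta -= step
        if abs(step) < tol:
            break
    return theta

theta_hat = newton_max(c=1.5)    # converges to theta = c in one Newton step
```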
For part (c), we just note that = /(+). For a sample of observations on x, the log-likelihood would be
lnL = n lnθ + ln(1 − θ) Σi xi.
Then,
∂lnL/∂θ = n/θ − Σi xi/(1 − θ).
A solution is obtained by first noting
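The first-order condition n/θ = Σi xi/(1 − θ) can be solved in closed form, giving θ̂ = n/(n + Σi xi) = 1/(1 + x̄). A minimal sketch (the sample size and success probability are illustrative):

```python
import numpy as np

def geometric_mle(x):
    # Solve n/theta = sum(x)/(1 - theta):
    # theta_hat = n / (n + sum(x)) = 1 / (1 + xbar)
    x = np.asarray(x, dtype=float)
    return len(x) / (len(x) + x.sum())

rng = np.random.default_rng(1)
# numpy's geometric counts trials to first success (support 1, 2, ...);
# subtracting 1 gives the number of failures, with pmf p*(1-p)^k on 0, 1, ...
x = rng.geometric(p=0.4, size=5000) - 1
theta_hat = geometric_mle(x)      # should be close to p = 0.4
```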
The joint solution is = 3.2301 and = 2.9354. It might not seem obvious, but we can also derive asymptotic
standard errors for these estimates by constructing them as method of moments estimators. Obs
(a) From the previous problem, Mx(t) = exp[λ(e^t − 1)]. Suppose y is distributed as Poisson with
parameter μ. Then, My(t) = exp[μ(e^t − 1)]. The product of these two moment generating functions is
Mx(t)My(t) = exp[(λ + μ)(e^t − 1)], which is the moment generating function of a Poisson distribution with parameter λ + μ. Therefore, x + y is distributed as Poisson with parameter λ + μ.
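A simulation check of this result, using the fact that a Poisson(λ + μ) variable has mean and variance both equal to λ + μ (the rates λ = 3 and μ = 5 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu = 3.0, 5.0                       # illustrative Poisson parameters
x = rng.poisson(lam, size=200_000)
y = rng.poisson(mu, size=200_000)
s = x + y
# If s is Poisson(lam + mu), its sample mean and variance should both
# be close to lam + mu = 8.0.
print(s.mean(), s.var())
```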
characteristic roots and vectors of AA′. The inverse square root defined in Section B.7.12 would also provide
a method of transforming x to obtain the desired covariance matrix.
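A sketch of the inverse-square-root route, with a hypothetical 2×2 covariance matrix Σ: form Σ^(−1/2) from the characteristic roots and vectors and verify that it transforms x to (approximately) identity covariance. The matrix and seed are illustrative:

```python
import numpy as np

sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])                    # hypothetical covariance matrix
vals, vecs = np.linalg.eigh(sigma)                # characteristic roots and vectors
inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T    # sigma^(-1/2)

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0.0, 0.0], sigma, size=100_000)
z = x @ inv_sqrt.T
# Cov(z) should be approximately the identity matrix.
```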
18. The density of the
(a) x ~ Normal[0, 3²], and −4 < x < 4.
(b) x ~ chi-squared, 8 degrees of freedom, 0 < x < 16.
The inequality given in (3-18) states that Prob[|x − μ| < kσ] ≥ 1 − 1/k². Note that the result is not
informative
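A quick check of case (a), comparing the Chebyshev lower bound (k = 4/3, so 1 − 1/k² = 0.4375) with the exact normal probability, which shows how loose the bound is:

```python
import math

# Chebyshev: Prob[|x - mu| < k*sigma] >= 1 - 1/k^2.
# Case (a): x ~ N(0, 3^2) on the interval -4 < x < 4, so k = 4/3.
mu, sigma = 0.0, 3.0
k = 4.0 / sigma
bound = 1.0 - 1.0 / k**2                         # Chebyshev lower bound = 0.4375
exact = math.erf((4.0 / sigma) / math.sqrt(2))   # exact P(|x| < 4), roughly 0.82
print(bound, exact)
```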
Chapter 25
Models for Event Counts and Duration
Exercises
1. a. Conditional variance in the ZIP model. The essential ingredients that are needed for this derivation
are
E[yi* | yi* > 0, xi] = E[yi*]/Prob[yi* > 0] = λi/(1 − e^(−λi))
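The truncated-at-zero Poisson mean used here, E[y | y > 0] = λ/(1 − e^(−λ)), is easy to verify by simulation (λ = 2 and the sample size are illustrative):

```python
import numpy as np

lam = 2.0                                   # illustrative Poisson rate
rng = np.random.default_rng(4)
y = rng.poisson(lam, size=500_000)
sim_mean = y[y > 0].mean()                  # E[y | y > 0] by simulation
formula = lam / (1 - np.exp(-lam))          # truncated-at-zero Poisson mean
```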
4. Using Theorem 24.5, we have 1 − Φ(z) = 14/35 = .4, z = Φ⁻¹(.6) = .253, λ(z) = φ(z)/[1 − Φ(z)] = .9659,
δ(z) = λ(z)[λ(z) − z] = .6886. The two moment equations are based on the mean and variance of y in the observed data,
5.9746 and 9.86
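These quantities can be reproduced with a short dependency-free computation (the bisection inverse of Φ is just for illustration; the text's .6886 for δ comes from rounding z to .253 before computing λ and δ, while full precision gives about .6882):

```python
import math

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Invert Phi at 0.6 by bisection, keeping the sketch dependency-free.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Phi(mid) < 0.6:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)

lam = phi(z) / (1 - Phi(z))       # inverse Mills ratio lambda(z)
delta = lam * (lam - z)           # delta(z) = lambda(z)[lambda(z) - z]
print(round(z, 3), round(lam, 4), round(delta, 4))   # 0.253 0.9659 0.6882
```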
7. This is similar to Exercise 1. It is simplest to prove it in that framework. Since the model has only a
dummy variable, we can use the same log likelihood as in Exercise 1. But, in this exercise, t
Chapter 21
Time Series Models
There are no exercises or applications in Chapter 21.
Chapter 22
Nonstationary Data
Exercise
1. The autocorrelations are simple to obtain just by multiplying out vt²,
vt−1 = ut−1 + (λ − θ)ut−2 + λ(λ − θ)ut−3 + λ²(λ − θ)ut−4 + … .
Therefore, the middle term is zero and the third is simply σu². Thus,
Cov[vt, vt−1] = σu²{[λ(1 + θ² − 2λθ)]/(1 − λ²) − θ} = σu²[(λ − θ)(1 − λθ)/(1 − λ²)].
For lags greater
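A simulation check of the first autocovariance, assuming the moving-average expansion above corresponds to the ARMA(1,1) process (1 − λL)vt = (1 − θL)ut; the parameter values below are illustrative:

```python
import numpy as np

# Check Cov[v_t, v_{t-1}] = sigma_u^2 (lam - th)(1 - lam*th)/(1 - lam^2)
lam, th, sigma_u = 0.7, 0.3, 1.0
rng = np.random.default_rng(5)
T = 200_000
u = rng.normal(0.0, sigma_u, size=T)
v = np.zeros(T)
for t in range(1, T):
    v[t] = lam * v[t - 1] + u[t] - th * u[t - 1]   # ARMA(1,1) recursion

gamma1_sim = np.cov(v[1:], v[:-1])[0, 1]
gamma1_formula = sigma_u**2 * (lam - th) * (1 - lam * th) / (1 - lam**2)
```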
11. The asymptotic variance of the MLE is, in fact, equal to the Cramer-Rao Lower Bound for the variance
of a consistent, asymptotically normally distributed estimator, so this completes the argument.
∂logL/∂α = n/α + Σi log xi − λ Σi (log xi)xi^α.
Since the first likelihood equation implies that at the maximum, λ̂ = n / Σi xi^α, one approach would be to
scan over the range of α and compute the implied λ̂.
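A grid-search sketch of this approach, assuming the Weibull density f(x) = αλ x^(α−1) e^(−λx^α), so that concentrating λ out gives λ̂(α) = n/Σi xi^α (the data, grid, and seed are illustrative):

```python
import numpy as np

def concentrated_loglik(alpha, x):
    # Concentrate lambda out of the Weibull log-likelihood:
    # lambda_hat(alpha) = n / sum(x_i^alpha), then evaluate logL there.
    n = len(x)
    xa = x**alpha
    lam = n / xa.sum()
    return (n * np.log(alpha) + n * np.log(lam)
            + (alpha - 1) * np.log(x).sum() - lam * xa.sum())

rng = np.random.default_rng(6)
x = rng.weibull(2.0, size=2000)          # true shape alpha = 2, lambda = 1
grid = np.linspace(0.5, 4.0, 351)        # scan over the range of alpha
ll = [concentrated_loglik(a, x) for a in grid]
alpha_hat = grid[int(np.argmax(ll))]     # should be close to 2
```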
3. a. The log likelihood for sampling from the normal distribution is
logL = (−1/2)[n log 2π + n log σ² + (1/σ²) Σi (xi − μ)²].
We can write the summation in the last term as Σi xi² + nμ² − 2μ Σi xi. Thus, it is clear that the
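The point of the rewriting is that the data enter logL only through n, Σi xi, and Σi xi². A quick check (the parameter values are illustrative):

```python
import numpy as np

def normal_loglik(mu, sigma2, n, sum_x, sum_x2):
    # logL depends on the data only through n, sum(x), and sum(x^2):
    # sum((x - mu)^2) = sum_x2 + n*mu^2 - 2*mu*sum_x
    ss = sum_x2 + n * mu**2 - 2 * mu * sum_x
    return -0.5 * (n * np.log(2 * np.pi) + n * np.log(sigma2) + ss / sigma2)

rng = np.random.default_rng(7)
x = rng.normal(1.0, 2.0, size=50)
direct = -0.5 * (len(x) * np.log(2 * np.pi) + len(x) * np.log(4.0)
                 + ((x - 1.0) ** 2).sum() / 4.0)
via_sums = normal_loglik(1.0, 4.0, len(x), x.sum(), (x**2).sum())
# direct and via_sums agree: the two sums are sufficient statistics.
```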
Application
?=
? Application 13.1 - Simultaneous Equations
?=
? Read the data
? For convenience, rename the variables so they correspond
? to the example in the text.
sample ; 1 - 204 $
create ; ct=re
1 "known" (identified), the only remaining unknown is 2, which is therefore identified. With 1 and 2 in
hand, may be deduced from 2. With 2 and in hand, 22 is the residual variance in the equation (y2
The mean squared error of the OLS estimator is the variance plus the squared bias,
M(b|β) = (σ²/n)QXX⁻¹ + QXX⁻¹γγ′QXX⁻¹.
the mean squared error of the 2SLS estimator equals its variance. For OLS to be more
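The variance-plus-squared-bias decomposition can be illustrated by Monte Carlo in a hypothetical scalar model with an endogenous regressor (all numbers below are illustrative, not from the text):

```python
import numpy as np

# Hypothetical scalar model: y = beta*x + eps, with x correlated with eps,
# so OLS is biased. Verify MSE(b) = Var(b) + Bias(b)^2 across replications.
rng = np.random.default_rng(8)
beta, n, reps = 1.0, 100, 5000
b = np.empty(reps)
for r in range(reps):
    e = rng.normal(size=n)
    x = rng.normal(size=n) + 0.5 * e      # endogenous regressor
    y = beta * x + e
    b[r] = (x @ y) / (x @ x)              # OLS slope

mse = np.mean((b - beta) ** 2)
decomp = b.var() + (b.mean() - beta) ** 2     # variance + squared bias
```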