5. For the model in Example 5.10, derive the LM statistic for the test of the hypothesis that θ = 0.
The derivatives of the log-likelihood with θ = 0 imposed are g = Σ_{i=1}^n x_i / σ² and the corresponding derivative with respect to σ². The estimator for σ² will be obtained by equating the second
Large Sample Distribution Theory
There are no exercises for Appendix D.
Computation and Optimization
1. Show how to maximize the function
f(β) = (1/√(2π)) exp[−(β − c)²/2]
with respect to β for a constant, c, using Newton's method. Show that
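A minimal numerical sketch, assuming the objective is the standard normal kernel f(β) = (1/√(2π)) exp[−(β − c)²/2] (the garbled display appears to show this function). Newton's method applied to log f converges in one step, since log f is exactly quadratic; applied to f itself it converges to the same point:

```python
import math

def newton_max(f1, f2, beta, tol=1e-10, max_iter=50):
    """Generic Newton iteration using first (f1) and second (f2) derivatives."""
    for _ in range(max_iter):
        step = f1(beta) / f2(beta)
        beta -= step
        if abs(step) < tol:
            break
    return beta

c = 2.5  # the constant in f(beta); any value works

# Newton on f itself: f'(b) = -(b - c) f(b), f''(b) = ((b - c)**2 - 1) f(b)
f = lambda b: math.exp(-0.5 * (b - c) ** 2) / math.sqrt(2 * math.pi)
b_f = newton_max(lambda b: -(b - c) * f(b),
                 lambda b: ((b - c) ** 2 - 1) * f(b),
                 beta=2.0)

# Newton on log f: (log f)' = -(b - c), (log f)'' = -1  -> one-step convergence
b_log = newton_max(lambda b: -(b - c), lambda b: -1.0, beta=0.0)

print(b_f, b_log)  # both converge to c = 2.5
```

Both iterations reach the same maximizer, β = c, which is the point of the exercise.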
For part (c), we just note that θ = λ/(1 + λ). For a sample of observations on x, the log-likelihood is
lnL = n lnθ + ln(1 − θ) Σ_{i=1}^n x_i
∂lnL/∂θ = n/θ − Σ_{i=1}^n x_i /(1 − θ).
A solution is obtained by first noting that at the solution, (1 − θ)/θ = x̄ = 1/θ − 1. The solution
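A quick numerical check of this solution, assuming the geometric density f(x) = θ(1 − θ)^x for x = 0, 1, 2, …: the likelihood equation is zeroed by θ̂ = 1/(1 + x̄):

```python
# Check that theta_hat = 1/(1 + xbar) zeroes dlnL/dtheta = n/theta - sum(x)/(1 - theta)
data = [0, 2, 1, 3, 0, 1, 5, 2]          # illustrative sample of counts
n, s = len(data), sum(data)
xbar = s / n
theta_hat = 1.0 / (1.0 + xbar)

score = n / theta_hat - s / (1.0 - theta_hat)
print(theta_hat, score)  # the score is zero at the MLE
```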
The joint solution is = 3.2301 and = 2.9354. It might not seem obvious, but we can also derive asymptotic
standard errors for these estimates by constructing them as method of moments estimators. Observe, first, that
the two estimates are based on moment
(a) From the previous problem, Mx(t) = exp[λ(e^t − 1)]. Suppose y is distributed as Poisson with parameter μ. Then, My(t) = exp[μ(e^t − 1)]. The product of these two moment generating functions is Mx(t)My(t) = exp[λ(e^t − 1)]exp[μ(e^t − 1)] = exp[(λ + μ)(e^t − 1)], which
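The MGF identity implies that the sum of two independent Poissons is Poisson with the summed parameter. This can be verified directly by convolving the two pmfs (λ = 2 and μ = 3 are illustrative values):

```python
import math

def pois_pmf(k, lam):
    # Poisson probability mass function
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, mu = 2.0, 3.0
K = 15  # check the first several support points

for k in range(K):
    # P(x + y = k) by convolution of the two pmfs
    conv = sum(pois_pmf(j, lam) * pois_pmf(k - j, mu) for j in range(k + 1))
    assert abs(conv - pois_pmf(k, lam + mu)) < 1e-12
print("convolution matches Poisson(lam + mu)")
```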
characteristic roots and vectors of AA. The inverse square root defined in Section B.7.12 would also provide
a method of transforming x to obtain the desired covariance matrix.
18. The density of the standard normal distribution, denoted φ(x), is given in
(a) x ~ Normal[0,32], and -4 < x < 4.
(b) x ~ chi-squared, 8 degrees of freedom, 0 < x < 16.
The inequality given in (3-18) states that Prob[|x − μ| ≤ kσ] ≥ 1 − 1/k². Note that the result is not informative if k is less than or equal to 1.
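For part (a), the Chebychev bound can be compared with the exact normal probability. With σ = 3, the interval −4 < x < 4 corresponds to k = 4/3:

```python
from statistics import NormalDist

sigma = 3.0
k = 4.0 / sigma                       # |x| < 4 corresponds to k = 4/3
bound = 1.0 - 1.0 / k ** 2            # Chebychev lower bound = 0.4375
exact = NormalDist(0.0, sigma).cdf(4.0) - NormalDist(0.0, sigma).cdf(-4.0)
print(bound, exact)                   # the exact probability far exceeds the bound
```

As expected, the bound (.4375) is far below the exact probability (about .8176); Chebychev's inequality holds for any distribution and is correspondingly loose.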
(a) The range is
1. For the matrices A =
1 3 3
2 4 1
and B =
2 1 6
4 5 2
compute A′B and B′A, and verify that A′B = (B′A)′.
B′A =
10 22 10
11 23  8
10 26 20
A′B = (B′A)′ =
10 11 10
22 23 26
10  8 20
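The products can be checked mechanically. The entries of A and B below are reconstructed from the displayed answer matrices, so treat them as an assumption:

```python
# A and B reconstructed from the answer matrices -- an assumption, since the
# original display was garbled in extraction.
A = [[1, 3, 3],
     [2, 4, 1]]
B = [[2, 1, 6],
     [4, 5, 2]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    # plain triple-loop matrix product via row/column dot products
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

BtA = matmul(transpose(B), A)    # B'A, a 3x3 matrix
AtB = matmul(transpose(A), B)    # A'B, a 3x3 matrix
print(BtA)                       # [[10, 22, 10], [11, 23, 8], [10, 26, 20]]
assert AtB == transpose(BtA)     # A'B = (B'A)'
```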
2. Prove that tr(AB) = tr(BA) where A and B are any
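The proof is a one-line manipulation of the double sum, with A of order n × K and B of order K × n so that both products are defined:

```latex
\operatorname{tr}(\mathbf{AB})
  = \sum_{i=1}^{n} (\mathbf{AB})_{ii}
  = \sum_{i=1}^{n} \sum_{k=1}^{K} a_{ik} b_{ki}
  = \sum_{k=1}^{K} \sum_{i=1}^{n} b_{ki} a_{ik}
  = \sum_{k=1}^{K} (\mathbf{BA})_{kk}
  = \operatorname{tr}(\mathbf{BA}).
```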
Models for Event Counts and Duration
1. a. Conditional variance in the ZIP model. The essential ingredients that are needed for this derivation are the moments of the Poisson distribution truncated at zero:
E[ y* | y* > 0, xi ] = λi / [1 − exp(−λi)]
Var[ y* | y* > 0, xi ] = E[ y* | y* > 0, xi ] {1 − λi exp(−λi)/[1 − exp(−λi)]}
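Assuming y* is Poisson(λ) truncated at zero, the standard moments are m = λ/(1 − e^(−λ)) and Var = m(1 + λ − m), an equivalent form of the variance expression. A brute-force numeric check (λ = 2 is illustrative):

```python
import math

lam = 2.0                                # illustrative Poisson mean
p0 = math.exp(-lam)
norm = 1.0 - p0                          # P(y > 0)

# Brute-force moments of the zero-truncated distribution
pmf = lambda k: math.exp(-lam) * lam ** k / math.factorial(k) / norm
mean = sum(k * pmf(k) for k in range(1, 60))
var = sum(k * k * pmf(k) for k in range(1, 60)) - mean ** 2

m = lam / norm                           # closed-form truncated mean
v = m * (1.0 + lam - m)                  # closed-form truncated variance
print(mean, m, var, v)
```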
4. Using Theorem 24.5, we have 1 − Φ(z) = 14/35 = .4, z = Φ⁻¹(.6) = .253, λ(z) = φ(z)/[1 − Φ(z)] = .9659, δ(z) = λ(z)[λ(z) − z] = .6886. The two moment equations are based on the mean and variance of y in the observed data, 5.9746 and 9.869, respectively. The equations would be 5.9746 = μ + σ(.9659)
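The constants can be reproduced from the standard normal cdf and pdf; reading the printed .9659 as the hazard λ = φ(z)/[1 − Φ(z)] and .6886 as δ = λ(λ − z) is an assumption, but both values match:

```python
import math
from statistics import NormalDist

N = NormalDist()
z = N.inv_cdf(0.6)                      # about .2533
phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
lam = phi / (1.0 - N.cdf(z))            # hazard: about .9659
delta = lam * (lam - z)                 # about .6882 (.6886 with z rounded to .253)
print(z, lam, delta)
```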
7. This is similar to Exercise 1. It is simplest to prove it in that framework. Since the model has only a
dummy variable, we can use the same log likelihood as in Exercise 1. But, in this exercise, there are no
observations in the cell (y=1,x=0). The res
Time Series Models
There are no exercises or applications in Chapter 21.
1. The autocorrelations are simple to obtain just by multiplying out vt², vtvt−1 and so on. The autocovariances are 1 + θ1² + θ2², −
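Assuming vt = εt − θ1εt−1 − θ2εt−2 with unit innovation variance, the autocovariances γ0 = 1 + θ1² + θ2², γ1 = −θ1(1 − θ2), γ2 = −θ2 follow from dot products of the lag-coefficient vector with shifted copies of itself:

```python
# Coefficients of v_t on (eps_t, eps_{t-1}, eps_{t-2}); innovations iid with var 1
theta1, theta2 = 0.6, -0.3              # illustrative values

c = [1.0, -theta1, -theta2]

def autocov(lag):
    # gamma(lag) = sum of products of coefficients offset by `lag`
    return sum(c[i] * c[i + lag] for i in range(len(c) - lag))

g0, g1, g2 = autocov(0), autocov(1), autocov(2)
print(g0, g1, g2)
assert abs(g0 - (1 + theta1**2 + theta2**2)) < 1e-12
assert abs(g1 - (-theta1 * (1 - theta2))) < 1e-12
assert abs(g2 - (-theta2)) < 1e-12
```

Autocovariances at lags beyond 2 are zero, since the coefficient vectors no longer overlap.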
| Prob[ChiSqd > value] =
| Hosmer-Lemeshow chi-squared = 23.44388
| P-value= .00284 with deg.fr. =
g. The restricted log likelihood given with the initial results equals -18019.55. This is the log
likelihood for a model that contains
11. The asymptotic variance of the MLE is, in fact, equal to the Cramer-Rao Lower Bound for the variance
of a consistent, asymptotically normally distributed estimator, so this completes the argument.
In Example 4.9, we proposed a regression with a gamma
∂logL/∂c = n/c + Σ_{i=1}^n log xi − λ Σ_{i=1}^n (log xi) xi^c.
Since the first likelihood equation implies that at the maximum, λ = n / Σ_{i=1}^n xi^c, one approach would be to scan over the range of c and compute the implied value of λ. Two practical complications are the allow
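The scanning idea can be sketched by concentrating one parameter out of the likelihood. Assuming a Weibull-type density f(x) = cλx^(c−1)e^(−λx^c), which is consistent with the two likelihood equations as printed (λ = n/Σ xi^c at the maximum), λ is replaced by its implied value and the profile is scanned over c:

```python
import math
import random

random.seed(42)
c_true, lam_true = 2.0, 1.5
# Simulate by inversion: if U ~ U(0,1), then x = (-ln U / lam)^(1/c)
xs = [(-math.log(random.random()) / lam_true) ** (1.0 / c_true)
      for _ in range(5000)]
n = len(xs)
sum_logx = sum(math.log(x) for x in xs)

def profile_loglik(c):
    # Concentrate lambda out: lam_hat(c) = n / sum(x_i ** c)
    s = sum(x ** c for x in xs)
    lam_hat = n / s
    return (n * math.log(lam_hat) + n * math.log(c)
            + (c - 1.0) * sum_logx - lam_hat * s)

# Crude scan over c, as the text suggests
grid = [0.5 + 0.01 * i for i in range(300)]   # c in [0.5, 3.5)
c_hat = max(grid, key=profile_loglik)
lam_hat = n / sum(x ** c_hat for x in xs)
print(c_hat, lam_hat)                          # close to the true (2.0, 1.5)
```

The model and parameter names here are assumptions made to illustrate the scan-and-concentrate technique, not a reconstruction of the original data.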
3. a. The log likelihood for sampling from the normal distribution is
logL = (−1/2)[n log 2π + n log σ² + (1/σ²) Σi (xi − μ)²]
Write the summation in the last term as Σi xi² + nμ² − 2μ Σi xi. Thus, it is clear that the log likelihood is of the form for an exponential family
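The algebraic step for the last term, Σ(xi − μ)² = Σxi² + nμ² − 2μΣxi, is easy to verify numerically:

```python
xs = [1.2, -0.7, 3.4, 0.0, 2.1]          # any sample works
mu = 0.8
n = len(xs)

lhs = sum((x - mu) ** 2 for x in xs)
rhs = sum(x * x for x in xs) + n * mu * mu - 2.0 * mu * sum(xs)
print(lhs, rhs)                           # the two expressions agree
```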
? Application 13.1 - Simultaneous Equations
? Read the data
? For convenience, rename the variables so they correspond
? to the example in the text.
sample ; 1 - 204 $
create ; ct=realcons$
create ; it=realinvs$
create ; gt=realgovt$
β1 "known" (identified), the only remaining unknown is β2, which is therefore identified. With β1 and β2 in hand, γ may be deduced from Π2. With β2 and γ in hand, σ22 is the residual variance in the equation (y2 − x′β2 − γy1) = ε2, which is directly estimable, therefore,
The mean squared error of the OLS estimator is the variance plus the squared bias,
M(b|β) = (σ²/n)QXX⁻¹ + QXX⁻¹γγ′QXX⁻¹
the mean squared error of the 2SLS estimator equals its variance. For OLS to be more precise than 2SLS, we would have to have
(σ²/n)QXX⁻¹ + Q