17.3 Estimation of State Space Models

Note that all the entities necessary to evaluate the likelihood function are provided by the Kalman filter. Thus, the evaluation of the likelihood function is a byproduct of the Kalman filter. The maximum likelihood estimator (MLE) is then given by the maximizer of the likelihood function or, more conveniently, of the log-likelihood function. Usually, no analytic solution is available, so one must resort to numerical methods. An estimate of the asymptotic covariance matrix can be obtained by evaluating the Hessian matrix at the optimum. Under the usual assumptions, the MLE is consistent and delivers asymptotically normally distributed estimates (Greene, 2008; Amemiya, 1994).

The direct maximization of the likelihood function is often not easy in practice, especially for large systems with many parameters. The expectation-maximization algorithm, EM algorithm for short, represents a valid, though slower, alternative. As the name indicates, it consists of two steps which have to be carried out iteratively. Based on some starting values for the parameters, the first step (expectation step) computes estimates, $X_{t|T}$, of the unobserved state vector $X_t$ using the Kalman smoother. In the second step (maximization step), the likelihood function is maximized taking the estimates $X_{t|T}$ of $X_t$ as additional observations. Treating the $X_{t|T}$ as additional observations reduces the maximization step to a simple multivariate regression. Indeed, by treating the $X_{t|T}$ as if they were known, the state equation becomes a simple VAR(1) which can be readily estimated by linear least squares to obtain the parameters $F$ and $Q$. The parameters $A$, $G$, and $R$ are just as easily retrieved from a regression of $Y_t$ on $X_{t|T}$. Based on these new parameter estimates, we go back to step one and derive new estimates for $X_{t|T}$, which are then used in the maximization step. One can show that this procedure maximizes the original likelihood function (see Dempster et al., 1977; Wu, 1983). A more detailed analysis of the EM algorithm in the time series context is provided by Brockwell and Davis (1996).¹⁰
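Since no analytic maximizer is available, the log-likelihood is evaluated by the Kalman filter and handed to a numerical optimizer. The following is a minimal sketch of this approach, not the book's code: it assumes a scalar AR(1)-plus-noise model (state equation $X_t = F X_{t-1} + V_t$ with $V_t \sim N(0, Q)$; observation equation $Y_t = X_t + W_t$ with $W_t \sim N(0, R)$, i.e. $A = 0$ and $G = 1$), so that $F$, $Q$, and $R$ are the only free parameters.

```python
import numpy as np
from scipy.optimize import minimize

def kalman_loglik(params, y):
    """Gaussian log-likelihood via the prediction-error decomposition."""
    F, log_Q, log_R = params            # variances enter in logs to stay positive
    Q, R = np.exp(log_Q), np.exp(log_R)
    # start from the stationary mean and variance of the AR(1) state
    x, P = 0.0, Q / max(1.0 - F**2, 1e-8)
    ll = 0.0
    for yt in y:
        # prediction step
        x_pred = F * x
        P_pred = F * P * F + Q
        # prediction error and its variance (these drive the likelihood)
        v = yt - x_pred
        S = P_pred + R
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v**2 / S)
        # updating step
        K = P_pred / S                  # Kalman gain
        x = x_pred + K * v
        P = (1.0 - K) * P_pred
    return ll

# simulate data, then maximize the log-likelihood numerically
rng = np.random.default_rng(0)
T, F_true, Q_true, R_true = 500, 0.8, 0.5, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = F_true * x[t-1] + rng.normal(scale=np.sqrt(Q_true))
y = x + rng.normal(scale=np.sqrt(R_true), size=T)

res = minimize(lambda p: -kalman_loglik(p, y), x0=[0.5, 0.0, 0.0],
               method="L-BFGS-B", bounds=[(-0.99, 0.99), (-10, 10), (-10, 10)])
F_hat, Q_hat, R_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(F_hat, Q_hat, R_hat)
```

For L-BFGS-B, the optimizer's inverse-Hessian approximation (`res.hess_inv.todense()`) can serve as a rough estimate of the asymptotic covariance matrix, in the spirit of the Hessian evaluation mentioned above.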
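The EM iteration can be sketched in the same setting, continuing the example above with the simulated series `y` and with $G$ now treated as unknown. The E-step runs the Kalman smoother to obtain the $X_{t|T}$; the M-step, following the simplification described in the text, treats the $X_{t|T}$ as observed data and estimates $F$ and $Q$ by least squares on the state equation and $G$ and $R$ by a regression of $Y_t$ on $X_{t|T}$ ($A$ is dropped here). A textbook-exact EM would also carry the smoothed second moments through the M-step; this sketch mirrors the simpler scheme described above.

```python
import numpy as np

def kalman_smoother(y, F, Q, G, R):
    """Forward Kalman filter followed by the RTS backward smoothing pass."""
    T = len(y)
    x_f, P_f = np.zeros(T), np.zeros(T)       # filtered moments
    x_p, P_p = np.zeros(T), np.zeros(T)       # predicted moments
    x, P = 0.0, Q / max(1 - F**2, 1e-8)
    for t in range(T):
        x_p[t], P_p[t] = F * x, F * P * F + Q
        S = G * P_p[t] * G + R
        K = P_p[t] * G / S
        x = x_p[t] + K * (y[t] - G * x_p[t])
        P = (1 - K * G) * P_p[t]
        x_f[t], P_f[t] = x, P
    x_s = x_f.copy()                           # backward (RTS) pass
    for t in range(T - 2, -1, -1):
        J = P_f[t] * F / P_p[t + 1]
        x_s[t] = x_f[t] + J * (x_s[t + 1] - x_p[t + 1])
    return x_s

def em_step(y, F, Q, G, R):
    xs = kalman_smoother(y, F, Q, G, R)        # E-step: the X_{t|T}
    # M-step: least squares on the state equation X_t = F X_{t-1} + V_t ...
    F_new = (xs[1:] @ xs[:-1]) / (xs[:-1] @ xs[:-1])
    Q_new = np.mean((xs[1:] - F_new * xs[:-1])**2)
    # ... and on the observation equation Y_t = G X_t + W_t
    G_new = (y @ xs) / (xs @ xs)
    R_new = np.mean((y - G_new * xs)**2)
    return F_new, Q_new, G_new, R_new

# iterate E- and M-steps from rough starting values
F, Q, G, R = 0.5, 1.0, 1.0, 1.0
for _ in range(50):
    F, Q, G, R = em_step(y, F, Q, G, R)
print(F, Q, G, R)
```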
¹⁰ The analogue to the EM algorithm in the Bayesian context is the Gibbs sampler. In contrast to the EM algorithm, the first step does not compute the expected value of the states, but instead draws a state vector from the distribution of the state vectors given the parameters. In the second step, we do not maximize the likelihood function, but draw a parameter vector from the distribution of the parameters given the previously drawn state vector. Going back and forth between these two steps, we get a Markov chain in the parameters.
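A sketch of the Gibbs sampler described in footnote 10, once more for the scalar model and simulated series `y` from above: step one draws a state path by forward-filtering backward-sampling (FFBS) instead of smoothing, and step two draws parameters from their conditional distribution instead of maximizing. To keep the sketch short, $F$ and $G$ are held fixed and only $Q$ and $R$ are sampled; the inverse-gamma priors IG(2, 1) and the FFBS scheme are choices made here, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_states(y, F, Q, G, R):
    """Forward Kalman filter, then sample the state path backwards (FFBS)."""
    T = len(y)
    x_f, P_f, x_p, P_p = (np.zeros(T) for _ in range(4))
    x, P = 0.0, Q / max(1 - F**2, 1e-8)
    for t in range(T):
        x_p[t], P_p[t] = F * x, F * P * F + Q
        S = G * P_p[t] * G + R
        K = P_p[t] * G / S
        x = x_p[t] + K * (y[t] - G * x_p[t])
        P = (1 - K * G) * P_p[t]
        x_f[t], P_f[t] = x, P
    xs = np.zeros(T)
    xs[-1] = rng.normal(x_f[-1], np.sqrt(P_f[-1]))
    for t in range(T - 2, -1, -1):
        J = P_f[t] * F / P_p[t + 1]
        m = x_f[t] + J * (xs[t + 1] - x_p[t + 1])
        V = P_f[t] - J * P_p[t + 1] * J
        xs[t] = rng.normal(m, np.sqrt(max(V, 0.0)))
    return xs

F, G, Q, R = 0.8, 1.0, 1.0, 1.0
draws = []
for _ in range(1000):
    xs = draw_states(y, F, Q, G, R)            # step 1: states | params, data
    # step 2: params | states, with conjugate IG(2, 1) priors on Q and R
    Q = 1.0 / rng.gamma(2 + (len(y) - 1) / 2,
                        1.0 / (1 + 0.5 * np.sum((xs[1:] - F * xs[:-1])**2)))
    R = 1.0 / rng.gamma(2 + len(y) / 2,
                        1.0 / (1 + 0.5 * np.sum((y - G * xs)**2)))
    draws.append((Q, R))

burn = 200
print(np.mean(draws[burn:], axis=0))           # posterior means of (Q, R)
```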


Sometimes it is of interest not only to compute parameter estimates and to derive from them estimates for the state vector via the Kalman filter or smoother, but also to find confidence intervals for the estimated state vector to take the uncertainty into account. If the parameters are known, the methods outlined previously showed how to obtain these confidence intervals. If,