# Gretl User's Guide: Gnu Regression, Econometrics and Time-series

Allin Cottrell
Department of Economics, Wake Forest University

Riccardo “Jack” Lucchetti
Dipartimento di Economia, Università Politecnica delle Marche

February 2010

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation (see http://www.gnu.org/licenses/fdl.html).

## Contents

**1 Introduction**: 1.1 Features at a glance; 1.2 Acknowledgements; 1.3 Installing the programs

**Part I: Running the program**

**2 Getting started**: 2.1 Let's run a regression; 2.2 Estimation output; 2.3 The main window menus; 2.4 Keyboard shortcuts; 2.5 The gretl toolbar

**3 Modes of working**: 3.1 Command scripts; 3.2 Saving script objects; 3.3 The gretl console; 3.4 The Session concept

**4 Data files**: 4.1 Native format; 4.2 Other data file formats; 4.3 Binary databases; 4.4 Creating a data file from scratch; 4.5 Structuring a dataset; 4.6 Missing data values; 4.7 Maximum size of data sets; 4.8 Data file collections

**5 Special functions in genr**: 5.1 Introduction; 5.2 Long-run variance; 5.3 Time-series filters; 5.4 Panel data specifics; 5.5 Resampling and bootstrapping; 5.6 Cumulative densities and p-values; 5.7 Handling missing values; 5.8 Retrieving internal variables; 5.9 Numerical procedures; 5.10 The discrete Fourier transform

**6 Sub-sampling a dataset**: 6.1 Introduction; 6.2 Setting the sample; 6.3 Restricting the sample; 6.4 Random sampling; 6.5 The Sample menu items

**7 Graphs and plots**: 7.1 Gnuplot graphs; 7.2 Boxplots

**8 Discrete variables**: 8.1 Declaring variables as discrete; 8.2 Commands for discrete variables

**9 Loop constructs**: 9.1 Introduction; 9.2 Loop control variants; 9.3 Progressive mode; 9.4 Loop examples

**10 User-defined functions**: 10.1 Defining a function; 10.2 Calling a function; 10.3 Deleting a function; 10.4 Function programming details; 10.5 Old-style function syntax; 10.6 Function packages

**11 Named lists and strings**: 11.1 Named lists; 11.2 Named strings

**12 Matrix manipulation**: 12.1 Creating matrices; 12.2 Empty matrices; 12.3 Selecting sub-matrices; 12.4 Matrix operators; 12.5 Matrix–scalar operators; 12.6 Matrix functions; 12.7 Matrix accessors; 12.8 Namespace issues; 12.9 Creating a data series from a matrix; 12.10 Matrices and lists; 12.11 Deleting a matrix; 12.12 Printing a matrix; 12.13 Example: OLS using matrices

**13 Cheat sheet**: 13.1 Dataset handling; 13.2 Creating/modifying variables; 13.3 Neat tricks

**Part II: Econometric methods**

**14 Robust covariance matrix estimation**: 14.1 Introduction; 14.2 Cross-sectional data and the HCCME; 14.3 Time series data and HAC covariance matrices; 14.4 Special issues with panel data

**15 Panel data**: 15.1 Estimation of panel models; 15.2 Dynamic panel models; 15.3 Panel illustration: the Penn World Table

**16 Nonlinear least squares**: 16.1 Introduction and examples; 16.2 Initializing the parameters; 16.3 NLS dialog window; 16.4 Analytical and numerical derivatives; 16.5 Controlling termination; 16.6 Details on the code; 16.7 Numerical accuracy

**17 Maximum likelihood estimation**: 17.1 Generic ML estimation with gretl; 17.2 Gamma estimation; 17.3 Stochastic frontier cost function; 17.4 GARCH models; 17.5 Analytical derivatives; 17.6 Debugging ML scripts; 17.7 Using functions

**18 GMM estimation**: 18.1 Introduction and terminology; 18.2 OLS as GMM; 18.3 TSLS as GMM; 18.4 Covariance matrix options; 18.5 A real example: the Consumption Based Asset Pricing Model; 18.6 Caveats

**19 Model selection criteria**: 19.1 Introduction; 19.2 Information criteria

**20 Time series models**: 20.1 Introduction; 20.2 ARIMA models; 20.3 Unit root tests; 20.4 ARCH and GARCH

**21 Forecasting**: 21.1 Introduction; 21.2 Saving and inspecting fitted values; 21.3 The fcast command; 21.4 Univariate forecast evaluation statistics; 21.5 Forecasts based on VAR models; 21.6 Forecasting from simultaneous systems

**22 Cointegration and Vector Error Correction Models**: 22.1 Introduction; 22.2 Vector Error Correction Models as representation of a cointegrated system; 22.3 Interpretation of the deterministic components; 22.4 The Johansen cointegration tests; 22.5 Identification of the cointegration vectors; 22.6 Over-identifying restrictions; 22.7 Numerical solution methods

**23 The Kalman Filter**: 23.1 Preamble; 23.2 Notation; 23.3 Intended usage; 23.4 Overview of syntax; 23.5 Defining the filter; 23.6 The kfilter function; 23.7 The ksmooth function; 23.8 The ksimul function; 23.9 Example 1: ARMA estimation; 23.10 Example 2: local level model

**24 Discrete and censored dependent variables**: 24.1 Logit and probit models; 24.2 Ordered response models; 24.3 Multinomial logit; 24.4 The Tobit model; 24.5 Interval regression; 24.6 Sample selection model

**25 Quantile regression**: 25.1 Introduction; 25.2 Basic syntax; 25.3 Confidence intervals; 25.4 Multiple quantiles; 25.5 Large datasets

**Part III: Technical details**

**26 Gretl and TeX**: 26.1 Introduction; 26.2 TeX-related menu items; 26.3 Fine-tuning typeset output; 26.4 Character encodings; 26.5 Installing and learning TeX

**27 Gretl and R**: 27.1 Introduction; 27.2 Starting an interactive R session; 27.3 Running an R script; 27.4 Taking stuff back and forth; 27.5 Interacting with R from the command line; 27.6 Performance issues with R; 27.7 Further use of the R library

**28 Gretl and Ox**: 28.1 Introduction; 28.2 Ox support in gretl; 28.3 Illustration: replication of DPD model

**29 Troubleshooting gretl**: 29.1 Bug reports; 29.2 Auxiliary programs

**30 The command line interface**

**Part IV: Appendices**

**A Data file details**: A.1 Basic native format; A.2 Traditional ESL format; A.3 Binary database details

**B Data import via ODBC**: B.1 ODBC base concepts; B.2 Syntax; B.3 Examples

**C Building gretl**: C.1 Requirements; C.2 Build instructions: a step-by-step guide

**D Numerical accuracy**

**E Related free software**

**F Listing of URLs**

**Bibliography**

## Chapter 1: Introduction

### 1.1 Features at a glance

Gretl is an econometrics package, comprising a shared library, a command-line client program and a graphical user interface.

- **User-friendly**: Gretl offers an intuitive user interface; it is very easy to get up and running with econometric analysis. Thanks to its association with the econometrics textbooks by Ramu Ramanathan, Jeffrey Wooldridge, and James Stock and Mark Watson, the package offers many practice data files and command scripts. These are well annotated and accessible. Two other useful resources for gretl users are the available documentation and the gretl-users mailing list.
- **Flexible**: You can choose your preferred point on the spectrum from interactive point-and-click to batch processing, and can easily combine these approaches.
- **Cross-platform**: Gretl's "home" platform is Linux but it is also available for MS Windows and Mac OS X, and should work on any unix-like system that has the appropriate basic libraries (see Appendix C).
- **Open source**: The full source code for gretl is available to anyone who wants to critique it, patch it, or extend it. See Appendix C.
- **Sophisticated**: Gretl offers a full range of least-squares based estimators, both for single equations and for systems, including vector autoregressions and vector error correction models. Several specific maximum likelihood estimators (e.g. probit, ARIMA, GARCH) are also provided natively; more advanced estimation methods can be implemented by the user via generic maximum likelihood or nonlinear GMM.
- **Extensible**: Users can enhance gretl by writing their own functions and procedures in gretl's scripting language, which includes a wide range of matrix functions.
- **Accurate**: Gretl has been thoroughly tested on several benchmarks, among them the NIST reference datasets. See Appendix D.
- **Internet ready**: Gretl can access and fetch databases from a server at Wake Forest University. The MS Windows version comes with an updater program which will detect when a new version is available and offer the option of auto-updating.
- **International**: Gretl will produce its output in English, French, Italian, Spanish, Polish, Portuguese, German, Basque, Turkish or Russian, depending on your computer's native language setting.
### 1.2 Acknowledgements

The gretl code base originally derived from the program ESL ("Econometrics Software Library"), written by Professor Ramu Ramanathan of the University of California, San Diego. We are much in debt to Professor Ramanathan for making this code available under the GNU General Public Licence and for helping to steer gretl's early development.

We are also grateful to the authors of several econometrics textbooks for permission to package for gretl various datasets associated with their texts. This list currently includes William Greene, author of *Econometric Analysis*; Jeffrey Wooldridge (*Introductory Econometrics: A Modern Approach*); James Stock and Mark Watson (*Introduction to Econometrics*); Damodar Gujarati (*Basic Econometrics*); Russell Davidson and James MacKinnon (*Econometric Theory and Methods*); and Marno Verbeek (*A Guide to Modern Econometrics*).

GARCH estimation in gretl is based on code deposited in the archive of the Journal of Applied Econometrics by Professors Fiorentini, Calzolari and Panattoni, and the code to generate p-values for Dickey–Fuller tests is due to James MacKinnon. In each case we are grateful to the authors for permission to use their work.

With regard to the internationalization of gretl, thanks go to Ignacio Díaz-Emparanza (Spanish), Michel Robitaille and Florent Bresson (French), Cristian Rigamonti (Italian), Tadeusz Kufel and Pawel Kufel (Polish), Markus Hahn and Sven Schreiber (German), Hélio Guilherme and Henrique Andrade (Portuguese), Susan Orbe (Basque), Talha Yalta (Turkish) and Alexander Gedranovich (Russian).

Gretl has benefitted greatly from the work of numerous developers of free, open-source software: for specifics please see Appendix C. Our thanks are due to Richard Stallman of the Free Software Foundation, for his support of free software in general and for agreeing to "adopt" gretl as a GNU program in particular.

Many users of gretl have submitted useful suggestions and bug reports. In this connection particular thanks are due to Ignacio Díaz-Emparanza, Tadeusz Kufel, Pawel Kufel, Alan Isaac, Cri Rigamonti, Sven Schreiber, Talha Yalta, Andreas Rosenblad, and Dirk Eddelbuettel, who maintains the gretl package for Debian GNU/Linux.

### 1.3 Installing the programs

#### Linux

On the Linux platform you have the choice of compiling the gretl code yourself or making use of a pre-built package. (In this manual we use "Linux" as shorthand to refer to the GNU/Linux operating system. What is said herein about Linux mostly applies to other unix-type systems too, though some local modifications may be needed.) Building gretl from source is necessary if you want to access the development version or customize gretl to your needs, but this takes quite a few skills; most users will want to go for a pre-built package. However, we're hopeful that some users with coding skills may consider gretl sufficiently interesting to be worth improving and extending. The documentation of the libgretl API is by no means complete, but you can find some details by following the link "Libgretl API docs" on the gretl homepage. People interested in gretl development are welcome to subscribe to the gretl-devel mailing list.

Some Linux distributions feature gretl as part of their standard offering: Debian, for example, or Ubuntu (in the universe repository). If this is the case, all you need to do is install gretl through your package manager of choice (e.g. synaptic). Ready-to-run packages are also available in rpm format (suitable for Red Hat Linux and related systems) on the gretl webpage, http://gretl.sourceforge.net.
If you prefer to compile your own (or are using a unix system for which pre-built packages are not available), instructions on building gretl can be found in Appendix C.

#### MS Windows

The MS Windows version comes as a self-extracting executable. Installation is just a matter of downloading gretl_install.exe and running this program. You will be prompted for a location to install the package.

#### Updating

If your computer is connected to the Internet, then on start-up gretl can query its home website at Wake Forest University to see if any program updates are available; if so, a window will open up informing you of that fact. If you want to activate this feature, check the box marked "Tell me about gretl updates" under gretl's "Tools, Preferences, General" menu.

The MS Windows version of the program goes a step further: it tells you that you can update gretl automatically if you wish. To do this, follow the instructions in the popup window: close gretl, then run the program titled "gretl updater" (you should find this along with the main gretl program item, under the Programs heading in the Windows Start menu). Once the updater has completed its work you may restart gretl.

## Part I: Running the program

## Chapter 2: Getting started

### 2.1 Let's run a regression

This introduction is mostly angled towards the graphical client program; please see Chapter 30 below and the Gretl Command Reference for details on the command-line program, gretlcli.

You can supply the name of a data file to open as an argument to gretl, but for the moment let's not do that: just fire up the program. (For convenience we refer to the graphical client program simply as gretl in this manual. Note, however, that the specific name of the program differs according to the computer platform. On Linux it is called gretl_x11 while on MS Windows it is gretlw32.exe. On Linux systems a wrapper script named gretl is also installed — see also the Gretl Command Reference.) You should see a main window (which will hold information on the data set but which is at first blank) and various menus, some of them disabled at first.

What can you do at this point? You can browse the supplied data files (or databases), open a data file, create a new data file, read the help items, or open a command script. For now let's browse the supplied data files. Under the File menu choose "Open data, Sample file". A second notebook-type window will open, presenting the sets of data files supplied with the package (see Figure 2.1). Select the first tab, "Ramanathan". The numbering of the files in this section corresponds to the chapter organization of Ramanathan (2002), which contains discussion of the analysis of these data. The data will be useful for practice purposes even without the text.

[Figure 2.1: Practice data files window]

If you select a row in this window and click on "Info" this opens a window showing information on the data set in question (for example, on the sources and definitions of the variables). If you find a file that is of interest, you may open it by clicking on "Open", or just double-clicking on the file name. For the moment let's open data3-6. In gretl windows containing lists, double-clicking on a line launches a default action for the associated list entry: e.g. displaying the values of a data series, opening a file.
This file contains data pertaining to a classic econometric "chestnut", the consumption function. The data window should now display the name of the current data file, the overall data range and sample range, and the names of the variables along with brief descriptive tags (see Figure 2.2).

[Figure 2.2: Main window, with a practice data file open]

OK, what can we do now? Hopefully the various menu options should be fairly self-explanatory. For now we'll dip into the Model menu; a brief tour of all the main window menus is given in Section 2.3 below. Gretl's Model menu offers numerous econometric estimation routines. The simplest and most standard is Ordinary Least Squares (OLS). Selecting OLS pops up a dialog box calling for a model specification (see Figure 2.3).

[Figure 2.3: Model specification dialog]

To select the dependent variable, highlight the variable you want in the list on the left and click the arrow that points to the "Dependent variable" slot. If you check the "Set as default" box this variable will be pre-selected as dependent when you next open the model dialog box. Shortcut: double-clicking on a variable on the left selects it as dependent and also sets it as the default. To select independent variables, highlight them on the left and click the green arrow (or right-click the highlighted variable); to remove variables from the selected list, use the red arrow. To select several variables in the list box, drag the mouse over them; to select several non-contiguous variables, hold down the Ctrl key and click on the variables you want. To run a regression with consumption as the dependent variable and income as independent, click Ct into the Dependent slot and add Yt to the Independent variables list.
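The same regression can also be run non-interactively. As a minimal sketch, using the data3-6 practice file opened above (which supplies the series Ct and Yt; 0 denotes the constant):

```
# load the practice dataset and regress consumption on a constant and income
open data3-6
ols Ct 0 Yt
```

Typed at the gretl console or saved in a script, this reproduces the point-and-click estimation just described; see Chapter 3 for more on scripting.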
### 2.2 Estimation output

Once you've specified a model, a window displaying the regression output will appear. The output is reasonably comprehensive and in a standard format (Figure 2.4).

[Figure 2.4: Model output window]

The output window contains menus that allow you to inspect or graph the residuals and fitted values, and to run various diagnostic tests on the model.

For most models there is also an option to print the regression output in LaTeX format. See Chapter 26 for details.

To import gretl output into a word processor, you may copy and paste from an output window, using its Edit menu (or Copy button, in some contexts) to the target program. Many (not all) gretl windows offer the option of copying in RTF (Microsoft's "Rich Text Format") or as LaTeX. If you are pasting into a word processor, RTF may be a good option because the tabular formatting of the output is preserved. (Note that when you copy as RTF under MS Windows, Windows will only allow you to paste the material into applications that "understand" RTF. Thus you will be able to paste into MS Word, but not into notepad. Note also that there appears to be a bug in some versions of Windows, whereby the paste will not work properly unless the "target" application, e.g. MS Word, is already running prior to copying the material in question.) Alternatively, you can save the output to a (plain text) file, then import the file into the target program. When you finish a gretl session you are given the option of saving all the output from the session to a single file.

Note that on the gnome desktop and under MS Windows, the File menu includes a command to send the output directly to a printer.

When pasting or importing plain text gretl output into a word processor, select a monospaced or typewriter-style font (e.g. Courier) to preserve the output's tabular formatting. Select a small font (10-point Courier should do) to prevent the output lines from being broken in the wrong place.
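In script mode, a hedged sketch of the LaTeX option mentioned above uses the tabprint command, which prints the last estimated model in LaTeX tabular form (see the Gretl Command Reference for its file-output options):

```
# estimate a model, then write it out as a LaTeX table
ols Ct 0 Yt
tabprint
```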
### 2.3 The main window menus

Reading left to right along the main window's menu bar, we find the File, Tools, Data, View, Add, Sample, Variable, Model and Help menus.

**File menu**

- Open data: Open a native gretl data file or import from other formats. See Chapter 4.
- Append data: Add data to the current working data set, from a gretl data file, a comma-separated values file or a spreadsheet file.
- Save data: Save the currently open native gretl data file.
- Save data as: Write out the current data set in native format, with the option of using gzip data compression. See Chapter 4.
- Export data: Write out the current data set in Comma Separated Values (CSV) format, or the formats of GNU R or GNU Octave. See Chapter 4 and also Appendix E.
- Send to: Send the current data set as an e-mail attachment.
- New data set: Allows you to create a blank data set, ready for typing in values or for importing series from a database. See below for more on databases.
- Clear data set: Clear the current data set out of memory. Generally you don't have to do this (since opening a new data file automatically clears the old one) but sometimes it's useful.
- Script files: A "script" is a file containing a sequence of gretl commands. This item contains entries that let you open a script you have created previously ("User file"), open a sample script, or open an editor window in which you can create a new script.
- Session files: A "session" file contains a snapshot of a previous gretl session, including the data set used and any models or graphs that you saved. Under this item you can open a saved session or save the current session.
- Databases: Allows you to browse various large databases, either on your own computer or, if you are connected to the internet, on the gretl database server. See Section 4.3 for details.
- Function files: Handles "function packages" (see Section 10.6), which allow you to access functions written by other users and share the ones written by you.
- Exit: Quit the program. You'll be prompted to save any unsaved work.

**Tools menu**

- Statistical tables: Look up critical values for commonly used distributions (normal or Gaussian, t, chi-square, F and Durbin–Watson).
- P-value finder: Look up p-values from the Gaussian, t, chi-square, F, gamma, binomial or Poisson distributions. See also the pvalue command in the Gretl Command Reference.
- Distribution graphs: Produce graphs of various probability distributions. In the resulting graph window, the pop-up menu includes an item "Add another curve", which enables you to superimpose a further plot (for example, you can draw the t distribution with various different degrees of freedom).
- Test statistic calculator: Calculate test statistics and p-values for a range of common hypothesis tests (population mean, variance and proportion; difference of means, variances and proportions).
- Nonparametric tests: Calculate test statistics for various nonparametric tests (Sign test, Wilcoxon rank sum test, Wilcoxon signed rank test, Runs test).
- Seed for random numbers: Set the seed for the random number generator (by default this is set based on the system time when the program is started).
- Command log: Open a window containing a record of the commands executed so far.
- Gretl console: Open a "console" window into which you can type commands as you would using the command-line program, gretlcli (as opposed to using point-and-click).
- Start Gnu R: Start R (if it is installed on your system), and load a copy of the data set currently open in gretl. See Appendix E.
- Sort variables: Rearrange the listing of variables in the main window, either by ID number or alphabetically by name.
- NIST test suite: Check the numerical accuracy of gretl against the reference results for linear regression made available by the (US) National Institute of Standards and Technology.
- Preferences: Set the paths to various files gretl needs to access. Choose the font in which gretl displays text output. Activate or suppress gretl's messaging about the availability of program updates, and so on. See the Gretl Command Reference for further details.

**Data menu**

- Select all: Several menu items act upon those variables that are currently selected in the main window. This item lets you select all the variables.
- Display values: Pops up a window with a simple (not editable) printout of the values of the selected variable or variables.
- Edit values: Opens a spreadsheet window where you can edit the values of the selected variables.
- Add observations: Gives a dialog box in which you can choose a number of observations to add at the end of the current dataset; for use with forecasting.
- Remove extra observations: Active only if extra observations have been added automatically in the process of forecasting; deletes these extra observations.
- Read info, Edit info: "Read info" just displays the summary information for the current data file; "Edit info" allows you to make changes to it (if you have permission to do so).
- Print description: Opens a window containing a full account of the current dataset, including the summary information and any specific information on each of the variables.
- Add case markers: Prompts for the name of a text file containing "case markers" (short strings identifying the individual observations) and adds this information to the data set. See Chapter 4.
- Remove case markers: Active only if the dataset has case markers identifying the observations; removes these case markers.
- Dataset structure: Invokes a series of dialog boxes which allow you to change the structural interpretation of the current dataset. For example, if data were read in as a cross section you can get the program to interpret them as time series or as a panel. See also Section 4.5, and the script sketch following this list.
- Compact data: For time-series data of higher than annual frequency, gives you the option of compacting the data to a lower frequency, using one of four compaction methods (average, sum, start of period or end of period).
- Expand data: For time-series data, gives you the option of expanding the data to a higher frequency.
- Transpose data: Turn each observation into a variable and vice versa (in other words, each row of the data matrix becomes a column in the modified data matrix); can be useful with imported data that have been read in "sideways".
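The script counterpart of the "Dataset structure" dialogs is the setobs command. A minimal sketch, assuming quarterly data whose first observation is 1990:1 (the frequency and starting date here are hypothetical):

```
# interpret the current dataset as quarterly time series starting in 1990:1
setobs 4 1990:1 --time-series
```

See Section 4.5 for the full discussion of dataset structure.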
**View menu**

- Icon view: Opens a window showing the content of the current session as a set of icons; see Section 3.4.
- Graph specified vars: Gives a choice between a time series plot, a regular X–Y scatter plot, an X–Y plot using impulses (vertical bars), an X–Y plot "with factor separation" (i.e. with the points colored differently depending on the value of a given dummy variable), boxplots, and a 3-D graph. Serves up a dialog box where you specify the variables to graph. See Chapter 7 for details.
- Multiple graphs: Allows you to compose a set of up to six small graphs, either pairwise scatter-plots or time-series graphs. These are displayed together in a single window.
- Summary statistics: Shows a full set of descriptive statistics for the variables selected in the main window.
- Correlation matrix: Shows the pairwise correlation coefficients for the selected variables.
- Cross Tabulation: Shows a cross-tabulation of the selected variables. This works only if at least two variables in the data set have been marked as discrete (see Chapter 8).
- Principal components: Produces a Principal Components Analysis for the selected variables.
- Mahalanobis distances: Computes the Mahalanobis distance of each observation from the centroid of the selected set of variables.
- Cross-correlogram: Computes and graphs the cross-correlogram for two selected variables.

**Add menu**

Offers various standard transformations of variables (logs, lags, squares, etc.) that you may wish to add to the data set. Also gives the option of adding random variables, and (for time-series data) adding seasonal dummy variables (e.g. quarterly dummy variables for quarterly data).

**Sample menu**

- Set range: Select a different starting and/or ending point for the current sample, within the range of data available.
- Restore full range: self-explanatory.
- Define, based on dummy: Given a dummy (indicator) variable with values 0 or 1, this drops from the current sample all observations for which the dummy variable has value 0.
- Restrict, based on criterion: Similar to the item above, except that you don't need a pre-defined variable: you supply a Boolean expression (e.g. sqft > 1400) and the sample is restricted to observations satisfying that condition. See the entry for genr in the Gretl Command Reference for details on the Boolean operators that can be used, and the script sketch following this list.
- Random sub-sample: Draw a random sample from the full dataset.
- Drop all obs with missing values: Drop from the current sample all observations for which at least one variable has a missing value (see Section 4.6).
- Count missing values: Give a report on observations where data values are missing. May be useful in examining a panel data set, where it's quite common to encounter missing values.
- Set missing value code: Set a numerical value that will be interpreted as "missing" or "not available". This is intended for use with imported data, when gretl has not recognized the missing-value code used.
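In script mode the same sample restrictions can be imposed with the smpl command. A minimal sketch (the series name sqft echoes the example above; the condition is hypothetical):

```
# restrict the sample to observations satisfying a Boolean condition
smpl sqft > 1400 --restrict

# ... work with the restricted sample, then restore the full data range
smpl full
```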
**Variable menu**

Most items under here operate on a single variable at a time. The "active" variable is set by highlighting it (clicking on its row) in the main data window. Most options will be self-explanatory. Note that you can rename a variable and can edit its descriptive label under "Edit attributes". You can also "Define a new variable" via a formula (e.g. involving some function of one or more existing variables). For the syntax of such formulae, look at the online help for "Generate variable syntax" or see the genr command in the Gretl Command Reference. One simple example: foo = x1 * x2 will create a new variable foo as the product of the existing variables x1 and x2. In these formulae, variables must be referenced by name, not number.

**Model menu**

For details on the various estimators offered under this menu please consult the Gretl Command Reference. Also see Chapter 16 regarding the estimation of nonlinear models.

**Help menu**

Please use this as needed! It gives details on the syntax required in various dialog entries.

### 2.4 Keyboard shortcuts

When working in the main gretl window, some common operations may be performed using the keyboard, as shown in the table below.

| Key | Effect |
| --- | --- |
| Return | Opens a window displaying the values of the currently selected variables: the same as selecting "Data, Display values". |
| Delete | Deletes the selected variables. A confirmation is required, to prevent accidental deletions. |
| e | Has the same effect as selecting "Edit attributes" from the "Variable" menu. |
| F2 | Same as "e". Included for compatibility with other programs. |
| g | Has the same effect as selecting "Define new variable" from the "Variable" menu (which maps onto the genr command). |
| h | Opens a help window for gretl commands. |
| F1 | Same as "h". Included for compatibility with other programs. |
| r | Refreshes the variable list in the main window: has the same effect as selecting "Refresh window" from the "Data" menu. |
| t | Graphs the selected variable; a line graph is used for time-series datasets, whereas a distribution plot is used for cross-sectional data. |

### 2.5 The gretl toolbar

At the bottom left of the main window sits the toolbar. The icons have the following functions, reading from left to right:

1. Launch a calculator program. A convenience function in case you want quick access to a calculator when you're working in gretl. The default program is calc.exe under MS Windows, or xcalc under the X window system. You can change the program under the "Tools, Preferences, General" menu, "Programs" tab.
2. Start a new script. Opens an editor window in which you can type a series of commands to be sent to the program as a batch.
3. Open the gretl console. A shortcut to the "Gretl console" menu item (Section 2.3 above).
4. Open the gretl session icon window.
5. Open a window displaying available gretl function packages.
6. Open this manual in PDF format.
7. Open the help item for script commands syntax (i.e. a listing with details of all available commands).
8. Open the dialog box for defining a graph.
9. Open the dialog box for estimating a model using ordinary least squares.
10. Open a window listing the sample datasets supplied with gretl, and any other data file collections that have been installed.

## Chapter 3: Modes of working

### 3.1 Command scripts

As you execute commands in gretl, using the GUI and filling in dialog entries, those commands are recorded in the form of a "script" or batch file. Such scripts can be edited and re-run, using either gretl or the command-line client, gretlcli.

To view the current state of the script at any point in a gretl session, choose "Command log" under the Tools menu. This log file is called session.inp and it is overwritten whenever you start a new session. To preserve it, save the script under a different name. Script files will be found most easily, using the GUI file selector, if you name them with the extension ".inp".

To open a script you have written independently, use the "File, Script files" menu item; to create a script from scratch use the "File, Script files, New script" item or the "new script" toolbar button. In either case a script window will open (see Figure 3.1).
[Figure 3.1: Script window, editing a command file]

The toolbar at the top of the script window offers the following functions (left to right): (1) Save the file; (2) Save the file under a specified name; (3) Print the file (this option is not available on all platforms); (4) Execute the commands in the file; (5) Copy selected text; (6) Paste the selected text; (7) Find and replace text; (8) Undo the last Paste or Replace action; (9) Help (if you place the cursor in a command word and press the question mark you will get help on that command); (10) Close the window.

When you execute the script, by clicking on the Execute icon or by pressing Ctrl-r, all output is directed to a single window, where it can be edited, saved or copied to the clipboard. To learn more about the possibilities of scripting, take a look at the gretl Help item "Command reference", or start up the command-line program gretlcli and consult its help, or consult the Gretl Command Reference.

If you run the script when part of it is highlighted, gretl will only run that portion. Moreover, if you want to run just the current line, you can do so by pressing Ctrl-Enter. (This feature is not unique to gretl; other econometric packages offer the same facility. However, experience shows that while this can be remarkably useful, it can also lead to writing dinosaur scripts that are never meant to be executed all at once, but rather used as a chaotic repository to cherry-pick snippets from. Since gretl allows you to have several script windows open at the same time, you may want to keep your scripts tidy and reasonably small.)

Clicking the right mouse button in the script editor window produces a pop-up menu. This gives you the option of executing either the line on which the cursor is located, or the selected region of the script if there's a selection in place. If the script is editable, this menu also gives the option of adding or removing comment markers from the start of the line or lines.

The gretl package includes over 70 "practice" scripts. Most of these relate to Ramanathan (2002), but they may also be used as a free-standing introduction to scripting in gretl and to various points of econometric theory. You can explore the practice files under "File, Script files, Practice file". There you will find a listing of the files along with a brief description of the points they illustrate and the data they employ. Open any file and run it to see the output.

Note that long commands in a script can be broken over two or more lines, using backslash as a continuation character.

You can, if you wish, use the GUI controls and the scripting approach in tandem, exploiting each method where it offers greater convenience. Here are two suggestions.

- Open a data file in the GUI. Explore the data — generate graphs, run regressions, perform tests. Then open the Command log, edit out any redundant commands, and save it under a specific name. Run the script to generate a single file containing a concise record of your work.
- Start by establishing a new script file. Type in any commands that may be required to set up transformations of the data (see the genr command in the Gretl Command Reference). Typically this sort of thing can be accomplished more efficiently via commands assembled with forethought rather than point-and-click. Then save and run the script: the GUI data window will be updated accordingly. Now you can carry out further exploration of the data via the GUI. To revisit the data at a later point, open and rerun the "preparatory" script first.
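To illustrate the continuation character and the "preparatory script" idea just described, a minimal sketch using the data3-6 file from Chapter 2 (the log transformations and the --robust flag are only for illustration):

```
# preparatory script: open the data and set up transformations
open data3-6
genr lCt = log(Ct)
genr lYt = log(Yt)

# a long command may be split with a trailing backslash
ols lCt 0 lYt \
  --robust
```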
**Scripts and data files**

One common way of doing econometric research with gretl is as follows: compose a script; execute the script; inspect the output; modify the script; run it again — with the last three steps repeated as many times as necessary. In this context, note that when you open a data file this clears out most of gretl's internal state. It's therefore probably a good idea to have your script start with an open command: the data file will be re-opened each time, and you can be confident you're getting "fresh" results.

One further point should be noted. When you go to open a new data file via the graphical interface, you are always prompted: opening a new data file will lose any unsaved work, do you really want to do this? When you execute a script that opens a data file, however, you are not prompted. The assumption is that in this case you're not going to lose any work, because the work is embodied in the script itself (and it would be annoying to be prompted at each iteration of the work cycle described above). This means you should be careful if you've done work using the graphical interface and then decide to run a script: the current data file will be replaced without any questions asked, and it's your responsibility to save any changes to your data first.

### 3.2 Saving script objects

When you estimate a model using point-and-click, the model results are displayed in a separate window, offering menus which let you perform tests, draw graphs, save data from the model, and so on. Ordinarily, when you estimate a model using a script you just get a non-interactive printout of the results. You can, however, arrange for models estimated in a script to be "captured", so that you can examine them interactively when the script is finished. Here is an example of the syntax for achieving this effect:

```
Model1 <- ols Ct 0 Yt
```

That is, you type a name for the model to be saved under, then a back-pointing "assignment arrow", then the model command. You may use names that have embedded spaces if you like, but such names must be wrapped in double quotes:

```
"Model 1" <- ols Ct 0 Yt
```

Models saved in this way will appear as icons in the gretl icon view window (see Section 3.4) after the script is executed. In addition, you can arrange to have a named model displayed (in its own window) automatically as follows:

```
Model1.show
```

Again, if the name contains spaces it must be quoted:

```
"Model 1".show
```

The same facility can be used for graphs. For example the following will create a plot of Ct against Yt, save it under the name "CrossPlot" (it will appear under this name in the icon view window), and have it displayed:

```
CrossPlot <- gnuplot Ct Yt
CrossPlot.show
```

You can also save the output from selected commands as named pieces of text (again, these will appear in the session icon window, from where you can open them later). For example this command sends the output from an augmented Dickey–Fuller test to a "text object" named ADF1 and displays it in a window:

```
ADF1 <- adf 2 x1
ADF1.show
```

Objects saved in this way (whether models, graphs or pieces of text output) can be destroyed using the command .free appended to the name of the object, as in ADF1.free.
### 3.3 The gretl console

A further option is available for your computing convenience. Under gretl's "Tools" menu you will find the item "Gretl console" (there is also an "open gretl console" button on the toolbar in the main window). This opens up a window in which you can type commands and execute them one by one (by pressing the Enter key) interactively. This is essentially the same as gretlcli's mode of operation, except that the GUI is updated based on commands executed from the console, enabling you to work back and forth as you wish.

In the console you have "command history"; that is, you can use the up and down arrow keys to navigate the list of commands you have entered to date. You can retrieve, edit and then re-enter a previous command.

In console mode, you can create, display and free objects (models, graphs or text) as described above for script mode.

### 3.4 The Session concept

Gretl offers the idea of a "session" as a way of keeping track of your work and revisiting it later. The basic idea is to provide an iconic space containing various objects pertaining to your current working session (see Figure 3.2). You can add objects (represented by icons) to this space as you go along. If you save the session, these added objects should be available again if you re-open the session later.

[Figure 3.2: Icon view: one model and one graph have been added to the default icons]

If you start gretl and open a data set, then select "Icon view" from the View menu, you should see the basic default set of icons: these give you quick access to information on the data set (if any), correlation matrix ("Correlations") and descriptive summary statistics ("Summary"). All of these are activated by double-clicking the relevant icon. The "Data set" icon is a little more complex: double-clicking opens up the data in the built-in spreadsheet, but you can also right-click on the icon for a menu of other actions.

To add a model to the Icon view, first estimate it using the Model menu. Then pull down the File menu in the model window and select "Save to session as icon..." or "Save as icon and close". Simply hitting the S key over the model window is a shortcut to the latter action.

To add a graph, first create it (under the View menu, "Graph specified vars", or via one of gretl's other graph-generating commands). Click on the graph window to bring up the graph menu, and select "Save to session as icon".

Once a model or graph is added its icon will appear in the Icon view window. Double-clicking on the icon redisplays the object, while right-clicking brings up a menu which lets you display or delete the object. This popup menu also gives you the option of editing graphs.

**The model table**

In econometric research it is common to estimate several models with a common dependent variable — the models differing in respect of which independent variables are included, or perhaps in respect of the estimator used. In this situation it is convenient to present the regression results in the form of a table, where each column contains the results (coefficient estimates and standard errors) for a given model, and each row contains the estimates for a given variable across the models.

In the Icon view window gretl provides a means of constructing such a table (and copying it in plain text, LaTeX or Rich Text Format). The procedure is outlined below. (The model table can also be built non-interactively, in script mode; see the entry for modeltab in the Gretl Command Reference, and the sketch following the steps below.)

1. Estimate a model which you wish to include in the table, and in the model display window, under the File menu, select "Save to session as icon" or "Save as icon and close".
2. Repeat step 1 for the other models to be included in the table (up to a total of six models).
3. When you are done estimating the models, open the icon view of your gretl session, by selecting "Icon view" under the View menu in the main gretl window, or by clicking the "session icon view" icon on the gretl toolbar.
4. In the Icon view, there is an icon labeled "Model table". Decide which model you wish to appear in the left-most column of the model table and add it to the table, either by dragging its icon onto the Model table icon, or by right-clicking on the model icon and selecting "Add to model table" from the pop-up menu.
5. Repeat step 4 for the other models you wish to include in the table. The second model selected will appear in the second column from the left, and so on.
6. When you are finished composing the model table, display it by double-clicking on its icon. Under the Edit menu in the window which appears, you have the option of copying the table to the clipboard in various formats.
7. If the ordering of the models in the table is not what you wanted, right-click on the model table icon and select "Clear table". Then go back to step 4 above and try again.

A simple instance of gretl's model table is shown in Figure 3.3.

[Figure 3.3: Example of model table]
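As a hedged sketch of the script-mode route via modeltab (the second specification, with lagged income, is hypothetical):

```
# add two estimated models to the model table, then display it
ols Ct 0 Yt
modeltab add
ols Ct 0 Yt Yt(-1)   # hypothetical second specification
modeltab add
modeltab show
```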
Saving and re-opening sessions

If you create models or graphs that you think you may wish to re-examine later, then before quitting gretl select “Session files, Save session” from the File menu and give a name under which to save the session.

To re-open the session later, either

• Start gretl then re-open the session file by going to “File, Session files, Open session”, or
• From the command line, type gretl -r sessionfile, where sessionfile is the name under which the session was saved, or
• Drag the icon representing a gretl session file onto gretl.

Chapter 4 Data files

4.1 Native format

gretl has its own format for data files. Most users will probably not want to read or write such files outside of gretl itself, but occasionally this may be useful, and full details on the file formats are given in Appendix A.

4.2 Other data file formats

gretl will read various other data formats.

• Plain text (ASCII) files. These can be brought in using gretl’s “File, Open Data, Import ASCII. . . ” menu item, or the import script command. For details on what gretl expects of such files, see Section 4.4.
• Comma-Separated Values (CSV) files. These can be imported using gretl’s “File, Open Data, Import CSV. . . ” menu item, or the import script command. See also Section 4.4.
• Spreadsheets: MS Excel, Gnumeric and Open Document (ODS). These are also brought in using gretl’s “File, Open Data, Import” menu. The requirements for such files are given in Section 4.4.
• Stata data files (.dta).
• SPSS data files (.sav).
• Eviews workfiles (.wf1). (See http://www.ecn.wfu.edu/eviews_format/.)
• JMulTi data files.

When you import data from the ASCII or CSV formats, gretl opens a “diagnostic” window, reporting on its progress in reading the data. If you encounter a problem with ill-formatted data, the messages in this window should give you a handle on fixing the problem.

As of version 1.7.5, gretl also offers ODBC connectivity. Be warned: this is a recent feature meant for somewhat advanced users; it may still have a few rough edges and there is no GUI interface for this yet. Interested readers will find more information in Appendix B.

For the convenience of anyone wanting to carry out more complex data analysis, gretl has a facility for writing out data in the native formats of GNU R, Octave, JMulTi and PcGive (see Appendix E). In the GUI client this option is found under the “File, Export data” menu; in the command-line client use the store command with the appropriate option flag.

4.3 Binary databases

For working with large amounts of data gretl is supplied with a database-handling routine. A database, as opposed to a data file, is not read directly into the program’s workspace. A database can contain series of mixed frequencies and sample ranges. You open the database and select series to import into the working dataset. You can then save those series in a native format data file if you wish. Databases can be accessed via gretl’s menu item “File, Databases”. For details on the format of gretl databases, see Appendix A.

Online access to databases

As of version 0.40, gretl is able to access databases via the internet. Several databases are available from Wake Forest University. Your computer must be connected to the internet for this option to work. Please see the description of the “data” command under gretl’s Help menu, and visit the gretl data page for details and updates on available data.

Foreign database formats

Thanks to Thomas Doan of Estima, who made available the specification of the database format used by RATS 4 (Regression Analysis of Time Series), gretl can handle such databases — or at least a subset of same, namely time-series databases containing monthly and quarterly series.

Gretl can also import data from PcGive databases. These take the form of a pair of files, one containing the actual data (with suffix .bn7) and one containing supplementary information (.in7).

4.4 Creating a data file from scratch

There are several ways of doing this:

1. Find, or create using a text editor, a plain text data file and open it with gretl’s “Import ASCII” option.
2. Use your favorite spreadsheet to establish the data file, save it in Comma Separated Values format if necessary (this should not be necessary if the spreadsheet format is MS Excel, Gnumeric or Open Document), then use one of gretl’s “Import” options.
3. Use gretl’s built-in spreadsheet.
4. Select data series from a suitable database.
5. Use your favorite text editor or other software tools to create a data file in gretl format independently.
Here are a few comments and details on these methods.

Common points on imported data

Options (1) and (2) involve using gretl’s “import” mechanism. For gretl to read such data successfully, certain general conditions must be satisfied:

• The first row must contain valid variable names. A valid variable name is of 15 characters maximum; starts with a letter; and contains nothing but letters, numbers and the underscore character, _. (Longer variable names will be truncated to 15 characters.) Qualifications to the above: First, in the case of an ASCII or CSV import, if the file contains no row with variable names the program will automatically add names, v1, v2 and so on. Second, by “the first row” is meant the first relevant row. In the case of ASCII and CSV imports, blank rows and rows beginning with a hash mark, #, are ignored. In the case of Excel and Gnumeric imports, you are presented with a dialog box where you can select an offset into the spreadsheet, so that gretl will ignore a specified number of rows and/or columns.

• Data values: these should constitute a rectangular block, with one variable per column (and one observation per row). The number of variables (data columns) must match the number of variable names given. See also section 4.6. Numeric data are expected, but in the case of importing from ASCII/CSV, the program offers limited handling of character (string) data: if a given column contains character data only, consecutive numeric codes are substituted for the strings, and once the import is complete a table is printed showing the correspondence between the strings and the codes.

• Dates (or observation labels): Optionally, the first column may contain strings such as dates, or labels for cross-sectional observations. Such strings have a maximum of 8 characters (as with variable names, longer strings will be truncated). A column of this sort should be headed with the string obs or date, or the first row entry may be left blank. For dates to be recognized as such, the date strings must adhere to one or other of a set of specific formats, as follows. For annual data: 4-digit years. For quarterly data: a 4-digit year, followed by a separator (either a period, a colon, or the letter Q), followed by a 1-digit quarter. Examples: 1997.1, 2002:3, 1947Q1. For monthly data: a 4-digit year, followed by a period or a colon, followed by a two-digit month. Examples: 1997.01, 2002:10.

CSV files can use comma, space or tab as the column separator. When you use the “Import CSV” menu item you are prompted to specify the separator. In the case of “Import ASCII” the program attempts to auto-detect the separator that was used.

If you use a spreadsheet to prepare your data you are able to carry out various transformations of the “raw” data with ease (adding things up, taking percentages or whatever): note, however, that you can also do this sort of thing easily — perhaps more easily — within gretl, by using the tools under the “Add” menu.
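To pull these requirements together, here is a small hypothetical plain-text file (variable names and values made up for illustration) that gretl should import as quarterly time series:

obs      gdp    infl
1997.1   102.3  2.1
1997.2   103.0  2.4
1997.3   104.1  2.2
1997.4   105.6  2.5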
Appending imported data

You may wish to establish a gretl dataset piece by piece, by incremental importation of data from other sources. This is supported via the “File, Append data” menu items: gretl will check the new data for conformability with the existing dataset and, if everything seems OK, will merge the data. You can add new variables in this way, provided the data frequency matches that of the existing dataset. Or you can append new observations for data series that are already present; in this case the variable names must match up correctly. Note that by default (that is, if you choose “Open data” rather than “Append data”), opening a new data file closes the current one.

Using the built-in spreadsheet

Under gretl’s “File, New data set” menu you can choose the sort of dataset you want to establish (e.g. quarterly time series, cross-sectional). You will then be prompted for starting and ending dates (or observation numbers) and the name of the first variable to add to the dataset. After supplying this information you will be faced with a simple spreadsheet into which you can type data values. In the spreadsheet window, clicking the right mouse button will invoke a popup menu which enables you to add a new variable (column), to add an observation (append a row at the foot of the sheet), or to insert an observation at the selected point (move the data down and insert a blank row).

Once you have entered data into the spreadsheet you import these into gretl’s workspace using the spreadsheet’s “Apply changes” button. Please note that gretl’s spreadsheet is quite basic and has no support for functions or formulas. Data transformations are done via the “Add” or “Variable” menus in the main gretl window.

Selecting from a database

Another alternative is to establish your dataset by selecting variables from a database. Begin with gretl’s “File, Databases” menu item. This has four forks: “Gretl native”, “RATS 4”, “PcGive” and “On database server”. You should be able to find the file fedstl.bin in the file selector that opens if you choose the “Gretl native” option — this file, which contains a large collection of US macroeconomic time series, is supplied with the distribution.

You won’t find anything under “RATS 4” unless you have purchased RATS data (see www.estima.com). If you do possess RATS data you should go into gretl’s “Tools, Preferences, General” dialog, select the Databases tab, and fill in the correct path to your RATS files.

If your computer is connected to the internet you should find several databases (at Wake Forest University) under “On database server”. You can browse these remotely; you also have the option of installing them onto your own computer. The initial remote databases window has an item showing, for each file, whether it is already installed locally (and if so, whether the local version is up to date with the version at Wake Forest).

Assuming you have managed to open a database you can import selected series into gretl’s workspace by using the “Series, Import” menu item in the database window, or via the popup menu that appears if you click the right mouse button, or by dragging the series into the program’s main window.

Creating a gretl data file independently

It is possible to create a data file in one or other of gretl’s own formats using a text editor or software tools such as awk, sed or perl. This may be a good choice if you have large amounts of data already in machine readable form. You will, of course, need to study the gretl data formats (XML format or “traditional” format) as described in Appendix A.
4.5 Structuring a dataset

Once your data are read by gretl, it may be necessary to supply some information on the nature of the data. We distinguish between three kinds of datasets:

1. Cross section
2. Time series
3. Panel data

The primary tool for doing this is the “Data, Dataset structure” menu entry in the graphical interface, or the setobs command for scripts and the command-line interface.

Cross sectional data

By a cross section we mean observations on a set of “units” (which may be firms, countries, individuals, or whatever) at a common point in time. This is the default interpretation for a data file: if gretl does not have sufficient information to interpret data as time-series or panel data, they are automatically interpreted as a cross section. In the unlikely event that cross-sectional data are wrongly interpreted as time series, you can correct this by selecting the “Data, Dataset structure” menu item. Click the “cross-sectional” radio button in the dialog box that appears, then click “Forward”. Click “OK” to confirm your selection.

Time series data

When you import data from a spreadsheet or plain text file, gretl will make fairly strenuous efforts to glean time-series information from the first column of the data, if it looks at all plausible that such information may be present. If time-series structure is present but not recognized, again you can use the “Data, Dataset structure” menu item. Select “Time series” and click “Forward”; select the appropriate data frequency and click “Forward” again; then select or enter the starting observation and click “Forward” once more. Finally, click “OK” to confirm the time-series interpretation if it is correct (or click “Back” to make adjustments if need be).
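In script mode the same structure can be imposed with setobs, giving the data frequency and the starting observation. A minimal sketch for quarterly data (the starting date is arbitrary):

setobs 4 1990:1 --time-series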
Besides the basic business of getting a data set interpreted as time series, further issues may arise relating to the frequency of time-series data. In a gretl time-series data set, all the series must have the same frequency. Suppose you wish to make a combined dataset using series that, in their original state, are not all of the same frequency. For example, some series are monthly and some are quarterly.

Your first step is to formulate a strategy: do you want to end up with a quarterly or a monthly data set? A basic point to note here is that “compacting” data from a higher frequency (e.g. monthly) to a lower frequency (e.g. quarterly) is usually unproblematic. You lose information in doing so, but in general it is perfectly legitimate to take (say) the average of three monthly observations to create a quarterly observation. On the other hand, “expanding” data from a lower to a higher frequency is not, in general, a valid operation.

In most cases, then, the best strategy is to start by creating a data set of the lower frequency, and then to compact the higher frequency data to match. When you import higher-frequency data from a database into the current data set, you are given a choice of compaction method (average, sum, start of period, or end of period). In most instances “average” is likely to be appropriate.

You can also import lower-frequency data into a high-frequency data set, but this is generally not recommended. What gretl does in this case is simply replicate the values of the lower-frequency series as many times as required. For example, suppose we have a quarterly series with the value 35.5 in 1990:1, the first quarter of 1990. On expansion to monthly, the value 35.5 will be assigned to the observations for January, February and March of 1990. The expanded variable is therefore useless for fine-grained time-series analysis, outside of the special case where you know that the variable in question does in fact remain constant over the sub-periods.

When the current data frequency is appropriate, gretl offers both “Compact data” and “Expand data” options under the “Data” menu. These options operate on the whole data set, compacting or expanding all series. They should be considered “expert” options and should be used with caution.

Panel data

Panel data are inherently three dimensional — the dimensions being variable, cross-sectional unit, and time-period. For example, a particular number in a panel data set might be identified as the observation on capital stock for General Motors in 1980. (A note on terminology: we use the terms “cross-sectional unit”, “unit” and “group” interchangeably below to refer to the entities that compose the cross-sectional dimension of the panel. These might, for instance, be firms, countries or persons.)

For representation in a textual computer file (and also for gretl’s internal calculations) the three dimensions must somehow be flattened into two. This “flattening” involves taking layers of the data that would naturally stack in a third dimension, and stacking them in the vertical dimension. Gretl always expects data to be arranged “by observation”, that is, such that each row represents an observation (and each variable occupies one and only one column). In this context the flattening of a panel data set can be done in either of two ways:

• Stacked time series: the successive vertical blocks each comprise a time series for a given unit.
• Stacked cross sections: the successive vertical blocks each comprise a cross-section for a given period.

You may input data in whichever arrangement is more convenient. Internally, however, gretl always stores panel data in the form of stacked time series.

When you import panel data into gretl from a spreadsheet or comma separated format, the panel nature of the data will not be recognized automatically (most likely the data will be treated as “undated”). A panel interpretation can be imposed on the data using the graphical interface or via the setobs command.

In the graphical interface, use the menu item “Data, Dataset structure”. In the first dialog box that appears, select “Panel”. In the next dialog you have a three-way choice. The first two options, “Stacked time series” and “Stacked cross sections”, are applicable if the data set is already organized in one of these two ways. If you select either of these options, the next step is to specify the number of cross-sectional units in the data set. The third option, “Use index variables”, is applicable if the data set contains two variables that index the units and the time periods respectively; the next step is then to select those variables. For example, a data file might contain a country code variable and a variable representing the year of the observation. In that case gretl can reconstruct the panel structure of the data regardless of how the observation rows are organized.

The setobs command has options that parallel those in the graphical interface. If suitable index variables are available you can do, for example

setobs unitvar timevar --panel-vars

where unitvar is a variable that indexes the units and timevar is a variable indexing the periods. Alternatively you can use the form setobs freq 1:1 structure, where freq is replaced by the “block size” of the data (that is, the number of periods in the case of stacked time series, or the number of units in the case of stacked cross-sections) and structure is either --stacked-time-series or --stacked-cross-section.
Two examples are given below: the first is suitable for a panel in the form of stacked time series with observations from 20 periods; the second for stacked cross sections with 5 units.

setobs 20 1:1 --stacked-time-series
setobs 5 1:1 --stacked-cross-section

Panel data arranged by variable

Publicly available panel data sometimes come arranged “by variable.” Suppose we have data on two variables, x1 and x2, for each of 50 states in each of 5 years (giving a total of 250 observations per variable). One textual representation of such a data set would start with a block for x1, with 50 rows corresponding to the states and 5 columns corresponding to the years. This would be followed, vertically, by a block with the same structure for variable x2. A fragment of such a data file is shown below, with quinquennial observations 1965–1985. Imagine the table continued for 48 more states, followed by another 50 rows for variable x2.

x1
      1965   1970   1975   1980   1985
AR   100.0  110.5  118.7  131.2  160.4
AZ   100.0  104.3  113.8  120.9  140.6

If a datafile with this sort of structure is read into gretl, the program will interpret the columns as distinct variables, so the data will not be usable “as is.” (Note that you will have to modify such a datafile slightly before it can be read at all. The line containing the variable name — in this example x1 — will have to be removed, and so will the initial row containing the years, otherwise they will be taken as numerical data.) But there is a mechanism for correcting the situation, namely the stack function within the genr command.

Consider the first data column in the fragment above: the first 50 rows of this column constitute a cross-section for the variable x1 in the year 1965. If we could create a new variable by stacking the first 50 entries in the second column underneath the first 50 entries in the first, we would be on the way to making a data set “by observation” (in the first of the two forms mentioned above, stacked cross-sections). That is, we’d have a column comprising a cross-section for x1 in 1965, followed by a cross-section for the same variable in 1970.

The following gretl script illustrates how we can accomplish the stacking, for both x1 and x2. We assume that the original data file is called panel.txt, and that in this file the columns are headed with “variable names” p1, p2, . . . , p5. (The columns are not really variables, but in the first instance we “pretend” that they are.)

open panel.txt
genr x1 = stack(p1..p5) --length=50
genr x2 = stack(p1..p5) --offset=50 --length=50
setobs 50 1:1 --stacked-cross-section
store panel.gdt x1 x2

The second line illustrates the syntax of the stack function. The double dots within the parentheses indicate a range of variables to be stacked: here we want to stack all 5 columns (for all 5 years). The full data set contains 100 rows; in the stacking of variable x1 we wish to read only the first 50 rows from each column: we achieve this by adding --length=50. Note that if you want to stack a non-contiguous set of columns you can put a comma-separated list within the parentheses, as in

genr x = stack(p1,p3,p5)

On line 3 we do the stacking for variable x2. Again we want a length of 50 for the components of the stacked series, but this time we want gretl to start reading from the 50th row of the original data, and we specify --offset=50. Line 4 imposes a panel interpretation on the data; finally, we save the data in gretl format, with the panel interpretation, discarding the original “variables” p1 through p5.
The illustrative script above is appropriate when the number of variables to be processed is small. When there are many variables in the data set it will be more efficient to use a command loop to accomplish the stacking, as shown in the following script. The setup is presumed to be the same as in the previous section (50 units, 5 periods), but with 20 variables rather than 2.

open panel.txt
loop for i=1..20
  genr k = ($i - 1) * 50
  genr x$i = stack(p1..p5) --offset=k --length=50
endloop
setobs 50 1:1 --stacked-cross-section
store panel.gdt x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 \
  x11 x12 x13 x14 x15 x16 x17 x18 x19 x20

Panel data marker strings

It can be helpful with panel data to have the observations identified by mnemonic markers. A special function in the genr command is available for this purpose.

In the example above, suppose all the states are identified by two-letter codes in the left-most column of the original datafile. When the stacking operation is performed, these codes will be stacked along with the data values. If the first row is marked AR for Arkansas, then the marker AR will end up being shown on each row containing an observation for Arkansas. That’s all very well, but these markers don’t tell us anything about the date of the observation. To rectify this we could do:

genr time
genr year = 1960 + (5 * time)
genr markers = "%s:%d", marker, year

The first line generates a 1-based index representing the period of each observation, and the second line uses the time variable to generate a variable representing the year of the observation. The third line contains this special feature: if (and only if) the name of the new “variable” to generate is markers, the portion of the command following the equals sign is taken as a C-style format string (which must be wrapped in double quotes), followed by a comma-separated list of arguments. The arguments will be printed according to the given format to create a new set of observation markers. Valid arguments are either the names of variables in the dataset, or the string marker which denotes the pre-existing observation marker. The format specifiers which are likely to be useful in this context are %s for a string and %d for an integer. Strings can be truncated: for example %.3s will use just the first three characters of the string. To chop initial characters off an existing observation marker when constructing a new one, you can use the syntax marker + n, where n is a positive integer: in this case the first n characters will be skipped.

After the commands above are processed, then, the observation markers will look like, for example, AR:1965, where the two-letter state code and the year of the observation are spliced together with a colon.

4.6 Missing data values

These are represented internally as DBL_MAX, the largest floating-point number that can be represented on the system (which is likely to be at least 10 to the power 300, and so should not be confused with legitimate data values). In a native-format data file they should be represented as NA. When importing CSV data gretl accepts several common representations of missing values including −999, the string NA (in upper or lower case), a single dot, or simply a blank cell. Blank cells should, of course, be properly delimited, e.g. 120.6,,5.38, in which the middle value is presumed missing.
As for handling of missing values in the course of statistical analysis, gretl does the following:

• In calculating descriptive statistics (mean, standard deviation, etc.) under the summary command, missing values are simply skipped and the sample size adjusted appropriately.

• In running regressions gretl first adjusts the beginning and end of the sample range, truncating the sample if need be. Missing values at the beginning of the sample are common in time series work due to the inclusion of lags, first differences and so on; missing values at the end of the range are not uncommon due to differential updating of series and possibly the inclusion of leads.

If gretl detects any missing values “inside” the (possibly truncated) sample range for a regression, the result depends on the character of the dataset and the estimator chosen. In many cases, the program will automatically skip the missing observations when calculating the regression results. In this situation a message is printed stating how many observations were dropped. On the other hand, the skipping of missing observations is not supported for all procedures: exceptions include all autoregressive estimators, system estimators such as SUR, and nonlinear least squares. In the case of panel data, the skipping of missing observations is supported only if their omission leaves a balanced panel. If missing observations are found in cases where they are not supported, gretl gives an error message and refuses to produce estimates.

In case missing values in the middle of a dataset present a problem, the misszero function (use with care!) is provided under the genr command. By doing

genr foo = misszero(bar)

you can produce a series foo which is identical to bar except that any missing values become zeros. Then you can use carefully constructed dummy variables to, in effect, drop the missing observations from the regression while retaining the surrounding sample range. (genr also offers the inverse function to misszero, namely zeromiss, which replaces zeros in a given series with the missing observation code.)
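As an illustration of the dummy-variable device just mentioned — a sketch only, with hypothetical series names, using the missing() function described in section 5.7 — the dummy absorbs the zeroed-out observations so that they do not contaminate the other coefficient estimates:

# bar contains missing values inside the sample range
genr foo = misszero(bar)
genr dmiss = missing(bar)   # 1 where bar is missing, 0 otherwise
ols y 0 foo dmiss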
4.7 Maximum size of data sets

Basically, the size of data sets (both the number of variables and the number of observations per variable) is limited only by the characteristics of your computer. Gretl allocates memory dynamically, and will ask the operating system for as much memory as your data require. Obviously, then, you are ultimately limited by the size of RAM.

Aside from the multiple-precision OLS option, gretl uses double-precision floating-point numbers throughout. The size of such numbers in bytes depends on the computer platform, but is typically eight. To give a rough notion of magnitudes, suppose we have a data set with 10,000 observations on 500 variables. That’s 5 million floating-point numbers or 40 million bytes. If we define the megabyte (MB) as 1024 × 1024 bytes, as is standard in talking about RAM, it’s slightly over 38 MB. The program needs additional memory for workspace, but even so, handling a data set of this size should be quite feasible on a current PC, which at the time of writing is likely to have at least 256 MB of RAM.

If RAM is not an issue, there is one further limitation on data size (though it’s very unlikely to be a binding constraint). That is, variables and observations are indexed by signed integers, and on a typical PC these will be 32-bit values, capable of representing a maximum positive value of 2^31 − 1 = 2,147,483,647.

The limits mentioned above apply to gretl’s “native” functionality. There are tighter limits with regard to two third-party programs that are available as add-ons to gretl for certain sorts of time-series analysis including seasonal adjustment, namely TRAMO/SEATS and X-12-ARIMA. These programs employ a fixed-size memory allocation, and can’t handle series of more than 600 observations.

4.8 Data file collections

If you’re using gretl in a teaching context you may be interested in adding a collection of data files and/or scripts that relate specifically to your course, in such a way that students can browse and access them easily.

There are two ways to access such collections of files:

• For data files: select the menu item “File, Open data, Sample file”, or click on the folder icon on the gretl toolbar.
• For script files: select the menu item “File, Script files, Practice file”.

When a user selects one of the items:

• The data or script files included in the gretl distribution are automatically shown (this includes files relating to Ramanathan’s Introductory Econometrics and Greene’s Econometric Analysis).

• The program looks for certain known collections of data files available as optional extras, for instance the datafiles from various econometrics textbooks (Davidson and MacKinnon, Gujarati, Stock and Watson, Verbeek, Wooldridge) and the Penn World Table (PWT 5.6). (See the data page at the gretl website for information on these collections.) If the additional files are found, they are added to the selection windows.

• The program then searches for valid file collections (not necessarily known in advance) in these places: the “system” data directory, the system script directory, the user directory, and all first-level subdirectories of these. For reference, typical values for these directories are shown in Table 4.1. (Note that PERSONAL is a placeholder that is expanded by Windows, corresponding to “My Documents” on English-language systems.)

Table 4.1: Typical locations for file collections

                     Linux                      MS Windows
system data dir      /usr/share/gretl/data      c:\Program Files\gretl\data
system script dir    /usr/share/gretl/scripts   c:\Program Files\gretl\scripts
user dir             $HOME/gretl                PERSONAL\gretl

Any valid collections will be added to the selection windows. So what constitutes a valid file collection? This comprises either a set of data files in gretl XML format (with the .gdt suffix) or a set of script files containing gretl commands (with .inp suffix), in each case accompanied by a “master file” or catalog. The gretl distribution contains several example catalog files, for instance the file descriptions in the misc sub-directory of the gretl data directory and ps_descriptions in the misc sub-directory of the scripts directory.

If you are adding your own collection, data catalogs should be named descriptions and script catalogs should be named ps_descriptions. In each case the catalog should be placed (along with the associated data or script files) in its own specific sub-directory (e.g. /usr/share/gretl/data/mydata or c:\userdata\gretl\data\mydata).

The syntax of the (plain text) description files is straightforward. Here, for example, are the first few lines of gretl’s “misc” data catalog:

# Gretl: various illustrative datafiles
"arma","artificial data for ARMA script example"
"ects_nls","Nonlinear least squares example"
"hamilton","Prices and exchange rate, U.S. and Italy"

The first line, which must start with a hash mark, contains a short name, here “Gretl”, which will appear as the label for this collection’s tab in the data browser window, followed by a colon, followed by an optional short description of the collection.
Subsequent lines contain two elements, separated by a comma and wrapped in double quotation marks. The first is a datafile name (leave off the .gdt suffix here) and the second is a short description of the content of that datafile. There should be one such line for each datafile in the collection.

A script catalog file looks very similar, except that there are three fields in the file lines: a filename (without its .inp suffix), a brief description of the econometric point illustrated in the script, and a brief indication of the nature of the data used. Again, here are the first few lines of the supplied “misc” script catalog:

# Gretl: various sample scripts
"arma","ARMA modeling","artificial data"
"ects_nls","Nonlinear least squares (Davidson)","artificial data"
"leverage","Influential observations","artificial data"
"longley","Multicollinearity","US employment"

If you want to make your own data collection available to users, these are the steps:

1. Assemble the data, in whatever format is convenient.

2. Convert the data to gretl format and save as gdt files. It is probably easiest to convert the data by importing them into the program from plain text, CSV, or a spreadsheet format (MS Excel or Gnumeric) then saving them. You may wish to add descriptions of the individual variables (the “Variable, Edit attributes” menu item), and add information on the source of the data (the “Data, Edit info” menu item).

3. Write a descriptions file for the collection using a text editor.

4. Put the datafiles plus the descriptions file in a subdirectory of the gretl data directory (or user directory).

5. If the collection is to be distributed to other people, package the data files and catalog in some suitable manner, e.g. as a zipfile.

If you assemble such a collection, and the data are not proprietary, we would encourage you to submit the collection for packaging as a gretl optional extra.

Chapter 5 Special functions in genr

5.1 Introduction

The genr command provides a flexible means of defining new variables. It is documented in the Gretl Command Reference. This chapter offers a more expansive discussion of some of the special functions available via genr and some of the finer points of the command.

5.2 Long-run variance

As is well known, the variance of the average of T random variables x_1, x_2, . . . , x_T with equal variance σ² equals σ²/T if the data are uncorrelated. In this case, the sample variance of x_t divided by the sample size provides a consistent estimator.

If, however, there is serial correlation among the x_t's, the variance of \bar{X} = T^{-1} \sum_{t=1}^{T} x_t must be estimated differently. One of the most widely used statistics for this purpose is a nonparametric kernel estimator with the Bartlett kernel, defined as

\hat{\omega}^2(k) = T^{-1} \sum_{t=k}^{T-k} \left[ \sum_{i=-k}^{k} w_i (x_t - \bar{X})(x_{t-i} - \bar{X}) \right]   (5.1)

where the integer k is known as the window size and the w_i terms are the so-called Bartlett weights, defined as w_i = 1 − |i|/(k+1). It can be shown that, for k large enough, \hat{\omega}^2(k)/T yields a consistent estimator of the variance of \bar{X}.

Gretl implements this estimator by means of the function lrvar(), which takes two arguments: the series whose long-run variance must be estimated and the scalar k. If k is negative, the popular choice T^{1/3} is used.
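For example (a minimal sketch; the series name x is hypothetical):

# long-run variance of x with window size 5
scalar lv = lrvar(x, 5)
# let gretl pick k = T^(1/3)
scalar lv_auto = lrvar(x, -1)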
5.3 Time-series filters

One sort of specialized function in genr is time-series filtering. In addition to the usual application of lags and differences, gretl provides fractional differencing and two filters commonly used in macroeconomics for trend-cycle decomposition: the Hodrick–Prescott filter (Hodrick and Prescott, 1997) and the Baxter–King bandpass filter (Baxter and King, 1999).

Fractional differencing

The concept of differencing a time series d times is pretty obvious when d is an integer; it may seem odd when d is fractional. However, this idea has a well-defined mathematical content: consider the function f(z) = (1 − z)^{−d}, where z and d are real numbers. By taking a Taylor series expansion around z = 0, we see that

f(z) = 1 + dz + \frac{d(d+1)}{2} z^2 + \cdots

or, more compactly,

f(z) = 1 + \sum_{i=1}^{\infty} \psi_i z^i

with

\psi_k = \frac{\prod_{i=1}^{k} (d+i-1)}{k!} = \psi_{k-1} \, \frac{d+k-1}{k}

The same expansion can be used with the lag operator, so that if we defined

Y_t = (1 − L)^{0.5} X_t

this could be considered shorthand for

Y_t = X_t − 0.5 X_{t−1} − 0.125 X_{t−2} − 0.0625 X_{t−3} − \cdots

In gretl this transformation can be accomplished by the syntax

genr Y = fracdiff(X,0.5)

The Hodrick–Prescott filter

This filter is accessed using the hpfilt() function, which takes one argument, the name of the variable to be processed.

A time series y_t may be decomposed into a trend or growth component g_t and a cyclical component c_t:

y_t = g_t + c_t,  t = 1, 2, . . . , T

The Hodrick–Prescott filter effects such a decomposition by minimizing the following:

\sum_{t=1}^{T} (y_t − g_t)^2 + \lambda \sum_{t=2}^{T-1} \left[ (g_{t+1} − g_t) − (g_t − g_{t−1}) \right]^2

The first term above is the sum of squared cyclical components c_t = y_t − g_t. The second term is a multiple λ of the sum of squares of the trend component’s second differences. This second term penalizes variations in the growth rate of the trend component: the larger the value of λ, the higher is the penalty and hence the smoother the trend series.

Note that the hpfilt function in gretl produces the cyclical component, c_t, of the original series. If you want the smoothed trend you can subtract the cycle from the original:

genr ct = hpfilt(yt)
genr gt = yt - ct

Hodrick and Prescott (1997) suggest that a value of λ = 1600 is reasonable for quarterly data. The default value in gretl is 100 times the square of the data frequency (which, of course, yields 1600 for quarterly data). The value can be adjusted using the set command, with a parameter of hp_lambda. For example, set hp_lambda 1200.

The Baxter and King filter

This filter is accessed using the bkfilt() function, which again takes the name of the variable to be processed as its single argument.

Consider the spectral representation of a time series y_t:

y_t = \int_{-\pi}^{\pi} e^{i\omega t} \, dZ(\omega)

To extract the component of y_t that lies between the frequencies \underline{\omega} and \overline{\omega} one could apply a bandpass filter:

c_t^* = \int_{-\pi}^{\pi} F^*(\omega) e^{i\omega t} \, dZ(\omega)

where F^*(\omega) = 1 for \underline{\omega} < |\omega| < \overline{\omega} and 0 elsewhere. This would imply, in the time domain, applying to the series a filter with an infinite number of coefficients, which is undesirable. The Baxter and King bandpass filter applies to y_t a finite polynomial in the lag operator A(L):

c_t = A(L) y_t

where A(L) is defined as

A(L) = \sum_{i=-k}^{k} a_i L^i

The coefficients a_i are chosen such that F(\omega) = A(e^{i\omega}) A(e^{-i\omega}) is the best approximation to F^*(\omega) for a given k. Clearly, the higher k the better the approximation is, but since 2k observations have to be discarded, a compromise is usually sought.
Moreover, the filter also has other appealing theoretical properties, among which the property that A(1) = 0, so a series with a single unit root is made stationary by application of the filter.

In practice, the filter is normally used with monthly or quarterly data to extract the “business cycle” component, namely the component between 6 and 36 quarters. Usual choices for k are 8 or 12 (maybe higher for monthly series). The default values for the frequency bounds are 8 and 32, and the default value for the approximation order, k, is 8. You can adjust these values using the set command. The keyword for setting the frequency limits is bkbp_limits and the keyword for k is bkbp_k. Thus for example if you were using monthly data and wanted to adjust the frequency bounds to 18 and 96, and k to 24, you could do

set bkbp_limits 18 96
set bkbp_k 24

These values would then remain in force for calls to the bkfilt function until changed by a further use of set.

5.4 Panel data specifics

Dummy variables

In a panel study you may wish to construct dummy variables of one or both of the following sorts: (a) dummies as unique identifiers for the units or groups, and (b) dummies as unique identifiers for the time periods. The former may be used to allow the intercept of the regression to differ across the units, the latter to allow the intercept to differ across periods.

Two special functions are available to create such dummies. These are found under the “Add” menu in the GUI, or under the genr command in script mode or gretlcli.

1. “unit dummies” (script command genr unitdum). This command creates a set of dummy variables identifying the cross-sectional units. The variable du_1 will have value 1 in each row corresponding to a unit 1 observation, 0 otherwise; du_2 will have value 1 in each row corresponding to a unit 2 observation, 0 otherwise; and so on.

2. “time dummies” (script command genr timedum). This command creates a set of dummy variables identifying the periods. The variable dt_1 will have value 1 in each row corresponding to a period 1 observation, 0 otherwise; dt_2 will have value 1 in each row corresponding to a period 2 observation, 0 otherwise; and so on.

If a panel data set has the YEAR of the observation entered as one of the variables you can create a periodic dummy to pick out a particular year, e.g. genr dum = (YEAR=1960). You can also create periodic dummy variables using the modulus operator, %. For instance, to create a dummy with value 1 for the first observation and every thirtieth observation thereafter, 0 otherwise, do

genr index
genr dum = ((index-1) % 30) = 0
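To illustrate a use of the unit dummies — a sketch only, with a hypothetical dependent variable, regressor and number of units — a fixed-effects style regression can be run by ordinary least squares, omitting one dummy to avoid collinearity with the constant:

genr unitdum
# suppose the panel has 4 units, so du_1 .. du_4 are created;
# drop du_1 and let the constant serve as unit 1's intercept
ols y 0 x du_2 du_3 du_4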
Lags, differences, trends

If the time periods are evenly spaced you may want to use lagged values of variables in a panel regression (but see section 15.2 below); you may also wish to construct first differences of variables of interest.

Once a dataset is identified as a panel, gretl will handle the generation of such variables correctly. For example the command genr x1_1 = x1(-1) will create a variable that contains the first lag of x1 where available, and the missing value code where the lag is not available (e.g. at the start of the time series for each group). When you run a regression using such variables, the program will automatically skip the missing observations.

When a panel data set has a fairly substantial time dimension, you may wish to include a trend in the analysis. The command genr time creates a variable named time which runs from 1 to T for each unit, where T is the length of the time-series dimension of the panel. If you want to create an index that runs consecutively from 1 to m × T, where m is the number of units in the panel, use genr index.

Basic statistics by unit

Gretl contains functions which can be used to generate basic descriptive statistics for a given variable, on a per-unit basis; these are pnobs() (number of valid cases), pmin() and pmax() (minimum and maximum) and pmean() and psd() (mean and standard deviation).

As a brief illustration, suppose we have a panel data set comprising 8 time-series observations on each of N units or groups. Then the command

genr pmx = pmean(x)

creates a series of this form: the first 8 values (corresponding to unit 1) contain the mean of x for unit 1, the next 8 values contain the mean for unit 2, and so on. The psd() function works in a similar manner. The sample standard deviation for group i is computed as

s_i = \sqrt{ \frac{ \sum (x - \bar{x}_i)^2 }{ T_i - 1 } }

where T_i denotes the number of valid observations on x for the given unit, \bar{x}_i denotes the group mean, and the summation is across valid observations for the group. If T_i < 2, however, the standard deviation is recorded as 0.

One particular use of psd() may be worth noting. If you want to form a sub-sample of a panel that contains only those units for which the variable x is time-varying, you can either use

smpl (pmin(x) < pmax(x)) --restrict

or

smpl (psd(x) > 0) --restrict

Special functions for data manipulation

Besides the functions discussed above, there are some facilities in genr designed specifically for manipulating panel data — in particular, for the case where the data have been read into the program from a third-party source and they are not in the correct form for panel analysis. These facilities are explained in Chapter 4.

5.5 Resampling and bootstrapping

Another specialized function is the resampling, with replacement, of a series. Given an original data series x, the command

genr xr = resample(x)

creates a new series each of whose elements is drawn at random from the elements of x. If the original series has 100 observations, each element of x is selected with probability 1/100 at each drawing. Thus the effect is to “shuffle” the elements of x, with the twist that each element of x may appear more than once, or not at all, in xr.

The primary use of this function is in the construction of bootstrap confidence intervals or p-values. Here is a simple example. Suppose we estimate a simple regression of y on x via OLS and find that the slope coefficient has a reported t-ratio of 2.5 with 40 degrees of freedom. The two-tailed p-value for the null hypothesis that the slope parameter equals zero is then 0.0166, using the t(40) distribution. Depending on the context, however, we may doubt whether the ratio of coefficient to standard error truly follows the t(40) distribution. In that case we could derive a bootstrap p-value as shown in Example 5.1.

Under the null hypothesis that the slope with respect to x is zero, y is simply equal to its mean plus an error term. We simulate y by resampling the residuals from the initial OLS and re-estimate the model. We repeat this procedure a large number of times, and record the number of cases where the absolute value of the t-ratio is greater than 2.5: the proportion of such cases is our bootstrap p-value.
For a good discussion of simulation-based tests and bootstrapping, see Davidson and MacKinnon (2004, chapter 4).

Example 5.1: Calculation of bootstrap p-value

ols y 0 x
# save the residuals
genr ui = $uhat
scalar ybar = mean(y)
# number of replications for bootstrap
scalar replics = 10000
scalar tcount = 0
series ysim = 0
loop replics --quiet
  # generate simulated y by resampling
  ysim = ybar + resample(ui)
  ols ysim 0 x
  scalar tsim = abs($coeff(x) / $stderr(x))
  tcount += (tsim > 2.5)
endloop
printf "proportion of cases with |t| > 2.5 = %g\n", tcount / replics

5.6 Cumulative densities and p-values

The two functions cdf and pvalue provide complementary means of examining values from several probability distributions: the standard normal, Student’s t, χ², F, gamma, and binomial. The syntax of these functions is set out in the Gretl Command Reference; here we expand on some subtleties.

The cumulative density function or CDF for a random variable is the integral of the variable’s density from its lower limit (typically either −∞ or 0) to any specified value x. The p-value (at least the one-tailed, right-hand p-value as returned by the pvalue function) is the complementary probability, the integral from x to the upper limit of the distribution, typically +∞.

In principle, therefore, there is no need for two distinct functions: given a CDF value p0 you could easily find the corresponding p-value as 1 − p0 (or vice versa). In practice, with finite-precision computer arithmetic, the two functions are not redundant. This requires a little explanation. In gretl, as in most statistical programs, floating point numbers are represented as “doubles” — double-precision values that typically have a storage size of eight bytes or 64 bits. Since there are only so many bits available, only so many floating-point numbers can be represented: doubles do not model the real line. Typically doubles can represent numbers over the range (roughly) ±1.7977 × 10^308, but only to about 15 digits of precision.

Suppose you’re interested in the left tail of the χ² distribution with 50 degrees of freedom: you’d like to know the CDF value for x = 0.9. Take a look at the following interactive session:

? genr p1 = cdf(X, 50, 0.9)
Generated scalar p1 (ID 2) = 8.94977e-35
? genr p2 = pvalue(X, 50, 0.9)
Generated scalar p2 (ID 3) = 1
? genr test = 1 - p2
Generated scalar test (ID 4) = 0

The cdf function has produced an accurate value, but the pvalue function gives an answer of 1, from which it is not possible to retrieve the answer to the CDF question. This may seem surprising at first, but consider: if the value of p1 above is correct, then the correct value for p2 is 1 − 8.94977 × 10^−35. But there’s no way that value can be represented as a double: that would require over 30 digits of precision.

Of course this is an extreme example. If the x in question is not too far off into one or other tail of the distribution, the cdf and pvalue functions will in fact produce complementary answers, as shown below:

? genr p1 = cdf(X, 50, 30)
Generated scalar p1 (ID 2) = 0.0111648
? genr p2 = pvalue(X, 50, 30)
Generated scalar p2 (ID 3) = 0.988835
? genr test = 1 - p2
Generated scalar test (ID 4) = 0.0111648

But the moral is that if you want to examine extreme values you should be careful in selecting the function you need, in the knowledge that values very close to zero can be represented as doubles while values very close to 1 cannot.
5.7 Handling missing values

Four special functions are available for the handling of missing values. The boolean function missing() takes the name of a variable as its single argument; it returns a series with value 1 for each observation at which the given variable has a missing value, and value 0 otherwise (that is, if the given variable has a valid value at that observation). The function ok() is complementary to missing; it is just a shorthand for !missing (where ! is the boolean NOT operator). For example, one can count the missing values for variable x using

genr nmiss_x = sum(missing(x))

The function zeromiss(), which again takes a single series as its argument, returns a series where all zero values are set to the missing code. This should be used with caution — one does not want to confuse missing values and zeros — but it can be useful in some contexts. For example, one can determine the first valid observation for a variable x using

genr time
genr x0 = min(zeromiss(time * ok(x)))

The function misszero() does the opposite of zeromiss, that is, it converts all missing values to zero.

It may be worth commenting on the propagation of missing values within genr formulae. The general rule is that in arithmetical operations involving two variables, if either of the variables has a missing value at observation t then the resulting series will also have a missing value at t. The one exception to this rule is multiplication by zero: zero times a missing value produces zero (since this is mathematically valid regardless of the unknown value).

5.8 Retrieving internal variables

The genr command provides a means of retrieving various values calculated by the program in the course of estimating models or testing hypotheses. The variables that can be retrieved in this way are listed in the Gretl Command Reference; here we say a bit more about the special variables $test and $pvalue.

These variables hold, respectively, the value of the last test statistic calculated using an explicit testing command and the p-value for that test statistic. If no such test has been performed at the time when these variables are referenced, they will produce the missing value code. The “explicit testing commands” that work in this way are as follows: add (joint test for the significance of variables added to a model); adf (Augmented Dickey–Fuller test, see below); arch (test for ARCH); chow (Chow test for a structural break); coeffsum (test for the sum of specified coefficients); cusum (the Harvey–Collier t-statistic); kpss (KPSS stationarity test, no p-value available); lmtest (see below); meantest (test for difference of means); omit (joint test for the significance of variables omitted from a model); reset (Ramsey’s RESET); restrict (general linear restriction); runs (runs test for randomness); testuhat (test for normality of residual); and vartest (test for difference of variances). In most cases both a $test and a $pvalue are stored; the exception is the KPSS test, for which a p-value is not currently available.

An important point to notice about this mechanism is that the internal variables $test and $pvalue are over-written each time one of the tests listed above is performed. If you want to reference these values, you must do so at the correct point in the sequence of gretl commands. A related point is that some of the test commands generate, by default, more than one test statistic and p-value; in these cases only the last values are stored.
To get proper control over the retrieval of values via $test and $pvalue you should formulate the test command in such a way that the result is unambiguous. This comment applies in particular to the adf and lmtest commands.

• By default, the adf command generates three variants of the Dickey–Fuller test: one based on a regression including a constant, one using a constant and linear trend, and one using a constant and a quadratic trend. When you wish to reference $test or $pvalue in connection with this command, you can control the variant that is recorded by using one of the flags --nc, --c, --ct or --ctt with adf.

• By default, the lmtest command (which must follow an OLS regression) performs several diagnostic tests on the regression in question. To control what is recorded in $test and $pvalue you should limit the test using one of the flags --logs, --autocorr, --squares or --white.

As an aid in working with values retrieved using $test and $pvalue, the nature of the test to which these values relate is written into the descriptive label for the generated variable. You can read the label for the variable using the label command (with just one argument, the name of the variable), to check that you have retrieved the right value. The following interactive session illustrates this point.

? adf 4 x1 --c
Augmented Dickey-Fuller tests, order 4, for x1
sample size 59
unit-root null hypothesis: a = 1
test with constant
model: (1 - L)y = b0 + (a-1)*y(-1) + ... + e
estimated value of (a - 1): -0.216889
test statistic: t = -1.83491
asymptotic p-value 0.3638
P-values based on MacKinnon (JAE, 1996)
? genr pv = $pvalue
Generated scalar pv (ID 13) = 0.363844
? label pv
pv=Dickey-Fuller pvalue (scalar)

5.9 Numerical procedures

Two special functions are available to aid in the construction of special-purpose estimators, namely BFGSmax (the BFGS maximizer, discussed in Chapter 17) and fdjac, which produces a forward-difference approximation to the Jacobian.

The BFGS maximizer

The BFGSmax function has two required arguments: a vector holding the initial values of a set of parameters, and a call to a function that calculates the (scalar) criterion to be maximized, given the current parameter values and any other relevant data. If the object is in fact minimization, this function should return the negative of the criterion. On successful completion, BFGSmax returns the maximized value of the criterion, and the matrix given via the first argument holds the parameter values which produce the maximum. Here is an example:

matrix X = { dataset }
matrix theta = { 1, 100 }'
scalar J = BFGSmax(theta, ObjFunc(&theta, &X))

It is assumed here that ObjFunc is a user-defined function (see Chapter 10) with the following general set-up:

function scalar ObjFunc (matrix *theta, matrix *X)
  scalar val = ... # do some computation
  return val
end function

The operation of the BFGS maximizer can be adjusted using the set variables bfgs_maxiter and bfgs_toler (see Chapter 17). In addition you can provoke verbose output from the maximizer by assigning a positive value to max_verbose, again via the set command.

The Rosenbrock function is often used as a test problem for optimization algorithms. It is also known as “Rosenbrock’s Valley” or “Rosenbrock’s Banana Function”, on account of the fact that its contour lines are banana-shaped. It is defined by:

f(x, y) = (1 − x)^2 + 100(y − x^2)^2
The function has a global minimum at (x, y) = (1, 1), where f(x, y) = 0. Example 5.2 shows a gretl script that discovers the minimum using BFGSmax (giving a verbose account of progress).

Example 5.2: Finding the minimum of the Rosenbrock function

function scalar Rosenbrock(matrix *param)
  scalar x = param[1]
  scalar y = param[2]
  return -(1-x)^2 - 100 * (y - x^2)^2
end function

matrix theta = { 0, 0 }
set max_verbose 1
M = BFGSmax(theta, Rosenbrock(&theta))
print theta

Supplying analytical derivatives for BFGS

An optional third argument to the BFGSmax function enables the user to supply analytical derivatives of the criterion function with respect to the parameters (without which a numerical approximation to the gradient is computed). This argument is similar to the second one in that it specifies a function call. In this case the function that is called must have the following signature.

Its first argument should be a pre-defined matrix correctly dimensioned to hold the gradient; that is, if the parameter vector contains k elements, the gradient matrix must also be a k-vector. This matrix argument must be given in “pointer” form so that its content can be modified by the function. (Note that unlike the parameter vector, where the choice of initial values can be important, the initial values given to the gradient are immaterial and do not affect the results.)

In addition the gradient function must have the parameter vector as one of its arguments. This may be given in pointer form (which enhances efficiency) but that is not required. Additional arguments may be specified if necessary. Given the current parameter values, the function call must fill out the gradient vector appropriately. It is not required that the gradient function return any value directly; if it does, that value is ignored.

Example 5.3 illustrates, showing how the Rosenbrock script can be modified to use analytical derivatives. (Note that since this is a minimization problem the values written into g[1] and g[2] in the function Rosen_grad are in fact the derivatives of the negative of the Rosenbrock function.)

Example 5.3: Rosenbrock function with analytical gradient

function scalar Rosenbrock (matrix *param)
  scalar x = param[1]
  scalar y = param[2]
  return -(1-x)^2 - 100 * (y - x^2)^2
end function

function void Rosen_grad (matrix *g, matrix *param)
  scalar x = param[1]
  scalar y = param[2]
  g[1] = 2*(1-x) + 2*x*(200*(y-x^2))
  g[2] = -200*(y - x^2)
end function

matrix theta = { 0, 0 }
matrix grad = { 0, 0 }
set max_verbose 1
M = BFGSmax(theta, Rosenbrock(&theta), Rosen_grad(&grad, &theta))
print theta
print grad

Computing a Jacobian

Gretl offers the possibility of differentiating numerically a user-defined function via the fdjac function. This function again takes two arguments: an n × 1 matrix holding initial parameter values and a function call that calculates and returns an m × 1 matrix, given the current parameter values and any other relevant data. On successful completion it returns an m × n matrix holding the Jacobian. For example,

matrix Jac = fdjac(theta, SumOC(&theta, &X))

where we assume that SumOC is a user-defined function with the following structure:

function matrix SumOC (matrix *theta, matrix *X)
  matrix V = ...
This may come in handy in several cases: for example, if you use BFGSmax to estimate a model, you may wish to calculate a numerical approximation to the relevant Jacobian to construct a covariance matrix for your estimates.

Another example is the delta method: if you have a consistent estimator of a vector of parameters \hat{\theta}, and a consistent estimate of its covariance matrix \Sigma, you may need to compute estimates for a nonlinear continuous transformation \psi = g(\theta). In this case, a standard result in asymptotic theory is that

  \hat{\theta} \stackrel{p}{\longrightarrow} \theta
    \;\Longrightarrow\;
  \hat{\psi} = g(\hat{\theta}) \stackrel{p}{\longrightarrow} \psi = g(\theta)

and

  \sqrt{T}\,(\hat{\theta} - \theta) \stackrel{d}{\longrightarrow} N(0, \Sigma)
    \;\Longrightarrow\;
  \sqrt{T}\,(\hat{\psi} - \psi) \stackrel{d}{\longrightarrow} N(0, J \Sigma J')

where T is the sample size and J is the Jacobian \partial g(x)/\partial x evaluated at x = \theta.

Script 5.4 exemplifies such a case: the example is taken from Greene (2003), section 9.3.1. The slight differences between the results reported in the original source and what gretl returns are due to the fact that the Jacobian is computed numerically, rather than analytically as in the book.

5.10 The discrete Fourier transform

The discrete Fourier transform can best be thought of as a linear, invertible transform of a complex vector. Hence, if x is an n-dimensional vector whose k-th element is x_k = a_k + i b_k, then the output of the discrete Fourier transform is a vector f = F(x) whose k-th element is

  f_k = \sum_{j=0}^{n-1} e^{-i\omega(j,k)} x_j, \qquad \omega(j,k) = 2\pi \frac{jk}{n}

Since the transformation is invertible, the vector x can be recovered from f via the so-called inverse transform

  x_k = \frac{1}{n} \sum_{j=0}^{n-1} e^{i\omega(j,k)} f_j

The Fourier transform is used in many diverse situations on account of this key property: the convolution of two vectors can be performed efficiently by multiplying the elements of their Fourier transforms and inverting the result. If

  z_k = \sum_{j=1}^{n} x_j y_{k-j}

then F(z) = F(x) ⊙ F(y), that is, F(z)_k = F(x)_k F(y)_k.

For computing the Fourier transform, gretl uses the external library fftw3: see Frigo and Johnson (2003). This guarantees extreme speed and accuracy. In fact, the CPU time needed to perform the transform is O(n log n) for any n. This is why the array of numerical techniques employed in fftw3 is commonly known as the Fast Fourier Transform.

Gretl provides two matrix functions for performing the Fourier transform and its inverse: fft and ffti (see chapter 12). In fact, gretl's implementation of the Fourier transform is somewhat more specialized: the input to the fft function is understood to be real. Conversely, ffti takes a complex argument and delivers a real result. For example:

  x1 = { 1 ; 2 ; 3 }
  # perform the transform
  f = fft(x1)
  # perform the inverse transform
  x2 = ffti(f)

yields

  x1 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \qquad
  f = \begin{pmatrix} 6 & 0 \\ -1.5 & 0.866 \\ -1.5 & -0.866 \end{pmatrix} \qquad
  x2 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}

where the first column of f holds the real part and the second holds the imaginary part. In general, if the input to fft has n columns, the output has 2n columns, where the real parts are stored in the odd columns and the imaginary parts in the even ones. Should it be necessary to compute the Fourier transform on several vectors with the same number of elements, it is numerically more efficient to group them into a matrix rather than invoking fft for each vector separately.
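To make the column layout concrete, here is a small sketch (the input values are arbitrary): transforming a two-column matrix yields a four-column result, with real and imaginary parts interleaved.

  # two input vectors side by side
  matrix X = { 1, 4 ; 2, 5 ; 3, 6 }
  matrix F = fft(X)
  # columns 1-2: Re and Im of the transform of X[,1]
  # columns 3-4: Re and Im of the transform of X[,2]
  print F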
As an example, consider the multiplication of two polynomials:

  a(x) = 1 + 0.5x
  b(x) = 1 + 0.3x - 0.8x^2
  c(x) = a(x) · b(x) = 1 + 0.8x - 0.65x^2 - 0.4x^3

The coefficients of the polynomial c(x) are the convolution of the coefficients of a(x) and b(x); the following gretl code fragment illustrates how to compute the coefficients of c(x):

  # define the two polynomials
  a = { 1, 0.5, 0, 0 }'
  b = { 1, 0.3, -0.8, 0 }'
  # perform the transforms
  fa = fft(a)
  fb = fft(b)
  # complex-multiply the two transforms
  fc = cmult(fa, fb)
  # compute the coefficients of c via the inverse transform
  c = ffti(fc)

Maximum efficiency would have been achieved by grouping a and b into a matrix. The computational advantage is so small in this case that the exercise is a bit silly, but the following alternative may be preferable for a large number of rows/columns:

  # define the two polynomials
  a = { 1 ; 0.5 ; 0 ; 0 }
  b = { 1 ; 0.3 ; -0.8 ; 0 }
  # perform the transforms jointly
  f = fft(a ~ b)
  # complex-multiply the two transforms
  fc = cmult(f[,1:2], f[,3:4])
  # compute the coefficients of c via the inverse transform
  c = ffti(fc)

Traditionally, the Fourier transform in econometrics has been mostly used in time-series analysis, the periodogram being the best known example. Example script 5.5 shows how to compute the periodogram of a time series via the fft function.

Example 5.4: Delta Method

  function matrix MPC(matrix *param, matrix *Y)
    beta = param[2]
    gamma = param[3]
    y = Y[1]
    return beta*gamma*y^(gamma-1)
  end function

  # William Greene, Econometric Analysis, 5e, Chapter 9
  set echo off
  set messages off
  open greene5_1.gdt

  # Use OLS to initialize the parameters
  ols realcons 0 realdpi --quiet
  genr a = $coeff(0)
  genr b = $coeff(realdpi)
  genr g = 1.0

  # Run NLS with analytical derivatives
  nls realcons = a + b * (realdpi^g)
    deriv a = 1
    deriv b = realdpi^g
    deriv g = b * realdpi^g * log(realdpi)
  end nls

  matrix Y = realdpi[2000:4]
  matrix theta = $coeff
  matrix V = $vcv

  mpc = MPC(&theta, &Y)
  matrix Jac = fdjac(theta, MPC(&theta, &Y))
  Sigma = qform(Jac, V)
  printf "\nmpc = %g, std.err = %g\n", mpc, sqrt(Sigma)

  scalar teststat = (mpc-1)/sqrt(Sigma)
  printf "\nTest for MPC = 1: %g (p-value = %g)\n", \
    teststat, pvalue(n, abs(teststat))

Example 5.5: Periodogram via the Fourier transform

  nulldata 50
  # generate an AR(1) process
  series e = normal()
  series x = 0
  x = 0.9*x(-1) + e
  # compute the periodogram
  scale = 2*pi*$nobs
  X = { x }
  F = fft(X)
  S = sumr(F.^2)
  S = S[2:($nobs/2)+1]/scale
  omega = seq(1,($nobs/2))' .* (2*pi/$nobs)
  omega = omega ~ S
  # compare with the built-in command
  pergm x
  print omega

Chapter 6 Sub-sampling a dataset

6.1 Introduction

Some subtle issues can arise here; this chapter attempts to explain them. A sub-sample may be defined in relation to a full dataset in two different ways: we will refer to these as "setting" the sample and "restricting" the sample respectively.

6.2 Setting the sample

By "setting" the sample we mean defining a sub-sample simply by means of adjusting the starting and/or ending point of the current sample range. This is likely to be most relevant for time-series data. For example, one has quarterly data from 1960:1 to 2003:4, and one wants to run a regression using only data from the 1970s. A suitable command is then

  smpl 1970:1 1979:4

Or one wishes to set aside a block of observations at the end of the data period for out-of-sample forecasting. In that case one might do

  smpl ; 2000:4

where the semicolon is shorthand for "leave the starting observation unchanged". (The semicolon may also be used in place of the second parameter, to mean that the ending observation should be unchanged.) By "unchanged" here, we mean unchanged relative to the last smpl setting, or relative to the full dataset if no sub-sample has been defined up to this point. For example, after

  smpl 1970:1 2003:4
  smpl ; 2000:4

the sample range will be 1970:1 to 2000:4.
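A minimal sketch of this workflow, with hypothetical series y and x: set the estimation range, run the regression, then restore the complete range with smpl --full (discussed further in section 6.3).

  # estimate on the 1970s only, then restore the full range
  smpl 1970:1 1979:4
  ols y 0 x
  smpl --full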
An incremental or relative form of setting the sample range is also supported. In this case a relative offset should be given, in the form of a signed integer (or a semicolon to indicate no change), for both the starting and ending point. For example

  smpl +1 ;

will advance the starting observation by one while preserving the ending observation, and

  smpl +2 -1

will both advance the starting observation by two and retard the ending observation by one.

An important feature of "setting" the sample as described above is that it necessarily results in the selection of a subset of observations that are contiguous in the full dataset. The structure of the dataset is therefore unaffected (for example, if it is a quarterly time series before setting the sample, it remains a quarterly time series afterwards).

6.3 Restricting the sample

By "restricting" the sample we mean selecting observations on the basis of some Boolean (logical) criterion, or by means of a random number generator. This is likely to be most relevant for cross-sectional or panel data.

Suppose we have data on a cross-section of individuals, recording their gender, income and other characteristics. We wish to select for analysis only the women. If we have a gender dummy variable with value 1 for men and 0 for women we could do

  smpl gender=0 --restrict

to this effect. Or suppose we want to restrict the sample to respondents with incomes over $50,000. Then we could use

  smpl income>50000 --restrict

A question arises here. If we issue the two commands above in sequence, what do we end up with in our sub-sample: all cases with income over 50000, or just women with income over 50000? By default, in a gretl script, the answer is the latter: women with income over 50000. The second restriction augments the first; in other words the final restriction is the logical product of the new restriction and any restriction that is already in place. If you want a new restriction to replace any existing restrictions you can first recreate the full dataset using

  smpl --full

Alternatively, you can add the replace option to the smpl command:

  smpl income>50000 --restrict --replace

This option has the effect of automatically re-establishing the full dataset before applying the new restriction.
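The cumulation of restrictions can be summarized in a short sketch, using the variables introduced above:

  # restrictions cumulate by default
  smpl gender=0 --restrict      # women only
  smpl income>50000 --restrict  # women with income over 50000
  smpl --full                   # back to the complete dataset
  smpl income>50000 --restrict  # all cases with income over 50000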
Unlike a simple "setting" of the sample, "restricting" the sample may result in the selection of non-contiguous observations from the full dataset. It may also change the structure of the dataset. This can be seen in the case of panel data. Say we have a panel of five firms (indexed by the variable firm) observed in each of several years (identified by the variable year). Then the restriction

  smpl year=1995 --restrict

produces a dataset that is not a panel, but a cross-section for the year 1995. Similarly

  smpl firm=3 --restrict

produces a time-series dataset for firm number 3.

For these reasons (possible non-contiguity in the observations, possible change in the structure of the data), gretl acts differently when you "restrict" the sample as opposed to simply "setting" it. In the case of setting, the program merely records the starting and ending observations and uses these as parameters to the various commands calling for the estimation of models, the computation of statistics, and so on. In the case of restriction, the program makes a reduced copy of the dataset and by default treats this reduced copy as a simple, undated cross-section.[1] If you wish to re-impose a time-series or panel interpretation of the reduced dataset you can do so using the setobs command, or the GUI menu item "Data, Dataset structure".

[1] With one exception: if you start with a balanced panel dataset and the restriction is such that it preserves a balanced panel — for example, it results in the deletion of all the observations for one cross-sectional unit — then the reduced dataset is still, by default, treated as a panel.

The fact that "restricting" the sample results in the creation of a reduced copy of the original dataset may raise an issue when the dataset is very large (say, several thousands of observations). With such a dataset in memory, the creation of a copy may lead to a situation where the computer runs low on memory for calculating regression results. You can work around this as follows (see the sketch after this list):

1. Open the full dataset, and impose the sample restriction.
2. Save a copy of the reduced dataset to disk.
3. Close the full dataset and open the reduced one.
4. Proceed with your analysis.
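A script version of this workaround might look as follows; the filenames are hypothetical.

  # 1. open the full dataset and impose the restriction
  open bigdata.gdt
  smpl income>50000 --restrict
  # 2. save the reduced dataset to disk
  store reduced.gdt
  # 3. close the full dataset and open the reduced one
  open reduced.gdt
  # 4. proceed with the analysis ...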
6.4 Random sampling

With very large datasets (or perhaps to study the properties of an estimator) you may wish to draw a random sample from the full dataset. This can be done using, for example,

  smpl 100 --random

to select 100 cases. If you want the sample to be reproducible, you should first set the seed for the random number generator, using set. This sort of sampling falls under the "restriction" category: a reduced copy of the dataset is made.

6.5 The Sample menu items

The discussion above has focused on the script command smpl. You can also use the items under the Sample menu in the GUI program to select a sub-sample. The menu items work in the same way as the corresponding smpl variants. When you use the item "Sample, Restrict based on criterion", and the dataset is already sub-sampled, you are given the option of preserving or replacing the current restriction. Replacing the current restriction means, in effect, invoking the replace option described above (Section 6.3).

Chapter 7 Graphs and plots

7.1 Gnuplot graphs

A separate program, gnuplot, is called to generate graphs. Gnuplot is a very full-featured graphing program with myriad options. It is available from www.gnuplot.info (but note that a suitable copy of gnuplot is bundled with the packaged versions of gretl for MS Windows and Mac OS X). Gretl gives you direct access, via a graphical interface, to a subset of gnuplot's options and it tries to choose sensible values for you; it also allows you to take complete control over graph details if you wish.

With a graph displayed, you can click on the graph window for a pop-up menu with the following options.

• Save as PNG: Save the graph in Portable Network Graphics format (the same format that you see on screen).
• Save as postscript: Save in encapsulated postscript (EPS) format.
• Save as Windows metafile: Save in Enhanced Metafile (EMF) format.
• Save to session as icon: The graph will appear in iconic form when you select "Icon view" from the View menu.
• Zoom: Lets you select an area within the graph for closer inspection (not available for all graphs).
• Print: (Current GTK or MS Windows only) lets you print the graph directly.
• Copy to clipboard: MS Windows only, lets you paste the graph into Windows applications such as MS Word.
• Edit: Opens a controller for the plot which lets you adjust many aspects of its appearance.
• Close: Closes the graph window.

Displaying data labels

For simple X-Y scatter plots, some further options are available if the dataset includes "case markers" (that is, labels identifying each observation).[1] With a scatter plot displayed, when you move the mouse pointer over a data point its label is shown on the graph. By default these labels are transient: they do not appear in the printed or copied version of the graph. They can be removed by selecting "Clear data labels" from the graph pop-up menu. If you want the labels to be affixed permanently (so they will show up when the graph is printed or copied), select the option "Freeze data labels" from the pop-up menu; "Clear data labels" cancels this operation. The other label-related option, "All data labels", requests that case markers be shown for all observations. At present the display of case markers is disabled for graphs containing more than 250 data points.

[1] For an example of such a dataset, see the Ramanathan file data4-10: this contains data on private school enrollment for the 50 states of the USA plus Washington, DC; the case markers are the two-letter codes for the states.

GUI plot editor

Selecting the Edit option in the graph popup menu opens an editing dialog box, shown in Figure 7.1. Notice that there are several tabs, allowing you to adjust many aspects of a graph's appearance: font, title, axis scaling, line colors and types, and so on. You can also add lines or descriptive labels to a graph (under the Lines and Labels tabs). The "Apply" button applies your changes without closing the editor; "OK" applies the changes and closes the dialog.

Figure 7.1: gretl's gnuplot controller

Publication-quality graphics: advanced options

The GUI plot editor has two limitations. First, it cannot represent all the myriad options that gnuplot offers. Users who are sufficiently familiar with gnuplot to know what they're missing in the plot editor presumably don't need much help from gretl, so long as they can get hold of the gnuplot command file that gretl has put together. Second, even if the plot editor meets your needs in terms of fine-tuning the graph you see on screen, a few details may need further work in order to get optimal results for publication.

Either way, the first step in advanced tweaking of a graph is to get access to the graph command file.

• In the graph display window, right-click and choose "Save to session as icon".
• If it's not already open, open the icon view window — either via the menu item View/Icon view, or by clicking the "session icon view" button on the main-window toolbar.
• Right-click on the icon representing the newly added graph and select "Edit plot commands" from the pop-up menu.
• You get a window displaying the plot file (Figure 7.2).

Figure 7.2: Plot commands editor

Here are the basic things you can do in this window. Obviously, you can edit the file you just opened. You can also send it for processing by gnuplot, by clicking the "Execute" (cogwheel) icon in the toolbar. Or you can use the "Save as" button to save a copy for editing and processing as you wish.

Unless you're a gnuplot expert, most likely you'll only need to edit a couple of lines at the top of the file, specifying a driver (plus options) and an output file. We offer here a brief summary of some points that may be useful.
First, gnuplot's output mode is set via the command set term followed by the name of a supported driver ("terminal" in gnuplot parlance) plus various possible options. (The top line in the plot commands window shows the set term line that gretl used to make a PNG file, commented out.) The graphic formats that are most suitable for publication are PDF and EPS. These are supported by the gnuplot term types pdf, pdfcairo and postscript (with the eps option).

The pdfcairo driver has the virtue that it behaves in a very similar manner to the PNG one, the output of which you see on screen. This driver is provided by the version of gnuplot that is included in the gretl packages for MS Windows and Mac OS X; if you're on Linux it may or may not be supported. If pdfcairo is not available, the pdf terminal may be available; the postscript terminal is almost certainly available.

Besides selecting a term type, if you want gnuplot to write the actual output file you need to append a set output line giving a filename. Here are a few examples of the first two lines you might type in the window editing your plot commands. We'll make these more "realistic" shortly.

  set term pdfcairo
  set output 'mygraph.pdf'

  set term pdf
  set output 'mygraph.pdf'

  set term postscript eps
  set output 'mygraph.eps'

There are a couple of things worth remarking here. First, you may want to adjust the size of the graph, and second you may want to change the font. The default sizes produced by the above drivers are 5 inches by 3 inches for pdfcairo and pdf, and 5 inches by 3.5 inches for postscript eps. In each case you can change this by giving a size specification, which takes the form XX,YY (examples below).

You may ask, why bother changing the size in the gnuplot command file? After all, PDF and EPS are both vector formats, so the graphs can be scaled at will. True, but a uniform scaling will also affect the font size, which may end up looking wrong. You can get optimal results by experimenting with the font and size options to gnuplot's set term command. Here are some examples (comments follow below).

  # pdfcairo, regular size, slightly amended
  set term pdfcairo font "Sans,6" size 5in,3.5in
  # or small size
  set term pdfcairo font "Sans,5" size 3in,2in

  # pdf, regular size, slightly amended
  set term pdf font "Helvetica,8" size 5in,3.5in
  # or small
  set term pdf font "Helvetica,6" size 3in,2in

  # postscript, regular
  set term post eps solid font "Helvetica,16"
  # or small
  set term post eps solid font "Helvetica,12" size 3in,2in

On the first line we set a sans serif font for pdfcairo at a suitable size for a 5 × 3.5 inch plot (which you may find looks better than the rather "letterboxy" default of 5 × 3). And on the second we illustrate what you might do to get a smaller 3 × 2 inch plot. You can specify the plot size in centimeters if you prefer, as in

  set term pdfcairo font "Sans,6" size 6cm,4cm

We then repeat the exercise for the pdf terminal. Notice that here we're specifying one of the 35 standard PostScript fonts, namely Helvetica. Unlike pdfcairo, the plain pdf driver is unlikely to be able to find fonts other than these.

In the third pair of lines we illustrate options for the postscript driver (which, as you see, can be abbreviated as post). Note that here we have added the option solid. Unlike most other drivers, this one uses dashed lines unless you specify the solid option. Also note that we've (apparently) specified a much larger font in this case.
That's because the eps option in effect tells the postscript driver to work at half-size (among other things), so we need to double the font size.

Table 7.1 summarizes the basics for the three drivers we have mentioned.

  Terminal   default size (inches)   suggested font
  pdfcairo   5 × 3                   Sans,6
  pdf        5 × 3                   Helvetica,8
  post eps   5 × 3.5                 Helvetica,16

Table 7.1: Drivers for publication-quality graphics

To find out more about gnuplot visit www.gnuplot.info. This site has documentation for the current version of the program in various formats.

Additional tips

To be written. Line widths, enhanced text. Show a "before and after" example.

7.2 Boxplots

These plots (after Tukey and Chambers) display the distribution of a variable. The central box encloses the middle 50 percent of the data, i.e. it is bounded by the first and third quartiles. The "whiskers" extend to the minimum and maximum values. A line is drawn across the box at the median and a "+" sign identifies the mean — see Figure 7.3.

Figure 7.3: Sample boxplot (the ENROLL series, with Q1, the median, the mean and Q3 marked)

In the case of boxplots with confidence intervals, dotted lines show the limits of an approximate 90 percent confidence interval for the median. This is obtained by the bootstrap method, which can take a while if the data series is very long.

After each variable specified in the boxplot command, a parenthesized boolean expression may be added, to limit the sample for the variable in question. A space must be inserted between the variable name or number and the expression. Suppose you have salary figures for men and women, and you have a dummy variable GENDER with value 1 for men and 0 for women. In that case you could draw comparative boxplots with the following line in the boxplots dialog:

  salary (GENDER=1) salary (GENDER=0)
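The same comparison can also be drawn from a script, assuming the boxplot command accepts the syntax shown for the dialog above:

  # comparative boxplots for men and women
  boxplot salary (GENDER=1) salary (GENDER=0)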
Chapter 8 Discrete variables

When a variable can take only a finite, typically small, number of values, then the variable is said to be discrete. Some gretl commands act in a slightly different way when applied to discrete variables; moreover, gretl provides a few commands that only apply to discrete variables. Specifically, the dummify and xtab commands (see below) are available only for discrete variables, while the freq (frequency distribution) command produces different output for discrete variables.

8.1 Declaring variables as discrete

Gretl uses a simple heuristic to judge whether a given variable should be treated as discrete, but you also have the option of explicitly marking a variable as discrete, in which case the heuristic check is bypassed.

The heuristic is as follows. First, are all the values of the variable "reasonably round", where this is taken to mean that they are all integer multiples of 0.25? If this criterion is met, we then ask whether the variable takes on a "fairly small" set of distinct values, where "fairly small" is defined as less than or equal to 8. If both conditions are satisfied, the variable is automatically considered discrete.

To mark a variable as discrete you have two options.

1. From the graphical interface, select "Variable, Edit Attributes" from the menu. A dialog box will appear and, if the variable seems suitable, you will see a tick box labeled "Treat this variable as discrete". This dialog box can also be invoked via the context menu (right-click on a variable) or by pressing the F2 key.

2. From the command-line interface, via the discrete command. The command takes one or more arguments, which can be either variables or lists of variables. For example:

  list xlist = x1 x2 x3
  discrete z1 xlist z2

This syntax makes it possible to declare many variables as discrete at once, which cannot presently be done via the graphical interface. The switch --reverse reverses the declaration of a variable as discrete, or in other words marks it as continuous. For example:

  discrete foo
  # now foo is discrete
  discrete foo --reverse
  # now foo is continuous

The command-line variant is more powerful, in that you can mark a variable as discrete even if it does not seem to be suitable for this treatment. Note that marking a variable as discrete does not affect its content: it is the user's responsibility to make sure that marking a variable as discrete is a sensible thing to do. If you want to recode a continuous variable into classes, you can use the genr command and its arithmetic functions, as in the following example:

  nulldata 100
  # generate a variable with mean 2 and variance 1
  genr x = normal() + 2
  # split into 4 classes
  genr z = (x>0) + (x>2) + (x>4)
  # now declare z as discrete
  discrete z

Once a variable is marked as discrete, this setting is remembered when you save the file.

8.2 Commands for discrete variables

The dummify command

The dummify command takes as argument a series x and creates dummy variables for each distinct value present in x, which must have already been declared as discrete. Example:

  open greene22_2
  discrete Z5 # mark Z5 as discrete
  dummify Z5

The effect of the above command is to generate 5 new dummy variables, labeled DZ5_1 through DZ5_5, which correspond to the different values in Z5. Hence, the variable DZ5_4 is 1 if Z5 equals 4 and 0 otherwise. This functionality is also available through the graphical interface by selecting the menu item "Add, Dummies for selected discrete variables".

The dummify command can also be used with the following syntax:

  list dlist = dummify(x)

This not only creates the dummy variables, but also a named list (see section 11.1) that can be used afterwards. The following example computes summary statistics for the variable Y for each value of Z5:

  open greene22_2
  discrete Z5 # mark Z5 as discrete
  list foo = dummify(Z5)
  loop foreach i foo
    smpl $i --restrict --replace
    summary Y
  endloop
  smpl --full

Since dummify generates a list, it can be used directly in commands that call for a list as input, such as ols. For example:

  open greene22_2
  discrete Z5 # mark Z5 as discrete
  ols Y 0 dummify(Z5)

The freq command

The freq command displays absolute and relative frequencies for a given variable. The way frequencies are counted depends on whether the variable is continuous or discrete. This command is also available via the graphical interface by selecting the "Variable, Frequency distribution" menu entry.

For discrete variables, frequencies are counted for each distinct value that the variable takes. For continuous variables, values are grouped into "bins" and then the frequencies are counted for each bin. The number of bins, by default, is computed as a function of the number of valid observations in the currently selected sample via the rule shown in Table 8.1. However, when the command is invoked through the menu item "Variable, Frequency Plot", this default can be overridden by the user.
  Observations      Bins
  8 ≤ n < 16        5
  16 ≤ n < 50       7
  50 ≤ n ≤ 850      √n
  n > 850           29

Table 8.1: Number of bins for various sample sizes

For example, the following code

  open greene19_1
  freq TUCE
  discrete TUCE # mark TUCE as discrete
  freq TUCE

yields

  Read datafile /usr/local/share/gretl/data/greene/greene19_1.gdt
  periodicity: 1, maxobs: 32, observations range: 1-32

  Listing 5 variables:
    0) const    1) GPA    2) TUCE    3) PSI    4) GRADE

  ? freq TUCE

  Frequency distribution for TUCE, obs 1-32
  number of bins = 7, mean = 21.9375, sd = 3.90151

         interval          midpt   frequency    rel.     cum.

            < 13.417      12.000        1      3.12%    3.12% *
     13.417 - 16.250      14.833        1      3.12%    6.25% *
     16.250 - 19.083      17.667        6     18.75%   25.00% ******
     19.083 - 21.917      20.500        6     18.75%   43.75% ******
     21.917 - 24.750      23.333        9     28.12%   71.88% **********
     24.750 - 27.583      26.167        7     21.88%   93.75% *******
           >= 27.583      29.000        2      6.25%  100.00% **

  Test for null hypothesis of normal distribution:
  Chi-square(2) = 1.872 with p-value 0.39211

  ? discrete TUCE # mark TUCE as discrete
  ? freq TUCE

  Frequency distribution for TUCE, obs 1-32

        frequency    rel.     cum.

  12        1       3.12%    3.12% *
  14        1       3.12%    6.25% *
  17        3       9.38%   15.62% ***
  19        3       9.38%   25.00% ***
  20        2       6.25%   31.25% **
  21        4      12.50%   43.75% ****
  22        2       6.25%   50.00% **
  23        4      12.50%   62.50% ****
  24        3       9.38%   71.88% ***
  25        4      12.50%   84.38% ****
  26        2       6.25%   90.62% **
  27        1       3.12%   93.75% *
  28        1       3.12%   96.88% *
  29        1       3.12%  100.00% *

  Test for null hypothesis of normal distribution:
  Chi-square(2) = 1.872 with p-value 0.39211

As can be seen from the sample output, a Doornik–Hansen test for normality is computed automatically. This test is suppressed for discrete variables where the number of distinct values is less than 10.

This command accepts two options: --quiet, to avoid generation of the histogram when invoked from the command line, and --gamma, for replacing the normality test with Locke's nonparametric test, whose null hypothesis is that the data follow a Gamma distribution.

If the distinct values of a discrete variable need to be saved, the values() matrix construct can be used (see chapter 12).
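For instance, using the Z5 series from the earlier examples, a minimal sketch would be:

  # collect the distinct values of Z5 in a column vector
  matrix v = values(Z5)
  print v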
The xtab command

The xtab command can be invoked in either of the following ways. First,

  xtab ylist ; xlist

where ylist and xlist are lists of discrete variables. This produces cross-tabulations (two-way frequencies) of each of the variables in ylist (by row) against each of the variables in xlist (by column). Or second,

  xtab xlist

In the second case a full set of cross-tabulations is generated; that is, each variable in xlist is tabulated against each other variable in the list. In the graphical interface, this command is represented by the "Cross Tabulation" item under the View menu, which is active if at least two variables are selected.

Here is an example of use:

  open greene22_2
  discrete Z* # mark Z1-Z8 as discrete
  xtab Z1 Z4 ; Z5 Z6

which produces

  Cross-tabulation of Z1 (rows) against Z5 (columns)

         [   1][   2][   3][   4][   5]   TOT.

  [   0]    20    91    75    93    36    315
  [   1]    28    73    54    97    34    286

  TOTAL     48   164   129   190    70    601

  Pearson chi-square test = 5.48233 (4 df, p-value = 0.241287)

  Cross-tabulation of Z1 (rows) against Z6 (columns)

         [   9][  12][  14][  16][  17][  18][  20]   TOT.

  [   0]     4    36   106    70    52    45     2    315
  [   1]     3     8    48    45    37    67    78    286

  TOTAL      7    44   154   115    89   112    80    601

  Pearson chi-square test = 123.177 (6 df, p-value = 3.50375e-24)

  Cross-tabulation of Z4 (rows) against Z5 (columns)

         [   1][   2][   3][   4][   5]   TOT.

  [   0]    17    60    35    45    14    171
  [   1]    31   104    94   145    56    430

  TOTAL     48   164   129   190    70    601

  Pearson chi-square test = 11.1615 (4 df, p-value = 0.0248074)

  Cross-tabulation of Z4 (rows) against Z6 (columns)

         [   9][  12][  14][  16][  17][  18][  20]   TOT.

  [   0]     1     8    39    47    30    32    14    171
  [   1]     6    36   115    68    59    80    66    430

  TOTAL      7    44   154   115    89   112    80    601

  Pearson chi-square test = 18.3426 (6 df, p-value = 0.0054306)

Pearson's χ² test for independence is automatically displayed, provided that all cells have expected frequencies under independence greater than 10⁻⁷. However, a common rule of thumb states that this statistic is valid only if the expected frequency is 5 or greater for at least 80 percent of the cells. If this condition is not met a warning is printed.

Additionally, the --row or --column options can be given: in this case, the output displays row or column percentages, respectively.

If you want to cut and paste the output of xtab to some other program, e.g. a spreadsheet, you may want to use the --zeros option; this option causes cells with zero frequency to display the number 0 instead of being empty.

Chapter 9 Loop constructs

9.1 Introduction

The command loop opens a special mode in which gretl accepts a block of commands to be repeated zero or more times. This feature may be useful for, among other things, Monte Carlo simulations, bootstrapping of test statistics and iterative estimation procedures. The general form of a loop is:

  loop control-expression [ --progressive | --verbose | --quiet ]
    loop body
  endloop

Five forms of control-expression are available, as explained in section 9.2.

Not all gretl commands are available within loops. The commands that are not presently accepted in this context are shown in Table 9.1.

Table 9.1: Commands not usable in loops

  corrgm     cusum      data       delete     eqnprint
  foreign    function   hurst      include    leverage
  nulldata   open       rmplot     run        scatters
  setmiss    setobs     tabprint   vif        xcorrgm

By default, the genr command operates quietly in the context of a loop (without printing information on the variable generated). To force the printing of feedback from genr you may specify the --verbose option to loop. The --quiet option suppresses the usual printout of the number of iterations performed, which may be desirable when loops are nested.

The --progressive option to loop modifies the behavior of the commands print and store, and of certain estimation commands, in a manner that may be useful with Monte Carlo analyses (see Section 9.3).

The following sections explain the various forms of the loop control expression and provide some examples of use of loops. If you are carrying out a substantial Monte Carlo analysis with many thousands of repetitions, memory capacity and processing time may be an issue. To minimize the use of computer resources, run your script using the command-line program, gretlcli, with output redirected to a file.

9.2 Loop control variants

Count loop

The simplest form of loop control is a direct specification of the number of times the loop should be repeated. We refer to this as a "count loop". The number of repetitions may be a numerical constant, as in loop 1000, or may be read from a scalar variable, as in loop replics.

In the case where the loop count is given by a variable, say replics, in concept replics is an integer; if the value is not integral, it is converted to an integer by truncation. Note that replics is evaluated only once, when the loop is initially compiled.
While loop

A second sort of control expression takes the form of the keyword while followed by a boolean expression. For example,

  loop while essdiff > .00001

Execution of the commands within the loop will continue so long as (a) the specified condition evaluates as true and (b) the number of iterations does not exceed the value of the internal variable loop_maxiter. By default this equals 250, but you can specify a different value via the set command (see the Gretl Command Reference).

Index loop

A third form of loop control uses an index variable, for example i.[1] In this case you specify starting and ending values for the index, which is incremented by one each time round the loop. The syntax looks like this: loop i=1..20. The index variable may be a pre-existing scalar; if this is not the case, the variable is created automatically and is destroyed on exit from the loop.

The index may be used within the loop body in either of two ways: you can access the integer value of i (see Example 9.4) or you can use its string representation, $i (see Example 9.5).

[1] It is common programming practice to use simple, one-character names for such variables. However, you may use any name that is acceptable by gretl: up to 15 characters, starting with a letter, and containing nothing but letters, numerals and the underscore character.

The starting and ending values for the index can be given in numerical form, or by reference to predefined scalar variables. In the latter case the variables are evaluated once, at the start of the loop. In addition, with time series data you can give the starting and ending values in the form of dates, as in loop i=1950:1..1999:4.

This form of loop control is intended to be quick and easy, and as such it is subject to certain limitations. You cannot do arithmetic within the loop control expression, as in

  loop i=k..2*k # won't work

But one extension is permitted for convenience: you can inflect a loop control variable with a minus sign, as in

  loop k=-lag..lag # OK

Also note that in this sort of loop the index variable is always incremented by one at each iteration. If, for example, you have

  loop i=m..n

where m and n are scalar variables with values m > n at the time of execution, the index will not be decremented; rather, the loop will simply be bypassed. If you need more complex loop control, see the "for" form below.

The index loop is particularly useful in conjunction with the values() matrix function when some operation must be carried out for each value of some discrete variable (see chapter 8). Consider the following example:

  open greene22_2
  discrete Z8
  v8 = values(Z8)
  n = rows(v8)
  loop i=1..n
    scalar xi = v8[i]
    smpl (Z8=xi) --restrict --replace
    printf "mean(Y | Z8 = %g) = %8.5f, sd(Y | Z8 = %g) = %g\n", \
      xi, mean(Y), xi, sd(Y)
  endloop

In this case, we evaluate the conditional mean and standard deviation of the variable Y for each value of Z8.

Foreach loop

The fourth form of loop control also uses an index variable, in this case to index a specified list of strings. The loop is executed once for each string in the list. This can be useful for performing repetitive operations on a list of variables. Here is an example of the syntax:

  loop foreach i peach pear plum
    print "$i"
  endloop

This loop will execute three times, printing out "peach", "pear" and "plum" on the respective iterations. The numerical value of the index starts at 1 and is incremented by 1 at each iteration.

If you wish to loop across a list of variables that are contiguous in the dataset, you can give the names of the first and last variables in the list, separated by "..", rather than having to type all the names. For example, say we have 50 variables AK, AL, . . . , WY, containing income levels for the states of the US. To run a regression of income on time for each of the states we could do:

  genr time
  loop foreach i AK..WY
    ols $i const time
  endloop

This loop variant can also be used for looping across the elements in a named list (see chapter 11). For example:

  list ylist = y1 y2 y3
  loop foreach i ylist
    ols $i const x1 x2
  endloop

Note that if you use this idiom inside a function (see chapter 10), looping across a list that has been supplied to the function as an argument, it is necessary to use the syntax listname.$i to reference the list-member variables. In the context of the example above, this would mean replacing the third line with

  ols ylist.$i const x1 x2
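For instance, a function that prints the mean of each series in a list argument might be sketched as follows (the function name is hypothetical):

  function void list_means (list xlist)
    loop foreach i xlist
      # $i expands to the name of the i-th list member
      printf "%s: mean = %g\n", "$i", mean(xlist.$i)
    endloop
  end function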
If you wish to loop across a list of variables that are contiguous in the dataset, you can give the names of the ﬁrst and last variables in the list, separated by “..”, rather than having to type all the names. For example, say we have 50 variables AK, AL, . . . , WY, containing income levels for the states of the US. To run a regression of income on time for each of the states we could do: genr time loop foreach i AL..WY ols $i const time endloop This loop variant can also be used for looping across the elements in a named list (see chapter 11). For example: list ylist = y1 y2 y3 loop foreach i ylist ols$i const x1 x2 endloop Note that if you use this idiom inside a function (see chapter 10), looping across a list that has been supplied to the function as an argument, it is necessary to use the syntax listname.$i to reference the list-member variables. In the context of the example above, this would mean replacing the third line with ols ylist.$i const x1 x2 For loop The ﬁnal form of loop control emulates the for statement in the C programming language. The sytax is loop for, followed by three component expressions, separated by semicolons and surrounded by parentheses. The three components are as follows: Chapter 9. Loop constructs 60 1. Initialization: This is evaluated only once, at the start of the loop. Common example: setting a scalar control variable to some starting value. 2. Continuation condition: this is evaluated at the top of each iteration (including the ﬁrst). If the expression evaluates as true (non-zero), iteration continues, otherwise it stops. Common example: an inequality expressing a bound on a control variable. 3. Modiﬁer: an expression which modiﬁes the value of some variable. This is evaluated prior to checking the continuation condition, on each iteration after the ﬁrst. Common example: a control variable is incremented or decremented. Here’s a simple example: loop for (r=0.01; r<.991; r+=.01) In this example the variable r will take on the values 0.01, 0.02, . . . , 0.99 across the 99 iterations. Note that due to the ﬁnite precision of ﬂoating point arithmetic on computers it may be necessary to use a continuation condition such as the above, r<.991, rather than the more “natural” r<=.99. (Using double-precision numbers on an x86 processor, at the point where you would expect r to equal 0.99 it may in fact have value 0.990000000000001.) Any or all of the three expressions governing a for loop may be omitted — the minimal form is (;;). If the continuation test is omitted it is implicitly true, so you have an inﬁnite loop unless you arrange for some other way out, such as a break statement. If the initialization expression in a for loop takes the common form of setting a scalar variable to a given value, the string representation of that scalar’s value will be available within the loop via the accessor $varname. 9.3 Progressive mode If the --progressive option is given for a command loop, special behavior is invoked for certain commands, namely, print, store and simple estimation commands. By “simple” here we mean commands which (a) estimate a single equation (as opposed to a system of equations) and (b) do so by means of a single command statement (as opposed to a block of statements, as with nls and mle). The paradigm is ols; other possibilities include tsls, wls, logit and so on. The special behavior is as follows. Estimators: The results from each individual iteration of the estimator are not printed. 
9.3 Progressive mode

If the --progressive option is given for a command loop, special behavior is invoked for certain commands, namely, print, store and simple estimation commands. By "simple" here we mean commands which (a) estimate a single equation (as opposed to a system of equations) and (b) do so by means of a single command statement (as opposed to a block of statements, as with nls and mle). The paradigm is ols; other possibilities include tsls, wls, logit and so on. The special behavior is as follows.

Estimators: The results from each individual iteration of the estimator are not printed. Instead, after the loop is completed you get a printout of (a) the mean value of each estimated coefficient across all the repetitions, (b) the standard deviation of those coefficient estimates, (c) the mean value of the estimated standard error for each coefficient, and (d) the standard deviation of the estimated standard errors. This makes sense only if there is some random input at each step.

print: When this command is used to print the value of a variable, you do not get a print each time round the loop. Instead, when the loop is terminated you get a printout of the mean and standard deviation of the variable, across the repetitions of the loop. This mode is intended for use with variables that have a scalar value at each iteration, for example the error sum of squares from a regression. Data series cannot be printed in this way.

store: This command writes out the values of the specified scalars, from each time round the loop, to a specified file. Thus it keeps a complete record of their values across the iterations. For example, coefficient estimates could be saved in this way so as to permit subsequent examination of their frequency distribution. Only one such store can be used in a given loop.

9.4 Loop examples

Monte Carlo example

A simple example of a Monte Carlo loop in "progressive" mode is shown in Example 9.1.

Example 9.1: Simple Monte Carlo loop

  nulldata 50
  seed 547
  genr x = 100 * uniform()
  # open a "progressive" loop, to be repeated 100 times
  loop 100 --progressive
    genr u = 10 * normal()
    # construct the dependent variable
    genr y = 10*x + u
    # run OLS regression
    ols y const x
    # grab the coefficient estimates and R-squared
    genr a = $coeff(const)
    genr b = $coeff(x)
    genr r2 = $rsq
    # arrange for printing of stats on these
    print a b r2
    # and save the coefficients to file
    store coeffs.gdt a b
  endloop

This loop will print out summary statistics for the 'a' and 'b' estimates and R² across the 100 repetitions. After running the loop, coeffs.gdt, which contains the individual coefficient estimates from all the runs, can be opened in gretl to examine the frequency distribution of the estimates in detail.

The command nulldata is useful for Monte Carlo work. Instead of opening a "real" data set, nulldata 50 (for instance) opens a dummy data set, containing just a constant and an index variable, with a series length of 50. Constructed variables can then be added using the genr command. See the set command for information on generating repeatable pseudo-random series.

Iterated least squares

Example 9.2 uses a "while" loop to replicate the estimation of a nonlinear consumption function of the form

  C = α + βY^γ + ε

as presented in Greene (2000, Example 11.3). This script is included in the gretl distribution under the name greene11_3.inp; you can find it in gretl under the menu item "File, Script files, Practice file, Greene...".

The option --print-final for the ols command arranges matters so that the regression results will not be printed each time round the loop, but the results from the regression on the last iteration will be printed when the loop terminates.

Example 9.3 shows how a loop can be used to estimate an ARMA model, exploiting the "outer product of the gradient" (OPG) regression discussed by Davidson and MacKinnon in their Estimation and Inference in Econometrics.
Example 9.2: Nonlinear consumption function

  open greene11_3.gdt
  # run initial OLS
  ols C 0 Y
  genr essbak = $ess
  genr essdiff = 1
  genr beta = $coeff(Y)
  genr gamma = 1
  # iterate OLS till the error sum of squares converges
  loop while essdiff > .00001
    # form the linearized variables
    genr C0 = C + gamma * beta * Y^gamma * log(Y)
    genr x1 = Y^gamma
    genr x2 = beta * Y^gamma * log(Y)
    # run OLS
    ols C0 0 x1 x2 --print-final --no-df-corr --vcv
    genr beta = $coeff(x1)
    genr gamma = $coeff(x2)
    genr ess = $ess
    genr essdiff = abs(ess - essbak)/essbak
    genr essbak = ess
  endloop
  # print parameter estimates using their "proper names"
  noecho
  printf "alpha = %g\n", $coeff(0)
  printf "beta = %g\n", beta
  printf "gamma = %g\n", gamma

Indexed loop examples

Example 9.4 shows an indexed loop in which the smpl is keyed to the index variable i. Suppose we have a panel dataset with observations on a number of hospitals for the years 1991 to 2000 (where the year of the observation is indicated by a variable named year). We restrict the sample to each of these years in turn and print cross-sectional summary statistics for variables 1 through 4.

Example 9.5 illustrates string substitution in an indexed loop. The first time round this loop the variable V will be set to equal COMP1987 and the dependent variable for the ols will be PBT1987. The next time round V will be redefined as equal to COMP1988 and the dependent variable in the regression will be PBT1988. And so on.

Example 9.3: ARMA(1,1) via the OPG regression

  open armaloop.gdt

  genr c = 0
  genr a = 0.1
  genr m = 0.1

  series e = 1.0
  genr de_c = e
  genr de_a = e
  genr de_m = e

  genr crit = 1
  loop while crit > 1.0e-9
    # one-step forecast errors
    genr e = y - c - a*y(-1) - m*e(-1)

    # log-likelihood
    genr loglik = -0.5 * sum(e^2)
    print loglik

    # partials of forecast errors wrt c, a, and m
    genr de_c = -1 - m * de_c(-1)
    genr de_a = -y(-1) - m * de_a(-1)
    genr de_m = -e(-1) - m * de_m(-1)

    # partials of loglik wrt c, a and m
    genr sc_c = -de_c * e
    genr sc_a = -de_a * e
    genr sc_m = -de_m * e

    # OPG regression
    ols const sc_c sc_a sc_m --print-final --no-df-corr --vcv

    # update the parameters
    genr dc = $coeff(sc_c)
    genr c = c + dc
    genr da = $coeff(sc_a)
    genr a = a + da
    genr dm = $coeff(sc_m)
    genr m = m + dm

    printf "  constant        = %.8g (gradient = %#.6g)\n", c, dc
    printf "  ar1 coefficient = %.8g (gradient = %#.6g)\n", a, da
    printf "  ma1 coefficient = %.8g (gradient = %#.6g)\n", m, dm

    genr crit = $T - $ess
    print crit
  endloop

  genr se_c = $stderr(sc_c)
  genr se_a = $stderr(sc_a)
  genr se_m = $stderr(sc_m)

  noecho
  printf "constant = %.8g (se = %#.6g, t = %.4f)\n", c, se_c, c/se_c
  printf "ar1 term = %.8g (se = %#.6g, t = %.4f)\n", a, se_a, a/se_a
  printf "ma1 term = %.8g (se = %#.6g, t = %.4f)\n", m, se_m, m/se_m

Example 9.4: Panel statistics

  open hospitals.gdt
  loop i=1991..2000
    smpl (year=i) --restrict --replace
    summary 1 2 3 4
  endloop

Example 9.5: String substitution

  open bea.dat
  loop i=1987..2001
    genr V = COMP$i
    genr TC = GOC$i - PBT$i
    genr C = TC - V
    ols PBT$i const TC V
  endloop

Chapter 10 User-defined functions

10.1 Defining a function

Gretl offers a mechanism for defining functions, which may be called via the command line, in the context of a script, or (if packaged appropriately, see section 10.6) via the program's graphical interface.

The syntax for defining a function looks like this:[1]

  function return-type function-name (parameters)
    function body
  end function

[1] The syntax given here differs from the standard prior to gretl version 1.8.4. For reasons of backward compatibility the old syntax is still supported; an account of the changes can be found in section 10.5 below.

The opening line of a function definition contains these elements, in strict order:
1. The keyword function.

2. return-type, which states the type of value returned by the function, if any. This must be one of void (if the function does not return anything), scalar, series, matrix, list or string.

3. function-name, the unique identifier for the function. Names must start with a letter. They have a maximum length of 31 characters; if you type a longer name it will be truncated. Function names cannot contain spaces. You will get an error if you try to define a function having the same name as an existing gretl command.

4. The function's parameters, in the form of a comma-separated list enclosed in parentheses. This may be run into the function name, or separated by white space as shown.

Function parameters can be of any of the types shown below.

  Type      Description
  bool      scalar variable acting as a Boolean switch
  int       scalar variable acting as an integer
  scalar    scalar variable
  series    data series
  list      named list of series
  matrix    matrix or vector
  string    string variable or string literal

Each element in the listing of parameters must include two terms: a type specifier, and the name by which the parameter shall be known within the function. An example follows:

  function scalar myfunc (series y, list xvars, bool verbose)

Each of the type-specifiers, with the exception of list and string, may be modified by prepending an asterisk to the associated parameter name, as in

  function scalar myfunc (series *y, scalar *b)

The meaning of this modification is explained below (see section 10.4); it is related to the use of pointer arguments in the C programming language.

Function parameters: optional refinements

Besides the required elements mentioned above, the specification of a function parameter may include some additional fields.

For a parameter of type scalar or int, a minimum, maximum and default value may be specified. These values should directly follow the name of the parameter, enclosed in square brackets and with the individual elements separated by colons. For example, suppose we have an integer parameter order for which we wish to specify a minimum of 1, a maximum of 12, and a default of 4. We can write

  int order[1:12:4]

If you wish to omit any of the three specifiers, leave the corresponding field empty. For example, [1::4] would specify a minimum of 1 and a default of 4 while leaving the maximum unlimited.

For a parameter of type bool, you can specify a default of 1 (true) or 0 (false), as in

  bool verbose[0]

In addition, a parameter may be prefixed by the keyword const. This constitutes a promise that the corresponding argument will not be modified within the function. This qualifier is allowed for all parameters, but is in fact only meaningful if the argument is such that it could be modified in the function. See section 10.4 for details.
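The following sketch, with an invented function, pulls these refinements together: the order parameter is bounded and given a default, and verbose defaults to false, so (as explained in section 10.2) both may be omitted by the caller.

  function scalar demo (scalar x, int order[1:12:4], bool verbose[0])
    if verbose
      printf "using order %d\n", order
    endif
    return x * order
  end function

  # both optional arguments omitted: order = 4, verbose = 0
  scalar s = demo(2.5)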
Finally, for a parameter of any type you can append a short descriptive string. This will show up as an aid to the user if the function is packaged (see section 10.6 below) and called via gretl's graphical interface. The string should be enclosed in double quotes, and inserted before the comma that precedes the following parameter (or the closing right parenthesis of the function definition, in the case of the last parameter), as illustrated in the following example.

  function scalar myfun (series y "dependent variable",
                         series x "independent variable")

Functions taking no parameters

You may define a function that has no parameters (these are called "routines" in some programming languages). In this case, use the keyword void in place of the listing of parameters:

  function matrix myfunc2 (void)

The function body

The function body is composed of gretl commands, or calls to user-defined functions (that is, function calls may be nested). A function may call itself (that is, functions may be recursive). While the function body may contain function calls, it may not contain function definitions. That is, you cannot define a function inside another function. For further details, see section 10.4.

10.2 Calling a function

A user function is called by typing its name followed by zero or more arguments enclosed in parentheses. If there are two or more arguments these should be separated by commas.

There are automatic checks in place to ensure that the number of arguments given in a function call matches the number of parameters, and that the types of the given arguments match the types specified in the definition of the function. An error is flagged if either of these conditions is violated. One qualification: allowance is made for omitting arguments at the end of the list, provided that default values are specified in the function definition. To be precise, the check is that the number of arguments is at least equal to the number of required parameters, and is no greater than the total number of parameters.

A scalar, series or matrix argument to a function may be given either as the name of a pre-existing variable or as an expression which evaluates to a variable of the appropriate type. Scalar arguments may also be given as numerical values. List arguments must be specified by name.

The following trivial example illustrates a function call that correctly matches the function definition.

  # function definition
  function scalar ols_ess(series y, list xvars)
    ols y 0 xvars --quiet
    scalar myess = $ess
    printf "ESS = %g\n", myess
    return myess
  end function

  # main script
  open data4-1
  list xlist = 2 3 4
  # function call (the return value is ignored here)
  ols_ess(price, xlist)

The function call gives two arguments: the first is a data series specified by name and the second is a named list of regressors. Note that while the function offers the variable myess as a return value, it is ignored by the caller in this instance. (As a side note here, if you want a function to calculate some value having to do with a regression, but are not interested in the full results of the regression, you may wish to use the --quiet flag with the estimation command as shown above.)

A second example shows how to write a function call that assigns a return value to a variable in the caller:

  # function definition
  function series get_uhat(series y, list xvars)
    ols y 0 xvars --quiet
    series uh = $uhat
    return uh
  end function

  # main script
  open data4-1
  list xlist = 2 3 4
  # function call
  series resid = get_uhat(price, xlist)

10.3 Deleting a function

If you have defined a function and subsequently wish to clear it out of memory, you can do so using the keywords delete or clear, as in

  function myfunc delete
  function get_uhat clear

Note, however, that if myfunc is already a defined function, providing a new definition automatically overwrites the previous one, so it should rarely be necessary to delete functions explicitly.
10.4 Function programming details

Variables versus pointers

Series, scalar, and matrix arguments to functions can be passed in two ways: "as they are", or as pointers. For example, consider the following:

  function series triple1(series x)
    return 3*x
  end function

  function series triple2(series *x)
    return 3*x
  end function

These two functions are nearly identical (and yield the same result); the only difference is that you need to feed a series into triple1, as in triple1(myseries), while triple2 must be supplied a pointer to a series, as in triple2(&myseries).

Why make the distinction? There are two main reasons for doing so: modularity and performance.

By modularity we mean the insulation of a function from the rest of the script which calls it. One of the many benefits of this approach is that your functions are easily reusable in other contexts. To achieve modularity, variables created within a function are local to that function, and are destroyed when the function exits, unless they are made available as return values and these values are "picked up" or assigned by the caller. In addition, functions do not have access to variables in "outer scope" (that is, variables that exist in the script from which the function is called) except insofar as these are explicitly passed to the function as arguments.

By default, when a variable is passed to a function as an argument, what the function actually "gets" is a copy of the outer variable, which means that the value of the outer variable is not modified by anything that goes on inside the function. But the use of pointers allows a function and its caller to "cooperate" such that an outer variable can be modified by the function. In effect, this allows a function to "return" more than one value (although only one variable can be returned directly — see below). The parameter in question is marked with a prefix of * in the function definition, and the corresponding argument is marked with the complementary prefix & in the caller. For example,

  function series get_uhat_and_ess(series y, list xvars, scalar *ess)
    ols y 0 xvars --quiet
    ess = $ess
    series uh = $uhat
    return uh
  end function

  # main script
  open data4-1
  list xlist = 2 3 4
  # function call
  scalar SSR
  series resid = get_uhat_and_ess(price, xlist, &SSR)

In the above, we may say that the function is given the address of the scalar variable SSR, and it assigns a value to that variable (under the local name ess). (For anyone used to programming in C: note that it is not necessary, or even possible, to "dereference" the variable in question within the function using the * operator. Unadorned use of the name of the variable is sufficient to access the variable in outer scope.)
function series get_uhat_and_ess(series y, list xvars, scalar *ess[null])
  ols y 0 xvars --quiet
  if !isnull(ess)
    ess = $ess
  endif
  return $uhat
end function

If the caller does not care to get the ess value, it can use null in place of a real argument:

series resid = get_uhat_and_ess(price, xlist, null)

Alternatively, trailing function arguments that have default values may simply be omitted, so the following would also be a valid call:

series resid = get_uhat_and_ess(price, xlist)

Pointer arguments may also be useful for optimizing performance: even if a variable is not modified inside the function, it may be a good idea to pass it as a pointer if it occupies a lot of memory. Otherwise, the time gretl spends transcribing the value of the variable to the local copy may be non-negligible, compared to the time the function spends doing the job it was written for.

Example 10.1 takes this to the extreme. We define two functions which return the number of rows of a matrix (a pretty fast operation). One function gets a matrix as argument, the other one a pointer to a matrix. The two functions are evaluated on a matrix with 2000 rows and 2000 columns; on a typical system, floating-point numbers take 8 bytes of memory, so the space occupied by the matrix is roughly 32 megabytes.

Running the code in Example 10.1 will produce output similar to the following (the actual numbers depend on the machine you're running the example on):

Elapsed time:
	without pointers (copy) = 3.66 seconds,
	with pointers (no copy) = 0.01 seconds.

If a pointer argument is used for this sort of purpose — and the object to which the pointer points is not modified by the function — it is a good idea to signal this to the user by adding the const qualifier, as shown for function b in Example 10.1. When a pointer argument is qualified in this way, any attempt to modify the object within the function will generate an error.

Example 10.1: Performance comparison: values versus pointer

function scalar a(matrix X)
  return rows(X)
end function

function scalar b(const matrix *X)
  return rows(X)
end function

nulldata 10
set echo off
set messages off
X = zeros(2000,2000)
r = 0

set stopwatch
loop 100
  r = a(X)
endloop
fa = $stopwatch

set stopwatch
loop 100
  r = b(&X)
endloop
fb = $stopwatch

printf "Elapsed time:\n\
\twithout pointers (copy) = %g seconds,\n\
\twith pointers (no copy) = %g seconds.\n", fa, fb

List arguments

The use of a named list as an argument to a function gives a means of supplying a function with a set of variables whose number is unknown when the function is written — for example, sets of regressors or instruments. Within the function, the list can be passed on to commands such as ols.

A list argument can also be "unpacked" using a foreach loop construct, but this requires some care. For example, suppose you have a list X and want to calculate the standard deviation of each variable in the list. You can do:

loop foreach i X
  scalar sd_$i = sd(X.$i)
endloop

Please note: a special piece of syntax is needed in this context. If we wanted to perform the above task on a list in a regular script (not inside a function), we could do

loop foreach i X
  scalar sd_$i = sd($i)
endloop

where $i gets the name of the variable at position i in the list, and sd($i) gets its standard deviation. But inside a function, working on a list supplied as an argument, if we want to reference an individual variable in the list we must use the syntax listname.varname. Hence in the example above we write sd(X.$i).
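Putting these pieces together, here is a minimal sketch of a function that unpacks a list argument and packages the derived series as a list for return (the function name list_sds and the sd_ prefix are arbitrary choices for illustration); note that the list-dot syntax appears again in the loop body:

function list list_sds (list X)
  list out = null
  loop foreach i X --quiet
    # each member of the list argument is reached via X.$i
    series sd_$i = sd(X.$i)
    list out += sd_$i
  endloop
  return out
end function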
This is necessary to avoid possible collisions between the name-space of the function and the name-space of the caller script. For example, suppose we have a function that takes a list argument, and that defines a local variable called y. Now suppose that this function is passed a list containing a variable named y. If the two name-spaces were not separated, either we'd get an error or the external variable y would be silently over-written by the local one. It is important, therefore, that list-argument variables should not be "visible" by name within functions. To "get hold of" such variables you need to use the form of identification just mentioned: the name of the list, followed by a dot, followed by the name of the variable.

Constancy of list arguments

When a named list of variables is passed to a function, the function is actually provided with a copy of the list. The function may modify this copy (for instance, adding or removing members), but the original list at the level of the caller is not modified.

Optional list arguments

If a list argument to a function is optional, this should be indicated by appending a default value of null, as in

function scalar myfunc (scalar y, list X[null])

In that case, if the caller gives null as the list argument (or simply omits the last argument) the named list X inside the function will be empty. This possibility can be detected using the nelem() function, which returns 0 for an empty list.

String arguments

String arguments can be used, for example, to provide flexibility in the naming of variables created within a function. In the following example the function mavg returns a list containing two moving averages constructed from an input series, with the names of the newly created variables governed by the string argument.

function list mavg (series y, string vname)
  series @vname_2 = (y+y(-1)) / 2
  series @vname_4 = (y+y(-1)+y(-2)+y(-3)) / 4
  list retlist = @vname_2 @vname_4
  return retlist
end function

open data9-9
list malist = mavg(nocars, "nocars")
print malist --byobs

The last line of the script will print two variables named nocars_2 and nocars_4. For details on the handling of named strings, see chapter 11.

If a string argument is considered optional, it may be given a null default value, as in

function scalar foo (series y, string vname[null])

Retrieving the names of arguments

The variables given as arguments to a function are known inside the function by the names of the corresponding parameters. For example, within the function whose signature is

function void somefun (series y)

we have the series known as y. It may be useful, however, to be able to determine the names of the variables provided as arguments. This can be done using the function argname, which takes the name of a function parameter as its single argument and returns a string. Here is a simple illustration:

function void namefun (series y)
  printf "the series given as 'y' was named %s\n", argname(y)
end function

open data9-7
namefun(QNC)

This produces the output

the series given as 'y' was named QNC

Please note that this will not always work: the arguments given to functions may be anonymous variables, created on the fly, as in somefun(log(QNC)) or somefun(CPI/100). In that case the argname function fails to return a string. Function writers who wish to make use of this facility should check the return from argname using the isstring() function, which returns 1 when given the name of a string variable, 0 otherwise.
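A minimal sketch of such a guard is shown below; this assumes that the return from argname can be passed directly to isstring(), and the function name namefun2 is hypothetical:

function void namefun2 (series y)
  # report the argument's name only when one is available
  if isstring(argname(y))
    printf "the argument was named %s\n", argname(y)
  else
    print "the argument was anonymous"
  endif
end function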
Return values

Functions can return nothing (just printing a result, perhaps), or they can return a single variable — a scalar, series, list, matrix or string. The return value, if any, is specified via a statement within the function body beginning with the keyword return, followed by either the name of a variable (which must be of the type announced on the first line of the function definition) or an expression which produces a value of the correct type.

Having a function return a list is one way of permitting the "return" of more than one variable. That is, you can define several variables inside a function and package them as a list; in this case they are not destroyed when the function exits. Here is a simple example, which also illustrates the possibility of setting the descriptive labels for variables generated in a function.

function list make_cubes (list xlist)
  list cubes = null
  loop foreach i xlist --quiet
    series $i3 = (xlist.$i)^3
    setinfo $i3 -d "cube of $i"
    list cubes += $i3
  endloop
  return cubes
end function

open data4-1
list xlist = price sqft
list cubelist = make_cubes(xlist)
print xlist cubelist --byobs
labels

A return statement causes the function to return (exit) at the point where it appears within the body of the function. A function may also exit when (a) the end of the function code is reached (in the case of a function with no return value), (b) a gretl error occurs, or (c) a funcerr statement is reached.

The funcerr keyword, which may be followed by a string enclosed in double quotes, causes a function to exit with an error flagged. If a string is provided, this is printed on exit, otherwise a generic error message is printed. This mechanism enables the author of a function to pre-empt an ordinary execution error and/or offer a more specific and helpful error message. For example,

if nelem(xlist) == 0
  funcerr "xlist must not be empty"
endif

A function may contain more than one return statement, as in

function scalar multi (bool s)
  if s
    return 1000
  else
    return 10
  endif
end function

However, it is recommended programming practice to have a single return point from a function unless this is very inconvenient. The simple example above would be better written as

function scalar multi (bool s)
  return s ? 1000 : 10
end function

Error checking

When gretl first reads and "compiles" a function definition there is minimal error-checking: the only checks are that the function name is acceptable, and, so far as the body is concerned, that you are not trying to define a function inside a function (see section 10.1). Otherwise, if the function body contains invalid commands this will become apparent only when the function is called and its commands are executed.

Debugging

The usual mechanism whereby gretl echoes commands and reports on the creation of new variables is by default suppressed when a function is being executed. If you want more verbose output from a particular function you can use either or both of the following commands within the function:

set echo on
set messages on

Alternatively, you can achieve this effect for all functions via the command set debug 1. Usually when you set the value of a state variable using the set command, the effect applies only to the current level of function execution.
For instance, if you do set messages on within function f1, which in turn calls function f2, then messages will be printed for f1 but not f2. The debug variable, however, acts globally; all functions become verbose regardless of their level. Further, you can do set debug 2: in addition to command echo and the printing of messages, this is equivalent to setting max_verbose (which produces verbose output from the BFGS maximizer) at all levels of function execution.

10.5 Old-style function syntax

Prior to gretl 1.8.4 different rules were in force for defining functions. As mentioned above, the old syntax is still supported for reasons of backward compatibility. The differences in relation to the currently recommended syntax concern the way in which a function's return type is specified, and the syntax and semantics of the return statement.

• In the old version there was no place for specifying a function's return type in the first line of its definition. You had to give the type as part of the return statement, as in return scalar x.

• The final element in the return statement could not be an expression; it had to be the name of a pre-defined variable.

• There could be only one return statement in a given function.

• The return statement did not actually cause the function to exit, it just announced the variable available for assignment by the caller (although usually such a statement would occur at the end of the function anyway).

Here is a comparison of the old and the new syntax for a trivial example:

# old style
function triple (series x)
  y = 3*x
  return series y
end function

# new style
function series triple (series x)
  return 3*x
end function

10.6 Function packages

Since gretl 1.6.0 there has been a mechanism to package functions and make them available to other users of gretl. Here is a walk-through of the process.

Load a function in memory

There are several ways to load a function:

• If you have a script file containing function definitions, open that file and run it.

• Create a script file from scratch. Include at least one function definition, and run the script.

• Open the GUI console and type a function definition interactively. This method is not particularly recommended; you are probably better off composing a function non-interactively.

For example, suppose you decide to package a function that returns the percentage change of a time series. Open a script file and type

function series pc(series y "Series to process")
  return 100 * diff(y)/y(-1)
end function

In this case, we have appended a string to the function argument, as explained in section 10.1, so as to make our interface more informative. This is not obligatory: if you omit the descriptive string, gretl will supply a predefined one.

Now run your function. You may want to make sure it works properly by running a few tests. For example, you may open the console and type

genr x = uniform()
genr dpcx = pc(x)
print x dpcx --byobs

You should see something similar to Figure 10.1 ("Output of function check"). The function seems to work ok. Once your function is debugged, you may proceed to the next stage.

Create a package

Start the GUI program and take a look at the "File, Function files" menu. This menu contains four items: "On local machine", "On server", "Edit package", "New package".

Select "New package". (This will produce an error message unless at least one user-defined function is currently loaded in memory — see the previous point.)
In the first dialog you get to select:

• A public function to package.

• Zero or more "private" helper functions.

Public functions are directly available to users; private functions are part of the "behind the scenes" mechanism in a function package.

On clicking "OK" a second dialog should appear (see Figure 10.2, "The package editor window"), where you get to enter the package information (author, version, date, and a short description). You can also enter help text for the public interface. You have a further chance to edit the code of the function(s) to be packaged, by clicking on "Edit function code". (If the package contains more than one function, a drop-down selector will be shown.) And you get to add a sample script that exercises your package. This will be helpful for potential users, and also for testing. A sample script is required if you want to upload the package to the gretl server (for which a check-box is supplied).

You won't need it right now, but the button labeled "Save as script" allows you to "reverse engineer" a function package, writing out a script that contains all the relevant function definitions.

Clicking "Save" in this dialog leads you to a File Save dialog. All being well, this should be pointing towards a directory named functions, either under the gretl system directory (if you have write permission on that) or the gretl user directory. This is the recommended place to save function package files, since that is where the program will look in the special routine for opening such files (see below).

Needless to say, the menu command "File, Function files, Edit package" allows you to make changes to a local function package.

A word on the file you just saved. By default, it will have a .gfn extension. This is a "function package" file: unlike an ordinary gretl script file, it is an XML file containing both the function code and the extra information entered in the packager. Hackers might wish to write such a file from scratch rather than using the GUI packager, but most people are likely to find it awkward. Note that XML-special characters in the function code have to be escaped, e.g. & must be represented as &amp;. Also, some elements of the function syntax differ from the standard script representation: the parameters and return values (if any) are represented in XML. Basically, the function is pre-parsed, and ready for fast loading using libxml.

Load a package

Why package functions in this way? To see what's on offer so far, try the next phase of the walk-through.

Close gretl, then re-open it. Now go to "File, Function files, On local machine". If the previous stage has gone OK, you should see the file you packaged and saved, with its short description. If you click on "Info" you get a window with all the information gretl has gleaned from the function package. If you click on the "View code" icon in the toolbar of this new window, you get a script view window showing the actual function code. Now, back in the "Function packages" window, if you click on the package's name, the relevant functions are loaded into gretl's workspace, ready to be called by clicking on the "Call" button.

After loading the function(s) from the package, open the GUI console. Try typing help foo, replacing foo with the name of the public interface from the loaded function package: if any help text was provided for the function, it should be presented.
In a similar way, you can browse and load the function packages available on the gretl server, by selecting "File, Function files, On server".

Once your package is installed on your local machine, you can use the function it contains via the graphical interface as described above, or by using the CLI, namely in a script or through the console. In the latter case, you load the function via the include command, specifying the package file as the argument, complete with the .gfn extension.

To continue with our example, load the file np.gdt (supplied with gretl among the sample datasets). Suppose you want to compute the rate of change for the variable iprod via your new function and store the result in a series named foo.

Go to "File, Function files, On local machine". You will be shown a list of the installed packages, including the one you have just created. If you select it and click on "Execute" (or double-click on the name of the function package), a window similar to the one shown in Figure 10.3 ("Using your package") will appear. Notice that the description string "Series to process", supplied with the function definition, appears to the left of the top series chooser. Click "OK" and the series foo will be generated (see Figure 10.4, "Percent change in industrial production"). You may have to go to "Data, Refresh data" in order to have your new variable show up in the main window variable list (or just press the "r" key).

Alternatively, the same could have been accomplished by the script

include pc.gfn
open np
foo = pc(iprod)

Chapter 11
Named lists and strings

11.1 Named lists

Many gretl commands take one or more lists of series as arguments. To make this easier to handle in the context of command scripts, and in particular within user-defined functions, gretl offers the possibility of named lists.

Creating and modifying named lists

A named list is created using the keyword list, followed by the name of the list, an equals sign, and an expression that forms a list. The most basic sort of expression that works in this context is a space-separated list of variables, given either by name or by ID number. For example,

list xlist = 1 2 3 4
list reglist = income price

Note that the variables in question must be of the series type: you can't include scalars in a named list.

Two special forms are available:

• If you use the keyword null on the right-hand side, you get an empty list.

• If you use the keyword dataset on the right, you get a list containing all the series in the current dataset (except the pre-defined const).

The name of the list must start with a letter, and must be composed entirely of letters, numbers or the underscore character. The maximum length of the name is 15 characters; list names cannot contain spaces.

Once a named list has been created, it will be "remembered" for the duration of the gretl session, and can be used in the context of any gretl command where a list of variables is expected. One simple example is the specification of a list of regressors:

list xlist = x1 x2 x3 x4
ols y 0 xlist

To get rid of a list, you can use the following syntax:

list xlist delete

Be careful: delete xlist will delete the variables contained in the list, so it implies data loss (which may not be what you want). On the other hand, list xlist delete will simply "undefine" the xlist identifier; the variables themselves will not be affected.

Lists can be modified in two ways.
To redefine an existing list altogether, use the same syntax as for creating a list. For example,

list xlist = 1 2 3
xlist = 4 5 6

After the second assignment, xlist contains just variables 4, 5 and 6.

To append or prepend variables to an existing list, we can make use of the fact that a named list stands in for a "longhand" list. For example, we can do

list xlist = xlist 5 6 7
xlist = 9 10 xlist 11 12

Another option for appending a term (or a list) to an existing list is to use +=, as in

xlist += cpi

To drop a variable from a list, use -=:

xlist -= cpi

In most contexts where lists are used in gretl, it is expected that they do not contain any duplicated elements. If you form a new list by simple concatenation, as in list L3 = L1 L2 (where L1 and L2 are existing lists), it's possible that the result may contain duplicates. To guard against this you can form a new list as the union of two existing ones:

list L3 = L1 || L2

The result is a list that contains all the members of L1, plus any members of L2 that are not already in L1.

In the same vein, you can construct a new list as the intersection of two existing ones:

list L3 = L1 && L2

Here L3 contains all the elements that are present in both L1 and L2.

Lists and matrices

Another way of forming a list is by assignment from a matrix. The matrix in question must be interpretable as a vector containing ID numbers of (series) variables. It may be either a row or a column vector, and each of its elements must have an integer part that is no greater than the number of variables in the data set. For example:

matrix m = {1,2,3,4}
list L = m

The above is OK provided the data set contains at least 4 variables.

Querying a list

You can determine whether an unknown variable actually represents a list using the function islist().

series xl1 = log(x1)
series xl2 = log(x2)
list xlogs = xl1 xl2
genr is1 = islist(xlogs)
genr is2 = islist(xl1)

The first genr command above will assign a value of 1 to is1 since xlogs is in fact a named list. The second genr will assign 0 to is2 since xl1 is a data series, not a list.

You can also determine the number of variables or elements in a list using the function nelem().

list xlist = 1 2 3
nl = nelem(xlist)

The (scalar) variable nl will be assigned a value of 3 since xlist contains 3 members.

You can display the membership of a named list just by giving its name, as illustrated in this interactive session:

? list xlist = x1 x2 x3
Added list 'xlist'
? xlist
x1 x2 x3

Note that print xlist will do something different, namely print the values of all the variables in xlist (as should be expected).

Generating lists of transformed variables

Given a named list of variables, you are able to generate lists of transformations of these variables using the functions log, lags, diff, ldiff, sdiff or dummify. For example,

list xlist = x1 x2 x3
list lxlist = log(xlist)
list difflist = diff(xlist)

When generating a list of lags in this way, you specify the maximum lag order inside the parentheses, before the list name and separated by a comma. For example,

list xlist = x1 x2 x3
list laglist = lags(2, xlist)

or

scalar order = 4
list laglist = lags(order, xlist)

These commands will populate laglist with the specified number of lags of the variables in xlist. You can give the name of a single series in place of a list as the second argument to lags: this is equivalent to giving a list with just one member.
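Such a list of lags can be passed straight to an estimation command. A minimal sketch (this assumes series y, x1 and x2 already exist in a time-series dataset):

list xlist = x1 x2
list laglist = lags(2, xlist)
# regress y on the contemporaneous variables plus their lags
ols y 0 xlist laglist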
The dummify function creates a set of dummy variables coding for all but one of the distinct values taken on by the original variable, which should be discrete. (The smallest value is taken as the omitted category.) Like lags, this function returns a list even if the input is a single series.

Generating series from lists

Once a list is defined, gretl offers several functions that apply to the list and return a series. In most cases, these functions also apply to single series and behave as natural extensions when applied to a list, but this is not always the case.

For recognizing and handling missing values, gretl offers several functions (see the Gretl Command Reference for details). In this context, it is worth remarking that the ok() function can be used with a list argument. For example,

list xlist = x1 x2 x3
series xok = ok(xlist)

After these commands, the series xok will have value 1 for observations where none of x1, x2, or x3 has a missing value, and value 0 for any observations where this condition is not met.

The functions max, min, mean, sd, sum and var behave horizontally rather than vertically when their argument is a list. For instance, the following commands

list Xlist = x1 x2 x3
series m = mean(Xlist)

produce a series m whose i-th element is the average of x1,i, x2,i and x3,i; missing values, if any, are implicitly discarded.

In addition, gretl provides three functions for weighted operations: wmean, wsd and wvar. Consider as an illustration Table 11.1: the first three columns are GDP per capita for France, Germany and Italy; columns 4 to 6 contain the population for each country. If we want to compute an aggregate indicator of per capita GDP, all we have to do is

list Ypc = YpcFR YpcGE YpcIT
list N = NFR NGE NIT
y = wmean(Ypc, N)

so, using the 1997 row of the table, for example

y1997 = (114.9 × 59830.635 + 124.6 × 82034.771 + 119.3 × 56890.372) / (59830.635 + 82034.771 + 56890.372) = 120.163

        YpcFR   YpcGE   YpcIT   NFR        NGE        NIT
1997    114.9   124.6   119.3   59830.635  82034.771  56890.372
1998    115.3   122.7   120.0   60046.709  82047.195  56906.744
1999    115.0   122.4   117.8   60348.255  82100.243  56916.317
2000    115.6   118.8   117.2   60750.876  82211.508  56942.108
2001    116.0   116.9   118.1   61181.560  82349.925  56977.217
2002    116.3   115.5   112.2   61615.562  82488.495  57157.406
2003    112.1   116.9   111.0   62041.798  82534.176  57604.658
2004    110.3   116.6   106.9   62444.707  82516.260  58175.310
2005    112.4   115.1   105.1   62818.185  82469.422  58607.043
2006    111.9   114.2   103.3   63195.457  82376.451  58941.499

Table 11.1: GDP per capita and population in 3 European countries (Source: Eurostat)

See the Gretl Command Reference for more details.

11.2 Named strings

For some purposes it may be useful to save a string (that is, a sequence of characters) as a named variable that can be reused. Versions of gretl higher than 1.6.0 offer this facility, but some of the refinements noted below are available only in gretl 1.7.2 and higher.

To define a string variable, you can use either of two commands, string or sprintf. The string command is simpler: you can type, for example,

string s1 = "some stuff I want to save"
string s2 = getenv("HOME")
string s3 = s1 + 11

The first field after string is the name under which the string should be saved, then comes an equals sign, then comes a specification of the string to be saved.
This can be the keyword null, to produce an empty string, or may take any of the following forms:

• a string literal (enclosed in double quotes); or
• the name of an existing string variable; or
• a function that returns a string (see below); or
• any of the above followed by + and an integer offset.

The role of the integer offset is to use a substring of the preceding element, starting at the given character offset. An empty string is returned if the offset is greater than the length of the string in question.

To add to the end of an existing string you can use the operator +=, as in

string s1 = "some stuff I want to "
string s1 += "save"

or you can use the ~ operator to join two or more strings, as in

string s1 = "sweet"
string s2 = "Home, " ~ s1 ~ " home."

Note that when you define a string variable using a string literal, no characters are treated as "special" (other than the double quotes that delimit the string). Specifically, the backslash is not used as an escape character. So, for example,

string s = "\"

is a valid assignment, producing a string that contains a single backslash character. If you wish to use backslash-escapes to denote newlines, tabs, embedded double-quotes and so on, use sprintf instead.

The sprintf command is more flexible. It works exactly as gretl's printf command except that the "format" string must be preceded by the name of a string variable. For example,

scalar x = 8
sprintf foo "var%d", x

To use the value of a string variable in a command, give the name of the variable preceded by the "at" sign, @. This notation is treated as a "macro". That is, if a sequence of characters in a gretl command following the symbol @ is recognized as the name of a string variable, the value of that variable is substituted literally into the command line before the regular parsing of the command is carried out. This is illustrated in the following interactive session:

? scalar x = 8
Generated scalar x (ID 2) = 8
? sprintf foo "var%d", x
Saved string as 'foo'
? print "@foo"
var8

Note the effect of the quotation marks in the line print "@foo". The line

? print @foo

would not print a literal "var8" as above. After pre-processing the line would read

print var8

It would therefore print the value(s) of the variable var8, if such a variable exists, or would generate an error otherwise.

In some contexts, however, one wants to treat string variables as variables in their own right: to do this, give the name of the variable without the leading @ symbol. This is the way to handle such variables in the following contexts:

• When they appear among the arguments to the commands printf and sprintf.
• On the right-hand side of a string assignment.
• When they appear as an argument to a function taking a string argument.

Here is an illustration of the use of named string arguments with printf:

? string vstr = "variance"
Generated string vstr
? printf "vstr: %12s\n", vstr
vstr:     variance

Note that vstr should not be put in quotes in this context. Similarly with

? string vstr_copy = vstr

Built-in strings

Apart from any strings that the user may define, some string variables are defined by gretl itself. These may be useful for people writing functions that include shell commands. The built-in strings are as shown in Table 11.2.
gretldir    the gretl installation directory
workdir     user's current gretl working directory
dotdir      the directory gretl uses for temporary files
gnuplot     path to, or name of, the gnuplot executable
tramo       path to, or name of, the tramo executable
x12a        path to, or name of, the x-12-arima executable
tramodir    tramo data directory
x12adir     x-12-arima data directory

Table 11.2: Built-in string variables

Reading strings from the environment

In addition, it is possible to read into gretl's named strings values that are defined in the external environment. To do this you use the function getenv, which takes the name of an environment variable as its argument. For example:

? string user = getenv("USER")
Saved string as 'user'
? string home = getenv("HOME")
Saved string as 'home'
? print "@user's home directory is @home"
cottrell's home directory is /home/cottrell

To check whether you got a non-empty value from a given call to getenv, you can use the function strlen, which retrieves the length of the string, as in

? string temp = getenv("TEMP")
Saved empty string as 'temp'
? scalar x = strlen(temp)
Generated scalar x (ID 2) = 0

The function isstring returns 1 if its argument is the name of a string variable, 0 otherwise. However, if the return is 1 the string may still be empty.

At present the getenv function can only be used on the right-hand side of a string assignment, as in the above illustrations.

Capturing strings via the shell

If shell commands are enabled in gretl, you can capture the output from such commands using the syntax

string stringname = $(shellcommand)

That is, you enclose a shell command in parentheses, preceded by a dollar sign.

Reading from a file into a string

You can read the content of a file into a string variable using the syntax

string stringname = readfile(filename)

The filename field may be given as a string variable. For example,

? sprintf fname "%s/QNC.rts", x12adir
Generated string fname
? string foo = readfile(fname)
Generated string foo

The above could also be accomplished using the "macro" variant of a string variable, provided it is placed in quotation marks:

string foo = readfile("@x12adir/QNC.rts")

The strstr function

Invocation of this function takes the form

string stringname = strstr(s1, s2)

The effect is to search s1 for the first occurrence of s2. If no such occurrence is found, an empty string is returned; otherwise the portion of s1 starting with s2 is returned. For example:

? string hw = "hello world"
Saved string as 'hw'
? string w = strstr(hw, "o")
Saved string as 'w'
? print "@w"
o world

Chapter 12
Matrix manipulation

Together with the other two basic types of data (series and scalars), gretl offers a quite comprehensive array of matrix methods. This chapter illustrates the peculiarities of matrix syntax and discusses briefly some of the more complex matrix functions. For a full listing of matrix functions and a comprehensive account of their syntax, please refer to the Gretl Command Reference.

12.1 Creating matrices

Matrices can be created using any of these methods:

1. By direct specification of the scalar values that compose the matrix — in numerical form, by reference to pre-existing scalar variables, or using computed values.
2. By providing a list of data series.
3. By providing a named list of series.
4. Using a formula of the same general type that is used with the genr command, whereby a new matrix is defined in terms of existing matrices and/or scalars, or via some special functions.
To specify a matrix directly in terms of scalars, the syntax is, for example:

matrix A = { 1, 2, 3 ; 4, 5, 6 }

The matrix is defined by rows; the elements on each row are separated by commas and the rows are separated by semi-colons. The whole expression must be wrapped in braces. Spaces within the braces are not significant. The above expression defines a 2 × 3 matrix. Each element should be a numerical value, the name of a scalar variable, or an expression that evaluates to a scalar. Directly after the closing brace you can append a single quote (') to obtain the transpose.

To specify a matrix in terms of data series the syntax is, for example,

matrix A = { x1, x2, x3 }

where the names of the variables are separated by commas. Besides names of existing variables, you can use expressions that evaluate to a series. For example, given a series x you could do

matrix A = { x, x^2 }

Each variable occupies a column (and there can only be one variable per column). You cannot use the semicolon as a row separator in this case: if you want the series arranged in rows, append the transpose symbol. The range of data values included in the matrix depends on the current setting of the sample range.

By default, when you build a matrix from series that include missing values the data rows that contain NAs are skipped. But you can modify this behavior via the command set skip_missing off. In that case NAs are converted to NaN ("Not a Number"). In the IEEE floating-point standard, arithmetic operations involving NaN always produce NaN.

Instead of giving an explicit list of variables, you may instead provide the name of a saved list (see Chapter 11), as in

list xlist = x1 x2 x3
matrix A = { xlist }

When you provide a named list, the data series are by default placed in columns, as is natural in an econometric context: if you want them in rows, append the transpose symbol.

As a special case of constructing a matrix from a list of variables, you can say

matrix A = { dataset }

This builds a matrix using all the series in the current dataset, apart from the constant (variable 0). When this dummy list is used, it must be the sole element in the matrix definition {...}. You can, however, create a matrix that includes the constant along with all other variables using horizontal concatenation (see below), as in

matrix A = {const}~{dataset}

The syntax

matrix A = {}

creates an empty matrix — a matrix with zero rows and zero columns. See section 12.2 for a discussion of this object.

Names of matrices must satisfy the same requirements as names of gretl variables in general: the name can be no longer than 15 characters, must start with a letter, and must be composed of nothing but letters, numbers and the underscore character.

12.2 Empty matrices

The main purpose of the concept of an empty matrix is to enable the user to define a starting point for subsequent concatenation operations. For instance, if X is an already defined matrix of any size, the commands

matrix A = {}
matrix B = A ~ X

result in a matrix B identical to X.

From an algebraic point of view, one can make sense of the idea of an empty matrix in terms of vector spaces: if a matrix is an ordered set of vectors, then A={} is the empty set. As a consequence, operations involving addition and multiplication don't have any clear meaning (arguably, they have none at all), but operations involving the cardinality of this set (that is, the dimension of the space spanned by A) are meaningful.
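The typical pattern, then, is to start from {} and grow the matrix inside a loop. A minimal sketch:

matrix A = {}
loop i=1..4
  # append one 2 x 1 column per iteration (~ is column-wise concatenation)
  A = A ~ { i; i^2 }
endloop
print A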
Legal operations on empty matrices are listed in Table 12.1. (All other matrix operations generate an error when an empty matrix is given as an argument.) In line with the above interpretation, some matrix functions return an empty matrix under certain conditions: the functions diag, vec, vech, unvech when the argument is an empty matrix; the functions I, ones, zeros, mnormal, muniform when one or more of the arguments is 0; and the function nullspace when its argument has full column rank.

Function         Return value
A', transp(A)    A
rows(A)          0
cols(A)          0
rank(A)          0
det(A)           NA
ldet(A)          NA
tr(A)            NA
onenorm(A)       NA
infnorm(A)       NA
rcond(A)         NA

Table 12.1: Valid functions on an empty matrix, A

12.3 Selecting sub-matrices

You can select sub-matrices of a given matrix using the syntax

A[rows,cols]

where rows can take any of these forms:

1. empty — selects all rows
2. a single integer — selects the single specified row
3. two integers separated by a colon — selects a range of rows
4. the name of a matrix — selects the specified rows

With regard to option 2, the integer value can be given numerically, as the name of an existing scalar variable, or as an expression that evaluates to a scalar. With option 4, the index matrix given in the rows field must be either p × 1 or 1 × p, and should contain integer values in the range 1 to n, where n is the number of rows in the matrix from which the selection is to be made. The cols specification works in the same way, mutatis mutandis. Here are some examples.

matrix B = A[1,]
matrix B = A[2:3,3:5]
matrix B = A[2,2]
matrix idx = { 1, 2, 6 }
matrix B = A[idx,]

The first example selects row 1 from matrix A; the second selects a 2 × 3 submatrix; the third selects a scalar; and the last selects rows 1, 2, and 6 from matrix A, via the index matrix idx.

In addition there is a pre-defined index specification, diag, which selects the principal diagonal of a square matrix, as in B[diag], where B is square.

You can use selections of this sort on either the right-hand side of a matrix-generating formula or the left. Here is an example of use of a selection on the right, to extract a 2 × 2 submatrix B from a 3 × 3 matrix A:

matrix A = { 1, 2, 3; 4, 5, 6; 7, 8, 9 }
matrix B = A[1:2,2:3]

And here are examples of selection on the left. The second line below writes a 2 × 2 identity matrix into the bottom right corner of the 3 × 3 matrix A. The fourth line replaces the diagonal of A with 1s.

matrix A = { 1, 2, 3; 4, 5, 6; 7, 8, 9 }
matrix A[2:3,2:3] = I(2)
matrix d = { 1, 1, 1 }
matrix A[diag] = d

12.4 Matrix operators

The following binary operators are available for matrices:

+    addition
-    subtraction
*    ordinary matrix multiplication
'    pre-multiplication by transpose
/    matrix "division" (see below)
~    column-wise concatenation
|    row-wise concatenation
**   Kronecker product
=    test for equality

In addition, the following operators ("dot" operators) apply on an element-by-element basis:

.+  .-  .*  ./  .^  .=  .>  .<

Here are explanations of the less obvious cases.

For matrix addition and subtraction, in general the two matrices have to be of the same dimensions, but an exception to this rule is granted if one of the operands is a 1 × 1 matrix or scalar. The scalar is implicitly promoted to the status of a matrix of the correct dimensions, all of whose elements are equal to the given scalar value.
For example, if A is an m × n matrix and k a scalar, then the commands

matrix C = A + k
matrix D = A - k

both produce m × n matrices, with elements cij = aij + k and dij = aij − k respectively.

By "pre-multiplication by transpose" we mean, for example, that

matrix C = X'Y

produces the product of X-transpose and Y. In effect, the expression X'Y is shorthand for X'*Y (which is also valid).

In matrix "division", the statement

matrix C = A/B

is interpreted as a request to find the matrix C that solves BC = A. If B is a square matrix, this is treated as equivalent to B⁻¹A, which fails if B is singular; the numerical method employed here is the LU decomposition. If B is a T × k matrix with T > k, then C is the least-squares solution, C = (B'B)⁻¹B'A, which fails if B'B is singular; the numerical method employed here is the QR decomposition. Otherwise, the operation necessarily fails.

In "dot" operations a binary operation is applied element by element; the result of this operation is obvious if the matrices are of the same size. However, there are several other cases where such operators may be applied. For example, if we write

matrix C = A .- B

then the result C depends on the dimensions of A and B. Let A be an m × n matrix and let B be p × q; the result is as follows:

Case                                                      Result
Dimensions match (m = p and n = q)                        cij = aij − bij
A is a column vector; rows match (m = p; n = 1)           cij = ai − bij
B is a column vector; rows match (m = p; q = 1)           cij = aij − bi
A is a row vector; columns match (m = 1; n = q)           cij = aj − bij
B is a row vector; columns match (p = 1; n = q)           cij = aij − bj
A is a column vector, B a row vector (n = 1; p = 1)       cij = ai − bj
A is a row vector, B a column vector (m = 1; q = 1)       cij = aj − bi
A is a scalar (m = 1 and n = 1)                           cij = a − bij
B is a scalar (p = 1 and q = 1)                           cij = aij − b

If none of the above conditions are satisfied the result is undefined and an error is flagged.

Note that this convention makes it unnecessary, in most cases, to use diagonal matrices to perform transformations by means of ordinary matrix multiplication: if Y = XV, where V is diagonal, it is computationally much more convenient to obtain Y via the instruction

matrix Y = X .* v

where v is a row vector containing the diagonal of V.

In column-wise concatenation of an m × n matrix A and an m × p matrix B, the result is an m × (n+p) matrix. That is,

matrix C = A ~ B

produces C = [ A  B ].

Row-wise concatenation of an m × n matrix A and a p × n matrix B produces an (m+p) × n matrix. That is,

matrix C = A | B

produces the matrix C in which A is stacked on top of B.

12.5 Matrix–scalar operators

For matrix A and scalar k, the operators shown in Table 12.2 are available. (Addition and subtraction were discussed in section 12.4 but we include them in the table for completeness.)

Expression          Effect
matrix B = A * k    bij = k·aij
matrix B = A / k    bij = aij/k
matrix B = k / A    bij = k/aij
matrix B = A + k    bij = aij + k
matrix B = A - k    bij = aij − k
matrix B = k - A    bij = k − aij
matrix B = A % k    bij = aij modulo k

Table 12.2: Matrix–scalar operators

In addition, for square A and integer k ≥ 0, B = A^k produces a matrix B which is A raised to the power k.

12.6 Matrix functions

Most of the gretl functions available for scalars and series also apply to matrices in an element-by-element fashion, and as such their behavior should be pretty obvious. This is the case for functions such as log, exp, sin, etc. These functions have the effects documented in relation to the genr command. For example, if a matrix A is already defined, then
matrix B = sqrt(A)

generates a matrix such that bij = sqrt(aij). All such functions require a single matrix as argument, or an expression which evaluates to a single matrix. (Note that to find the "matrix square root" you need the cholesky function — see below; moreover, the exp function computes the exponential element by element, and therefore does not return the matrix exponential unless the matrix is diagonal. To get the matrix exponential, use mexp.)

In this section, we review some aspects of genr functions that apply specifically to matrices. A full account of each function is available in the Gretl Command Reference. The matrix functions, grouped by category, are shown in Table 12.3.

Creation and I/O: colnames, diag, I, lower, makemask, mnormal, mread, mreverse, muniform, mwrite, ones, seq, sort, unvech, upper, vec, vech, zeros

Shape/size/arrangement: cols, dsort, mshape, msortby, rows, selifc, selifr, trimr

Matrix algebra: cdiv, cholesky, cmult, det, eigengen, eigensym, fft, ffti, ginv, infnorm, inv, invpd, ldet, mexp, nullspace, onenorm, polroots, psdroot, qform, qrdecomp, rank, rcond, svd, toepsolv, tr, transp

Statistics/transformations: cdemean, corrgm, cum, fcstats, imaxc, imaxr, iminc, iminr, maxc, maxr, mcorr, mcov, mcovg, meanc, meanr, minc, minr, mlag, mols, mpols, mxtab, pergm, princomp, quantile, resample, sdc, sumc, sumr, values

Data utilities: replace

Filters: filter, kfilter, ksimul, ksmooth

Numerical methods: BFGSmax, fdjac

Transformations: lincomb

Table 12.3: Matrix functions by category

Matrix reshaping

In addition to the methods discussed in sections 12.1 and 12.3, a matrix can also be created by re-arranging the elements of a pre-existing matrix. This is accomplished via the mshape function. It takes three arguments: the input matrix, A, and the rows and columns of the target matrix, r and c respectively. Elements are read from A and written to the target in column-major order. If A contains fewer elements than n = r × c, they are repeated cyclically; if A has more elements, only the first n are used. For example:

matrix a = mnormal(2,3)
a
matrix b = mshape(a,3,1)
b
matrix b = mshape(a,5,2)
b

produces

? a
a

      1.2323      0.99714     -0.39078
     0.54363      0.43928     -0.48467

? matrix b = mshape(a,3,1)
Generated matrix b
? b
b

      1.2323
     0.54363
     0.99714

? matrix b = mshape(a,5,2)
Replaced matrix b
? b
b

      1.2323     -0.48467
     0.54363       1.2323
     0.99714      0.54363
     0.43928      0.99714
    -0.39078      0.43928

Complex multiplication and division

Gretl has no native provision for complex numbers. However, basic operations can be performed on vectors of complex numbers by using the convention that a vector of n complex numbers is represented as an n × 2 matrix, where the first column contains the real part and the second the imaginary part.

Addition and subtraction are trivial; the functions cmult and cdiv compute the complex product and division, respectively, of two input matrices, A and B, representing complex numbers. These matrices must have the same number of rows, n, and either one or two columns. The first column contains the real part and the second (if present) the imaginary part. The return value is an n × 2 matrix, or, if the result has no imaginary part, an n-vector.

For example, suppose you have z1 = [1 + 2i, 3 + 4i] and z2 = [1, i]:
? z1 = {1,2;3,4}
Generated matrix z1
? z2 = I(2)
Generated matrix z2
? conj_z1 = z1 .* {1,-1}
Generated matrix conj_z1
? eval cmult(z1,z2)
  1   2
 -4   3
? eval cmult(z1,conj_z1)
  5
 25

Multiple returns and the null keyword

Some functions take one or more matrices as arguments and compute one or more matrices; these are:

eigensym    Eigen-analysis of symmetric matrix
eigengen    Eigen-analysis of general matrix
mols        Matrix OLS
qrdecomp    QR decomposition
svd         Singular value decomposition (SVD)

The general rule is: the "main" result of the function is always returned as the result proper. Auxiliary returns, if needed, are retrieved using pre-existing matrices, which are passed to the function as pointers (see section 10.4). If such values are not needed, the pointer may be substituted with the keyword null.

The syntax for qrdecomp, eigensym and eigengen is of the form

matrix B = func(A, &C)

The first argument, A, represents the input data, that is, the matrix whose decomposition or analysis is required. The second argument must be either the name of an existing matrix preceded by & (to indicate the "address" of the matrix in question), in which case an auxiliary result is written to that matrix, or the keyword null, in which case the auxiliary result is not produced, or is discarded. In case a non-null second argument is given, the specified matrix will be over-written with the auxiliary result. (It is not required that the existing matrix be of the right dimensions to receive the result.)

The function eigensym computes the eigenvalues, and optionally the right eigenvectors, of a symmetric n × n matrix. The eigenvalues are returned directly in a column vector of length n; if the eigenvectors are required, they are returned in an n × n matrix. For example:

matrix V
matrix E = eigensym(M, &V)
matrix E = eigensym(M, null)

In the first case E holds the eigenvalues of M and V holds the eigenvectors. In the second, E holds the eigenvalues but the eigenvectors are not computed.

The function eigengen computes the eigenvalues, and optionally the eigenvectors, of a general n × n matrix. The eigenvalues are returned directly in an n × 2 matrix, the first column holding the real components and the second column the imaginary components. If the eigenvectors are required (that is, if the second argument to eigengen is not null), they are returned in an n × n matrix. The column arrangement of this matrix is somewhat non-trivial: the eigenvectors are stored in the same order as the eigenvalues, but the real eigenvectors occupy one column, whereas complex eigenvectors take two (the real part comes first); the total number of columns is still n, because the conjugate eigenvector is skipped. Example 12.1 provides a (hopefully) clarifying example (see also subsection 12.6).

Example 12.1: Complex eigenvalues and eigenvectors

set seed 34756
matrix v
A = mnormal(3,3)

/* do the eigen-analysis */
l = eigengen(A,&v)
/* eigenvalue 1 is real, 2 and 3 are complex conjugates */
print l
print v

/* column 1 contains the first eigenvector (real) */
B = A*v[,1]
c = l[1,1] * v[,1]
/* B should equal c */
print B
print c

/* columns 2:3 contain the real and imaginary parts of eigenvector 2 */
B = A*v[,2:3]
c = cmult(ones(3,1)*(l[2,]),v[,2:3])
/* B should equal c */
print B
print c

The qrdecomp function computes the QR decomposition of an m × n matrix A: A = QR, where Q is an m × n orthogonal matrix and R is an n × n upper triangular matrix.
The matrix Q is returned directly, while R can be retrieved via the second argument. Here are two examples:

matrix R
matrix Q = qrdecomp(M, &R)
matrix Q = qrdecomp(M, null)

In the first example, the triangular R is saved as R; in the second, R is discarded. The first line above shows an example of a "simple declaration" of a matrix: R is declared to be a matrix variable but is not given any explicit value. In this case the variable is initialized as a 1 × 1 matrix whose single element equals zero.

The syntax for svd is

matrix B = func(A, &C, &D)

The function svd computes all or part of the singular value decomposition of the real m × n matrix A. Let k = min(m, n). The decomposition is

A = UΣV

where U is an m × k orthogonal matrix, Σ is a k × k diagonal matrix, and V is a k × n orthogonal matrix. (This is not the only definition of the SVD: some writers define U as m × m, Σ as m × n with k non-zero diagonal elements, and V as n × n.) The diagonal elements of Σ are the singular values of A; they are real and non-negative, and are returned in descending order. The first k columns of U, and the first k rows of V, are the left and right singular vectors of A.

The svd function returns the singular values, in a vector of length k. The left and/or right singular vectors may be obtained by supplying non-null values for the second and/or third arguments respectively. For example:

matrix s = svd(A, &U, &V)
matrix s = svd(A, null, null)
matrix s = svd(A, null, &V)

In the first case both sets of singular vectors are obtained; in the second case only the singular values are obtained; and in the third, the right singular vectors are obtained but U is not computed. Please note: when the third argument is non-null, what is provided is V as defined above (the k × n matrix), so no further transposition is needed. To reconstitute the original matrix from its SVD, one can do:

matrix s = svd(A, &U, &V)
matrix B = (U.*s)*V

Finally, the syntax for mols is

matrix B = mols(Y, X, &U)

This function returns the OLS estimates obtained by regressing the T × n matrix Y on the T × k matrix X, that is, a k × n matrix holding (X'X)⁻¹X'Y. The Cholesky decomposition is used. The matrix U, if not null, is used to store the residuals.

Reading and writing matrices from/to text files

The two functions mread and mwrite can be used for basic matrix input/output. This can be useful to enable gretl to exchange data with other programs. The mread function accepts one string parameter: the name of the (plain text) file from which the matrix is to be read. The file in question must conform to the following rules:

1. The columns must be separated by spaces or tab characters.
2. The decimal separator must be the dot "." character.
3. The first line in the file must contain two integers, separated by a space or a tab, indicating the number of rows and columns, respectively.

Should an error occur (such as the file being badly formatted or inaccessible), an empty matrix (see section 12.2) is returned.

The complementary function mwrite produces text files formatted as described above. The column separator is the tab character, so import into spreadsheets should be straightforward. Usage is illustrated in Example 12.2.
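Before turning to that example, here is the round trip at its most minimal (a sketch; the file name is arbitrary):

matrix A = mnormal(2,2)
err = mwrite(A, "test.mat")   # returns non-zero on failure
matrix B = mread("test.mat")
print A B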
Matrices stored via the mwrite command can be easily read by other programs; the following table summarizes the appropriate commands for reading a matrix A from a file called a.mat in some widely-used programs. (Matlab users may find the Octave example helpful, since the two programs are mostly compatible with one another.)

GAUSS:
  tmp = load a.mat;
  A = reshape(tmp[3:rows(tmp)],tmp[1],tmp[2]);

Octave:
  fd = fopen("a.mat");
  [r,c] = fscanf(fd, "%d %d", "C");
  A = reshape(fscanf(fd, "%g", r*c),c,r)';
  fclose(fd);

Ox:
  decl A = loadmat("a.mat");

R:
  A <- as.matrix(read.table("a.mat", skip=1))

Example 12.2: Matrix input/output via text files

nulldata 64
scalar n = 3
string f1 = "a.csv"
string f2 = "b.csv"

matrix a = mnormal(n,n)
matrix b = inv(a)

err = mwrite(a, f1)
if err != 0
  printf "Failed to write %s\n", f1
else
  err = mwrite(b, f2)
endif

if err != 0
  printf "Failed to write %s\n", f2
else
  c = mread(f1)
  d = mread(f2)
  a = c*d
  printf "The following matrix should be an identity matrix\n"
  print a
endif

12.7 Matrix accessors

In addition to the matrix functions discussed above, various "accessor" strings allow you to create copies of internal matrices associated with models previously estimated. These are set out in Table 12.4.

$coeff    matrix of estimated coefficients
$compan   companion matrix (after VAR or VECM estimation)
$jalpha   matrix α (loadings) from Johansen's procedure
$jbeta    matrix β (cointegration vectors) from Johansen's procedure
$jvbeta   covariance matrix for the unrestricted elements of β from Johansen's procedure
$rho      autoregressive coefficients for error process
$sigma    residual covariance matrix
$stderr   matrix of estimated standard errors
$uhat     matrix of residuals
$vcv      covariance matrix of parameter estimates
$yhat     matrix of fitted values

Table 12.4: Matrix accessors for model data

Many of the accessors in Table 12.4 behave somewhat differently depending on the sort of model that is referenced, as follows:

• Single-equation models: $sigma gets a scalar (the standard error of the regression); $coeff and $stderr get column vectors; $uhat and $yhat get series.

• System estimators: $sigma gets the cross-equation residual covariance matrix; $uhat and $yhat get matrices with one column per equation. The format of $coeff and $stderr depends on the nature of the system: for VARs and VECMs (where the matrix of regressors is the same for all equations) these return matrices with one column per equation, but for other system estimators they return a big column vector.

• VARs and VECMs: $vcv is not available, but (X'X)⁻¹ (where X is the common matrix of regressors) is available as $xtxinv.

If the accessors are given without any prefix, they retrieve results from the last model estimated, if any. Alternatively, they may be prefixed with the name of a saved model plus a period (.), in which case they retrieve results from the specified model. Here are some examples:

matrix u = $uhat
matrix b = m1.$coeff
matrix v2 = m1.$vcv[1:2,1:2]

The first command grabs the residuals from the last model; the second grabs the coefficient vector from model m1; and the third (which uses the mechanism of sub-matrix selection described above) grabs a portion of the covariance matrix from model m1.

If the model in question is a VAR or VECM (and only then), $compan returns the companion matrix. After a vector error correction model is estimated via Johansen's procedure, the matrices $jalpha and $jbeta are also available.
For a VAR or VECM (and only for these models), $compan returns the companion matrix. After a vector error correction model is estimated via Johansen's procedure, the matrices $jalpha and $jbeta are also available. These have a number of columns equal to the chosen cointegration rank; therefore, the product

  matrix Pi = $jalpha * $jbeta'

returns the reduced-rank estimate of A(1). Since β is automatically identified via the Phillips normalization (see section 22.5), its unrestricted elements do have a proper covariance matrix, which can be retrieved through the $jvbeta accessor.

12.8 Namespace issues

Matrices share a common namespace with data series and scalar variables. In other words, no two objects of any of these types can have the same name. It is an error to attempt to change the type of an existing variable, for example:

  scalar x = 3
  matrix x = ones(2,2) # wrong!

It is possible, however, to delete or rename an existing variable then reuse the name for a variable of a different type:

  scalar x = 3
  delete x
  matrix x = ones(2,2) # OK

12.9 Creating a data series from a matrix

Section 12.1 above describes how to create a matrix from a data series or set of series. You may sometimes wish to go in the opposite direction, that is, to copy values from a matrix into a regular data series. The syntax for this operation is

  series sname = mspec

where sname is the name of the series to create and mspec is the name of the matrix to copy from, possibly followed by a matrix selection expression. Here are two examples.

  series s = x
  series u1 = U[,1]

It is assumed that x and U are pre-existing matrices. In the second example the series u1 is formed from the first column of the matrix U.

For this operation to work, the matrix (or matrix selection) must be a vector with length equal to either the full length of the current dataset, n, or the length of the current sample range, n′. If n′ < n then only n′ elements are drawn from the matrix; if the matrix or selection comprises n elements, the n′ values starting at element t₁ are used, where t₁ represents the starting observation of the sample range. Any values in the series that are not assigned from the matrix are set to the missing code.

12.10 Matrices and lists

To facilitate the manipulation of named lists of variables (see Chapter 11), it is possible to convert between matrices and lists. In section 12.1 above we mentioned the facility for creating a matrix from a list of variables, as in

  matrix M = { listname }

That formulation, with the name of the list enclosed in braces, builds a matrix whose columns hold the variables referenced in the list. What we are now describing is a different matter: if we say

  matrix M = listname

(without the braces), we get a row vector whose elements are the ID numbers of the variables in the list. This special case of matrix generation cannot be embedded in a compound expression. The syntax must be as shown above, namely simple assignment of a list to a matrix.

To go in the other direction, you can include a matrix on the right-hand side of an expression that defines a list, as in

  list Xl = M

where M is a matrix. The matrix must be suitable for conversion; that is, it must be a row or column vector containing non-negative whole-number values, none of which exceeds the highest ID number of a variable (series or scalar) in the current dataset.
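A minimal sketch of the round trip, again using the data4-1 dataset supplied with gretl:

  open data4-1
  list L = const sqft
  matrix m = L       # row vector holding the ID numbers of const and sqft
  print m
  list L2 = m        # convert the vector back into a list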
Example 12.3 illustrates the use of this sort of conversion to "normalize" a list, moving the constant (variable 0) to first position.

Example 12.3: Manipulating a list

  function void normalize_list (matrix *x)
    # If the matrix (representing a list) contains var 0,
    # but not in first position, move it to first position
    if (x[1] != 0)
      scalar k = cols(x)
      loop for (i=2; i<=k; i++) --quiet
        if (x[i] == 0)
          x[i] = x[1]
          x[1] = 0
          break
        endif
      endloop
    endif
  end function

  open data9-7
  list Xl = 2 3 0 4
  matrix x = Xl
  normalize_list(&x)
  list Xl = x

12.11 Deleting a matrix

To delete a matrix, just write

  delete M

where M is the name of the matrix to be deleted.

12.12 Printing a matrix

To print a matrix, the easiest way is to give the name of the matrix in question on a line by itself, which is equivalent to using the print command:

  matrix M = mnormal(100,2)
  M
  print M

You can get finer control over the formatting of output by using the printf command, as illustrated in the interactive session below:

  ? matrix Id = I(2)
  Generated matrix Id
  ? print Id
  Id (2 x 2)

    1   0
    0   1

  ? printf "%10.3f", Id
       1.000     0.000
       0.000     1.000

For presentation purposes you may wish to give titles to the columns of a matrix. For this you can use the colnames function: the first argument is a matrix and the second is either a named list of variables, whose names will be used as headings, or a string that contains as many space-separated substrings as the matrix has columns. For example,

  ? matrix M = mnormal(3,3)
  ? colnames(M, "foo bar baz")
  ? print M
  M (3 x 3)

           foo        bar        baz
        1.7102   -0.76072   0.089406
      -0.99780    -1.9003   -0.25123
      -0.91762   -0.39237    -1.6114

12.13 Example: OLS using matrices

Example 12.4 shows how matrix methods can be used to replicate gretl's built-in OLS functionality.

Example 12.4: OLS via matrix methods

  open data4-1
  matrix X = { const, sqft }
  matrix y = { price }
  matrix b = invpd(X'X) * X'y
  print "estimated coefficient vector"
  b
  matrix u = y - X*b
  scalar SSR = u'u
  scalar s2 = SSR / (rows(X) - rows(b))
  matrix V = s2 * inv(X'X)
  V
  matrix se = sqrt(diag(V))
  print "estimated standard errors"
  se
  # compare with built-in function
  ols price const sqft --vcv
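As a small extension of Example 12.4, one can compute t-ratios and the coefficient of determination from the same ingredients. This is a sketch only: it reuses the objects b, se, SSR and y from the example above, and the R² formula assumes the regression includes a constant.

  # continuing Example 12.4
  matrix tstat = b ./ se            # t-ratios, element by element
  matrix yc = y - meanc(y)          # deviations of y from its mean
  scalar R2 = 1 - SSR / (yc'yc)     # R-squared (constant assumed present)
  print tstat
  print R2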
Chapter 13
Cheat sheet

This chapter explains how to perform some common (and some not so common) tasks in gretl's scripting language. Some but not all of the techniques listed here are also available through the graphical interface. Although the graphical interface may be more intuitive and less intimidating at first, we encourage users to take advantage of the power of gretl's scripting language as soon as they feel comfortable with the program.

13.1 Dataset handling

"Weird" periodicities

Problem: You have data sampled every 3 minutes from 9am onwards, so you will probably want to specify the hour as 20 periods.

Solution:

  setobs 20 9:1 --special

Comment: Now functions like sdiff() ("seasonal" difference) or estimation methods like seasonal ARIMA will work as expected.

Help, my data are backwards!

Problem: Gretl expects time series data to be in chronological order (most recent observation last), but you have imported third-party data that are in reverse order (most recent first).

Solution:

  setobs 1 1 --cross-section
  genr sortkey = -obs
  dataset sortby sortkey
  setobs 1 1950 --time-series

Comment: The first line is required only if the data currently have a time series interpretation: it removes that interpretation, because (for fairly obvious reasons) the dataset sortby operation is not allowed for time series data. The following two lines reverse the data, using the negative of the built-in index variable obs. The last line is just illustrative: it establishes the data as annual time series, starting in 1950.

If you have a dataset that is mostly the right way round, but a particular variable is wrong, you can reverse that variable as follows:

  genr x = sortby(-obs, x)

Dropping missing observations selectively

Problem: You have a dataset with many variables and want to restrict the sample to those observations for which there are no missing values for the variables x1, x2 and x3.

Solution:

  list X = x1 x2 x3
  genr sel = ok(X)
  smpl sel --restrict

Comment: You can now save the file via a store command to preserve a subsampled version of the dataset.

"By" operations

Problem: You have a discrete variable d and you want to run some commands (for example, estimate a model) by splitting the sample according to the values of d.

Solution:

  matrix vd = values(d)
  m = rows(vd)
  loop for i=1..m
    scalar sel = vd[i]
    smpl (d == sel) --restrict --replace
    ols y const x
  endloop
  smpl --full

Comment: The main ingredient here is a loop. You can have gretl perform as many instructions as you want for each value of d, as long as they are allowed inside a loop.

Adding a time series to a panel

Problem: You have a panel dataset (comprising observations of n individuals in each of T periods) and you want to add a variable which is available in straight time-series form. For example, you want to add annual CPI data to a panel in order to deflate nominal income figures.

In gretl a panel is represented in stacked time-series format, so in effect the task is to create a new variable which holds n stacked copies of the original time series. Let's say the panel comprises 500 individuals observed in the years 1990, 1995 and 2000 (n = 500, T = 3), and we have these CPI data in the ASCII file cpi.txt:

  date  cpi
  1990  130.658
  1995  152.383
  2000  172.192

What we need is for the CPI variable in the panel to repeat these three values 500 times.

Solution: Simple! With the panel dataset open in gretl,

  append cpi.txt

Comment: If the length of the time series is the same as the length of the time dimension in the panel (3 in this example), gretl will perform the stacking automatically. Rather than using the append command you could use the "Append data" item under the File menu in the GUI program. For this to work, your main dataset must be recognized as a panel. This can be arranged via the setobs command or the "Dataset structure" item under the Data menu.

13.2 Creating/modifying variables

Generating a dummy variable for a specific observation

Problem: Generate d_t = 0 for all observations but one, for which d_t = 1.

Solution:

  genr d = (t == "1984:2")

Comment: The internal variable t is used to refer to observations in string form, so if you have a cross-section sample you may just use d = (t == "123"); of course, if the dataset has data labels, use the corresponding label. For example, if you open the dataset mrw.gdt, supplied with gretl among the examples, a dummy variable for Italy could be generated via

  genr DIta = (t == "Italy")

Note that this method does not require scripting at all. In fact, you might as well use the GUI menu item "Add/Define new variable" for the same purpose, with the same syntax.
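Relatedly, if what you need is a full set of periodic dummies rather than a one-off dummy, gretl has a built-in shortcut. A sketch, assuming a quarterly dataset such as data9-7 is open (on quarterly data the dummies are created as dq1 through dq4):

  open data9-7
  genr dummy   # creates one dummy per period: dq1, dq2, dq3, dq4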
Generating an ARMA(1,1)

Problem: Generate y_t = 0.9y_{t−1} + ε_t − 0.5ε_{t−1}, with ε_t ∼ NIID(0, 1).

Solution:

  alpha = 0.9
  theta = -0.5
  series e = normal()
  series y = 0
  series y = alpha * y(-1) + e + theta * e(-1)

Comment: The statement series y = 0 is necessary because the next statement evaluates y recursively, so y[1] must be set. Note that you must use the keyword series here instead of writing genr y = 0 or simply y = 0, to ensure that y is a series and not a scalar.

Recoding a variable

Problem: You want to recode a variable by classes. For example, you have the age of a sample of individuals (x_i) and you need to compute age classes (y_i) as

  y_i = 1  for x_i < 18
  y_i = 2  for 18 ≤ x_i < 65
  y_i = 3  for x_i ≥ 65

Solution:

  series y = 1 + (x >= 18) + (x >= 65)

Comment: True and false expressions are evaluated as 1 and 0 respectively, so they can be manipulated algebraically like any other number. The same result could also be achieved by using the conditional assignment operator (see below), but in most cases it would probably lead to more convoluted constructs.

Conditional assignment

Problem: Generate y_t via the following rule:

  y_t = x_t  for d_t > a
  y_t = z_t  for d_t ≤ a

Solution:

  series y = (d > a) ? x : z

Comment: There are several alternatives to the one presented above. One is a brute force solution using loops. Another, more efficient but still suboptimal, would be

  series y = (d>a)*x + (d<=a)*z

However, the ternary conditional assignment operator is not only the most numerically efficient way to accomplish what we want, it is also remarkably transparent to read when one gets used to it. Some readers may find it helpful to note that the conditional assignment operator works exactly the same way as the =IF() function in spreadsheets.

Generating a time index for panel datasets

Problem: Gretl has a $unit accessor, but not the equivalent for time. What should I use?

Solution:

  series x = time

Comment: The special construct genr time and its variants are aware of whether a dataset is a panel.

13.3 Neat tricks

Interaction dummies

Problem: You want to estimate the model y_i = x_i β₁ + z_i β₂ + d_i β₃ + (d_i · z_i)β₄ + ε_i, where d_i is a dummy variable while x_i and z_i are vectors of explanatory variables.

Solution:

  list X = x1 x2 x3
  list Z = z1 z2
  list dZ = null
  loop foreach i Z
    series d$i = d * $i
    list dZ = dZ d$i
  endloop
  ols y X Z d dZ

Comment: It's amazing what string substitution can do for you, isn't it?

Realized volatility

Problem: Given data sampled by the minute, you want to compute the "realized volatility" for the hour as RV_t = (1/60) Σ_{τ=1}^{60} y²_{t:τ}. Imagine your sample starts at time 1:1.

Solution:

  smpl --full
  genr time
  genr minute = int(time/60) + 1
  genr second = time % 60
  setobs minute second --panel
  genr rv = psd(y)^2
  setobs 1 1
  smpl second==1 --restrict
  store foo rv

Comment: Here we trick gretl into thinking that our dataset is a panel dataset, where the minutes are the "units" and the seconds are the "time"; this way, we can take advantage of the special function psd(), panel standard deviation. Then we simply drop all observations but one per minute and save the resulting data (store foo rv translates as "store in the gretl datafile foo.gdt the series rv").
Looping over two paired lists

Problem: Suppose you have two lists with the same number of elements, and you want to apply some command to corresponding elements over a loop.

Solution:

  list L1 = a b c
  list L2 = x y z
  k1 = 1
  loop foreach i L1 --quiet
    k2 = 1
    loop foreach j L2 --quiet
      if k1 == k2
        ols $i 0 $j
      endif
      k2++
    endloop
    k1++
  endloop

Comment: The simplest way to achieve the result is to loop over all possible combinations and filter out the unneeded ones via an if condition, as above. That said, in some cases variable names can help. For example, if

  list Lx = x1 x2 x3
  list Ly = y1 y2 y3

looping over the integers is quite intuitive and certainly more elegant:

  loop for i=1..3
    ols y$i const x$i
  endloop

Part II
Econometric methods

Chapter 14
Robust covariance matrix estimation

14.1 Introduction

Consider (once again) the linear regression model

  y = Xβ + u                                         (14.1)

where y and u are T-vectors, X is a T × k matrix of regressors, and β is a k-vector of parameters. As is well known, the estimator of β given by Ordinary Least Squares (OLS) is

  β̂ = (X′X)⁻¹X′y                                     (14.2)

If the condition E(u|X) = 0 is satisfied, this is an unbiased estimator; under somewhat weaker conditions the estimator is biased but consistent. It is straightforward to show that when the OLS estimator is unbiased (that is, when E(β̂ − β) = 0), its variance is

  Var(β̂) = E[(β̂ − β)(β̂ − β)′] = (X′X)⁻¹X′ΩX(X′X)⁻¹    (14.3)

where Ω = E(uu′) is the covariance matrix of the error terms.

Under the assumption that the error terms are independently and identically distributed (iid) we can write Ω = σ²I, where σ² is the (common) variance of the errors (and the covariances are zero). In that case (14.3) simplifies to the "classical" formula,

  Var(β̂) = σ²(X′X)⁻¹                                  (14.4)

If the iid assumption is not satisfied, two things follow. First, it is possible in principle to construct a more efficient estimator than OLS, for instance some sort of Feasible Generalized Least Squares (FGLS). Second, the simple "classical" formula for the variance of the least squares estimator is no longer correct, and hence the conventional OLS standard errors (which are just the square roots of the diagonal elements of the matrix defined by (14.4)) do not provide valid means of statistical inference.

In the recent history of econometrics there are broadly two approaches to the problem of non-iid errors. The "traditional" approach is to use an FGLS estimator. For example, if the departure from the iid condition takes the form of time-series dependence, and if one believes that this could be modeled as a case of first-order autocorrelation, one might employ an AR(1) estimation method such as Cochrane–Orcutt, Hildreth–Lu, or Prais–Winsten. If the problem is that the error variance is non-constant across observations, one might estimate the variance as a function of the independent variables and then perform weighted least squares, using as weights the reciprocals of the estimated variances.

While these methods are still in use, an alternative approach has found increasing favor: that is, use OLS but compute standard errors (or more generally, covariance matrices) that are robust with respect to deviations from the iid assumption. This is typically combined with an emphasis on using large datasets, large enough that the researcher can place some reliance on the (asymptotic) consistency property of OLS. This approach has been enabled by the availability of cheap computing power. The computation of robust standard errors and the handling of very large datasets were daunting tasks at one time, but now they are unproblematic.
The other point favoring the newer methodology is that while FGLS offers an efficiency advantage in principle, it often involves making additional statistical assumptions which may or may not be justified, which may not be easy to test rigorously, and which may threaten the consistency of the estimator; consider, for example, the "common factor restriction" implied by traditional FGLS "corrections" for autocorrelated errors.

James Stock and Mark Watson's Introduction to Econometrics illustrates this approach at the level of undergraduate instruction: many of the datasets they use comprise thousands or tens of thousands of observations; FGLS is downplayed; and robust standard errors are reported as a matter of course. In fact, the discussion of the classical standard errors (labeled "homoskedasticity-only") is confined to an appendix.

Against this background it may be useful to set out and discuss all the various options offered by gretl in respect of robust covariance matrix estimation. The first point to notice is that gretl produces "classical" standard errors by default (in all cases apart from GMM estimation). In script mode you can get robust standard errors by appending the --robust flag to estimation commands. In the GUI program the model specification dialog usually contains a "Robust standard errors" check box, along with a "configure" button that is activated when the box is checked. The configure button takes you to a configuration dialog (which can also be reached from the main menu bar: Tools → Preferences → General → HCCME). There you can select from a set of possible robust estimation variants, and can also choose to make robust estimation the default.

The specifics of the available options depend on the nature of the data under consideration (cross-sectional, time series or panel) and also to some extent on the choice of estimator. (Although we introduced robust standard errors in the context of OLS above, they may be used in conjunction with other estimators too.) The following three sections of this chapter deal with matters that are specific to the three sorts of data just mentioned. Note that additional details regarding covariance matrix estimation in the context of GMM are given in chapter 18.

We close this introduction with a brief statement of what "robust standard errors" can and cannot achieve. They can provide for asymptotically valid statistical inference in models that are basically correctly specified, but in which the errors are not iid. The "asymptotic" part means that they may be of little use in small samples. The "correct specification" part means that they are not a magic bullet: if the error term is correlated with the regressors, so that the parameter estimates themselves are biased and inconsistent, robust standard errors will not save the day.
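As a minimal illustration of the script-mode usage just described (using the data4-1 dataset supplied with gretl):

  open data4-1
  # classical standard errors
  ols price const sqft
  # robust standard errors (an HCCME, since these data are cross-sectional)
  ols price const sqft --robust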
14.2 Cross-sectional data and the HCCME

With cross-sectional data, the most likely departure from iid errors is heteroskedasticity (non-constant variance).¹ In some cases one may be able to arrive at a judgment regarding the likely form of the heteroskedasticity, and hence to apply a specific correction. The more common case, however, is where the heteroskedasticity is of unknown form. We seek an estimator of the covariance matrix of the parameter estimates that retains its validity, at least asymptotically, in face of unspecified heteroskedasticity. It is not obvious, a priori, that this should be possible, but White (1980) showed that

  Var_h(β̂) = (X′X)⁻¹X′Ω̂X(X′X)⁻¹                       (14.5)

does the trick. (As usual in statistics, we need to say "under certain conditions", but the conditions are not very restrictive.) Ω̂ is in this context a diagonal matrix, whose non-zero elements may be estimated using squared OLS residuals. White referred to (14.5) as a heteroskedasticity-consistent covariance matrix estimator (HCCME).

¹In some specialized contexts spatial autocorrelation may be an issue. Gretl does not have any built-in methods to handle this and we will not discuss it here.

Davidson and MacKinnon (2004, chapter 5) offer a useful discussion of several variants on White's HCCME theme. They refer to the original variant of (14.5), in which the diagonal elements of Ω̂ are estimated directly by the squared OLS residuals, û_t², as HC₀. (The associated standard errors are often called "White's standard errors".) The various refinements of White's proposal share a common point of departure, namely the idea that the squared OLS residuals are likely to be "too small" on average. This point is quite intuitive. The OLS parameter estimates, β̂, satisfy by design the criterion that the sum of squared residuals,

  Σ û_t² = Σ (y_t − X_t β̂)²

is minimized for given X and y. Suppose that β̂ ≠ β. This is almost certain to be the case: even if OLS is not biased, it would be a miracle if the β̂ calculated from any finite sample were exactly equal to β. But in that case the sum of squares of the true, unobserved errors, Σ u_t² = Σ (y_t − X_t β)², is bound to be greater than Σ û_t². The elaborated variants on HC₀ take this point on board as follows:

• HC₁: Applies a degrees-of-freedom correction, multiplying the HC₀ matrix by T/(T − k).

• HC₂: Instead of using û_t² for the diagonal elements of Ω̂, uses û_t²/(1 − h_t), where h_t = X_t(X′X)⁻¹X_t′, the t-th diagonal element of the projection matrix, P, which has the property that P·y = ŷ. The relevance of h_t is that if the variance of all the u_t is σ², the expectation of û_t² is σ²(1 − h_t), or in other words, the ratio û_t²/(1 − h_t) has expectation σ². As Davidson and MacKinnon show, 0 ≤ h_t < 1 for all t, so this adjustment cannot reduce the diagonal elements of Ω̂ and in general revises them upward.

• HC₃: Uses û_t²/(1 − h_t)². The additional factor of (1 − h_t) in the denominator, relative to HC₂, may be justified on the grounds that observations with large variances tend to exert a lot of influence on the OLS estimates, so that the corresponding residuals tend to be underestimated. See Davidson and MacKinnon for a fuller explanation.

The relative merits of these variants have been explored by means of both simulations and theoretical analysis. Unfortunately there is not a clear consensus on which is "best". Davidson and MacKinnon argue that the original HC₀ is likely to perform worse than the others; nonetheless, "White's standard errors" are reported more often than the more sophisticated variants and therefore, for reasons of comparability, HC₀ is the default HCCME in gretl.

If you wish to use HC₁, HC₂ or HC₃ you can arrange for this in either of two ways. In script mode, you can do, for example,

  set hc_version 2

In the GUI program you can go to the HCCME configuration dialog, as noted above, and choose any of these variants to be the default.
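For instance, a brief sketch re-running the earlier regression under two different variants:

  open data4-1
  set hc_version 2
  ols price const sqft --robust   # standard errors via HC2
  set hc_version 3
  ols price const sqft --robust   # standard errors via HC3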
14.3 Time series data and HAC covariance matrices

Heteroskedasticity may be an issue with time series data too, but it is unlikely to be the only, or even the primary, concern.

One form of heteroskedasticity is common in macroeconomic time series, but is fairly easily dealt with. That is, in the case of strongly trending series such as Gross Domestic Product, aggregate consumption, aggregate investment, and so on, higher levels of the variable in question are likely to be associated with higher variability in absolute terms. The obvious "fix", employed in many macroeconometric studies, is to use the logs of such series rather than the raw levels. Provided the proportional variability of such series remains roughly constant over time, the log transformation is effective.

Other forms of heteroskedasticity may resist the log transformation, but may demand a special treatment distinct from the calculation of robust standard errors. We have in mind here autoregressive conditional heteroskedasticity, for example in the behavior of asset prices, where large disturbances to the market may usher in periods of increased volatility. Such phenomena call for specific estimation strategies, such as GARCH (see chapter 20).

Despite the points made above, some residual degree of heteroskedasticity may be present in time series data: the key point is that in most cases it is likely to be combined with serial correlation (autocorrelation), hence demanding a special treatment. In White's approach, Ω̂, the estimated covariance matrix of the u_t, remains conveniently diagonal: the variances, E(u_t²), may differ by t but the covariances, E(u_t u_s), are all zero. Autocorrelation in time series data means that at least some of the off-diagonal elements of Ω̂ should be non-zero. This introduces a substantial complication and requires another piece of terminology: estimates of the covariance matrix that are asymptotically valid in face of both heteroskedasticity and autocorrelation of the error process are termed HAC (heteroskedasticity and autocorrelation consistent).

The issue of HAC estimation is treated in more technical terms in chapter 18. Here we try to convey some of the intuition at a more basic level. We begin with a general comment: residual autocorrelation is not so much a property of the data as a symptom of an inadequate model. Data may be persistent through time, and if we fit a model that does not take this aspect into account properly, we end up with a model with autocorrelated disturbances. Conversely, it is often possible to mitigate or even eliminate the problem of autocorrelation by including relevant lagged variables in a time series model, or in other words, by specifying the dynamics of the model more fully. HAC estimation should not be seen as the first resort in dealing with an autocorrelated error process.

That said, the "obvious" extension of White's HCCME to the case of autocorrelated errors would seem to be this: estimate the off-diagonal elements of Ω̂ (that is, the autocovariances, E(u_t u_s)) using, once again, the appropriate OLS residuals: ω̂_ts = û_t û_s. This is basically right, but demands an important amendment. We seek a consistent estimator, one that converges towards the true Ω as the sample size tends towards infinity. This can't work if we allow unbounded serial dependence. Bigger samples will enable us to estimate more of the true ω_ts elements (that is, for t and s more widely separated in time) but will not contribute ever-increasing information regarding the maximally separated ω_ts pairs, since the maximal separation itself grows with the sample size.
To ensure consistency, we have to confine our attention to processes exhibiting temporally limited dependence, or in other words cut off the computation of the ω̂_ts values at some maximum value of p = t − s (where p is treated as an increasing function of the sample size, T, although it cannot increase in proportion to T).

The simplest variant of this idea is to truncate the computation at some finite lag order p, where p grows as, say, T^(1/4). The trouble with this is that the resulting Ω̂ may not be a positive definite matrix. In practical terms, we may end up with negative estimated variances. One solution to this problem is offered by the Newey–West estimator (Newey and West, 1987), which assigns declining weights to the sample autocovariances as the temporal separation increases.

To understand this point it is helpful to look more closely at the covariance matrix given in (14.5), namely,

  (X′X)⁻¹(X′Ω̂X)(X′X)⁻¹

This is known as a "sandwich" estimator. The bread, which appears on both sides, is (X′X)⁻¹. This is a k × k matrix, and is also the key ingredient in the computation of the classical covariance matrix. The filling in the sandwich is

  Σ̂   =   X′    Ω̂    X
 (k×k)   (k×T) (T×T) (T×k)

Since Ω = E(uu′), the matrix being estimated here can also be written as

  Σ = E(X′u u′X)

which expresses Σ as the long-run covariance of the random k-vector X′u.

From a computational point of view, it is not necessary or desirable to store the (potentially very large) T × T matrix Ω̂ as such. Rather, one computes the sandwich filling by summation as

  Σ̂ = Γ̂(0) + Σ_{j=1}^{p} w_j (Γ̂(j) + Γ̂′(j))

where the k × k sample autocovariance matrix Γ̂(j), for j ≥ 0, is given by

  Γ̂(j) = (1/T) Σ_{t=j+1}^{T} û_t û_{t−j} X_t′ X_{t−j}

and w_j is the weight given to the autocovariance at lag j > 0.

This leaves two questions. How exactly do we determine the maximum lag length or "bandwidth", p, of the HAC estimator? And how exactly are the weights w_j to be determined? We will return to the (difficult) question of the bandwidth shortly. As regards the weights, gretl offers three variants. The default is the Bartlett kernel, as used by Newey and West. This sets

  w_j = 1 − j/(p + 1)   for j ≤ p
  w_j = 0               for j > p

so the weights decline linearly as j increases. The other two options are the Parzen kernel and the Quadratic Spectral (QS) kernel. For the Parzen kernel,

  w_j = 1 − 6a_j² + 6a_j³   for 0 ≤ a_j ≤ 0.5
  w_j = 2(1 − a_j)³         for 0.5 < a_j ≤ 1
  w_j = 0                   for a_j > 1

where a_j = j/(p + 1), and for the QS kernel,

  w_j = (25 / (12π²d_j²)) × (sin m_j / m_j − cos m_j)

where d_j = j/p and m_j = 6πd_j/5.

Figure 14.1 shows the weights generated by these kernels, for p = 4 and j = 1 to 9.

  [Figure 14.1: Three HAC kernels (Bartlett, Parzen, QS)]

In gretl you select the kernel using the set command with the hac_kernel parameter:

  set hac_kernel parzen
  set hac_kernel qs
  set hac_kernel bartlett
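The following self-contained sketch uses simulated data; it relies on the fact, noted above, that with time-series data the --robust flag produces HAC standard errors:

  nulldata 200
  setobs 4 1950:1 --time-series
  series x = normal()
  series u = 0
  series u = 0.5 * u(-1) + normal()   # AR(1) errors, generated recursively
  series y = 1 + 0.5*x + u
  set hac_kernel bartlett
  ols y const x --robust             # HAC standard errors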
Selecting the HAC bandwidth

The asymptotic theory developed by Newey, West and others tells us in general terms how the HAC bandwidth, p, should grow with the sample size, T; that is, p should grow in proportion to some fractional power of T. Unfortunately this is of little help to the applied econometrician, working with a given dataset of fixed size. Various rules of thumb have been suggested, and gretl implements two such. The default is p = 0.75T^(1/3), as recommended by Stock and Watson (2003). An alternative is p = 4(T/100)^(2/9), as in Wooldridge (2002b). In each case one takes the integer part of the result. These variants are labeled nw1 and nw2 respectively, in the context of the set command with the hac_lag parameter. That is, you can switch to the version given by Wooldridge with

  set hac_lag nw2

As shown in Table 14.1, the choice between nw1 and nw2 does not make a great deal of difference.

    T    p (nw1)   p (nw2)
   50       2         3
  100       3         4
  150       3         4
  200       4         4
  300       5         5
  400       5         5

  Table 14.1: HAC bandwidth: two rules of thumb

You also have the option of specifying a fixed numerical value for p, as in

  set hac_lag 6

In addition you can set a distinct bandwidth for use with the Quadratic Spectral kernel (since this need not be an integer). For example,

  set qs_bandwidth 3.5

Prewhitening and data-based bandwidth selection

An alternative approach is to deal with residual autocorrelation by attacking the problem from two sides. The intuition behind the technique known as VAR prewhitening (Andrews and Monahan, 1992) can be illustrated by a simple example. Let x_t be a sequence of first-order autocorrelated random variables

  x_t = ρx_{t−1} + u_t

The long-run variance of x_t can be shown to be

  V_LR(x_t) = V_LR(u_t) / (1 − ρ)²

In most cases, u_t is likely to be less autocorrelated than x_t, so a smaller bandwidth should suffice. Estimation of V_LR(x_t) can therefore proceed in three steps: (1) estimate ρ; (2) obtain a HAC estimate of û_t = x_t − ρ̂x_{t−1}; and (3) divide the result by (1 − ρ̂)².

The application of the above concept to our problem implies estimating a finite-order Vector Autoregression (VAR) on the vector variables ξ_t = X_t′û_t. In general, the VAR can be of any order, but in most cases 1 is sufficient; the aim is not to build a watertight model for ξ_t, but just to "mop up" a substantial part of the autocorrelation. Hence, the following VAR is estimated:

  ξ_t = Aξ_{t−1} + ε_t

Then an estimate of the matrix X′ΩX can be recovered via

  (I − Â)⁻¹ Σ̂_ε (I − Â′)⁻¹

where Σ̂_ε is any HAC estimator, applied to the VAR residuals.

You can ask for prewhitening in gretl using

  set hac_prewhiten on

There is at present no mechanism for specifying an order other than 1 for the initial VAR.

A further refinement is available in this context, namely data-based bandwidth selection. It makes intuitive sense that the HAC bandwidth should not simply be based on the size of the sample, but should somehow take into account the time-series properties of the data (and also the kernel chosen). A nonparametric method for doing this was proposed by Newey and West (1994); a good concise account of the method is given in Hall (2005). This option can be invoked in gretl via

  set hac_lag nw3

This option is the default when prewhitening is selected, but you can override it by giving a specific numerical value for hac_lag.

Even the Newey–West data-based method does not fully pin down the bandwidth for any particular sample. The first step involves calculating a series of residual covariances. The length of this series is given as a function of the sample size, but only up to a scalar multiple; for example, it is given as O(T^(2/9)) for the Bartlett kernel. Gretl uses an implied multiple of 1.
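Continuing the simulated-data sketch above, prewhitening and data-based bandwidth selection can be combined as follows:

  set hac_prewhiten on    # bandwidth selection then defaults to nw3
  ols y const x --robust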
14.4 Special issues with panel data

Since panel data have both a time-series and a cross-sectional dimension one might expect that, in general, robust estimation of the covariance matrix would require handling both heteroskedasticity and autocorrelation (the HAC approach). In addition, some special features of panel data require attention:

• The variance of the error term may differ across the cross-sectional units.

• The covariance of the errors across the units may be non-zero in each time period.

• If the "between" variation is not removed, the errors may exhibit autocorrelation, not in the usual time-series sense but in the sense that the mean error for unit i may differ from that of unit j. (This is particularly relevant when estimation is by pooled OLS.)

Gretl currently offers two robust covariance matrix estimators specifically for panel data. These are available for models estimated via fixed effects, pooled OLS, and pooled two-stage least squares. The default robust estimator is that suggested by Arellano (2003), which is HAC provided the panel is of the "large n, small T" variety (that is, many units are observed in relatively few periods). The Arellano estimator is

  Σ̂_A = (X′X)⁻¹ ( Σ_{i=1}^{n} X_i′ û_i û_i′ X_i ) (X′X)⁻¹

where X is the matrix of regressors (with the group means subtracted, in the case of fixed effects), û_i denotes the vector of residuals for unit i, and n is the number of cross-sectional units. Cameron and Trivedi (2005) make a strong case for using this estimator; they note that the ordinary White HCCME can produce misleadingly small standard errors in the panel context because it fails to take autocorrelation into account.

In cases where autocorrelation is not an issue, however, the estimator proposed by Beck and Katz (1995) and discussed by Greene (2003, chapter 13) may be appropriate. This estimator, which takes into account contemporaneous correlation across the units and heteroskedasticity by unit, is

  Σ̂_BK = (X′X)⁻¹ ( Σ_{i=1}^{n} Σ_{j=1}^{n} σ̂_ij X_i′ X_j ) (X′X)⁻¹

The covariances σ̂_ij are estimated via

  σ̂_ij = (û_i′ û_j) / T

where T is the length of the time series for each unit. Beck and Katz call the associated standard errors "Panel-Corrected Standard Errors" (PCSE). This estimator can be invoked in gretl via the command

  set pcse on

The Arellano default can be re-established via

  set pcse off

(Note that regardless of the pcse setting, the robust estimator is not used unless the --robust flag is given, or the "Robust" box is checked in the GUI program.)
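A brief sketch of the commands involved, assuming the Arellano–Bond dataset abdata.gdt supplied with gretl; the variable names n (employment), w and k are taken from that file and are an assumption here:

  open abdata.gdt
  # fixed effects with the default (Arellano) robust covariance matrix
  panel n const w k --robust
  # Beck-Katz PCSE instead
  set pcse on
  panel n const w k --robust
  set pcse off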
Chapter 15
Panel data

15.1 Estimation of panel models

Pooled Ordinary Least Squares

The simplest estimator for panel data is pooled OLS. In most cases this is unlikely to be adequate, but it provides a baseline for comparison with more complex estimators.

If you estimate a model on panel data using OLS an additional test item becomes available. In the GUI model window this is the item "panel diagnostics" under the Tests menu; the script counterpart is the hausman command. To take advantage of this test, you should specify a model without any dummy variables representing cross-sectional units. The test compares pooled OLS against the principal alternatives, the fixed effects and random effects models. These alternatives are explained in the following section.

The fixed and random effects models

In gretl version 1.6.0 and higher, the fixed and random effects models for panel data can be estimated in their own right. In the graphical interface these options are found under the menu item "Model/Panel/Fixed and random effects". In the command-line interface one uses the panel command, with or without the --random-effects option. This section explains the nature of these models and comments on their estimation via gretl.

The pooled OLS specification may be written as

  y_it = X_it β + u_it                                (15.1)

where y_it is the observation on the dependent variable for cross-sectional unit i in period t, X_it is a 1 × k vector of independent variables observed for unit i in period t, β is a k × 1 vector of parameters, and u_it is an error or disturbance term specific to unit i in period t.

The fixed and random effects models have in common that they decompose the unitary pooled error term, u_it. For the fixed effects model we write u_it = α_i + ε_it, yielding

  y_it = X_it β + α_i + ε_it                           (15.2)

That is, we decompose u_it into a unit-specific and time-invariant component, α_i, and an observation-specific error, ε_it.¹ The α_i s are then treated as fixed parameters (in effect, unit-specific y-intercepts), which are to be estimated. This can be done by including a dummy variable for each cross-sectional unit (and suppressing the global constant). This is sometimes called the Least Squares Dummy Variables (LSDV) method. Alternatively, one can subtract the group mean from each of the variables and estimate a model without a constant. In the latter case the dependent variable may be written as

  ỹ_it = y_it − ȳ_i

The "group mean", ȳ_i, is defined as

  ȳ_i = (1/T_i) Σ_{t=1}^{T_i} y_it

where T_i is the number of observations for unit i. An exactly analogous formulation applies to the independent variables. Given parameter estimates, β̂, obtained using such de-meaned data we can recover estimates of the α_i s using

  α̂_i = (1/T_i) Σ_{t=1}^{T_i} (y_it − X_it β̂)

¹It is possible to break a third component out of u_it, namely w_t, a shock that is time-specific but common to all the units in a given period. In the interest of simplicity we do not pursue that option here.

These two methods (LSDV, and using de-meaned data) are numerically equivalent. Gretl takes the approach of de-meaning the data. If you have a small number of cross-sectional units, a large number of time-series observations per unit, and a large number of regressors, it is more economical in terms of computer memory to use LSDV. If need be you can easily implement this manually. For example,

  genr unitdum
  ols y x du_*

(See Chapter 5 for details on unitdum.)

The α̂_i estimates are not printed as part of the standard model output in gretl (there may be a large number of these, and typically they are not of much inherent interest). However you can retrieve them after estimation of the fixed effects model if you wish. In the graphical interface, go to the "Save" menu in the model window and select "per-unit constants". In command-line mode, you can do genr newname = $ahat, where newname is the name you want to give the series.

For the random effects model we write u_it = v_i + ε_it, so the model becomes

  y_it = X_it β + v_i + ε_it                           (15.3)

In contrast to the fixed effects model, the v_i s are not treated as fixed parameters, but as random drawings from a given probability distribution.

The celebrated Gauss–Markov theorem, according to which OLS is the best linear unbiased estimator (BLUE), depends on the assumption that the error term is independently and identically distributed (IID). In the panel context, the IID assumption means that E(u_it²), in relation to equation (15.1), equals a constant, σ_u², for all i and t, while the covariance E(u_is u_it) equals zero for all s ≠ t and the covariance E(u_jt u_it) equals zero for all j ≠ i. If these assumptions are not met (and they are unlikely to be met in the context of panel data) OLS is not the most efficient estimator.
Greater efficiency may be gained using generalized least squares (GLS), taking into account the covariance structure of the error term.

Consider observations on a given unit i at two different times s and t. From the hypotheses above it can be worked out that Var(u_is) = Var(u_it) = σ_v² + σ_ε², while the covariance between u_is and u_it is given by E(u_is u_it) = σ_v².

In matrix notation, we may group all the T_i observations for unit i into the vector y_i and write it as

  y_i = X_i β + u_i                                    (15.4)

The vector u_i, which includes all the disturbances for individual i, has a variance–covariance matrix given by

  Var(u_i) = Σ_i = σ_ε² I + σ_v² J                      (15.5)

where J is a square matrix with all elements equal to 1. It can be shown that the matrix

  K_i = I − (θ/T_i) J,

where θ = 1 − √[σ_ε² / (σ_ε² + T_i σ_v²)], has the property

  K_i Σ_i K_i = σ_ε² I

It follows that the transformed system

  K_i y_i = K_i X_i β + K_i u_i                         (15.6)

satisfies the Gauss–Markov conditions, and OLS estimation of (15.6) provides efficient inference. But since

  K_i y_i = y_i − θ ȳ_i

GLS estimation is equivalent to OLS using "quasi-demeaned" variables; that is, variables from which we subtract a fraction θ of their average. Notice that for σ_ε² → 0, θ → 1, while for σ_v² → 0, θ → 0. This means that if all the variance is attributable to the individual effects, then the fixed effects estimator is optimal; if, on the other hand, individual effects are negligible, then pooled OLS turns out, unsurprisingly, to be the optimal estimator.

To implement the GLS approach we need to calculate θ, which in turn requires estimates of the variances σ_ε² and σ_v². (These are often referred to as the "within" and "between" variances respectively, since the former refers to variation within each cross-sectional unit and the latter to variation between the units.) Several means of estimating these magnitudes have been suggested in the literature (see Baltagi, 1995); gretl uses the method of Swamy and Arora (1972): σ_ε² is estimated by the residual variance from the fixed effects model, and the sum σ_ε² + T_i σ_v² is estimated as T_i times the residual variance from the "between" estimator,

  ȳ_i = X̄_i β + e_i

The latter regression is implemented by constructing a data set consisting of the group means of all the relevant variables.

Choice of estimator

Which panel method should one use, fixed effects or random effects?

One way of answering this question is in relation to the nature of the data set. If the panel comprises observations on a fixed and relatively small set of units of interest (say, the member states of the European Union), there is a presumption in favor of fixed effects. If it comprises observations on a large number of randomly selected individuals (as in many epidemiological and other longitudinal studies), there is a presumption in favor of random effects.

Besides this general heuristic, however, various statistical issues must be taken into account.

1. Some panel data sets contain variables whose values are specific to the cross-sectional unit but which do not vary over time. If you want to include such variables in the model, the fixed effects option is simply not available. When the fixed effects approach is implemented using dummy variables, the problem is that the time-invariant variables are perfectly collinear with the per-unit dummies. When using the approach of subtracting the group means, the issue is that after de-meaning these variables are nothing but zeros.

2. A somewhat analogous prohibition applies to the random effects estimator.
This estimator is in effect a matrix-weighted average of pooled OLS and the "between" estimator. Suppose we have observations on n units or individuals and there are k independent variables of interest. If k > n, the "between" estimator is undefined (since we have only n effective observations) and hence so is the random effects estimator.

If one does not fall foul of one or other of the prohibitions mentioned above, the choice between fixed effects and random effects may be expressed in terms of the two econometric desiderata, efficiency and consistency. From a purely statistical viewpoint, we could say that there is a tradeoff between robustness and efficiency. In the fixed effects approach, we do not make any hypotheses on the "group effects" (that is, the time-invariant differences in mean between the groups) beyond the fact that they exist; and that can be tested (see below). As a consequence, once these effects are swept out by taking deviations from the group means, the remaining parameters can be estimated.

On the other hand, the random effects approach attempts to model the group effects as drawings from a probability distribution instead of removing them. This requires that individual effects are representable as a legitimate part of the disturbance term, that is, zero-mean random variables, uncorrelated with the regressors.

As a consequence, the fixed-effects estimator "always works", but at the cost of not being able to estimate the effect of time-invariant regressors. The richer hypothesis set of the random-effects estimator ensures that parameters for time-invariant regressors can be estimated, and that estimation of the parameters for time-varying regressors is carried out more efficiently. These advantages, though, are tied to the validity of the additional hypotheses. If, for example, there is reason to think that individual effects may be correlated with some of the explanatory variables, then the random-effects estimator would be inconsistent, while fixed-effects estimates would still be valid. It is precisely on this principle that the Hausman test is built (see below): if the fixed- and random-effects estimates agree, to within the usual statistical margin of error, there is no reason to think the additional hypotheses invalid, and as a consequence, no reason not to use the more efficient RE estimator.

Testing panel models

If you estimate a fixed effects or random effects model in the graphical interface, you may notice that the number of items available under the "Tests" menu in the model window is relatively limited. Panel models carry certain complications that make it difficult to implement all of the tests one expects to see for models estimated on straight time-series or cross-sectional data.

Nonetheless, various panel-specific tests are printed along with the parameter estimates as a matter of course, as follows.

When you estimate a model using fixed effects, you automatically get an F-test for the null hypothesis that the cross-sectional units all have a common intercept. That is to say that all the α_i s are equal, in which case the pooled model (15.1), with a column of 1s included in the X matrix, is adequate.

When you estimate using random effects, the Breusch–Pagan and Hausman tests are presented automatically. The Breusch–Pagan test is the counterpart to the F-test mentioned above. The null hypothesis is that the variance of v_i in equation (15.3) equals zero; if this hypothesis is not rejected, then again we conclude that the simple pooled model is adequate.
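As a sketch of the commands involved (again assuming the abdata.gdt dataset and its variable names, as above):

  open abdata.gdt
  # fixed effects: the common-intercept F-test is printed automatically
  panel n const w k
  # random effects: Breusch-Pagan and Hausman tests are printed automatically
  panel n const w k --random-effects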
The Hausman test probes the consistency of the GLS estimates. The null hypothesis is that these estimates are consistent, that is, that the requirement of orthogonality of the v_i and the X_i is satisfied. The test is based on a measure, H, of the "distance" between the fixed-effects and random-effects estimates, constructed such that under the null it follows the χ² distribution with degrees of freedom equal to the number of time-varying regressors in the matrix X. If the value of H is "large" this suggests that the random effects estimator is not consistent and the fixed-effects model is preferable.

There are two ways of calculating H, the matrix-difference method and the regression method. The procedure for the matrix-difference method is this:

• Collect the fixed-effects estimates in a vector β̃ and the corresponding random-effects estimates in β̂, then form the difference vector (β̃ − β̂).

• Form the covariance matrix of the difference vector as Var(β̃ − β̂) = Var(β̃) − Var(β̂) = Ψ, where Var(β̃) and Var(β̂) are estimated by the sample variance matrices of the fixed- and random-effects models respectively.²

• Compute H = (β̃ − β̂)′ Ψ⁻¹ (β̃ − β̂).

²Hausman (1978) showed that the covariance of the difference takes this simple form when β̂ is an efficient estimator and β̃ is inefficient.

Given the relative efficiencies of β̃ and β̂, the matrix Ψ "should be" positive definite, in which case H is positive, but in finite samples this is not guaranteed and of course a negative χ² value is not admissible.

The regression method avoids this potential problem. The procedure is:

• Treat the random-effects model as the restricted model, and record its sum of squared residuals as SSR_r.

• Estimate via OLS an unrestricted model in which the dependent variable is quasi-demeaned y and the regressors include both quasi-demeaned X (as in the RE model) and the de-meaned variants of all the time-varying variables (i.e. the fixed-effects regressors); record the sum of squared residuals from this model as SSR_u.

• Compute H = n(SSR_r − SSR_u)/SSR_u, where n is the total number of observations used. On this variant H cannot be negative, since adding additional regressors to the RE model cannot raise the SSR.

By default gretl computes the Hausman test via the regression method, but it uses the matrix-difference method if you pass the option --matrix-diff to the panel command.

Robust standard errors

For most estimators, gretl offers the option of computing an estimate of the covariance matrix that is robust with respect to heteroskedasticity and/or autocorrelation (and hence also robust standard errors). In the case of panel data, robust covariance matrix estimators are available for the pooled and fixed effects model but not currently for random effects. Please see section 14.4 for details.
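Returning to the Hausman test, a one-line sketch of requesting the matrix-difference variant (again assuming abdata.gdt):

  open abdata.gdt
  panel n const w k --random-effects --matrix-diff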
15.2 Dynamic panel models

Special problems arise when a lag of the dependent variable is included among the regressors in a panel model. Consider a dynamic variant of the pooled model (15.1):

  y_it = X_it β + ρy_{i,t−1} + u_it                    (15.7)

First, if the error u_it includes a group effect, v_i, then y_{i,t−1} is bound to be correlated with the error, since the value of v_i affects y_i at all t. That means that OLS applied to (15.7) will be inconsistent as well as inefficient. The fixed-effects model sweeps out the group effects and so overcomes this particular problem, but a subtler issue remains, which applies to both fixed and random effects estimation. Consider the de-meaned representation of fixed effects, as applied to the dynamic model,

  ỹ_it = X̃_it β + ρỹ_{i,t−1} + ε̃_it

where ỹ_it = y_it − ȳ_i and ε̃_it = u_it − ū_i (or u_it − α_i, using the notation of equation 15.2). The trouble is that ỹ_{i,t−1} will be correlated with ε̃_it via the group mean, ȳ_i. The disturbance ε_it influences y_it directly, which influences ȳ_i, which, by construction, affects the value of ỹ_it for all t. The same issue arises in relation to the quasi-demeaning used for random effects. Estimators which ignore this correlation will be consistent only as T → ∞ (in which case the marginal effect of ε_it on the group mean of y tends to vanish).

One strategy for handling this problem, and producing consistent estimates of β and ρ, was proposed by Anderson and Hsiao (1981). Instead of de-meaning the data, they suggest taking the first difference of (15.7), an alternative tactic for sweeping out the group effects:

  ∆y_it = ∆X_it β + ρ∆y_{i,t−1} + η_it                 (15.8)

where η_it = ∆u_it = ∆(v_i + ε_it) = ε_it − ε_{i,t−1}. We're not in the clear yet, given the structure of the error η_it: the disturbance ε_{i,t−1} is an influence on both η_it and ∆y_{i,t−1} = y_{i,t−1} − y_{i,t−2}. The next step is then to find an instrument for the "contaminated" ∆y_{i,t−1}. Anderson and Hsiao suggest using either y_{i,t−2} or ∆y_{i,t−2}, both of which will be uncorrelated with η_it provided that the underlying errors, ε_it, are not themselves serially correlated.

The Anderson–Hsiao estimator is not provided as a built-in function in gretl, since gretl's sensible handling of lags and differences for panel data makes it a simple application of regression with instrumental variables; see Example 15.1, which is based on a study of country growth rates by Nerlove (1999).³

³Also see Clint Cummins' benchmarks page, http://www.stanford.edu/~clint/bench/.

Example 15.1: The Anderson–Hsiao estimator for a dynamic panel model

  # Penn World Table data as used by Nerlove
  open penngrow.gdt
  # Fixed effects (for comparison)
  panel Y 0 Y(-1) X
  # Random effects (for comparison)
  panel Y 0 Y(-1) X --random-effects
  # take differences of all variables
  diff Y X
  # Anderson-Hsiao, using Y(-2) as instrument
  tsls d_Y d_Y(-1) d_X ; 0 d_X Y(-2)
  # Anderson-Hsiao, using d_Y(-2) as instrument
  tsls d_Y d_Y(-1) d_X ; 0 d_X d_Y(-2)

Although the Anderson–Hsiao estimator is consistent, it is not most efficient: it does not make the fullest use of the available instruments for ∆y_{i,t−1}, nor does it take into account the differenced structure of the error η_it. It is improved upon by the methods of Arellano and Bond (1991) and Blundell and Bond (1998). Gretl implements natively the Arellano–Bond estimator. The rationale behind it is, strictly speaking, that of a GMM estimator, but it can be illustrated briefly as follows (see Arellano (2003) for a comprehensive exposition). Consider again equation (15.8): if for each individual we have observations dated from 1 to T, we may write the following system:

  ∆y_{i,3} = ∆X_{i,3} β + ρ∆y_{i,2} + η_{i,3}          (15.9)
  ∆y_{i,4} = ∆X_{i,4} β + ρ∆y_{i,3} + η_{i,4}          (15.10)
  ...
  ∆y_{i,T} = ∆X_{i,T} β + ρ∆y_{i,T−1} + η_{i,T}        (15.11)

Following the same logic as for the Anderson–Hsiao estimator, we see that the only possible instrument for ∆y_{i,2} in equation (15.9) is y_{i,1}, but for equation (15.10) we can use both y_{i,1} and y_{i,2} as instruments for ∆y_{i,3}, thereby gaining efficiency. Likewise, for the final period T we can use as instruments all values of y_{i,t} up to t = T − 2. The Arellano–Bond technique estimates the above system, with an increasing number of instruments for each equation.
Estimation is typically carried out in two steps: in step 1 the parameters are estimated on the assumption that the covariance matrix of the η_{i,t} terms is proportional to

  |  2  -1   0  ...  0 |
  | -1   2  -1  ...  0 |
  |  0  -1   2  ...  0 |
  |  :   :   :       : |
  |  0   0   0  ...  2 |

as should be the case if the disturbances in the original model u_{i,t} were homoskedastic and uncorrelated. This yields a consistent, but not necessarily efficient, estimator. Step 2 uses the parameters estimated in step 1 to compute an estimate of the covariance of the η_{i,t}, and re-estimates the parameters based on that. This procedure has the double effect of handling heteroskedasticity and/or serial correlation, plus producing estimators that are asymptotically efficient.

One-step estimators have sometimes been preferred on the grounds that they are more robust. Moreover, computing the covariance matrix of the 2-step estimator via the standard GMM formulae has been shown to produce grossly biased results in finite samples. Gretl, however, implements the finite-sample correction devised by Windmeijer (2005), so standard errors for the 2-step estimator can be considered relatively accurate.

By default, gretl's arbond command estimates the parameters in

  A(L) y_{i,t} = X_{i,t} β + v_i + u_{i,t}

via the 1-step procedure. The dependent variable is automatically differenced (but note that the right-hand side variables are not automatically differenced), and all available instruments are used. However, these choices (plus some others) can be overridden: please see the documentation for the arbond command in the Gretl Command Reference and the arbond91 example file supplied with gretl.

15.3 Panel illustration: the Penn World Table

The Penn World Table (homepage at pwt.econ.upenn.edu) is a rich macroeconomic panel dataset, spanning 152 countries over the years 1950–1992. The data are available in gretl format; please see the gretl data site (this is a free download, although it is not included in the main gretl package).

Example 15.2 opens pwt56_60_89.gdt, a subset of the PWT containing data on 120 countries, 1960–89, for 20 variables, with no missing observations (the full data set, which is also supplied in the pwt package for gretl, has many missing observations). Total growth of real GDP, 1960–89, is calculated for each country and regressed against the 1960 level of real GDP, to see if there is evidence for "convergence" (i.e. faster growth on the part of countries starting from a low base).

Example 15.2: Use of the Penn World Table

  open pwt56_60_89.gdt
  # for 1989 (the last obs), lag 29 gives 1960, the first obs
  genr gdp60 = RGDPL(-29)
  # find total growth of real GDP over 30 years
  genr gdpgro = (RGDPL - gdp60)/gdp60
  # restrict the sample to a 1989 cross-section
  smpl YEAR==1989 --restrict
  # convergence: did countries with a lower base grow faster?
  ols gdpgro const gdp60
  # result: No! Try an inverse relationship?
  genr gdp60inv = 1/gdp60
  ols gdpgro const gdp60inv
  # no again. Try treating Africa as special?
  genr afdum = (CCODE == 1)
  genr afslope = afdum * gdp60
  ols gdpgro const afdum gdp60 afslope
Optionally, the user may supply analytical derivatives of the regression function with respect to each of the parameters. If derivatives are not given, the user must instead give a list of the parameters to be estimated (separated by spaces or commas), preceded by the keyword params. The tolerance (criterion for terminating the iterative estimation procedure) can be adjusted using the set command.

The syntax for specifying the function to be estimated is the same as for the genr command. Here are two examples, with accompanying derivatives.

    # Consumption function from Greene
    nls C = alpha + beta * Y^gamma
      deriv alpha = 1
      deriv beta = Y^gamma
      deriv gamma = beta * Y^gamma * log(Y)
    end nls

    # Nonlinear function from Russell Davidson
    nls y = alpha + beta * x1 + (1/beta) * x2
      deriv alpha = 1
      deriv beta = x1 - x2/(beta*beta)
    end nls --vcv

Note the command words nls (which introduces the regression function), deriv (which introduces the specification of a derivative), and end nls, which terminates the specification and calls for estimation. If the --vcv flag is appended to the last line the covariance matrix of the parameter estimates is printed.

16.2 Initializing the parameters

The parameters of the regression function must be given initial values prior to the nls command. This can be done using the genr command (or, in the GUI program, via the menu item "Variable, Define new variable"). In some cases, where the nonlinear function is a generalization of (or a restricted form of) a linear model, it may be convenient to run an ols and initialize the parameters from the OLS coefficient estimates. In relation to the first example above, one might do:

    ols C 0 Y
    genr alpha = $coeff(0)
    genr beta = $coeff(Y)
    genr gamma = 1

And in relation to the second example one might do:

    ols y 0 x1 x2
    genr alpha = $coeff(0)
    genr beta = $coeff(x1)

16.3 NLS dialog window

It is probably most convenient to compose the commands for NLS estimation in the form of a gretl script, but you can also do so interactively, by selecting the item "Nonlinear Least Squares" under the "Model, Nonlinear models" menu. This opens a dialog box where you can type the function specification (possibly prefaced by genr lines to set the initial parameter values) and the derivatives, if available. An example of this is shown in Figure 16.1. Note that in this context you do not have to supply the nls and end nls tags.

Figure 16.1: NLS dialog box

16.4 Analytical and numerical derivatives

If you are able to figure out the derivatives of the regression function with respect to the parameters, it is advisable to supply those derivatives as shown in the examples above. If that is not possible, gretl will compute approximate numerical derivatives; this is done by using the params statement, which should be followed by a list of identifiers containing the parameters to be estimated. However, the properties of the NLS algorithm may not be so good in this case (see section 16.7). The examples above would then read as follows:

    # Greene
    nls C = alpha + beta * Y^gamma
      params alpha beta gamma
    end nls

    # Davidson
    nls y = alpha + beta * x1 + (1/beta) * x2
      params alpha beta
    end nls

If analytical derivatives are supplied, they are checked for consistency with the given nonlinear function. If the derivatives are clearly incorrect estimation is aborted with an error message. If the derivatives are "suspicious" a warning message is issued but estimation proceeds.
This warning may sometimes be triggered by incorrect derivatives, but it may also be triggered by a high degree of collinearity among the derivatives. Note that you cannot mix analytical and numerical derivatives: you should supply expressions for all of the derivatives or none.

16.5 Controlling termination

The NLS estimation procedure is an iterative process. Iteration is terminated when the criterion for convergence is met or when the maximum number of iterations is reached, whichever comes first. Let $k$ denote the number of parameters being estimated. The maximum number of iterations is $100 \times (k+1)$ when analytical derivatives are given, and $200 \times (k+1)$ when numerical derivatives are used.

Let $\epsilon$ denote a small number. The iteration is deemed to have converged if at least one of the following conditions is satisfied:

• Both the actual and predicted relative reductions in the error sum of squares are at most $\epsilon$.
• The relative error between two consecutive iterates is at most $\epsilon$.

The default value of $\epsilon$ is the machine precision to the power 3/4 (on a 32-bit Intel Pentium machine a likely value for this parameter is $1.82 \times 10^{-12}$), but it can be adjusted using the set command with the parameter nls_toler. For example

    set nls_toler .0001

will relax the value of $\epsilon$ to 0.0001.

16.6 Details on the code

The underlying engine for NLS estimation is based on the minpack suite of functions, available from netlib.org. Specifically, the following minpack functions are called:

    lmder     Levenberg–Marquardt algorithm with analytical derivatives
    chkder    Check the supplied analytical derivatives
    lmdif     Levenberg–Marquardt algorithm with numerical derivatives
    fdjac2    Compute final approximate Jacobian when using numerical derivatives
    dpmpar    Determine the machine precision

On successful completion of the Levenberg–Marquardt iteration, a Gauss–Newton regression is used to calculate the covariance matrix for the parameter estimates. If the --robust flag is given a robust variant is computed. The documentation for the set command explains the specific options available in this regard.

Since NLS results are asymptotic, there is room for debate over whether or not a correction for degrees of freedom should be applied when calculating the standard error of the regression (and the standard errors of the parameter estimates). For comparability with OLS, and in light of the reasoning given in Davidson and MacKinnon (1993), the estimates shown in gretl do use a degrees of freedom correction.

16.7 Numerical accuracy

Table 16.1 shows the results of running the gretl NLS procedure on the 27 Statistical Reference Datasets made available by the U.S. National Institute of Standards and Technology (NIST) for testing nonlinear regression software. (For a discussion of gretl's accuracy in the estimation of linear models, see Appendix D.) For each dataset, two sets of starting values for the parameters are given in the test files, so the full test comprises 54 runs. Two full tests were performed, one using all analytical derivatives and one using all numerical approximations. In each case the default tolerance was used. (The data shown in the table were gathered from a pre-release build of gretl version 1.0.9, compiled with gcc 3.3, linked against glibc 2.3.2, and run under Linux on an i686 PC (IBM ThinkPad A21m).)

Out of the 54 runs, gretl failed to produce a solution in 4 cases when using analytical derivatives, and in 5 cases when using numeric approximation.
Of the four failures in analytical derivatives mode, two were due to non-convergence of the Levenberg–Marquardt algorithm after the maximum number of iterations (on MGH09 and Bennett5, both described by NIST as of "Higher difficulty") and two were due to generation of range errors (out-of-bounds floating point values) when computing the Jacobian (on BoxBOD and MGH17, described as of "Higher difficulty" and "Average difficulty" respectively). The additional failure in numerical approximation mode was on MGH10 ("Higher difficulty", maximum number of iterations reached).

The table gives information on several aspects of the tests: the number of outright failures, the average number of iterations taken to produce a solution and two sorts of measure of the accuracy of the estimates for both the parameters and the standard errors of the parameters.

For each of the 54 runs in each mode, if the run produced a solution the parameter estimates obtained by gretl were compared with the NIST certified values. We define the "minimum correct figures" for a given run as the number of significant figures to which the least accurate gretl estimate agreed with the certified value, for that run. The table shows both the average and the worst case value of this variable across all the runs that produced a solution. The same information is shown for the estimated standard errors. (For the standard errors, I excluded one outlier from the statistics shown in the table, namely Lanczos1. This is an odd case, using generated data with an almost-exact fit: the standard errors are 9 or 10 orders of magnitude smaller than the coefficients. In this instance gretl could reproduce the certified standard errors to only 3 figures (analytical derivatives) and 2 figures (numerical derivatives).)

The second measure of accuracy shown is the percentage of cases, taking into account all parameters from all successful runs, in which the gretl estimate agreed with the certified value to at least the 6 significant figures which are printed by default in the gretl regression output.

Using analytical derivatives, the worst case values for both parameters and standard errors were improved to 6 correct figures on the test machine when the tolerance was tightened to 1.0e−14. Using numerical derivatives, the same tightening of the tolerance raised the worst values to 5 correct figures for the parameters and 3 figures for standard errors, at a cost of one additional failure of convergence.

Note the overall superiority of analytical derivatives: on average, solutions to the test problems were obtained with substantially fewer iterations and the results were more accurate (most notably for the estimated standard errors). Note also that the six-digit results printed by gretl are not 100 percent reliable for difficult nonlinear problems (in particular when using numerical derivatives). Having registered this caveat, the percentage of cases where the results were good to six digits or better seems high enough to justify their printing in this form.

Table 16.1: Nonlinear regression: the NIST tests

                                                            Analytical    Numerical
                                                            derivatives   derivatives
    Failures in 54 tests                                         4             5
    Average iterations                                          32           127
    Mean of min. correct figures, parameters                   8.120         6.980
    Worst of min. correct figures, parameters                    4             3
    Mean of min. correct figures, standard errors              8.000         5.673
    Worst of min. correct figures, standard errors               5             2
    Percent correct to at least 6 figures, parameters          96.5          91.9
    Percent correct to at least 6 figures, standard errors     97.7          77.3
Chapter 17
Maximum likelihood estimation

17.1 Generic ML estimation with gretl

Maximum likelihood estimation is a cornerstone of modern inferential procedures. Gretl provides a way to implement this method for a wide range of estimation problems, by use of the mle command. We give here a few examples.

To give a foundation for the examples that follow, we start from a brief reminder on the basics of ML estimation. Given a sample of size $T$, it is possible to define the density function for the whole sample, namely the joint distribution of all the observations $f(Y; \theta)$, where $Y = y_1, \ldots, y_T$. (We are supposing here that our data are a realization of continuous random variables. For discrete random variables, everything continues to apply by referring to the probability function instead of the density. In both cases, the distribution may be conditional on some exogenous variables.) Its shape is determined by a $k$-vector of unknown parameters $\theta$, which we assume is contained in a set $\Theta$, and which can be used to evaluate the probability of observing a sample with any given characteristics.

After observing the data, the values $Y$ are given, and this function can be evaluated for any legitimate value of $\theta$. In this case, we prefer to call it the likelihood function; the need for another name stems from the fact that this function works as a density when we use the $y_t$s as arguments and $\theta$ as parameters, whereas in this context $\theta$ is taken as the function's argument, and the data $Y$ only have the role of determining its shape.

In standard cases, this function has a unique maximum. The location of the maximum is unaffected if we consider the logarithm of the likelihood (or log-likelihood for short): this function will be denoted as

$$\ell(\theta) = \log f(Y; \theta)$$

The log-likelihood functions that gretl can handle are those where $\ell(\theta)$ can be written as

$$\ell(\theta) = \sum_{t=1}^{T} \ell_t(\theta)$$

which is true in most cases of interest. The functions $\ell_t(\theta)$ are called the log-likelihood contributions.

Moreover, the location of the maximum is obviously determined by the data $Y$. This means that the value

$$\hat{\theta}(Y) = \underset{\theta \in \Theta}{\mathrm{Argmax}}\ \ell(\theta) \qquad (17.1)$$

is some function of the observed data (a statistic), which has the property, under mild conditions, of being a consistent, asymptotically normal and asymptotically efficient estimator of $\theta$.

Sometimes it is possible to write down explicitly the function $\hat{\theta}(Y)$; in general, it need not be so. In these circumstances, the maximum can be found by means of numerical techniques. These often rely on the fact that the log-likelihood is a smooth function of $\theta$, and therefore at the maximum its partial derivatives should all be 0. The gradient vector, or score vector, is a function that enjoys many interesting statistical properties in its own right; it will be denoted here as $g(\theta)$. It is a $k$-vector with typical element

$$g_i(\theta) = \frac{\partial \ell(\theta)}{\partial \theta_i} = \sum_{t=1}^{T} \frac{\partial \ell_t(\theta)}{\partial \theta_i}$$

Gradient-based methods can be shortly illustrated as follows:

1. pick a point $\theta_0 \in \Theta$;
2. evaluate $g(\theta_0)$;
3. if $g(\theta_0)$ is "small", stop. Otherwise, compute a direction vector $d(g(\theta_0))$;
4. evaluate $\theta_1 = \theta_0 + d(g(\theta_0))$;
5. substitute $\theta_0$ with $\theta_1$;
6. restart from 2.
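The scheme above is easy to try out by hand. The following hansl sketch applies it to the simplest possible case — a $N(\mu, 1)$ sample, for which the score is $g(\mu) = \sum_t (x_t - \mu)$ and the direction vector is taken to be the gradient itself, scaled by a small constant. It is purely illustrative: the variable names and step size are our own choices, and this is not the BFGS code gretl actually uses.

    # steepest ascent for the mean of a N(mu,1) sample
    nulldata 100
    set seed 54321
    series x = 3 + normal()
    scalar theta = 0             # step 1: pick a starting point
    scalar g = sum(x - theta)    # step 2: evaluate the score
    loop while abs(g) > 1.0e-8
        theta = theta + 0.001*g  # steps 3 and 4: move along the gradient
        g = sum(x - theta)       # steps 5 and 6: re-evaluate and restart
    endloop
    printf "ML estimate: %g (sample mean: %g)\n", theta, mean(x)

Since the log-likelihood here is quadratic in $\theta$, the iteration converges to the sample mean, which is the closed-form ML estimator in this case.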
Many algorithms of this kind exist; they basically differ from one another in the way they compute the direction vector $d(g(\theta_0))$, to ensure that $\ell(\theta_1) > \ell(\theta_0)$ (so that we eventually end up on the maximum).

The method gretl uses to maximize the log-likelihood is a gradient-based algorithm known as the BFGS (Broyden, Fletcher, Goldfarb and Shanno) method. This technique is used in most econometric and statistical packages, as it is well-established and remarkably powerful. Clearly, in order to make this technique operational, it must be possible to compute the vector $g(\theta)$ for any value of $\theta$. In some cases this vector can be written explicitly as a function of $Y$. If this is not possible or too difficult the gradient may be evaluated numerically.

The choice of the starting value, $\theta_0$, is crucial in some contexts and inconsequential in others. In general, however, it is advisable to start the algorithm from "sensible" values whenever possible. If a consistent estimator is available, this is usually a safe and efficient choice: it ensures that in large samples the starting point will likely be close to $\hat{\theta}$ and convergence can be achieved in few iterations.

The maximum number of iterations allowed for the BFGS procedure, and the relative tolerance for assessing convergence, can be adjusted using the set command: the relevant variables are bfgs_maxiter (default value 500) and bfgs_toler (default value, the machine precision to the power 3/4).

Covariance matrix and standard errors

By default the covariance matrix of the parameter estimates is based on the Outer Product of the Gradient. That is,

$$\mathrm{Var}_{\mathrm{OPG}}(\hat{\theta}) = \left( G'(\hat{\theta})\, G(\hat{\theta}) \right)^{-1}$$

where $G(\hat{\theta})$ is the $T \times k$ matrix of contributions to the gradient. Two other options are available. If the --hessian flag is given, the covariance matrix is computed from a numerical approximation to the Hessian at convergence. If the --robust option is selected, the quasi-ML "sandwich" estimator is used:

$$\mathrm{Var}_{\mathrm{QML}}(\hat{\theta}) = H(\hat{\theta})^{-1}\, G'(\hat{\theta})\, G(\hat{\theta})\, H(\hat{\theta})^{-1}$$

where $H$ denotes the numerical approximation to the Hessian.

17.2 Gamma estimation

Suppose we have a sample of $T$ independent and identically distributed observations from a Gamma distribution. The density function for each observation $x_t$ is

$$f(x_t) = \frac{\alpha^p}{\Gamma(p)}\, x_t^{p-1} \exp(-\alpha x_t) \qquad (17.2)$$

The log-likelihood for the entire sample can be written as the logarithm of the joint density of all the observations. Since these are independent and identical, the joint density is the product of the individual densities, and hence its log is

$$\ell(\alpha, p) = \sum_{t=1}^{T} \log \left[ \frac{\alpha^p}{\Gamma(p)}\, x_t^{p-1} \exp(-\alpha x_t) \right] = \sum_{t=1}^{T} \ell_t \qquad (17.3)$$

where

$$\ell_t = p \cdot \log(\alpha x_t) - \gamma(p) - \log x_t - \alpha x_t$$

and $\gamma(\cdot)$ is the log of the gamma function. In order to estimate the parameters $\alpha$ and $p$ via ML, we need to maximize (17.3) with respect to them. The corresponding gretl code snippet is

    scalar alpha = 1
    scalar p = 1

    mle logl = p*ln(alpha * x) - lngamma(p) - ln(x) - alpha * x
      params alpha p
    end mle

The first two statements

    scalar alpha = 1
    scalar p = 1

are necessary to ensure that the variables alpha and p exist before the computation of logl is attempted. Inside the mle block these variables are identified as the parameters that should be adjusted to maximize the likelihood via the params keyword. Their values will be changed by the execution of the mle command; upon successful completion, they will be replaced by the ML estimates. The starting value is 1 for both; this is arbitrary and does not matter much in this example (more on this later).
The above code can be made more readable, and marginally more efficient, by defining a variable to hold $\alpha \cdot x_t$. This command can be embedded in the mle block as follows:

    mle logl = p*ln(ax) - lngamma(p) - ln(x) - ax
      series ax = alpha*x
      params alpha p
    end mle

The variable ax is not added to the params list, of course, since it is just an auxiliary variable to facilitate the calculations. You can insert as many such auxiliary lines as you require before the params line, with the restriction that they must contain either (a) commands to generate series, scalars or matrices or (b) print commands (which may be used to aid in debugging).

In a simple example like this, the choice of the starting values is almost inconsequential; the algorithm is likely to converge no matter what the starting values are. However, consistent method-of-moments estimators of $p$ and $\alpha$ can be simply recovered from the sample mean $m$ and variance $V$: since it can be shown that

$$E(x_t) = p/\alpha \qquad V(x_t) = p/\alpha^2$$

it follows that the following estimators

$$\bar{\alpha} = m/V \qquad \bar{p} = m \cdot \bar{\alpha}$$

are consistent, and therefore suitable to be used as starting point for the algorithm. The gretl script code then becomes

    scalar m = mean(x)
    scalar alpha = m/var(x)
    scalar p = m*alpha

    mle logl = p*ln(ax) - lngamma(p) - ln(x) - ax
      series ax = alpha*x
      params alpha p
    end mle

Another thing to note is that sometimes parameters are constrained within certain boundaries: in this case, for example, both $\alpha$ and $p$ must be positive numbers. Gretl does not check for this: it is the user's responsibility to ensure that the function is always evaluated at an admissible point in the parameter space during the iterative search for the maximum. An effective technique is to define a variable for checking that the parameters are admissible and setting the log-likelihood as undefined if the check fails. An example, which uses the conditional assignment operator, follows:

    scalar m = mean(x)
    scalar alpha = m/var(x)
    scalar p = m*alpha

    mle logl = check ? p*ln(ax) - lngamma(p) - ln(x) - ax : NA
      series ax = alpha*x
      scalar check = (alpha>0) && (p>0)
      params alpha p
    end mle

17.3 Stochastic frontier cost function

When modeling a cost function, it is sometimes worthwhile to incorporate explicitly into the statistical model the notion that firms may be inefficient, so that the observed cost deviates from the theoretical figure not only because of unobserved heterogeneity between firms, but also because two firms could be operating at a different efficiency level, despite being identical in all other respects. In this case we may write

$$C_i = C_i^* + u_i + v_i$$

where $C_i$ is some variable cost indicator, $C_i^*$ is its "theoretical" value, $u_i$ is a zero-mean disturbance term and $v_i$ is the inefficiency term, which is supposed to be nonnegative by its very nature.

A linear specification for $C_i^*$ is often chosen. For example, the Cobb–Douglas cost function arises when $C_i^*$ is a linear function of the logarithms of the input prices and the output quantities.

The stochastic frontier model is a linear model of the form $y_i = x_i\beta + \varepsilon_i$ in which the error term $\varepsilon_i$ is the sum of $u_i$ and $v_i$. A common postulate is that $u_i \sim N(0, \sigma_u^2)$ and $v_i \sim \left|N(0, \sigma_v^2)\right|$. If independence between $u_i$ and $v_i$ is also assumed, then it is possible to show that the density function of $\varepsilon_i$ has the form:

$$f(\varepsilon_i) = \sqrt{\frac{2}{\pi}}\, \Phi\!\left(\frac{\lambda \varepsilon_i}{\sigma}\right) \frac{1}{\sigma}\, \phi\!\left(\frac{\varepsilon_i}{\sigma}\right) \qquad (17.4)$$

where $\Phi(\cdot)$ and $\phi(\cdot)$ are, respectively, the distribution and density function of the standard normal, $\sigma^2 = \sigma_u^2 + \sigma_v^2$ and $\lambda = \sigma_u / \sigma_v$.
As a consequence, the log-likelihood for one observation takes the form (apart from an irrelevant constant)

$$\ell_i = \log \Phi\!\left(\frac{\lambda \varepsilon_i}{\sigma}\right) - \left[ \log(\sigma) + \frac{\varepsilon_i^2}{2\sigma^2} \right]$$

Therefore, a Cobb–Douglas cost function with stochastic frontier is the model described by the following equations:

$$\log C_i = \log C_i^* + \varepsilon_i$$
$$\log C_i^* = c + \sum_{j=1}^{m} \beta_j \log y_{ij} + \sum_{j=1}^{n} \alpha_j \log p_{ij}$$
$$\varepsilon_i = u_i + v_i$$
$$u_i \sim N(0, \sigma_u^2)$$
$$v_i \sim \left|N(0, \sigma_v^2)\right|$$

In most cases, one wants to ensure that the homogeneity of the cost function with respect to the prices holds by construction. Since this requirement is equivalent to $\sum_{j=1}^{n} \alpha_j = 1$, the above equation for $C_i^*$ can be rewritten as

$$\log C_i - \log p_{in} = c + \sum_{j=1}^{m} \beta_j \log y_{ij} + \sum_{j=2}^{n} \alpha_j (\log p_{ij} - \log p_{in}) + \varepsilon_i \qquad (17.5)$$

The above equation could be estimated by OLS, but it would suffer from two drawbacks: first, the OLS estimator for the intercept $c$ is inconsistent because the disturbance term has a non-zero expected value; second, the OLS estimators for the other parameters are consistent, but inefficient in view of the non-normality of $\varepsilon_i$. Both issues can be addressed by estimating (17.5) by maximum likelihood. Nevertheless, OLS estimation is a quick and convenient way to provide starting values for the MLE algorithm.

Example 17.1 shows how to implement the model described so far. The banks91 file contains part of the data used in Lucchetti, Papi and Zazzaro (2001).

Example 17.1: Estimation of stochastic frontier cost function

    open banks91

    # Cobb-Douglas cost function
    ols cost const y p1 p2 p3

    # Cobb-Douglas cost function with homogeneity restrictions
    genr rcost = cost - p3
    genr rp1 = p1 - p3
    genr rp2 = p2 - p3
    ols rcost const y rp1 rp2

    # Cobb-Douglas cost function with homogeneity restrictions
    # and inefficiency
    scalar b0 = $coeff(const)
    scalar b1 = $coeff(y)
    scalar b2 = $coeff(rp1)
    scalar b3 = $coeff(rp2)
    scalar su = 0.1
    scalar sv = 0.1

    mle logl = ln(cnorm(e*lambda/ss)) - (ln(ss) + 0.5*(e/ss)^2)
      scalar ss = sqrt(su^2 + sv^2)
      scalar lambda = su/sv
      series e = rcost - b0*const - b1*y - b2*rp1 - b3*rp2
      params b0 b1 b2 b3 su sv
    end mle

17.4 GARCH models

GARCH models are handled by gretl via a native function. However, it is instructive to see how they can be estimated through the mle command. The following equations provide the simplest example of a GARCH(1,1) model:

$$y_t = \mu + \varepsilon_t$$
$$\varepsilon_t = u_t \cdot \sigma_t$$
$$u_t \sim N(0, 1)$$
$$h_t = \omega + \alpha \varepsilon_{t-1}^2 + \beta h_{t-1}$$

where $h_t \equiv \sigma_t^2$. Since the variance of $y_t$ depends on past values, writing down the log-likelihood function is not simply a matter of summing the log densities for individual observations. As is common in time series models, $y_t$ cannot be considered independent of the other observations in our sample, and consequently the density function for the whole sample (the joint density for all observations) is not just the product of the marginal densities.

Maximum likelihood estimation, in these cases, is achieved by considering conditional densities, so what we maximize is a conditional likelihood function. If we define the information set at time $t$ as

$$F_t = \{ y_t, y_{t-1}, \ldots \}$$

then the density of $y_t$ conditional on $F_{t-1}$ is normal:

$$y_t \,|\, F_{t-1} \sim N[\mu, h_t]$$

By means of the properties of conditional distributions, the joint density can be factorized as follows:
$$f(y_T, y_{T-1}, \ldots) = \prod_{t=1}^{T} f(y_t \,|\, F_{t-1}) \cdot f(y_0)$$

If we treat $y_0$ as fixed, then the term $f(y_0)$ does not depend on the unknown parameters, and therefore the conditional log-likelihood can then be written as the sum of the individual contributions as

$$\ell(\mu, \omega, \alpha, \beta) = \sum_{t=1}^{T} \ell_t \qquad (17.6)$$

where

$$\ell_t = \log \left[ \frac{1}{\sqrt{h_t}}\, \phi\!\left( \frac{y_t - \mu}{\sqrt{h_t}} \right) \right] = -\frac{1}{2} \left[ \log(h_t) + \frac{(y_t - \mu)^2}{h_t} \right]$$

The following script shows a simple application of this technique, which uses the data file djclose; it is one of the example datasets supplied with gretl and contains daily data from the Dow Jones stock index.

    open djclose

    series y = 100*ldiff(djclose)

    scalar mu = 0.0
    scalar omega = 1
    scalar alpha = 0.4
    scalar beta = 0.0

    mle ll = -0.5*(log(h) + (e^2)/h)
      series e = y - mu
      series h = var(y)
      series h = omega + alpha*(e(-1))^2 + beta*h(-1)
      params mu omega alpha beta
    end mle

17.5 Analytical derivatives

Computation of the score vector is essential for the working of the BFGS method. In all the previous examples, no explicit formula for the computation of the score was given, so the algorithm was fed numerically evaluated gradients. Numerical computation of the score for the $i$-th parameter is performed via a finite approximation of the derivative, namely

$$\frac{\partial \ell(\theta_1, \ldots, \theta_n)}{\partial \theta_i} \simeq \frac{\ell(\theta_1, \ldots, \theta_i + h, \ldots, \theta_n) - \ell(\theta_1, \ldots, \theta_i - h, \ldots, \theta_n)}{2h}$$

where $h$ is a small number. In many situations, this is rather efficient and accurate. However, one might want to avoid the approximation and specify an exact function for the derivatives. As an example, consider the following script:

    nulldata 1000

    genr x1 = normal()
    genr x2 = normal()
    genr x3 = normal()

    genr ystar = x1 + x2 + x3 + normal()
    genr y = (ystar > 0)

    scalar b0 = 0
    scalar b1 = 0
    scalar b2 = 0
    scalar b3 = 0

    mle logl = y*ln(P) + (1-y)*ln(1-P)
      series ndx = b0 + b1*x1 + b2*x2 + b3*x3
      series P = cnorm(ndx)
      params b0 b1 b2 b3
    end mle --verbose

Here, 1000 data points are artificially generated for an ordinary probit model: $y_t$ is a binary variable, which takes the value 1 if $y_t^* = \beta_1 x_{1t} + \beta_2 x_{2t} + \beta_3 x_{3t} + \varepsilon_t > 0$ and 0 otherwise. (Again, gretl does provide a native probit command — see section 24.1 — but a probit model makes for a nice example here.) Therefore, $y_t = 1$ with probability $\Phi(\beta_1 x_{1t} + \beta_2 x_{2t} + \beta_3 x_{3t}) = \pi_t$. The probability function for one observation can be written as

$$P(y_t) = \pi_t^{y_t} (1 - \pi_t)^{1 - y_t}$$

Since the observations are independent and identically distributed, the log-likelihood is simply the sum of the individual contributions. Hence

$$\ell = \sum_{t=1}^{T} \left[ y_t \log(\pi_t) + (1 - y_t) \log(1 - \pi_t) \right]$$

The --verbose switch at the end of the end mle statement produces a detailed account of the iterations done by the BFGS algorithm.
In this case, numerical differentiation works rather well; nevertheless, computation of the analytical score is straightforward, since the derivative $\frac{\partial \ell}{\partial \beta_i}$ can be written as

$$\frac{\partial \ell}{\partial \beta_i} = \frac{\partial \ell}{\partial \pi_t} \cdot \frac{\partial \pi_t}{\partial \beta_i}$$

via the chain rule, and it is easy to see that

$$\frac{\partial \ell}{\partial \pi_t} = \frac{y_t}{\pi_t} - \frac{1 - y_t}{1 - \pi_t}$$
$$\frac{\partial \pi_t}{\partial \beta_i} = \phi(\beta_1 x_{1t} + \beta_2 x_{2t} + \beta_3 x_{3t}) \cdot x_{it}$$

The mle block in the above script can therefore be modified as follows:

    mle logl = y*ln(P) + (1-y)*ln(1-P)
      series ndx = b0 + b1*x1 + b2*x2 + b3*x3
      series P = cnorm(ndx)
      series tmp = dnorm(ndx)*(y/P - (1-y)/(1-P))
      deriv b0 = tmp
      deriv b1 = tmp*x1
      deriv b2 = tmp*x2
      deriv b3 = tmp*x3
    end mle --verbose

Note that the params statement has been replaced by a series of deriv statements; these have the double function of identifying the parameters over which to optimize and providing an analytical expression for their respective score elements.

17.6 Debugging ML scripts

We have discussed above the main sorts of statements that are permitted within an mle block, namely

• auxiliary commands to generate helper variables;
• deriv statements to specify the gradient with respect to each of the parameters; and
• a params statement to identify the parameters in case analytical derivatives are not given.

For the purpose of debugging ML estimators one additional sort of statement is allowed: you can print the value of a relevant variable at each step of the iteration. This facility is more restricted than the regular print command. The command word print should be followed by the name of just one variable (a scalar, series or matrix).

In the last example above a key variable named tmp was generated, forming the basis for the analytical derivatives. To track the progress of this variable one could add a print statement within the ML block, as in

    series tmp = dnorm(ndx)*(y/P - (1-y)/(1-P))
    print tmp

17.7 Using functions

The mle command allows you to estimate models that gretl does not provide natively: in some cases, it may be a good idea to wrap up the mle block in a user-defined function (see Chapter 10), so as to extend gretl's capabilities in a modular and flexible way.

As an example, we will take a simple case of a model that gretl does not yet provide natively: the zero-inflated Poisson model, or ZIP for short. (The actual ZIP model is in fact a bit more general than the one presented here. The specialized version discussed in this section was chosen for the sake of simplicity. For further details, see Greene (2003).) In this model, we assume that we observe a mixed population: for some individuals, the variable $y_t$ is (conditionally on a vector of exogenous covariates $x_t$) distributed as a Poisson random variate; for some others, $y_t$ is identically 0. The trouble is, we don't know which category a given individual belongs to.

For instance, suppose we have a sample of women, and the variable $y_t$ represents the number of children that woman $t$ has. There may be a certain proportion, $\alpha$, of women for whom $y_t = 0$ with certainty (maybe out of a personal choice, or due to physical impossibility). But there may be other women for whom $y_t = 0$ just as a matter of chance — they haven't happened to have any children at the time of observation. In formulae:

$$P(y_t = k \,|\, x_t) = \alpha d_t + (1 - \alpha) \frac{e^{-\mu_t} \mu_t^{y_t}}{y_t!}$$
$$\mu_t = \exp(x_t \beta)$$
$$d_t = \begin{cases} 1 & \text{for } y_t = 0 \\ 0 & \text{for } y_t > 0 \end{cases}$$

Writing a mle block for this model is not difficult:

    mle ll = logprob
      series xb = exp(b0 + b1 * x)
      series d = (y=0)
      series poiprob = exp(-xb) * xb^y / gamma(y+1)
      series logprob = (alpha>0) && (alpha<1) ? \
        log(alpha*d + (1-alpha)*poiprob) : NA
      params alpha b0 b1
    end mle -v

However, the code above has to be modified each time we change our specification by, say, adding an explanatory variable.
Using functions, we can simplify this task considerably and eventually be able to write something easy like

    list X = const x
    zip(y, X)

Let's see how this can be done. First we need to define a function called zip() that will take two arguments: a dependent variable y and a list of explanatory variables X. An example of such a function can be seen in Example 17.2. By inspecting the function code, you can see that the actual estimation does not happen here: rather, the zip() function merely uses the built-in modprint command to print out the results coming from another user-written function, namely zip_estimate().

Example 17.2: Zero-inflated Poisson Model — user-level function

    /* user-level function: estimate the model and print out the results */
    function void zip(series y, list X)
      matrix coef_stde = zip_estimate(y, X)
      printf "\nZero-inflated Poisson model:\n"
      string parnames = "alpha,"
      string parnames += varname(X)
      modprint coef_stde parnames
    end function

The function zip_estimate() is not meant to be executed directly; it just contains the number-crunching part of the job, whose results are then picked up by the calling function zip(). In turn, zip_estimate() calls other user-written functions to perform other tasks. The whole set of "internal" functions is shown in Example 17.3.

Example 17.3: Zero-inflated Poisson Model — internal functions

    /* compute log probabilities for the plain Poisson model */
    function series ln_poi_prob(series y, list X, matrix beta)
      series xb = lincomb(X, beta)
      return -exp(xb) + y*xb - lngamma(y+1)
    end function

    /* compute log probabilities for the zero-inflated Poisson model */
    function series ln_zip_prob(series y, list X, matrix beta, scalar p0)
      # check if the probability is in [0,1]; otherwise, return NA
      if (p0>1) || (p0<0)
        series ret = NA
      else
        series ret = ln_poi_prob(y, X, beta) + ln(1-p0)
        series ret = (y=0) ? ln(p0 + exp(ret)) : ret
      endif
      return ret
    end function

    /* do the actual estimation (silently) */
    function matrix zip_estimate(series y, list X)
      # initialize alpha to a "sensible" value: half the frequency
      # of zeros in the sample
      scalar alpha = mean(y=0)/2
      # initialize the coeffs (we assume the first explanatory
      # variable is the constant here)
      matrix coef = zeros(nelem(X), 1)
      coef[1] = mean(y) / (1-alpha)
      # do the actual ML estimation
      mle ll = ln_zip_prob(y, X, coef, alpha)
        params alpha coef
      end mle --hessian --quiet
      return $coeff ~ $stderr
    end function

All the functions shown in Examples 17.2 and 17.3 can be stored in a separate .inp file and executed once, at the beginning of our job, by means of the include command. Assuming the name of this script file is zip_est.inp, the following is an example script which (a) includes the script file, (b) generates a simulated dataset, and (c) performs the estimation of a ZIP model on the artificial data.

    set echo off
    set messages off

    # include the user-written functions
    include zip_est.inp

    # generate the artificial data
    nulldata 1000
    set seed 732237
    scalar truep = 0.2
    scalar b0 = 0.2
    scalar b1 = 0.5
    series x = normal()
    series y = (uniform()<truep) ? 0 : genpois(exp(b0 + b1*x))
    list X = const x

    # estimate the zero-inflated Poisson model
    zip(y, X)

The results are as follows:
    Zero-inflated Poisson model:

                 coefficient   std. error    z-stat     p-value
      -------------------------------------------------------
      alpha       0.203069     0.0238035      8.531    1.45e-17 ***
      const       0.257014     0.0417129      6.161    7.21e-10 ***
      x           0.466657     0.0321235     14.53     8.17e-48 ***

A further step may then be creating a function package for accessing your new zip() function via gretl's graphical interface. For details on how to do this, see section 10.6.

Chapter 18
GMM estimation

18.1 Introduction and terminology

The Generalized Method of Moments (GMM) is a very powerful and general estimation method, which encompasses practically all the parametric estimation techniques used in econometrics. It was introduced in Hansen (1982) and Hansen and Singleton (1982); an excellent and thorough treatment is given in Davidson and MacKinnon (1993), chapter 17.

The basic principle on which GMM is built is rather straightforward. Suppose we wish to estimate a scalar parameter $\theta$ based on a sample $x_1, x_2, \ldots, x_T$. Let $\theta_0$ indicate the "true" value of $\theta$. Theoretical considerations (either of statistical or economic nature) may suggest that a relationship like the following holds:

$$E\left[ x_t - g(\theta) \right] = 0 \iff \theta = \theta_0, \qquad (18.1)$$

with $g(\cdot)$ a continuous and invertible function. That is to say, there exists a function of the data and the parameter, with the property that it has expectation zero if and only if it is evaluated at the true parameter value. For example, economic models with rational expectations lead to expressions like (18.1) quite naturally.

If the sampling model for the $x_t$s is such that some version of the Law of Large Numbers holds, then

$$\bar{X} = \frac{1}{T} \sum_{t=1}^{T} x_t \overset{p}{\longrightarrow} g(\theta_0);$$

hence, since $g(\cdot)$ is invertible, the statistic

$$\hat{\theta} = g^{-1}(\bar{X}) \overset{p}{\longrightarrow} \theta_0,$$

so $\hat{\theta}$ is a consistent estimator of $\theta$. A different way to obtain the same outcome is to choose, as an estimator of $\theta$, the value that minimizes the objective function

$$F(\theta) = \left[ \frac{1}{T} \sum_{t=1}^{T} \left( x_t - g(\theta) \right) \right]^2 = \left[ \bar{X} - g(\theta) \right]^2; \qquad (18.2)$$

the minimum is trivially reached at $\hat{\theta} = g^{-1}(\bar{X})$, since the expression in square brackets equals 0.

The above reasoning can be generalized as follows: suppose $\theta$ is an $n$-vector and we have $m$ relations like

$$E\left[ f_i(x_t, \theta) \right] = 0 \quad \text{for } i = 1 \ldots m, \qquad (18.3)$$

where $E[\cdot]$ is a conditional expectation on a set of $p$ variables $z_t$, called the instruments. In the above simple example, $m = 1$ and $f(x_t, \theta) = x_t - g(\theta)$, and the only instrument used is $z_t = 1$. Then, it must also be true that

$$E\left[ f_i(x_t, \theta) \cdot z_{j,t} \right] = E\left[ f_{i,j,t}(\theta) \right] = 0 \quad \text{for } i = 1 \ldots m \text{ and } j = 1 \ldots p; \qquad (18.4)$$

equation (18.4) is known as an orthogonality condition, or moment condition. The GMM estimator is defined as the minimum of the quadratic form

$$F(\theta, W) = \bar{f}\, W\, \bar{f}', \qquad (18.5)$$

where $\bar{f}$ is a $(1 \times m \cdot p)$ vector holding the average of the orthogonality conditions and $W$ is some symmetric, positive definite matrix, known as the weights matrix. A necessary condition for the minimum to exist is the order condition $n \leq m \cdot p$.

The statistic

$$\hat{\theta} = \underset{\theta}{\mathrm{Argmin}}\ F(\theta, W) \qquad (18.6)$$

is a consistent estimator of $\theta$ whatever the choice of $W$. However, to achieve maximum asymptotic efficiency $W$ must be proportional to the inverse of the long-run covariance matrix of the orthogonality conditions; if $W$ is not known, a consistent estimator will suffice.

These considerations lead to the following empirical strategy:

1. Choose a positive definite $W$ and compute the one-step GMM estimator $\hat{\theta}_1$. Customary choices for $W$ are $I_{m \cdot p}$ or $I_m \otimes (Z'Z)^{-1}$.

2. Use $\hat{\theta}_1$ to estimate $V(f_{i,j,t}(\theta))$ and use its inverse as the weights matrix.
The resulting estimator $\hat{\theta}_2$ is called the two-step estimator.

3. Re-estimate $V(f_{i,j,t}(\theta))$ by means of $\hat{\theta}_2$ and obtain $\hat{\theta}_3$; iterate until convergence. Asymptotically, these extra steps are unnecessary, since the two-step estimator is consistent and efficient; however, the iterated estimator often has better small-sample properties and should be independent of the choice of $W$ made at step 1.

In the special case when the number of parameters $n$ is equal to the total number of orthogonality conditions $m \cdot p$, the GMM estimator $\hat{\theta}$ is the same for any choice of the weights matrix $W$, so the first step is sufficient; in this case, the objective function is 0 at the minimum.

If, on the contrary, $n < m \cdot p$, the second step (or successive iterations) is needed to achieve efficiency, and the estimator so obtained can be very different, in finite samples, from the one-step estimator. Moreover, the value of the objective function at the minimum, suitably scaled by the number of observations, yields Hansen's J statistic; this statistic can be interpreted as a test statistic that has a $\chi^2$ distribution with $m \cdot p - n$ degrees of freedom under the null hypothesis of correct specification. See Davidson and MacKinnon (1993), section 17.6 for details.

In the following sections we will show how these ideas are implemented in gretl through some examples.

18.2 OLS as GMM

It is instructive to start with a somewhat contrived example: consider the linear model $y_t = x_t\beta + u_t$. Although most of us are used to reading it as the sum of a hazily defined "systematic part" plus an equally hazy "disturbance", a more rigorous interpretation of this familiar expression comes from the hypothesis that the conditional mean $E(y_t|x_t)$ is linear and the definition of $u_t$ as $y_t - E(y_t|x_t)$.

From the definition of $u_t$, it follows that $E(u_t|x_t) = 0$. The following orthogonality condition is therefore available:

$$E\left[ f(\beta) \right] = 0, \qquad (18.7)$$

where $f(\beta) = (y_t - x_t\beta)x_t$. The definitions given in the previous section therefore specialize here to:

• $\theta$ is $\beta$;
• the instrument is $x_t$;
• $f_{i,j,t}(\theta)$ is $(y_t - x_t\beta)x_t = u_t x_t$; the orthogonality condition is interpretable as the requirement that the regressors should be uncorrelated with the disturbances;
• $W$ can be any symmetric positive definite matrix, since the number of parameters equals the number of orthogonality conditions. Let's say we choose $I$.
• The function $F(\theta, W)$ is in this case

$$F(\theta, W) = \left[ \frac{1}{T} \sum_{t=1}^{T} \hat{u}_t x_t \right]^2$$

and it is easy to see why OLS and GMM coincide here: the GMM objective function has the same minimizer as the objective function of OLS, the residual sum of squares. Note, however, that the two functions are not equal to one another: at the minimum, $F(\theta, W) = 0$ while the minimized sum of squared residuals is zero only in the special case of a perfect linear fit.

The code snippet contained in Example 18.1 uses gretl's gmm command to make the above operational.

Example 18.1: OLS via GMM

    /* initialize stuff */
    series e = 0
    scalar beta = 0
    matrix V = I(1)

    /* proceed with estimation */
    gmm
      series e = y - x*beta
      orthog e ; x
      weights V
      params beta
    end gmm

We feed gretl the necessary ingredients for GMM estimation in a command block, starting with gmm and ending with end gmm. After the end gmm statement two mutually exclusive options can be specified: --two-step or --iterate, whose meaning should be obvious.

Three elements are compulsory within a gmm block:

1. one or more orthog statements
2. one weights statement
3. one params statement
The three elements should be given in the stated order.

The orthog statements are used to specify the orthogonality conditions. They must follow the syntax

    orthog x ; Z

where x may be a series, matrix or list of series and Z may also be a series, matrix or list. In Example 18.1, the series e holds the "residuals" and the series x holds the regressor. If x had been a list (a matrix), the orthog statement would have generated one orthogonality condition for each element (column) of x. Note the structure of the orthogonality condition: it is assumed that the term to the left of the semicolon represents a quantity that depends on the estimated parameters (and so must be updated in the process of iterative estimation), while the term on the right is a constant function of the data.

The weights statement is used to specify the initial weighting matrix and its syntax is straightforward. Note, however, that when more than one step is required that matrix will contain the final weight matrix, which most likely will be different from its initial value.

The params statement specifies the parameters with respect to which the GMM criterion should be minimized; it follows the same logic and rules as in the mle and nls commands.

The minimum is found through numerical minimization via BFGS (see section 5.9 and chapter 17). The progress of the optimization procedure can be observed by appending the --verbose switch to the end gmm line. (In this example GMM estimation is clearly a rather silly thing to do, since a closed form solution is easily given by OLS.)

18.3 TSLS as GMM

Moving closer to the proper domain of GMM, we now consider two-stage least squares (TSLS) as a case of GMM.

TSLS is employed in the case where one wishes to estimate a linear model of the form $y_t = X_t\beta + u_t$, but where one or more of the variables in the matrix $X$ are potentially endogenous — correlated with the error term, $u$. We proceed by identifying a set of instruments, $Z_t$, which are explanatory for the endogenous variables in $X$ but which are plausibly uncorrelated with $u$. The classic two-stage procedure is (1) regress the endogenous elements of $X$ on $Z$; then (2) estimate the equation of interest, with the endogenous elements of $X$ replaced by their fitted values from (1).

An alternative perspective is given by GMM. We define the residual $\hat{u}_t$ as $y_t - X_t\hat{\beta}$, as usual. But instead of relying on $E(u|X) = 0$ as in OLS, we base estimation on the condition $E(u|Z) = 0$. In this case it is natural to base the initial weighting matrix on the covariance matrix of the instruments. Example 18.2 presents a model from Stock and Watson's Introduction to Econometrics. The demand for cigarettes is modeled as a linear function of the logs of price and income; income is treated as exogenous while price is taken to be endogenous and two measures of tax are used as instruments. Since we have two instruments and one endogenous variable the model is over-identified and therefore the weights matrix will influence the solution.

Partial output from this script is shown in Example 18.3. The estimated standard errors from GMM are robust by default; if we supply the --robust option to the tsls command we get identical results. (The data file used in this example is available in the Stock and Watson package for gretl. See http://gretl.sourceforge.net/gretl_data.html.)

18.4 Covariance matrix options

The covariance matrix of the estimated parameters depends on the choice of $W$ through

$$\hat{\Sigma} = (J'WJ)^{-1}\, J'W\, \Omega\, WJ\, (J'WJ)^{-1} \qquad (18.8)$$

where $J$ is a Jacobian term

$$J_{ij} = \frac{\partial \bar{f}_i}{\partial \theta_j}$$

and $\Omega$ is the long-run covariance matrix of the orthogonality conditions.
Gretl computes $J$ by numeric differentiation (there is no provision for specifying a user-supplied analytical expression for $J$ at the moment). As for $\Omega$, a consistent estimate is needed. The simplest choice is the sample covariance matrix of the $f_t$s:

$$\hat{\Omega}_0(\theta) = \frac{1}{T} \sum_{t=1}^{T} f_t(\theta)\, f_t(\theta)' \qquad (18.9)$$

This estimator is robust with respect to heteroskedasticity, but not with respect to autocorrelation. A heteroskedasticity- and autocorrelation-consistent (HAC) variant can be obtained using the Bartlett kernel or similar. A univariate version of this is used in the context of the lrvar() function — see equation (5.1). The multivariate version is set out in equation (18.10):

$$\hat{\Omega}_k(\theta) = \frac{1}{T} \sum_{t=k}^{T-k} \sum_{i=-k}^{k} w_i\, f_t(\theta)\, f_{t-i}(\theta)' \qquad (18.10)$$

Gretl computes the HAC covariance matrix by default when a GMM model is estimated on time series data. You can control the kernel and the bandwidth (that is, the value of $k$ in 18.10) using the set command. See chapter 14 for further discussion of HAC estimation. You can also ask gretl not to use the HAC version by saying

    set force_hc on

Example 18.2: TSLS via GMM

    open cig_ch10.gdt
    # real avg price including sales tax
    genr ravgprs = avgprs / cpi
    # real avg cig-specific tax
    genr rtax = tax / cpi
    # real average total tax
    genr rtaxs = taxs / cpi
    # real average sales tax
    genr rtaxso = rtaxs - rtax
    # logs of consumption, price, income
    genr lpackpc = log(packpc)
    genr lravgprs = log(ravgprs)
    genr perinc = income / (pop*cpi)
    genr lperinc = log(perinc)
    # restrict sample to 1995 observations
    smpl --restrict year=1995
    # Equation (10.16) by tsls
    list xlist = const lravgprs lperinc
    list zlist = const rtaxso rtax lperinc
    tsls lpackpc xlist ; zlist --robust
    # setup for gmm
    matrix Z = { zlist }
    matrix W = inv(Z'Z)
    series e = 0
    scalar b0 = 1
    scalar b1 = 1
    scalar b2 = 1
    gmm e = lpackpc - b0 - b1*lravgprs - b2*lperinc
      orthog e ; Z
      weights W
      params b0 b1 b2
    end gmm

Example 18.3: TSLS via GMM: partial output

    Model 1: TSLS estimates using the 48 observations 1-48
    Dependent variable: lpackpc
    Instruments: rtaxso rtax
    Heteroskedasticity-robust standard errors, variant HC0

      VARIABLE    COEFFICIENT    STDERROR    T STAT    P-VALUE
      const         9.89496      0.928758    10.654   <0.00001 ***
      lravgprs     -1.27742      0.241684    -5.286   <0.00001 ***
      lperinc       0.280405     0.245828     1.141    0.25401

    Model 2: 1-step GMM estimates using the 48 observations 1-48
    e = lpackpc - b0 - b1*lravgprs - b2*lperinc

      PARAMETER   ESTIMATE       STDERROR    T STAT    P-VALUE
      b0            9.89496      0.928758    10.654   <0.00001 ***
      b1           -1.27742      0.241684    -5.286   <0.00001 ***
      b2            0.280405     0.245828     1.141    0.25401

    GMM criterion = 0.0110046

18.5 A real example: the Consumption Based Asset Pricing Model

To illustrate gretl's implementation of GMM, we will replicate the example given in chapter 3 of Hall (2005). The model to estimate is a classic application of GMM, and provides an example of a case when orthogonality conditions do not stem from statistical considerations, but rather from economic theory.

A rational individual who must allocate his income between consumption and investment in a financial asset must in fact choose the consumption path of his whole lifetime, since investment translates into future consumption. It can be shown that an optimal consumption path should satisfy the following condition:

$$p\, U'(c_t) = \delta^k\, E\left[ r_{t+k}\, U'(c_{t+k}) \mid F_t \right], \qquad (18.11)$$

where $p$ is the asset price, $U(\cdot)$ is the individual's utility function, $\delta$ is the individual's subjective discount rate and $r_{t+k}$ is the asset's rate of return between time $t$ and time $t+k$. $F_t$ is the information set at time $t$; equation (18.11) says that the utility "lost" at time $t$ by purchasing the asset instead of consumption goods must be matched by a corresponding increase in the (discounted) future utility of the consumption financed by the asset's return. Since the future is uncertain, the individual considers his expectation, conditional on what is known at the time when the choice is made.

We have said nothing about the nature of the asset, so equation (18.11) should hold whatever asset we consider; hence, it is possible to build a system of equations like (18.11) for each asset whose price we observe. If we are willing to believe that

• the economy as a whole can be represented as a single gigantic and immortal representative individual, and
• the function $U(x) = \dfrac{x^\alpha - 1}{\alpha}$ is a faithful representation of the individual's preferences,
It can be shown that an optimal consumption path should satisfy the following condition: pU (ct ) = δk E rt +k U (ct +k )|Ft , (18.11) where p is the asset price, U(·) is the individual’s utility function, δ is the individual’s subjective discount rate and rt +k is the asset’s rate of return between time t and time t + k. Ft is the information set at time t ; equation (18.11) says that the utility “lost” at time t by purchasing the asset instead of consumption goods must be matched by a corresponding increase in the (discounted) future utility of the consumption ﬁnanced by the asset’s return. Since the future is uncertain, the individual considers his expectation, conditional on what is known at the time when the choice is made. We have said nothing about the nature of the asset, so equation (18.11) should hold whatever asset we consider; hence, it is possible to build a system of equations like (18.11) for each asset whose price we observe. If we are willing to believe that • the economy as a whole can be represented as a single gigantic and immortal representative individual, and • the function U(x) = x α −1 α is a faithful representation of the individual’s preferences, Chapter 18. GMM estimation then, setting k = 1, equation (18.11) implies the following for any asset j : Eδ rj,t +1 pj,t Ct +1 Ct α−1 147 Ft = 1, (18.12) where Ct is aggregate consumption and α and δ are the risk aversion and discount rate of the representative individual. In this case, it is easy to see that the “deep” parameters α and δ can be estimated via GMM by using rj,t +1 Ct +1 α−1 −1 et = δ pj,t Ct as the moment condition, while any variable known at time t may serve as an instrument. Example 18.4: Estimation of the Consumption Based Asset Pricing Model open hall.gdt set force_hc on scalar alpha = 0.5 scalar delta = 0.5 series e = 0 list inst = const consrat(-1) consrat(-2) ewr(-1) ewr(-2) matrix V0 = 100000*I(nelem(inst)) matrix Z = { inst } matrix V1 = $nobs*inv(Z’Z) gmm e = delta*ewr*consrat^(alpha-1) - 1 orthog e ; inst weights V0 params alpha delta end gmm gmm e = delta*ewr*consrat^(alpha-1) - 1 orthog e ; inst weights V1 params alpha delta end gmm gmm e = delta*ewr*consrat^(alpha-1) - 1 orthog e ; inst weights V0 params alpha delta end gmm --iterate gmm e = delta*ewr*consrat^(alpha-1) - 1 orthog e ; inst weights V1 params alpha delta end gmm --iterate In the example code given in 18.4, we replicate selected portions of table 3.7 in Hall (2005). The variable consrat is deﬁned as the ratio of monthly consecutive real per capita consumption (services and nondurables) for the US, and ewr is the return–price ratio of a ﬁctitious asset constructed Chapter 18. GMM estimation 148 by averaging all the stocks in the NYSE. The instrument set contains the constant and two lags of each variable. The command set force_hc on on the second line of the script has the sole purpose of replicating the given example: as mentioned above, it forces gretl to compute the long-run variance of the orthogonality conditions according to equation (18.9) rather than (18.10). We run gmm four times: one-step estimation for each of two initial weights matrices, then iterative estimation starting from each set of initial weights. Since the number of orthogonality conditions (5) is greater than the number of estimated parameters (2), the choice of initial weights should make a diﬀerence, and indeed we see fairly substantial diﬀerences between the one-step estimates (Models 1 and 2). 
On the other hand, iteration reduces these differences almost to the vanishing point (Models 3 and 4). Part of the output is given in Example 18.5. It should be noted that the J test leads to a rejection of the hypothesis of correct specification. This is perhaps not surprising given the heroic assumptions required to move from the microeconomic principle in equation (18.11) to the aggregate system that is actually estimated.

Example 18.5: Estimation of the Consumption Based Asset Pricing Model — output

    Model 1: 1-step GMM estimates using the 465 observations 1959:04-1997:12
    e = d*ewr*consrat^(alpha-1) - 1

      PARAMETER   ESTIMATE      STDERROR      T STAT    P-VALUE
      alpha       -3.14475      6.84439       -0.459    0.64590
      d            0.999215     0.0121044     82.549   <0.00001 ***

    GMM criterion = 2778.08

    Model 2: 1-step GMM estimates using the 465 observations 1959:04-1997:12
    e = d*ewr*consrat^(alpha-1) - 1

      PARAMETER   ESTIMATE      STDERROR      T STAT    P-VALUE
      alpha        0.398194     2.26359        0.176    0.86036
      d            0.993180     0.00439367   226.048   <0.00001 ***

    GMM criterion = 14.247

    Model 3: Iterated GMM estimates using the 465 observations 1959:04-1997:12
    e = d*ewr*consrat^(alpha-1) - 1

      PARAMETER   ESTIMATE      STDERROR      T STAT    P-VALUE
      alpha       -0.344325     2.21458       -0.155    0.87644
      d            0.991566     0.00423620   234.070   <0.00001 ***

    GMM criterion = 5491.78
    J test: Chi-square(3) = 11.8103 (p-value 0.0081)

    Model 4: Iterated GMM estimates using the 465 observations 1959:04-1997:12
    e = d*ewr*consrat^(alpha-1) - 1

      PARAMETER   ESTIMATE      STDERROR      T STAT    P-VALUE
      alpha       -0.344315     2.21359       -0.156    0.87639
      d            0.991566     0.00423469   234.153   <0.00001 ***

    GMM criterion = 5491.78
    J test: Chi-square(3) = 11.8103 (p-value 0.0081)

18.6 Caveats

A few words of warning are in order: despite its ingenuity, GMM is possibly the most fragile estimation method in econometrics. The number of non-obvious choices one has to make when using GMM is high, and in finite samples each of these can have dramatic consequences on the eventual output. Some of the factors that may affect the results are:

1. Orthogonality conditions can be written in more than one way: for example, if $E(x_t - \mu) = 0$, then $E(x_t/\mu - 1) = 0$ holds too. It is possible that a different specification of the moment conditions leads to different results.

2. As with all other numerical optimization algorithms, weird things may happen when the objective function is nearly flat in some directions or has multiple minima. BFGS is usually quite good, but there is no guarantee that it always delivers a sensible solution, if one at all.

3. The 1-step and, to a lesser extent, the 2-step estimators may be sensitive to apparently trivial details, like the re-scaling of the instruments. Different choices for the initial weights matrix can also have noticeable consequences.

4. With time-series data, there is no hard rule on the appropriate number of lags to use when computing the long-run covariance matrix (see section 18.4). Our advice is to go by trial and error, since results may be greatly influenced by a poor choice. Future versions of gretl will include more options on covariance matrix estimation.

One of the consequences of this state of affairs is that replicating various well-known published studies may be extremely difficult. Any non-trivial result is virtually impossible to reproduce unless all details of the estimation procedure are carefully recorded.

Chapter 19
Model selection criteria

19.1 Introduction

In some contexts the econometrician chooses between alternative models based on a formal hypothesis test.
For example, one might choose a more general model over a more restricted one if the restriction in question can be formulated as a testable null hypothesis, and the null is rejected on an appropriate test.

In other contexts one sometimes seeks a criterion for model selection that somehow measures the balance between goodness of fit or likelihood, on the one hand, and parsimony on the other. The balancing is necessary because the addition of extra variables to a model cannot reduce the degree of fit or likelihood, and is very likely to increase it somewhat even if the additional variables are not truly relevant to the data-generating process.

The best known such criterion, for linear models estimated via least squares, is the adjusted $R^2$,

$$\bar{R}^2 = 1 - \frac{SSR/(n-k)}{TSS/(n-1)}$$

where $n$ is the number of observations in the sample, $k$ denotes the number of parameters estimated, and $SSR$ and $TSS$ denote the sum of squared residuals and the total sum of squares for the dependent variable, respectively. Compared to the ordinary coefficient of determination or unadjusted $R^2$,

$$R^2 = 1 - \frac{SSR}{TSS}$$

the "adjusted" calculation penalizes the inclusion of additional parameters, other things equal.

19.2 Information criteria

A more general criterion in a similar spirit is Akaike's (1974) "Information Criterion" (AIC). The original formulation of this measure is

$$\mathrm{AIC} = -2\ell(\hat{\theta}) + 2k \qquad (19.1)$$

where $\ell(\hat{\theta})$ represents the maximum loglikelihood as a function of the vector of parameter estimates, $\hat{\theta}$, and $k$ (as above) denotes the number of "independently adjusted parameters within the model." In this formulation, with AIC negatively related to the likelihood and positively related to the number of parameters, the researcher seeks the minimum AIC.

The AIC can be confusing, in that several variants of the calculation are "in circulation." For example, Davidson and MacKinnon (2004) present a simplified version,

$$\mathrm{AIC} = \ell(\hat{\theta}) - k$$

which is just the original divided by −2: in this case, obviously, one wants to maximize AIC.

In the case of models estimated by least squares, the loglikelihood can be written as

$$\ell(\hat{\theta}) = -\frac{n}{2}(1 + \log 2\pi - \log n) - \frac{n}{2} \log SSR \qquad (19.2)$$

Substituting (19.2) into (19.1) we get

$$\mathrm{AIC} = n(1 + \log 2\pi - \log n) + n \log SSR + 2k$$

which can also be written as

$$\mathrm{AIC} = n \log\left( \frac{SSR}{n} \right) + 2k + n(1 + \log 2\pi) \qquad (19.3)$$

Some authors simplify the formula for the case of models estimated via least squares. For instance, William Greene writes

$$\mathrm{AIC}_G = \log\left( \frac{SSR}{n} \right) + \frac{2k}{n} \qquad (19.4)$$

This variant can be derived from (19.3) by dividing through by $n$ and subtracting the constant $1 + \log 2\pi$. That is, writing $\mathrm{AIC}_G$ for the version given by Greene, we have

$$\mathrm{AIC}_G = \frac{1}{n}\mathrm{AIC} - (1 + \log 2\pi)$$

Finally, Ramanathan gives a further variant:

$$\mathrm{AIC}_R = \left( \frac{SSR}{n} \right) e^{2k/n}$$

which is the exponential of the one given by Greene.

Gretl began by using the Ramanathan variant, but since version 1.3.1 the program has used the original Akaike formula (19.1), and more specifically (19.3) for models estimated via least squares.

Although the Akaike criterion is designed to favor parsimony, arguably it does not go far enough in that direction. For instance, if we have two nested models with $k-1$ and $k$ parameters respectively, and if the null hypothesis that parameter $k$ equals 0 is true, in large samples the AIC will nonetheless tend to select the less parsimonious model about 16 percent of the time (see Davidson and MacKinnon, 2004, chapter 15).
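To fix ideas, formula (19.3) is easy to verify in a script: run an OLS regression, compute the AIC by hand from the sum of squared residuals, and compare with the value gretl reports. The sketch below assumes the Ramanathan practice file data4-1 (shipped with gretl) with its variables price and sqft; any other regression would do just as well:

    # AIC "by hand" via (19.3), compared with the built-in accessor
    open data4-1
    ols price const sqft
    scalar n = $nobs
    scalar k = $ncoeff
    scalar aic = n*log($ess/n) + 2*k + n*(1 + log(2*$pi))
    printf "AIC by hand: %g, as reported by gretl: %g\n", aic, $aic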
An alternative to the AIC which avoids this problem is the Schwarz (1978) "Bayesian information criterion" (BIC). The BIC can be written (in line with Akaike's formulation of the AIC) as

  BIC = −2 ℓ(θ̂) + k log n

The multiplication of k by log n in the BIC means that the penalty for adding extra parameters grows with the sample size. This ensures that, asymptotically, one will not select a larger model over a correctly specified parsimonious model.

A further alternative to AIC, which again tends to select more parsimonious models than AIC, is the Hannan–Quinn criterion or HQC (Hannan and Quinn, 1979). Written consistently with the formulations above, this is

  HQC = −2 ℓ(θ̂) + 2k log log n

The Hannan–Quinn calculation is based on the law of the iterated logarithm (note that the last term is the log of the log of the sample size). The authors argue that their procedure provides a "strongly consistent estimation procedure for the order of an autoregression", and that "compared to other strongly consistent procedures this procedure will underestimate the order to a lesser degree."

Gretl reports the AIC, BIC and HQC (calculated as explained above) for most sorts of models. The key point in interpreting these values is to know whether they are calculated such that smaller values are better, or such that larger values are better. In gretl, smaller values are better: one wants to minimize the chosen criterion.

Chapter 20  Time series models

20.1 Introduction

Time series models are discussed in this chapter and the next. In this chapter we concentrate on ARIMA models, unit root tests, and GARCH. The following chapter deals with cointegration and error correction.

20.2 ARIMA models

Representation and syntax

The arma command performs estimation of AutoRegressive, Integrated, Moving Average (ARIMA) models. These are models that can be written in the form

  φ(L) y_t = θ(L) ε_t                                            (20.1)

where φ(L) and θ(L) are polynomials in the lag operator, L, defined such that L^n x_t = x_{t−n}, and ε_t is a white noise process. The exact content of y_t, of the AR polynomial φ(), and of the MA polynomial θ() will be explained in what follows.

Mean terms

The process y_t as written in equation (20.1) has, without further qualifications, mean zero. If the model is to be applied to real data, it is necessary to include some term to handle the possibility that y_t has a non-zero mean. There are two possible ways to represent processes with non-zero mean: one is to define μ_t as the unconditional mean of y_t, namely the central value of its marginal distribution. The series ỹ_t = y_t − μ_t then has mean 0, and the model (20.1) applies to ỹ_t. In practice, assuming that μ_t is a linear function of some observable variables x_t, the model becomes

  φ(L)(y_t − x_t β) = θ(L) ε_t                                   (20.2)

This is sometimes known as a "regression model with ARMA errors"; its structure may be more apparent if we represent it using two equations:

  y_t = x_t β + u_t
  φ(L) u_t = θ(L) ε_t

The model just presented is also sometimes known as "ARMAX" (ARMA + eXogenous variables). It seems to us, however, that this label is more appropriately applied to a different model: another way to include a mean term in (20.1) is to base the representation on the conditional mean of y_t, that is, the central value of the distribution of y_t given its own past.
Assuming, again, that this can be represented as a linear combination of some observable variables z_t, the model would expand to

  φ(L) y_t = z_t γ + θ(L) ε_t                                    (20.3)

The formulation (20.3) has the advantage that γ can be immediately interpreted as the vector of marginal effects of the z_t variables on the conditional mean of y_t. And by adding lags of z_t to this specification one can estimate Transfer Function models (which generalize ARMA by adding the effects of exogenous variables distributed across time).

Gretl provides a way to estimate both forms. Models written as in (20.2) are estimated by maximum likelihood; models written as in (20.3) are estimated by conditional maximum likelihood. (For more on these options see the section on "Estimation" below.)

In the special case when x_t = z_t = 1 (that is, the model includes a constant but no exogenous variables) the two specifications discussed above reduce to

  φ(L)(y_t − μ) = θ(L) ε_t                                       (20.4)

and

  φ(L) y_t = α + θ(L) ε_t                                        (20.5)

respectively. These formulations are essentially equivalent, but if they represent one and the same process, μ and α are, fairly obviously, not numerically identical; rather,

  α = (1 − φ_1 − ... − φ_p) μ

The gretl syntax for estimating (20.4) is simply

  arma p q ; y

The AR and MA lag orders, p and q, can be given either as numbers or as pre-defined scalars. The parameter μ can be dropped if necessary by appending the option --nc ("no constant") to the command. If estimation of (20.5) is needed, the switch --conditional must be appended to the command, as in

  arma p q ; y --conditional

Generalizing this principle to the estimation of (20.2) or (20.3), we get that

  arma p q ; y const x1 x2

would estimate the following model:

  y_t − x_t β = φ_1 (y_{t−1} − x_{t−1} β) + ... + φ_p (y_{t−p} − x_{t−p} β) + ε_t + θ_1 ε_{t−1} + ... + θ_q ε_{t−q}

where in this instance x_t β = β_0 + x_{t,1} β_1 + x_{t,2} β_2. Appending the --conditional switch, as in

  arma p q ; y const x1 x2 --conditional

would estimate the following model:

  y_t = x_t γ + φ_1 y_{t−1} + ... + φ_p y_{t−p} + ε_t + θ_1 ε_{t−1} + ... + θ_q ε_{t−q}

Ideally, the issue broached above could be made moot by writing a more general specification that nests the alternatives, that is

  φ(L)(y_t − x_t β) = z_t γ + θ(L) ε_t                           (20.6)

We would like to generalize the arma command so that the user could specify, for any estimation method, whether certain exogenous variables should be treated as x_t's or z_t's, but we're not yet at that point (and neither are most other software packages).

Seasonal models

A more flexible lag structure is desirable when analyzing time series that display strong seasonal patterns. Model (20.1) can be expanded to

  φ(L) Φ(L^s) y_t = θ(L) Θ(L^s) ε_t                              (20.7)

For such cases, a fuller form of the syntax is available, namely,

  arma p q ; P Q ; y

where p and q represent the non-seasonal AR and MA orders, and P and Q the seasonal orders. For example,

  arma 1 1 ; 1 1 ; y

would be used to estimate the following model:

  (1 − φL)(1 − ΦL^s)(y_t − μ) = (1 + θL)(1 + ΘL^s) ε_t

If y_t is a quarterly series (so that s = 4), the above equation can be written more explicitly as

  y_t − μ = φ(y_{t−1} − μ) + Φ(y_{t−4} − μ) − (φ·Φ)(y_{t−5} − μ) + ε_t + θ ε_{t−1} + Θ ε_{t−4} + (θ·Θ) ε_{t−5}

Such a model is known as a "multiplicative seasonal ARMA model".
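As with p and q, the seasonal orders can presumably be supplied as pre-defined scalars too, which is convenient when looping over specifications. A minimal sketch (the quarterly series y is hypothetical, and the assumption that named scalars are accepted for P and Q mirrors the statement made for p and q above):

  scalar p = 1
  scalar q = 1
  scalar P = 1
  scalar Q = 1
  arma p q ; P Q ; y    # equivalent to: arma 1 1 ; 1 1 ; y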
Gaps in the lag structure

The standard way to specify an ARMA model in gretl is via the AR and MA orders, p and q respectively. In this case all lags from 1 to the given order are included. In some cases one may wish to include only certain specific AR and/or MA lags. This can be done in either of two ways.

• One can construct a matrix containing the desired lags (positive integer values) and supply the name of this matrix in place of p or q.

• One can give a space-separated list of lags, enclosed in braces, in place of p or q.

The following code illustrates these options:

  matrix pvec = {1, 4}
  arma pvec 1 ; y
  arma {1 4} 1 ; y

Both forms above specify an ARMA model in which AR lags 1 and 4 are used (but not 2 and 3). This facility is available only for the non-seasonal component of the ARMA specification.

Differencing and ARIMA

The above discussion presupposes that the time series y_t has already been subjected to all the transformations deemed necessary for ensuring stationarity (see also section 20.3). Differencing is the most common of these transformations, and gretl provides a mechanism to include this step within the arma command: the syntax

  arma p d q ; y

would estimate an ARMA(p, q) model on Δ^d y_t. It is functionally equivalent to

  series tmp = y
  loop for i=1..d
    tmp = diff(tmp)
  endloop
  arma p q ; tmp

except with regard to forecasting after estimation (see below).

When the series y_t is differenced before performing the analysis the model is known as ARIMA ("I" for Integrated); for this reason, gretl provides the arima command as an alias for arma.

Seasonal differencing is handled similarly, with the syntax

  arma p d q ; P D Q ; y

where D is the order for seasonal differencing. Thus, the command

  arma 1 0 0 ; 1 1 1 ; y

would produce the same parameter estimates as

  genr dsy = sdiff(y)
  arma 1 0 ; 1 1 ; dsy

where we use the sdiff function to create a seasonal difference (e.g. for quarterly data, y_t − y_{t−4}).

In specifying an ARIMA model with exogenous regressors we face a choice which relates back to the discussion of the variant models (20.2) and (20.3) above. If we choose model (20.2), the "regression model with ARMA errors", how should this be extended to the case of ARIMA? The issue is whether or not the differencing that is applied to the dependent variable should also be applied to the regressors. Consider the simplest case, ARIMA with non-seasonal differencing of order 1. We may estimate either

  φ(L)(1 − L)(y_t − X_t β) = θ(L) ε_t                            (20.8)

or

  φ(L)[(1 − L)y_t − X_t β] = θ(L) ε_t                            (20.9)

The first of these formulations can be described as a regression model with ARIMA errors, while the second preserves the levels of the X variables. As of gretl version 1.8.6, the default model is (20.8), in which differencing is applied to both y_t and X_t. However, when using the default estimation method (native exact ML, see below), the option --y-diff-only may be given, in which case gretl estimates (20.9). (Prior to gretl 1.8.6 the default model was (20.9); we changed this for the sake of consistency with other software.)

Estimation

The default estimation method for ARMA models is exact maximum likelihood estimation (under the assumption that the error term is normally distributed), using the Kalman filter in conjunction with the BFGS maximization algorithm. The gradient of the log-likelihood with respect to the parameter estimates is approximated numerically. This method produces results that are directly comparable with many other software packages. The constant, and any exogenous variables, are treated as in equation (20.2). The covariance matrix for the parameters is computed using a numerical approximation to the Hessian at convergence.
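To make the differencing syntax concrete, here is a sketch of the classic "airline" specification, ARIMA(0,1,1)(0,1,1). We assume the Box–Jenkins airline passenger data are available as the sample file bjg, with lg the log of the series; any monthly series could be substituted.

  open bjg
  # first and seasonal differencing, with MA terms at lag 1 and lag s,
  # estimated by the default exact-ML method
  arima 0 1 1 ; 0 1 1 ; lg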
The alternative method, invoked with the --conditional switch, is conditional maximum likelihood (CML), also known as "conditional sum of squares" — see Hamilton (1994, p. 132). This method was exemplified in script 9.3, and only a brief description will be given here. Given a sample of size T, the CML method minimizes the sum of squared one-step-ahead prediction errors generated by the model for the observations t_0, ..., T. The starting point t_0 depends on the orders of the AR polynomials in the model. The numerical maximization method used is BHHH, and the covariance matrix is computed using a Gauss–Newton regression.

The CML method is nearly equivalent to maximum likelihood under the hypothesis of normality; the difference is that the first (t_0 − 1) observations are considered fixed and only enter the likelihood function as conditioning variables. As a consequence, the two methods are asymptotically equivalent under standard conditions — except for the fact, discussed above, that our CML implementation treats the constant and exogenous variables as per equation (20.3).

The two methods can be compared as in the following example

  open data10-1
  arma 1 1 ; r
  arma 1 1 ; r --conditional

which produces the estimates shown in Table 20.1. As you can see, the estimates of φ and θ are quite similar. The reported constants differ widely, as expected — see the discussion following equations (20.4) and (20.5). However, dividing the CML constant by 1 − φ we get about 7.3, which is not far from the ML estimate of 6.93.

Table 20.1: ML and CML estimates

  Parameter        ML                       CML
  μ            6.93042  (0.923882)      1.07322  (0.488661)
  φ            0.855360 (0.0511842)     0.852772 (0.0450252)
  θ            0.588056 (0.0986096)     0.591838 (0.0456662)

Convergence and initialization

The numerical methods used to maximize the likelihood for ARMA models are not guaranteed to converge. Whether or not convergence is achieved, and whether or not the true maximum of the likelihood function is attained, may depend on the starting values for the parameters. Gretl employs one of the following two initialization mechanisms, depending on the specification of the model and the estimation method chosen.

1. Estimate a pure AR model by Least Squares (nonlinear least squares if the model requires it, otherwise OLS). Set the AR parameter values based on this regression and set the MA parameters to a small positive value (0.0001).

2. The Hannan–Rissanen method: first estimate an autoregressive model by OLS and save the residuals. Then in a second OLS pass add appropriate lags of the first-round residuals to the model, to obtain estimates of the MA parameters.

To see the details of the ARMA estimation procedure, add the --verbose option to the command. This prints a notice of the initialization method used, as well as the parameter values and loglikelihood at each iteration.

Besides the built-in initialization mechanisms, the user has the option of specifying a set of starting values manually. This is done via the set command: the first argument should be the keyword initvals and the second should be the name of a pre-specified matrix containing starting values. For example

  matrix start = { 0, 0.85, 0.34 }
  set initvals start
  arma 1 1 ; y
The specified matrix should have just as many parameters as the model: in the example above there are three parameters, since the model implicitly includes a constant. The constant, if present, is always given first; otherwise the order in which the parameters are expected is the same as the order of specification in the arma or arima command. In the example the constant is set to zero, φ_1 to 0.85, and θ_1 to 0.34.

You can get gretl to revert to automatic initialization via the command set initvals auto.

Two variants of the BFGS algorithm are available in gretl. In general we recommend the default variant, which is based on an implementation by J. C. Nash (1990), but for some problems the alternative, limited-memory version (L-BFGS-B, see Byrd et al., 1995) may increase the chances of convergence on the ML solution. This can be selected via the --lbfgs option to the arma command.

Estimation via X-12-ARIMA

As an alternative to estimating ARMA models using "native" code, gretl offers the option of using the external program X-12-ARIMA. This is the seasonal adjustment software produced and maintained by the U.S. Census Bureau; it is used for all official seasonal adjustments at the Bureau.

Gretl includes a module which interfaces with X-12-ARIMA: it translates arma commands using the syntax outlined above into a form recognized by X-12-ARIMA, executes the program, and retrieves the results for viewing and further analysis within gretl. To use this facility you have to install X-12-ARIMA separately. Packages for both MS Windows and GNU/Linux are available from the gretl website, http://gretl.sourceforge.net/.

To invoke X-12-ARIMA as the estimation engine, append the flag --x-12-arima, as in

  arma p q ; y --x-12-arima

As with native estimation, the default is to use exact ML but there is the option of using conditional ML with the --conditional flag. However, please note that when X-12-ARIMA is used in conditional ML mode, the comments above regarding the variant treatments of the mean of the process y_t do not apply. That is, when you use X-12-ARIMA the model that is estimated is (20.2), regardless of whether estimation is by exact ML or conditional ML. In addition, the treatment of exogenous regressors in the context of ARIMA differencing is always that shown in equation (20.8).

Forecasting

ARMA models are often used for forecasting purposes. The autoregressive component, in particular, offers the possibility of forecasting a process "out of sample" over a substantial time horizon.

Gretl supports forecasting on the basis of ARMA models using the method set out by Box and Jenkins (1976).² The Box and Jenkins algorithm produces a set of integrated AR coefficients which take into account any differencing of the dependent variable (seasonal and/or non-seasonal) in the ARIMA context, thus making it possible to generate a forecast for the level of the original variable. By contrast, if you first difference a series manually and then apply ARMA to the differenced series, forecasts will be for the differenced series, not the level. This point is illustrated in Example 20.1. The parameter estimates are identical for the two models. The forecasts differ but are mutually consistent: the variable fcdiff emulates the ARMA forecast (static, one step ahead within the sample range, and dynamic out of sample).

² See in particular their "Program 4" on p. 505ff.
20.3 Unit root tests

The ADF test

The Augmented Dickey–Fuller (ADF) test is, as implemented in gretl, the t-statistic on φ in the following regression:

  Δy_t = μ_t + φ y_{t−1} + Σ_{i=1}^{p} γ_i Δy_{t−i} + ε_t        (20.10)

This test statistic is probably the best-known and most widely used unit root test. It is a one-sided test whose null hypothesis is φ = 0, versus the alternative φ < 0. Under the null, y_t must be differenced at least once to achieve stationarity; under the alternative, y_t is already stationary and no differencing is required. Hence, large negative values of the test statistic lead to rejection of the null.

One peculiar aspect of this test is that its limit distribution is non-standard under the null hypothesis; moreover, the shape of the distribution, and consequently the critical values for the test, depends on the form of the μ_t term. A full analysis of the various cases is inappropriate here: Hamilton (1994) contains an excellent discussion, but any recent time series textbook covers this topic. Suffice it to say that gretl allows the user to choose the specification for μ_t among four different alternatives:

  μ_t                         command option
  0                           --nc
  μ_0                         --c
  μ_0 + μ_1 t                 --ct
  μ_0 + μ_1 t + μ_2 t²        --ctt

These options are not mutually exclusive; when they are used together the statistic is reported separately for each case. By default, gretl uses the combination --c --ct --ctt. For each case, approximate p-values are calculated by means of the algorithm developed in MacKinnon (1996).

The gretl command used to perform the test is adf; for example

  adf 4 x1 --c --ct

would compute the test statistic as the t-statistic for φ in equation (20.10) with p = 4, in the two cases μ_t = μ_0 and μ_t = μ_0 + μ_1 t.

The number of lags (p in equation 20.10) should be chosen so as to ensure that (20.10) is a parametrization flexible enough to represent adequately the short-run persistence of Δy_t. Setting p too low results in size distortions in the test, whereas setting p too high leads to low power. As a convenience to the user, the parameter p can be determined automatically: setting p to a negative number triggers a sequential procedure that starts with that many lags and decrements p until the t-statistic for the parameter γ_p exceeds 1.645 in absolute value.
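For example, to have gretl search downward from a maximum of 12 lags, in both the constant-only and constant-plus-trend cases (the series name is hypothetical):

  adf -12 y --c --ct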
The KPSS test

The KPSS test (Kwiatkowski, Phillips, Schmidt and Shin, 1992) is a unit root test in which the null hypothesis is opposite to that in the ADF test: under the null, the series in question is stationary; the alternative is that the series is I(1).

The basic intuition behind this test statistic is very simple: if y_t can be written as y_t = μ + u_t, where u_t is some zero-mean stationary process, then not only does the sample average of the y_t's provide a consistent estimator of μ, but the long-run variance of u_t is a well-defined, finite number. Neither of these properties hold under the alternative.

The test itself is based on the following statistic:

  η = [Σ_{t=1}^{T} S_t²] / (T² σ̄²)                               (20.11)

where S_t = Σ_{s=1}^{t} e_s and σ̄² is an estimate of the long-run variance of e_t = y_t − ȳ. Under the null, this statistic has a well-defined (nonstandard) asymptotic distribution, which is free of nuisance parameters and has been tabulated by simulation. Under the alternative, the statistic diverges.

As a consequence, it is possible to construct a one-sided test based on η, where H_0 is rejected if η is bigger than the appropriate critical value; gretl provides the 90%, 95%, 97.5% and 99% quantiles.

Usage example:

  kpss m y

where m is an integer representing the bandwidth or window size used in the formula for estimating the long-run variance:

  σ̄² = Σ_{i=−m}^{m} (1 − |i|/(m+1)) γ̂_i

The γ̂_i terms denote the empirical autocovariances of e_t from order −m through m. For this estimator to be consistent, m must be large enough to accommodate the short-run persistence of e_t, but not too large compared to the sample size T. In the GUI interface of gretl, this value defaults to the integer part of 4(T/100)^{1/4}.

The above concept can be generalized to the case where y_t is thought to be stationary around a deterministic trend. In this case, formula (20.11) remains unchanged, but the series e_t is defined as the residuals from an OLS regression of y_t on a constant and a linear trend. This second form of the test is obtained by appending the --trend option to the kpss command:

  kpss m y --trend

Note that in this case the asymptotic distribution of the test is different, and the critical values reported by gretl differ accordingly.

Cointegration tests

FIXME discuss Engle–Granger here, and refer forward to the next chapter for the Johansen tests.

20.4 ARCH and GARCH

Heteroskedasticity means a non-constant variance of the error term in a regression model. Autoregressive Conditional Heteroskedasticity (ARCH) is a phenomenon specific to time series models, whereby the variance of the error displays autoregressive behavior; for instance, the time series exhibits successive periods where the error variance is relatively large, and successive periods where it is relatively small. This sort of behavior is reckoned to be quite common in asset markets: an unsettling piece of news can lead to a period of increased volatility in the market.

An ARCH error process of order q can be represented as

  u_t = σ_t ε_t;   σ_t² ≡ E(u_t² | Ω_{t−1}) = α_0 + Σ_{i=1}^{q} α_i u²_{t−i}

where the ε_t's are independently and identically distributed (iid) with mean zero and variance 1, and where σ_t is taken to be the positive square root of σ_t². Ω_{t−1} denotes the information set as of time t − 1 and σ_t² is the conditional variance: that is, the variance conditional on information dated t − 1 and earlier.

It is important to notice the difference between ARCH and an ordinary autoregressive error process. The simplest (first-order) case of the latter can be written as

  u_t = ρ u_{t−1} + ε_t;   −1 < ρ < 1

where the ε_t's are independently and identically distributed with mean zero and variance σ². With an AR(1) error, if ρ is positive then a positive value of u_t will tend to be followed, with probability greater than 0.5, by a positive u_{t+1}. With an ARCH error process, a disturbance u_t of large absolute value will tend to be followed by further large absolute values, but with no presumption that the successive values will be of the same sign. ARCH in asset prices is a "stylized fact" and is consistent with market efficiency; on the other hand, autoregressive behavior of asset prices would violate market efficiency.

One can test for ARCH of order q in the following way (a script sketch is given after the list):

1. Estimate the model of interest via OLS and save the squared residuals, û_t².

2. Perform an auxiliary regression in which the current squared residual is regressed on a constant and q lags of itself.

3. Find the TR² value (sample size times unadjusted R²) for the auxiliary regression.

4. Refer the TR² value to the χ² distribution with q degrees of freedom, and if the p-value is "small enough" reject the null hypothesis of homoskedasticity in favor of the alternative of ARCH(q).
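The four steps can be scripted directly. A minimal sketch for ARCH(4), with hypothetical variable names; $uhat retrieves the residuals from the last model and $trsq the TR² from the auxiliary regression:

  ols y const x1 x2
  # step 1: squared residuals
  series usq = $uhat^2
  # step 2: auxiliary regression on a constant and 4 lags
  ols usq const usq(-1 to -4)
  # steps 3 and 4: TR^2 and its chi-square(4) p-value
  scalar TR2 = $trsq
  pvalue X 4 TR2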
This test is implemented in gretl via the arch command. This command may be issued following the estimation of a time-series model by OLS, or by selection from the "Tests" menu in the model window (again, following OLS estimation). The result of the test is reported and, if the TR² from the auxiliary regression has a p-value less than 0.10, ARCH estimates are also reported. These estimates take the form of Generalized Least Squares (GLS), specifically weighted least squares, using weights that are inversely proportional to the predicted variances of the disturbances, σ̂_t, derived from the auxiliary regression.

In addition, the ARCH test is available after estimating a vector autoregression (VAR). In this case, however, there is no provision to re-estimate the model via GLS.

GARCH

The simple ARCH(q) process is useful for introducing the general concept of conditional heteroskedasticity in time series, but it has been found to be insufficient in empirical work. The dynamics of the error variance permitted by ARCH(q) are not rich enough to represent the patterns found in financial data. The generalized ARCH or GARCH model is now more widely used.

The representation of the variance of a process in the GARCH model is somewhat (but not exactly) analogous to the ARMA representation of the level of a time series. The variance at time t is allowed to depend on both past values of the variance and past values of the realized squared disturbance, as shown in the following system of equations:

  y_t = X_t β + u_t                                              (20.12)
  u_t = σ_t ε_t                                                  (20.13)
  σ_t² = α_0 + Σ_{i=1}^{q} α_i u²_{t−i} + Σ_{j=1}^{p} δ_j σ²_{t−j}    (20.14)

As above, ε_t is an iid sequence with unit variance. X_t is a matrix of regressors (or in the simplest case, just a vector of 1s allowing for a non-zero mean of y_t). Note that if p = 0, GARCH collapses to ARCH(q): the generalization is embodied in the δ_j terms that multiply previous values of the error variance.

In principle the underlying innovation, ε_t, could follow any suitable probability distribution, and besides the obvious candidate of the normal or Gaussian distribution the t distribution has been used in this context. Currently gretl only handles the case where ε_t is assumed to be Gaussian. However, when the --robust option to the garch command is given, the estimator gretl uses for the covariance matrix can be considered Quasi-Maximum Likelihood even with non-normal disturbances. See below for more on the options regarding the GARCH covariance matrix.

Example:

  garch p q ; y const x

where p ≥ 0 and q > 0 denote the respective lag orders as shown in equation (20.14). These values can be supplied in numerical form or as the names of pre-defined scalar variables.

GARCH estimation

Estimation of the parameters of a GARCH model is by no means a straightforward task. (Consider equation 20.14: the conditional variance at any point in time, σ_t², depends on the conditional variance in earlier periods, but σ_t² is not observed, and must be inferred by some sort of Maximum Likelihood procedure.) Gretl uses the method proposed by Fiorentini, Calzolari and Panattoni (1996), which was adopted as a benchmark in the study of GARCH results by McCullough and Renfro (1998). (The algorithm is based on Fortran code deposited in the archive of the Journal of Applied Econometrics by the authors, and is used by kind permission of Professor Fiorentini.) It employs analytical first and second derivatives of the log-likelihood, and uses a mixed-gradient algorithm, exploiting the information matrix in the early iterations and then switching to the Hessian in the neighborhood of the maximum likelihood. (This progress can be observed if you append the --verbose option to gretl's garch command.)
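A minimal sketch, assuming the djclose sample file (Dow Jones closing prices) is at hand: estimate a GARCH(1,1) on percentage returns, requesting QML standard errors via the --robust option discussed above.

  open djclose
  series ret = 100 * ldiff(djclose)
  garch 1 1 ; ret const --robust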
Several options are available for computing the covariance matrix of the parameter estimates in connection with the garch command. At a first level, one can choose between a "standard" and a "robust" estimator. By default, the Hessian is used unless the --robust option is given, in which case the QML estimator is used. A finer choice is available via the set command, as shown in Table 20.2.

Table 20.2: Options for the GARCH covariance matrix

  command                  effect
  set garch_vcv hessian    Use the Hessian
  set garch_vcv im         Use the Information Matrix
  set garch_vcv op         Use the Outer Product of the Gradient
  set garch_vcv qml        QML estimator
  set garch_vcv bw         Bollerslev–Wooldridge "sandwich" estimator

It is not uncommon, when one estimates a GARCH model for an arbitrary time series, to find that the iterative calculation of the estimates fails to converge. For the GARCH model to make sense, there are strong restrictions on the admissible parameter values, and it is not always the case that there exists a set of values inside the admissible parameter space for which the likelihood is maximized.

The restrictions in question can be explained by reference to the simplest (and much the most common) instance of the GARCH model, where p = q = 1. In the GARCH(1, 1) model the conditional variance is

  σ_t² = α_0 + α_1 u²_{t−1} + δ_1 σ²_{t−1}                       (20.15)

Taking the unconditional expectation of (20.15) we get

  σ² = α_0 + α_1 σ² + δ_1 σ²

so that

  σ² = α_0 / (1 − α_1 − δ_1)

For this unconditional variance to exist, we require that α_1 + δ_1 < 1, and for it to be positive we require that α_0 > 0.

A common reason for non-convergence of GARCH estimates (that is, a common reason for the non-existence of α_i and δ_i values that satisfy the above requirements and at the same time maximize the likelihood of the data) is misspecification of the model. It is important to realize that GARCH, in itself, allows only for time-varying volatility in the data. If the mean of the series in question is not constant, or if the error process is not only heteroskedastic but also autoregressive, it is necessary to take this into account when formulating an appropriate model. For example, it may be necessary to take the first difference of the variable in question and/or to add suitable regressors, X_t, as in (20.12).

Example 20.1: ARIMA forecasting

  open greene18_2.gdt
  # log of quarterly U.S. nominal GNP, 1950:1 to 1983:4
  genr y = log(Y)
  # and its first difference
  genr dy = diff(y)
  # reserve 2 years for out-of-sample forecast
  smpl ; 1981:4
  # Estimate using ARIMA
  arima 1 1 1 ; y
  # forecast over full period
  smpl --full
  fcast fc1
  # Return to sub-sample and run ARMA on the first difference of y
  smpl ; 1981:4
  arma 1 1 ; dy
  smpl --full
  fcast fc2
  genr fcdiff = (t<=1982:1)? (fc1 - y(-1)) : (fc1 - fc1(-1))
  # compare the forecasts over the later period
  smpl 1981:1 1983:4
  print y fc1 fc2 fcdiff --byobs

The output from the last command is:

            y          fc1        fc2       fcdiff
  1981:1  7.964086   7.940930   0.02668   0.02668
  1981:2  7.978654   7.997576   0.03349   0.03349
  1981:3  8.009463   7.997503   0.01885   0.01885
  1981:4  8.015625   8.033695   0.02423   0.02423
  1982:1  8.014997   8.029698   0.01407   0.01407
  1982:2  8.026562   8.046037   0.01634   0.01634
  1982:3  8.032717   8.063636   0.01760   0.01760
  1982:4  8.042249   8.081935   0.01830   0.01830
  1983:1  8.062685   8.100623   0.01869   0.01869
  1983:2  8.091627   8.119528   0.01891   0.01891
  1983:3  8.115700   8.138554   0.01903   0.01903
  1983:4  8.140811   8.157646   0.01909   0.01909

Chapter 21  Forecasting

21.1 Introduction

In some econometric contexts forecasting is the prime objective: one wants estimates of the future values of certain variables to reduce the uncertainty attaching to current decision making. In other contexts, where real-time forecasting is not the focus, prediction may nonetheless be an important moment in the analysis. For example, out-of-sample prediction can provide a useful check on the validity of an econometric model. In other cases we are interested in questions of "what if": for example, how might macroeconomic outcomes have differed over a certain period if a different policy had been pursued? In the latter cases "prediction" need not be a matter of actually projecting into the future, but in any case it involves generating fitted values from a given model. The term "postdiction" might be more accurate but it is not commonly used; we tend to talk of prediction even when there is no true forecast in view.

This chapter offers an overview of the methods available within gretl for forecasting or prediction (whether forward in time or not) and explicates some of the finer points of the relevant commands.

21.2 Saving and inspecting fitted values

In the simplest case, the "predictions" of interest are just the (within-sample) fitted values from an econometric model. For the single-equation linear model, y_t = X_t β + u_t, these are ŷ_t = X_t β̂.

In command-line mode, the ŷ series can be retrieved, after estimating a model, using the accessor $yhat, as in

  series yh = $yhat

If the model in question takes the form of a system of equations, $yhat returns a matrix, each column of which contains the fitted values for a particular dependent variable. To extract the fitted series for, e.g., the dependent variable in the second equation, do

  matrix Yh = $yhat
  series yh2 = Yh[,2]

Having obtained a series of fitted values, you can use the fcstats function to produce a vector of statistics that characterize the accuracy of the predictions (see section 21.4 below).

The gretl GUI offers several ways of accessing and examining within-sample predictions. In the model display window the Save menu contains an item for saving fitted values, the Graphs menu allows plotting of fitted versus actual values, and the Analysis menu offers a display of actual, fitted and residual values.
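For instance, with hypothetical variable names, one might compute the accuracy statistics for the within-sample fit (the contents of the returned vector are detailed in section 21.4):

  ols y const x1 x2
  series yh = $yhat
  matrix st = fcstats(y, yh)
  print st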
21.3 The fcast command

The fcast command generates predictions based on the last estimated model. Several questions arise here: How to control the range over which predictions are generated? How to control the forecasting method (where a choice is available)? How to control the printing and/or saving of the results? Basic answers can be found in the Gretl Command Reference; we add some more details here.

The forecast range

The range defaults to the currently defined sample range. If this remains unchanged following estimation of the model in question, the forecast will be "within sample" and (with some qualifications noted below) it will essentially duplicate the information available via the retrieval of fitted values (see section 21.2 above).

A common situation is that a model is estimated over a given sample and then forecasts are wanted for a subsequent out-of-sample range. The simplest way to accomplish this is via the --out-of-sample option to fcast. For example, assuming we have a quarterly time-series dataset containing observations from 1980:1 to 2008:4, four of which are to be reserved for forecasting:

  # reserve the last 4 observations
  smpl 1980:1 2007:4
  ols y 0 xlist
  fcast --out-of-sample

This will generate a forecast from 2008:1 to 2008:4.

There are two other ways of adjusting the forecast range, offering finer control:

• Use the smpl command to adjust the sample range prior to invoking fcast.

• Use the optional startobs and endobs arguments to fcast (which should come right after the command word). These values set the forecast range independently of the sample range.

What if one wants to generate a true forecast that goes beyond the available data? In that case one can use the dataset command with the addobs parameter to add extra observations before forecasting. For example:

  # use the entire dataset, which ends in 2008:4
  ols y 0 xlist
  dataset addobs 4
  fcast 2009:1 2009:4

But this will work as stated only if the set of regressors in xlist does not contain any stochastic regressors other than lags of y. The dataset addobs command attempts to detect and extrapolate certain common deterministic variables (e.g., time trend, periodic dummy variables). In addition, lagged values of the dependent variable can be supported via a dynamic forecast (see below for discussion of the static/dynamic distinction). But "future" values of any other included regressors must be supplied before such a forecast is possible. Note that specific values in a series can be set directly by date, for example: x1[2009:1] = 120.5. Or, if the assumption of no change in the regressors is warranted, one can do something like this:

  loop t=2009:1..2009:4
    loop foreach i xlist
      $i[t] = $i[2008:4]
    endloop
  endloop

Static, dynamic and rolling forecasts

The distinction between static and dynamic forecasts applies only to dynamic models, i.e., those that feature one or more lags of the dependent variable. The simplest case is the AR(1) model,

  y_t = α_0 + α_1 y_{t−1} + ε_t                                  (21.1)

In some cases the presence of a lagged dependent variable is implicit in the dynamics of the error term, for example

  y_t = β + u_t
  u_t = ρ u_{t−1} + ε_t

which implies that

  y_t = (1 − ρ)β + ρ y_{t−1} + ε_t

Suppose we want to forecast y for period s using a dynamic model, say (21.1) for example. If we have data on y available for period s − 1 we could form a fitted value in the usual way: ŷ_s = α̂_0 + α̂_1 y_{s−1}. But suppose that data are available only up to s − 2. In that case we can apply the chain rule of forecasting:

  ŷ_{s−1} = α̂_0 + α̂_1 y_{s−2}
  ŷ_s = α̂_0 + α̂_1 ŷ_{s−1}

This is what is called a dynamic forecast. A static forecast, on the other hand, is simply a fitted value (even if it happens to be computed out-of-sample).
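Where both modes are feasible, fcast can be pointed at one or the other via its --static and --dynamic options. A sketch with a hypothetical AR(1):

  ols y const y(-1)
  fcast fs --static     # one-step-ahead, using actual lagged y
  fcast fd --dynamic    # chained, using forecast lagged y out of sample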
Printing and saving forecasts

To be written.

21.4 Univariate forecast evaluation statistics

Let y_t be the value of a variable of interest at time t and let f_t be a forecast of y_t. We define the forecast error as e_t = y_t − f_t. Given a series of T observations and associated forecasts we can construct several measures of the overall accuracy of the forecasts. Some commonly used measures are the Mean Error (ME), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). These are defined as follows:

  ME   = (1/T) Σ_{t=1}^{T} e_t
  MSE  = (1/T) Σ_{t=1}^{T} e_t²
  RMSE = √[ (1/T) Σ_{t=1}^{T} e_t² ]
  MAE  = (1/T) Σ_{t=1}^{T} |e_t|
  MPE  = (100/T) Σ_{t=1}^{T} e_t / y_t
  MAPE = (100/T) Σ_{t=1}^{T} |e_t| / y_t

A further relevant statistic is Theil's U (Theil, 1966), defined as the positive square root of

  U² = [ (1/T) Σ_{t=1}^{T−1} ((f_{t+1} − y_{t+1}) / y_t)² ] · [ (1/T) Σ_{t=1}^{T−1} ((y_{t+1} − y_t) / y_t)² ]^{−1}

The more accurate the forecasts, the lower the value of Theil's U, which has a minimum of 0.¹ This measure can be interpreted as the ratio of the RMSE of the proposed forecasting model to the RMSE of a naïve model which simply predicts y_{t+1} = y_t for all t. The naïve model yields U = 1; values less than 1 indicate an improvement relative to this benchmark and values greater than 1 a deterioration.

¹ This statistic is sometimes called U₂, to distinguish it from a related but different U defined in an earlier work by Theil (1961). It seems to be generally accepted that the later version of Theil's U is a superior statistic, so we ignore the earlier version here.
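Putting the pieces together, a sketch of out-of-sample evaluation (the quarterly dataset and variable names are hypothetical): estimate through 2005:4, forecast the remainder, then compute the statistics just defined via the fcstats function, whose return vector is described below.

  smpl ; 2005:4
  ols y const y(-1)
  fcast fhat --out-of-sample
  # evaluate over the held-out range
  smpl 2006:1 2008:4
  matrix st = fcstats(y, fhat)
  printf "RMSE = %g, Theil's U = %g\n", sqrt(st[2]), st[6]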
In addition, Theil (1966, pp. 33–36) proposed a decomposition of the MSE which can be useful in evaluating a set of forecasts. He showed that the MSE could be broken down into three non-negative components as follows:

  MSE = (f̄ − ȳ)² + (s_f − r s_y)² + (1 − r²) s_y²

where f̄ and ȳ are the sample means of the forecasts and the observations, s_f and s_y are the respective standard deviations (using T in the denominator), and r is the sample correlation between y and f. Dividing through by MSE we get

  (f̄ − ȳ)²/MSE + (s_f − r s_y)²/MSE + (1 − r²) s_y²/MSE = 1      (21.2)

Theil labeled the three terms on the left-hand side of (21.2) the bias proportion (U^M), regression proportion (U^R) and disturbance proportion (U^D), respectively. If y and f represent the in-sample observations of the dependent variable and the fitted values from a linear regression then the first two components, U^M and U^R, will be zero (apart from rounding error), and the entire MSE will be accounted for by the unsystematic part, U^D. In the case of out-of-sample prediction, however (or "prediction" over a sub-sample of the data used in the regression), U^M and U^R are not necessarily close to zero, although this is a desirable property for a forecast to have. U^M differs from zero if and only if the mean of the forecasts differs from the mean of the realizations, and U^R is non-zero if and only if the slope of a simple regression of the realizations on the forecasts differs from 1.

The above-mentioned statistics are printed as part of the output of the fcast command. They can also be retrieved in the form of a column vector using the function fcstats, which takes two series arguments corresponding to y and f. The vector returned is

  ( ME  MSE  MAE  MPE  MAPE  U  U^M  U^R  U^D )′

(Note that the RMSE is not included since it can easily be obtained given the MSE.) The series given as arguments to fcstats must not contain any missing values in the currently defined sample range; use the smpl command to adjust the range if needed.

21.5 Forecasts based on VAR models

To be written.

21.6 Forecasting from simultaneous systems

To be written.

Chapter 22  Cointegration and Vector Error Correction Models

22.1 Introduction

The twin concepts of cointegration and error correction have drawn a good deal of attention in macroeconometrics over recent years. The attraction of the Vector Error Correction Model (VECM) is that it allows the researcher to embed a representation of economic equilibrium relationships within a relatively rich time-series specification. This approach overcomes the old dichotomy between (a) structural models that faithfully represented macroeconomic theory but failed to fit the data, and (b) time-series models that were accurately tailored to the data but difficult if not impossible to interpret in economic terms.

The basic idea of cointegration relates closely to the concept of unit roots (see section 20.3). Suppose we have a set of macroeconomic variables of interest, and we find we cannot reject the hypothesis that some of these variables, considered individually, are non-stationary. Specifically, suppose we judge that a subset of the variables are individually integrated of order 1, or I(1). That is, while they are non-stationary in their levels, their first differences are stationary.

Given the statistical problems associated with the analysis of non-stationary data (for example, the threat of spurious regression), the traditional approach in this case was to take first differences of all the variables before proceeding with the analysis. But this can result in the loss of important information. It may be that while the variables in question are I(1) when taken individually, there exists a linear combination of the variables that is stationary without differencing, or I(0). (There could be more than one such linear combination.) That is, while the ensemble of variables may be "free to wander" over time, nonetheless the variables are "tied together" in certain ways. And it may be possible to interpret these ties, or cointegrating vectors, as representing equilibrium conditions.

For example, suppose we find some or all of the following variables are I(1): money stock, M, the price level, P, the nominal interest rate, R, and output, Y. According to standard theories of the demand for money, we would nonetheless expect there to be an equilibrium relationship between real balances, interest rate and output; for example

  m − p = γ_0 + γ_1 y + γ_2 r,   γ_1 > 0, γ_2 < 0

where lower-case variable names denote logs. In equilibrium, then,

  m − p − γ_1 y − γ_2 r = γ_0

Realistically, we should not expect this condition to be satisfied each period. We need to allow for the possibility of short-run disequilibrium. But if the system moves back towards equilibrium following a disturbance, it follows that the vector x = (m, p, y, r)′ is bound by a cointegrating vector β = (β_1, β_2, β_3, β_4)′, such that β′x is stationary (with a mean of γ_0). Furthermore, if equilibrium is correctly characterized by the simple model above, we have β_2 = −β_1, β_3 < 0 and β_4 > 0. These things are testable within the context of cointegration analysis.

There are typically three steps in this sort of analysis (a schematic script follows the list):

1. Test to determine the number of cointegrating vectors, the cointegrating rank of the system.

2. Estimate a VECM with the appropriate rank, but subject to no further restrictions.

3. Probe the interpretation of the cointegrating vectors as equilibrium conditions by means of restrictions on the elements of these vectors.
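Schematically, the three steps map onto gretl commands as follows (variable names and lag orders are hypothetical; the commands are detailed in the sections below). The restriction imposes β_1 = 1, β_2 = −1, in the spirit of the real-balances example above:

  coint2 4 m p y r          # step 1: determine the cointegration rank
  vecm 4 1 m p y r          # step 2: estimate a VECM with rank 1
  restrict                  # step 3: test restrictions on the vector
    b[1,1] = 1
    b[1,2] = -1
  end restrict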
The following sections expand on each of these points, giving further econometric details and explaining how to implement the analysis using gretl.

22.2 Vector Error Correction Models as representation of a cointegrated system

Consider a VAR of order p with a deterministic part given by μ_t (typically, a polynomial in time). One can write the n-variate process y_t as

  y_t = μ_t + A_1 y_{t−1} + A_2 y_{t−2} + ... + A_p y_{t−p} + ε_t        (22.1)

But since y_{t−i} ≡ y_{t−1} − (Δy_{t−1} + Δy_{t−2} + ... + Δy_{t−i+1}), we can re-write the above as

  Δy_t = μ_t + Π y_{t−1} + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + ε_t              (22.2)

where Π = Σ_{i=1}^{p} A_i − I and Γ_i = −Σ_{j=i+1}^{p} A_j. This is the VECM representation of (22.1).

The interpretation of (22.2) depends crucially on r, the rank of the matrix Π.

• If r = 0, the processes are all I(1) and not cointegrated.

• If r = n, then Π is invertible and the processes are all I(0).

• Cointegration occurs in between, when 0 < r < n and Π can be written as αβ′. In this case, y_t is I(1), but the combination z_t = β′y_t is I(0). If, for example, r = 1 and the first element of β was −1, then one could write z_t = −y_{1,t} + β_2 y_{2,t} + ... + β_n y_{n,t}, which is equivalent to saying that

  y_{1,t} = β_2 y_{2,t} + ... + β_n y_{n,t} − z_t

is a long-run equilibrium relationship: the deviations z_t may not be 0 but they are stationary. In this case, (22.2) can be written as

  Δy_t = μ_t + αβ′ y_{t−1} + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + ε_t            (22.3)

If β were known, then z_t would be observable and all the remaining parameters could be estimated via OLS. In practice, the procedure estimates β first and then the rest.

The rank of Π is investigated by computing the eigenvalues of a closely related matrix whose rank is the same as Π; this matrix is, however, by construction symmetric and positive semidefinite. As a consequence, all its eigenvalues are real and non-negative, and tests on the rank of Π can therefore be carried out by testing how many eigenvalues are 0.

If all the eigenvalues are significantly different from 0, then all the processes are stationary. If, on the contrary, there is at least one zero eigenvalue, then the y_t process is integrated, although some linear combination β′y_t might be stationary. At the other extreme, if no eigenvalues are significantly different from 0, then not only is the process y_t non-stationary, but the same holds for any linear combination β′y_t; in other words, no cointegration occurs.

Estimation typically proceeds in two stages: first, a sequence of tests is run to determine r, the cointegration rank. Then, for a given rank the parameters in equation (22.3) are estimated. The two commands that gretl offers for estimating these systems are coint2 and vecm, respectively.

The syntax for coint2 is

  coint2 p ylist [ ; xlist [ ; zlist ] ]

where p is the number of lags in (22.1); ylist is a list containing the y_t variables; xlist is an optional list of exogenous variables; and zlist is another optional list of exogenous variables whose effects are assumed to be confined to the cointegrating relationships.

The syntax for vecm is

  vecm p r ylist [ ; xlist [ ; zlist ] ]

where p is the number of lags in (22.1); r is the cointegration rank; and the lists ylist, xlist and zlist have the same interpretation as in coint2.

Both commands can be given specific options to handle the treatment of the deterministic component μ_t. These are discussed in the following section.
22.3 Interpretation of the deterministic components

Statistical inference in the context of a cointegrated system depends on the hypotheses one is willing to make on the deterministic terms, which leads to the famous "five cases." In equation (22.2), the term μ_t is usually understood to take the form

  μ_t = μ_0 + μ_1 · t

In order to have the model mimic as closely as possible the features of the observed data, there is a preliminary question to settle. Do the data appear to follow a deterministic trend? If so, is it linear or quadratic? Once this is established, one should impose restrictions on μ_0 and μ_1 that are consistent with this judgement.

For example, suppose that the data do not exhibit a discernible trend. This means that Δy_t is on average zero, so it is reasonable to assume that its expected value is also zero. Write equation (22.2) as

  Γ(L) Δy_t = μ_0 + μ_1 · t + α z_{t−1} + ε_t                    (22.4)

where z_t = β′y_t is assumed to be stationary and therefore to possess finite moments. Taking unconditional expectations, and writing m_z for the mean of z_t, we get

  0 = μ_0 + μ_1 · t + α m_z

Since the left-hand side does not depend on t, the restriction μ_1 = 0 is a safe bet. As for μ_0, there are just two ways to make the above expression true: either μ_0 = 0 with m_z = 0, or μ_0 equals −α m_z. The latter possibility is less restrictive in that the vector μ_0 may be non-zero, but it is constrained to be a linear combination of the columns of α. In that case, μ_0 can be written as α · c, and one may write (22.4) as

  Γ(L) Δy_t = α (β′ y_{t−1} + c) + ε_t

The long-run relationship therefore contains an intercept. This type of restriction is usually written

  α⊥′ μ_0 = 0

where α⊥ is the left null space of the matrix α.

An intuitive understanding of the issue can be gained by means of a simple example. Consider a series x_t which behaves as follows:

  x_t = m + x_{t−1} + ε_t

where m is a real number and ε_t is a white noise process: x_t is then a random walk with drift m. In the special case m = 0, the drift disappears and x_t is a pure random walk.

Consider now another process y_t, defined by

  y_t = k + x_t + u_t

where, again, k is a real number and u_t is a white noise process. Since u_t is stationary by definition, x_t and y_t cointegrate: that is, their difference

  z_t = y_t − x_t = k + u_t

is a stationary process. For k = 0, z_t is simple zero-mean white noise, whereas for k ≠ 0 the process z_t is white noise with a non-zero mean.

After some simple substitutions, the two equations above can be represented jointly as a VAR(1) system

  [ y_t ]   [ k + m ]   [ 0  1 ] [ y_{t−1} ]   [ u_t + ε_t ]
  [ x_t ] = [   m   ] + [ 0  1 ] [ x_{t−1} ] + [    ε_t    ]

or in VECM form

  [ Δy_t ]   [ k + m ]   [ −1 ]           [ y_{t−1} ]   [ u_t + ε_t ]
  [ Δx_t ] = [   m   ] + [  0 ] ( 1  −1 ) [ x_{t−1} ] + [    ε_t    ]

           = μ_0 + α z_{t−1} + η_t

where β = (1, −1)′ is the cointegration vector and α = (−1, 0)′ is the "loadings" or "adjustments" vector.

We are now ready to consider three possible cases:

1. m ≠ 0: In this case x_t is trended, as we just saw; it follows that y_t also follows a linear trend because on average it keeps at a fixed distance k from x_t. The vector μ_0 is unrestricted.

2. m = 0 and k ≠ 0: In this case, x_t is not trended and as a consequence neither is y_t. However, the mean distance between y_t and x_t is non-zero. The vector μ_0 is given by μ_0 = (k, 0)′, which is not null, and therefore the VECM shown above does have a constant term. The constant, however, is subject to the restriction that its second element must be 0. More generally, μ_0 is a multiple of the vector α.
Note that the VECM could also be written as

  [ Δy_t ]   [ −1 ]               [ y_{t−1} ]   [ u_t + ε_t ]
  [ Δx_t ] = [  0 ] ( 1  −1  −k ) [ x_{t−1} ] + [    ε_t    ]
                                  [    1    ]

which incorporates the intercept into the cointegration vector. This is known as the "restricted constant" case.

3. m = 0 and k = 0: This case is the most restrictive: clearly, neither x_t nor y_t are trended, and the mean distance between them is zero. The vector μ_0 is also 0, which explains why this case is referred to as "no constant."

In most cases, the choice between these three possibilities is based on a mix of empirical observation and economic reasoning. If the variables under consideration seem to follow a linear trend then we should not place any restriction on the intercept. Otherwise, the question arises of whether it makes sense to specify a cointegration relationship which includes a non-zero intercept. One example where this is appropriate is the relationship between two interest rates: generally these are not trended, but the VAR might still have an intercept because the difference between the two (the "interest rate spread") might be stationary around a non-zero mean (for example, because of a risk or liquidity premium).

The previous example can be generalized in three directions:

1. If a VAR of order greater than 1 is considered, the algebra gets more convoluted but the conclusions are identical.

2. If the VAR includes more than two endogenous variables the cointegration rank r can be greater than 1. In this case, α is a matrix with r columns, and the case with restricted constant entails the restriction that μ_0 should be some linear combination of the columns of α.

3. If a linear trend is included in the model, the deterministic part of the VAR becomes μ_0 + μ_1 t. The reasoning is practically the same as above except that the focus now centers on μ_1 rather than μ_0. The counterpart to the "restricted constant" case discussed above is a "restricted trend" case, such that the cointegration relationships include a trend but the first differences of the variables in question do not. In the case of an unrestricted trend, the trend appears in both the cointegration relationships and the first differences, which corresponds to the presence of a quadratic trend in the variables themselves (in levels).

In order to accommodate the five cases, gretl provides the following options to the coint2 and vecm commands:

  μ_t                            option flag    description
  0                              --nc           no constant
  μ_0, α⊥′μ_0 = 0                --rc           restricted constant
  μ_0                            (default)      unrestricted constant
  μ_0 + μ_1 t, α⊥′μ_1 = 0        --crt          constant + restricted trend
  μ_0 + μ_1 t                    --ct           constant + unrestricted trend

Note that for this command the above options are mutually exclusive. In addition, you have the option of using the --seasonal option, for augmenting μ_t with centered seasonal dummies. In each case, p-values are computed via the approximations by Doornik (1998).
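As a sketch of how the flags attach to the commands (using the denmark sample data, which are introduced in the next section), the same test can be run under different deterministic specifications and the results compared:

  open denmark
  coint2 2 LRM LRY IBO IDE --nc    # no constant
  coint2 2 LRM LRY IBO IDE --rc    # restricted constant
  coint2 2 LRM LRY IBO IDE --crt   # constant + restricted trend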
22.4 The Johansen cointegration tests

The two Johansen tests for cointegration are used to establish the rank of β; in other words, how many cointegrating vectors the system has. These are the "λ-max" test, for hypotheses on individual eigenvalues, and the "trace" test, for joint hypotheses. Suppose that the eigenvalues λ_i are sorted from largest to smallest. The null hypothesis for the "λ-max" test on the i-th eigenvalue is that λ_i = 0. The corresponding trace test, instead, considers the hypothesis λ_j = 0 for all j ≥ i.

The gretl command coint2 performs these two tests. The corresponding menu entry in the GUI is "Model, Time Series, Cointegration Test, Johansen".

As in the ADF test, the asymptotic distribution of the tests varies with the deterministic component μ_t one includes in the VAR (see section 22.3 above). The following code uses the denmark data file, supplied with gretl, to replicate Johansen's example found in his 1995 book:

  open denmark
  coint2 2 LRM LRY IBO IDE --rc --seasonal

In this case, the vector y_t in equation (22.2) comprises the four variables LRM, LRY, IBO, IDE. The number of lags equals p in (22.2) (that is, the number of lags of the model written in VAR form). Part of the output is reported below:

  Johansen test:
  Number of equations = 4
  Lag order = 2
  Estimation period: 1974:3 - 1987:3 (T = 53)
  Case 2: Restricted constant

  Rank   Eigenvalue   Trace test   p-value    Lmax test   p-value
    0     0.43317      49.144      [0.1284]    30.087     [0.0286]
    1     0.17758      19.057      [0.7833]    10.362     [0.8017]
    2     0.11279       8.6950     [0.7645]     6.3427    [0.7483]
    3     0.043411      2.3522     [0.7088]     2.3522    [0.7076]

Both the trace and λ-max tests accept the null hypothesis that the smallest eigenvalue is 0 (see the last row of the table), so we may conclude that the series are in fact non-stationary. However, some linear combination may be I(0), since the λ-max test rejects the hypothesis that the rank of Π is 0 (though the trace test gives less clear-cut evidence for this, with a p-value of 0.1284).

22.5 Identification of the cointegration vectors

The core problem in the estimation of equation (22.2) is to find an estimate of Π that has by construction rank r, so it can be written as Π = αβ′, where β is the matrix containing the cointegration vectors and α contains the "adjustment" or "loading" coefficients whereby the endogenous variables respond to deviation from equilibrium in the previous period.

Without further specification, the problem has multiple solutions (in fact, infinitely many). The parameters α and β are under-identified: if all columns of β are cointegration vectors, then any arbitrary linear combination of those columns is a cointegration vector too. To put it differently, if Π = α_0 β_0′ for specific matrices α_0 and β_0, then Π also equals (α_0 Q)(Q^{−1} β_0′) for any conformable non-singular matrix Q. In order to find a unique solution, it is therefore necessary to impose some restrictions on α and/or β. It can be shown that the minimum number of restrictions that is necessary to guarantee identification is r². Normalizing one coefficient per column to 1 (or −1, according to taste) is a trivial first step, which also helps in that the remaining coefficients can be interpreted as the parameters in the equilibrium relations, but this only suffices when r = 1.

The method that gretl uses by default is known as the "Phillips normalization", or "triangular representation". The starting point is writing β in partitioned form as

  β = [ β_1 ]
      [ β_2 ]

where β_1 is an r × r matrix and β_2 is (n − r) × r. Assuming that β_1 has full rank, β can be post-multiplied by β_1^{−1}, giving

  β̂ = [      I       ] = [  I ]
      [ β_2 β_1^{−1} ]   [ −B ]

The coefficients that gretl produces are β̂, with B known as the matrix of unrestricted coefficients.

(For comparison with other studies, you may wish to normalize β differently. Using the set command you can do "set vecm_norm diag" to select a normalization that simply scales the columns of the original β such that β_ij = 1 for i = j and i ≤ r, as used in the empirical section of Boswijk and Doornik (2004). Another alternative is "set vecm_norm first", which scales β such that the elements on the first row equal 1. To suppress normalization altogether, use "set vecm_norm none"; to return to the default, "set vecm_norm phillips".)
of r equilibrium relations as

  y_{1,t} = b_{1,r+1} y_{r+1,t} + · · · + b_{1,n} y_{n,t}
  y_{2,t} = b_{2,r+1} y_{r+1,t} + · · · + b_{2,n} y_{n,t}
   ⋮
  y_{r,t} = b_{r,r+1} y_{r+1,t} + · · · + b_{r,n} y_{n,t}

where the first r variables are expressed as functions of the remaining n − r.

1 For comparison with other studies, you may wish to normalize β differently. Using the set command you can do set vecm_norm diag to select a normalization that simply scales the columns of the original β such that β_{ij} = 1 for i = j and i ≤ r, as used in the empirical section of Boswijk and Doornik (2004). Another alternative is set vecm_norm first, which scales β such that the elements on the first row equal 1. To suppress normalization altogether, use set vecm_norm none. (To return to the default: set vecm_norm phillips.)

Although the triangular representation ensures that the statistical problem of estimating β is solved, the resulting equilibrium relationships may be difficult to interpret. In this case, the user may want to achieve identification by specifying manually the system of r² constraints that gretl will use to produce an estimate of β.

As an example, consider the money demand system presented in section 9.6 of Verbeek (2004). The variables used are m (the log of real money stock M1), infl (inflation), cpr (the commercial paper rate), y (log of real GDP) and tbr (the Treasury bill rate).² Estimation of β can be performed via the commands

open money.gdt
smpl 1954:1 1994:4
vecm 6 2 m infl cpr y tbr --rc

and the relevant portion of the output reads

Maximum likelihood estimates, observations 1954:1-1994:4 (T = 164)
Cointegration rank = 2
Case 2: Restricted constant

beta (cointegrating vectors, standard errors in parentheses)

         beta[,1]                beta[,2]
m         1.0000   (0.0000)       0.0000   (0.0000)
infl      0.0000   (0.0000)       1.0000   (0.0000)
cpr       0.56108  (0.10638)    -24.367    (4.2113)
y        -0.40446  (0.10277)     -0.91166  (4.0683)
tbr      -0.54293  (0.10962)     24.786    (4.3394)
const    -3.7483   (0.78082)     16.751   (30.909)

2 This data set is available in the verbeek data package; see http://gretl.sourceforge.net/gretl_data.html.

Interpretation of the coefficients of the cointegration matrix β would be easier if a meaning could be attached to each of its columns. This is possible by hypothesizing the existence of two long-run relationships: a money demand equation

  m = c₁ + β₁ infl + β₂ y + β₃ tbr

and a risk premium equation

  cpr = c₂ + β₄ infl + β₅ y + β₆ tbr

which imply that the cointegration matrix can be normalized as

       [ −1    0 ]
       [ β₁   β₄ ]
  β =  [  0   −1 ]
       [ β₂   β₅ ]
       [ β₃   β₆ ]
       [ c₁   c₂ ]

This renormalization can be accomplished by means of the restrict command, to be given after the vecm command or, in the graphical interface, by selecting the "Test, Linear Restrictions" menu entry. The syntax for entering the restrictions should be fairly obvious:³

restrict
  b[1,1] = -1
  b[1,3] = 0
  b[2,1] = 0
  b[2,3] = -1
end restrict

which produces

Cointegrating vectors (standard errors in parentheses)

         beta[,1]                  beta[,2]
m        -1.0000    (0.0000)        0.0000    (0.0000)
infl     -0.023026  (0.0054666)     0.041039  (0.027790)
cpr       0.0000    (0.0000)       -1.0000    (0.0000)
y         0.42545   (0.033718)     -0.037414  (0.17140)
tbr      -0.027790  (0.0045445)     1.0172    (0.023102)
const     3.3625    (0.25318)       0.68744   (1.2870)

3 Note that in this context we are bending the usual matrix indexation convention, using the leading index to refer to the column of β (the particular cointegrating vector). This is standard practice in the literature, and defensible insofar as it is the columns of β (the cointegrating relations or equilibrium errors) that are of primary interest.

22.6 Over-identifying restrictions

One purpose of imposing restrictions on a VECM system is simply to achieve identification. If these restrictions are simply normalizations, they are not testable and should have no effect on the maximized likelihood.
In addition, however, one may wish to formulate constraints on β and/or α that derive from the economic theory underlying the equilibrium relationships; substantive restrictions of this sort are then testable via a likelihood-ratio statistic. Gretl is capable of testing general linear restrictions of the form

  Rb vec(β) = q   (22.5)

and/or

  Ra vec(α) = 0   (22.6)

Note that the β restriction may be non-homogeneous (q ≠ 0) but the α restriction must be homogeneous. Nonlinear restrictions are not supported, and neither are restrictions that cross between β and α. In the case where r > 1 such restrictions may be in common across all the columns of β (or α), or may be specific to certain columns of these matrices. This is the case discussed in Boswijk (1995) and Boswijk and Doornik (2004, section 4.4).

The restrictions (22.5) and (22.6) may be written in explicit form as

  vec(β) = Hφ + h₀   (22.7)

and

  vec(α′) = Gψ   (22.8)

respectively, where φ and ψ are the free parameter vectors associated with β and α respectively. We may refer to the free parameters collectively as θ (the column vector formed by concatenating φ and ψ). Gretl uses this representation internally when testing the restrictions.

If the list of restrictions that is passed to the restrict command contains more constraints than necessary to achieve identification, then an LR test is performed; moreover, the restrict command can be given the --full switch, in which case full estimates for the restricted system are printed (including the Γ_i terms), and the system thus restricted becomes the "current model" for the purposes of further tests. Thus you are able to carry out cumulative tests, as in Chapter 7 of Johansen (1995).

Syntax

The full syntax for specifying the restriction is an extension of the one exemplified in the previous section. Inside a restrict . . . end restrict block, valid statements are of the form

  parameter linear combination = scalar

where a parameter linear combination involves a weighted sum of individual elements of β or α (but not both in the same combination); the scalar on the right-hand side must be 0 for combinations involving α, but can be any real number for combinations involving β. Below, we give a few examples of valid restrictions:

b[1,1] = 1.618
b[1,4] + 2*b[2,5] = 0
a[1,3] = 0
a[1,1] + a[1,2] = 0

A special syntax is reserved for the case when a certain constraint should be applied to all columns of β: in this case, one index is given for each b term, and the square brackets are dropped. Hence, the syntax

restrict
  b1 + b2 = 0
end restrict

corresponds to

       [  β₁₁    β₂₁ ]
  β =  [ −β₁₁   −β₂₁ ]
       [  β₁₃    β₂₃ ]
       [  β₁₄    β₂₄ ]

The same convention is used for α: when only one index is given for each a term, the restriction is presumed to apply to all r rows of α; in other words, the given variables are weakly exogenous. For instance, the formulation

restrict
  a3 = 0
  a4 = 0
end restrict

specifies that variables 3 and 4 do not respond to deviations from equilibrium in the previous period.
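To make this concrete, the following minimal sketch (reusing the denmark data from section 22.4; the rank and lag order simply follow the examples below, and the $lnl and $rlnl log-likelihood accessors are discussed shortly) tests such a weak-exogeneity restriction via the likelihood ratio, referring twice the difference in log-likelihoods to a χ²(2) distribution, since two coefficients are set to zero:

  open denmark
  vecm 2 1 LRM LRY IBO IDE --rc --seasonals
  scalar ll_u = $lnl            # unrestricted log-likelihood
  restrict
    a3 = 0
    a4 = 0
  end restrict
  scalar LR = 2*(ll_u - $rlnl)  # $rlnl: restricted log-likelihood
  pvalue X 2 LR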
Finally, a short-cut is available for setting up complex restrictions (but currently only in relation to β): you can specify Rb and q, as in Rb vec(β) = q, by giving the names of previously defined matrices. For example,

matrix I4 = I(4)
matrix vR = I4**(I4~zeros(4,1))
matrix vq = mshape(I4,16,1)
restrict
  R = vR
  q = vq
end restrict

which manually imposes the Phillips normalization on the β estimates for a system with cointegrating rank 4.

An example

Brand and Cassola (2004) propose a money demand system for the Euro area, in which they postulate three long-run equilibrium relationships:

  money demand                              m = β_l l + β_y y
  Fisher equation                           π = φ l
  expectations theory of interest rates     l = s

where m is real money demand, l and s are long- and short-term interest rates, y is output and π is inflation.⁴ (The names for these variables in the gretl data file are m_p, rl, rs, y and infl, respectively.)

The cointegration rank assumed by the authors is 3, and there are 5 variables, giving 15 elements in the β matrix. 3 × 3 = 9 restrictions are required for identification, and a just-identified system would have 15 − 9 = 6 free parameters. However, the postulated long-run relationships feature only three free parameters, so the over-identification rank is 3.

Example 22.1 replicates Table 4 on page 824 of the Brand and Cassola article.⁵ Note that we use the $lnl accessor after the vecm command to store the unrestricted log-likelihood, and the $rlnl accessor after restrict for its restricted counterpart.

The example continues in script 22.2, where we perform further testing to check whether (a) the income elasticity in the money demand equation is 1 (β_y = 1) and (b) the Fisher relation is homogeneous (φ = 1). Since the --full switch was given to the initial restrict command, additional restrictions can be applied without having to repeat the previous ones. (The second script contains a few printf commands, which are not strictly necessary, to format the output nicely.) It turns out that both of the additional hypotheses are rejected by the data, with p-values of 0.002 and 0.004.

4 A traditional formulation of the Fisher equation would reverse the roles of the variables in the second equation, but this detail is immaterial in the present context; moreover, the expectations theory of interest rates implies that the third equilibrium relationship should include a constant for the liquidity premium. However, since in this example the system is estimated with the constant term unrestricted, the liquidity premium gets merged into the system intercept and disappears from z_t.
5 Modulo what appear to be a few typos in the article.
Example 22.1: Estimation of a money demand system with constraints on β

Input:

open brand_cassola.gdt

# perform a few transformations
m_p = m_p*100
y = y*100
infl = infl/4
rs = rs/4
rl = rl/4

# replicate table 4, page 824
vecm 2 3 m_p infl rl rs y -q
genr ll0 = $lnl

restrict --full
  b[1,1] = 1
  b[1,2] = 0
  b[1,4] = 0
  b[2,1] = 0
  b[2,2] = 1
  b[2,4] = 0
  b[2,5] = 0
  b[3,1] = 0
  b[3,2] = 0
  b[3,3] = 1
  b[3,4] = -1
  b[3,5] = 0
end restrict
genr ll1 = $rlnl

Partial output:

Unrestricted loglikelihood (lu) = 116.60268
Restricted loglikelihood (lr) = 115.86451
2 * (lu - lr) = 1.47635
P(Chi-Square(3) > 1.47635) = 0.68774

beta (cointegrating vectors, standard errors in parentheses)

        beta[,1]              beta[,2]               beta[,3]
m_p      1.0000  (0.0000)      0.0000   (0.0000)      0.0000  (0.0000)
infl     0.0000  (0.0000)      1.0000   (0.0000)      0.0000  (0.0000)
rl       1.6108  (0.62752)    -0.67100  (0.049482)    1.0000  (0.0000)
rs       0.0000  (0.0000)      0.0000   (0.0000)     -1.0000  (0.0000)
y       -1.3304  (0.030533)    0.0000   (0.0000)      0.0000  (0.0000)

Example 22.2: Further testing of the money demand system

Input:

restrict
  b[1,5] = -1
end restrict
genr ll_uie = $rlnl

restrict
  b[2,3] = -1
end restrict
genr ll_hfh = $rlnl

# replicate table 5, page 824
printf "Testing zero restrictions in cointegration space:\n"
printf "  LR-test, rank = 3: chi^2(3) = %6.4f [%6.4f]\n", 2*(ll0-ll1), \
  pvalue(X, 3, 2*(ll0-ll1))
printf "Unit income elasticity: LR-test, rank = 3:\n"
printf "  chi^2(4) = %g [%6.4f]\n", 2*(ll0-ll_uie), \
  pvalue(X, 4, 2*(ll0-ll_uie))
printf "Homogeneity in the Fisher hypothesis:\n"
printf "  LR-test, rank = 3: chi^2(4) = %6.3f [%6.4f]\n", 2*(ll0-ll_hfh), \
  pvalue(X, 4, 2*(ll0-ll_hfh))

Output:

Testing zero restrictions in cointegration space:
  LR-test, rank = 3: chi^2(3) = 1.4763 [0.6877]
Unit income elasticity: LR-test, rank = 3:
  chi^2(4) = 17.2071 [0.0018]
Homogeneity in the Fisher hypothesis:
  LR-test, rank = 3: chi^2(4) = 15.547 [0.0037]

Another type of test that is commonly performed is the "weak exogeneity" test. In this context, a variable is said to be weakly exogenous if all coefficients on the corresponding row in the α matrix are zero. If this is the case, that variable does not adjust to deviations from any of the long-run equilibria and can be considered an autonomous driving force of the whole system. The code in Example 22.3 performs this test for each variable in turn, thus replicating the first column of Table 6 on page 825 of Brand and Cassola (2004). The results show that weak exogeneity might perhaps be accepted for the long-term interest rate and real GDP (p-values 0.07 and 0.08 respectively).

Identification and testability

One point regarding VECM restrictions that can be confusing at first is that identification (does the restriction identify the system?) and testability (is the restriction testable?) are quite separate matters. Restrictions can be identifying but not testable; less obviously, they can be testable but not identifying.

This can be seen quite easily in relation to a rank-1 system. The restriction β₁ = 1 is identifying (it pins down the scale of β) but, being a pure scaling, it is not testable. On the other hand, the restriction β₁ + β₂ = 0 is testable — the system with this requirement imposed will almost certainly have a lower maximized likelihood — but it is not identifying; it still leaves open the scale of β.
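In script terms the distinction looks like this (a hypothetical sketch: y1, y2 and y3 are placeholder series and the lag order is arbitrary). The first restrict block merely pins down the scale of β, while the second imposes a substantive constraint without identifying the system, a case that the default switching algorithm of section 22.7 below can accommodate:

  vecm 4 1 y1 y2 y3
  restrict              # identifying, but not testable
    b[1] = 1
  end restrict
  restrict              # testable, but not identifying
    b[1] + b[2] = 0
  end restrict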
Example 22.3: Testing for weak exogeneity

Input:

restrict
  a1 = 0
end restrict
ts_m = 2*(ll0 - $rlnl)
restrict
  a2 = 0
end restrict
ts_p = 2*(ll0 - $rlnl)
restrict
  a3 = 0
end restrict
ts_l = 2*(ll0 - $rlnl)
restrict
  a4 = 0
end restrict
ts_s = 2*(ll0 - $rlnl)
restrict
  a5 = 0
end restrict
ts_y = 2*(ll0 - $rlnl)

loop foreach i m p l s y --quiet
  printf "\Delta $i\t%6.3f [%6.4f]\n", ts_$i, pvalue(X, 6, ts_$i)
endloop

Output (variable, LR test, p-value):

\Delta m    18.111 [0.0060]
\Delta p    21.067 [0.0018]
\Delta l    11.819 [0.0661]
\Delta s    16.000 [0.0138]
\Delta y    11.335 [0.0786]

We said above that, for identification, the number of restrictions must be at least r², where r is the cointegrating rank. This is a necessary, but not a sufficient, condition. In fact, when r > 1 it can be quite tricky to assess whether a given set of restrictions is identifying. Gretl uses the method suggested by Doornik (1995), where identification is assessed via the rank of the information matrix. It can be shown that for restrictions of the sort (22.7) and (22.8) the information matrix has the same rank as the Jacobian matrix

  J(θ) = [ (I_p ⊗ β)G : (α ⊗ I_{p1})H ]

A sufficient condition for identification is that the rank of J(θ) equals the number of free parameters. The rank of this matrix is evaluated by examination of its singular values at a randomly selected point in the parameter space. For practical purposes we treat this condition as if it were both necessary and sufficient; that is, we disregard the special cases where identification could be achieved without this condition being met.⁶

6 See Boswijk and Doornik (2004, pp. 447–8) for discussion of this point.

22.7 Numerical solution methods

In general, the ML estimator for the restricted VECM problem has no closed-form solution, hence the maximum must be found via numerical methods.⁷ In some cases convergence may be difficult, and gretl provides several choices to solve the problem.

Switching and LBFGS

Two maximization methods are available in gretl. The default is the switching algorithm set out in Boswijk and Doornik (2004). The alternative is a limited-memory variant of the BFGS algorithm (LBFGS), using analytical derivatives. This is invoked using the --lbfgs flag with the restrict command.

The switching algorithm works by explicitly maximizing the likelihood at each iteration, with respect to φ̂, ψ̂ and Ω̂ (the covariance matrix of the residuals) in turn. This method shares a feature with the basic Johansen eigenvalues procedure, namely, it can handle a set of restrictions that does not fully identify the parameters. LBFGS, on the other hand, requires that the model be fully identified. When using LBFGS, therefore, you may have to supplement the restrictions of interest with normalizations that serve to identify the parameters. For example, one might use all or part of the Phillips normalization (see section 22.5).

Neither the switching algorithm nor LBFGS is guaranteed to find the global ML solution.⁸ The optimizer may end up at a local maximum (or, in the case of the switching algorithm, at a saddle point). The solution (or lack thereof) may be sensitive to the initial value selected for θ. By default, gretl selects a starting point using a deterministic method based on Boswijk (1995), but two further options are available: the initialization may be adjusted using simulated annealing, or the user may supply an explicit initial value for θ.

The default initialization method is:

1. Calculate the unrestricted ML estimate β̂ using the Johansen procedure.
2. If the restriction on β is non-homogeneous, use the method proposed by Boswijk (1995):

     φ₀ = −[(I_r ⊗ β̂⊥)′ H]⁺ (I_r ⊗ β̂⊥)′ h₀   (22.9)

   where β̂⊥′ β̂ = 0 and A⁺ denotes the Moore–Penrose inverse of A. Otherwise

     φ₀ = (H′H)⁻¹ H′ vec(β̂)   (22.10)

3. vec(β₀) = Hφ₀ + h₀.

4. Calculate the unrestricted ML estimate of α conditional on β₀, as per Johansen:

     α̂ = S₀₁ β₀ (β₀′ S₁₁ β₀)⁻¹   (22.11)

5. If α is restricted by vec(α′) = Gψ, then ψ₀ = (G′G)⁻¹ G′ vec(α̂′) and vec(α₀′) = Gψ₀.

7 The exception is restrictions that are homogeneous, common to all β or all α (in case r > 1), and involve either β only or α only. Such restrictions are handled via the modified eigenvalues method set out by Johansen (1995). We solve directly for the ML estimator, without any need for iterative methods.
8 In developing gretl's VECM-testing facilities we have considered a fair number of "tricky cases" from various sources. We'd like to thank Luca Fanelli of the University of Bologna and Sven Schreiber of Goethe University Frankfurt for their help in devising torture-tests for gretl's VECM code.

Alternative initialization methods

As mentioned above, gretl offers the option of adjusting the initialization using simulated annealing. This is invoked by adding the --jitter option to the restrict command.

The basic idea is this: we start at a certain point in the parameter space, and for each of n iterations (currently n = 4096) we randomly select a new point within a certain radius of the previous one, and determine the likelihood at the new point. If the likelihood is higher, we jump to the new point; otherwise, we jump with probability P (and remain at the previous point with probability 1 − P). As the iterations proceed, the system gradually "cools" — that is, the radius of the random perturbation is reduced, as is the probability of making a jump when the likelihood fails to increase.

In the course of this procedure many points in the parameter space are evaluated, starting with the point arrived at by the deterministic method, which we'll call θ₀. One of these points will be "best" in the sense of yielding the highest likelihood: call it θ*. This point may or may not have a greater likelihood than θ₀. And the procedure has an end point, θₙ, which may or may not be "best". The rule followed by gretl in selecting an initial value for θ based on simulated annealing is this: use θ* if it yields a higher likelihood than θ₀, otherwise use θₙ. That is, if we get an improvement in the likelihood via annealing, we make full use of it; on the other hand, if we fail to get an improvement we nonetheless allow the annealing to randomize the starting point. Experiments indicated that the latter effect can be helpful.

Besides annealing, a further alternative is manual initialization. This is done by passing a predefined vector to the set command with parameter initvals, as in

set initvals myvec

The details depend on whether the switching algorithm or LBFGS is used. For the switching algorithm, there are two options for specifying the initial values. The more user-friendly one (for most people, we suppose) is to specify a matrix that contains vec(β) followed by vec(α). For example:

open denmark.gdt
vecm 2 1 LRM LRY IBO IDE --rc --seasonals
matrix BA = {1, -1, 6, -6, -6, -0.2, 0.1, 0.02, 0.03}
set initvals BA
restrict
  b[1] = 1
  b[1] + b[2] = 0
  b[3] + b[4] = 0
end restrict

In this example — from Johansen (1995) — the cointegration rank is 1 and there are 4 variables.
However, the model includes a restricted constant (the --rc flag) so that β has 5 elements. The α matrix has 4 elements, one per equation. So the matrix BA may be read as

  (β₁, β₂, β₃, β₄, β₅, α₁, α₂, α₃, α₄)

The other option, which is compulsory when using LBFGS, is to specify the initial values in terms of the free parameters, φ and ψ. Getting this right is somewhat less obvious. As mentioned above, the implicit-form restriction R vec(β) = q has explicit form vec(β) = Hφ + h₀, where H = R⊥, the right nullspace of R. The vector φ is shorter, by the number of restrictions, than vec(β). The savvy user will then see what needs to be done. The other point to take into account is that if α is unrestricted, the effective length of ψ is 0, since it is then optimal to compute α using Johansen's formula, conditional on β (equation 22.11 above). The example above could be rewritten as:

open denmark.gdt
vecm 2 1 LRM LRY IBO IDE --rc --seasonals
matrix phi = {-8, -6}
set initvals phi
restrict --lbfgs
  b[1] = 1
  b[1] + b[2] = 0
  b[3] + b[4] = 0
end restrict

In this more economical formulation the initializer specifies only the two free parameters in φ (5 elements in β minus 3 restrictions). There is no call to give values for ψ, since α is unrestricted.

Scale removal

Consider a simpler version of the restriction discussed in the previous section, namely,

restrict
  b[1] = 1
  b[1] + b[2] = 0
end restrict

This restriction comprises a substantive, testable requirement — that β₁ and β₂ sum to zero — and a normalization or scaling, β₁ = 1. The question arises, might it be easier and more reliable to maximize the likelihood without imposing β₁ = 1?⁹ If so, we could record this normalization, remove it for the purpose of maximizing the likelihood, then reimpose it by scaling the result.

Unfortunately it is not possible to say in advance whether "scale removal" of this sort will give better results for any particular estimation problem. However, this does seem to be the case more often than not. Gretl therefore performs scale removal where feasible, unless you

• explicitly forbid this, by giving the --no-scaling option flag to the restrict command; or
• provide a specific vector of initial values; or
• select the LBFGS algorithm for maximization.

Scale removal is deemed infeasible if there are any cross-column restrictions on β, or any non-homogeneous restrictions involving more than one element of β. In addition, experimentation has suggested to us that scale removal is inadvisable if the system is just identified with the normalization(s) included, so we do not do it in that case. By "just identified" we mean that the system would not be identified if any of the restrictions were removed. On that criterion the above example is not just identified, since the removal of the second restriction would not affect identification; and gretl would in fact perform scale removal in this case unless the user specified otherwise.

9 As a numerical matter, that is. In principle this should make no difference.
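For instance, to re-run the restriction above while explicitly forbidding scale removal, one would just add the flag (a fragment, assuming a VECM has just been estimated):

  restrict --no-scaling
    b[1] = 1
    b[1] + b[2] = 0
  end restrict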
Chapter 23
The Kalman Filter

23.1 Preamble

The Kalman filter has been used "behind the scenes" in gretl for quite some time, in computing ARMA estimates. But user access to the Kalman filter is new and it has not yet been tested to any great extent. We have run some tests of relatively simple cases against the benchmark of SsfPack Basic. This is state-space software written by Koopman, Shephard and Doornik and documented in Koopman et al. (1999). It requires Doornik's ox program. Both ox and SsfPack are available as free downloads for academic use but neither is open-source; see http://www.ssfpack.com. Since Koopman is one of the leading researchers in this area, presumably the results from SsfPack are generally reliable. To date we have been able to replicate the SsfPack results in gretl with a high degree of precision. We welcome both success reports and bug reports.

23.2 Notation

It seems that in econometrics everyone is happy with y = Xβ + u, but we can't, as a community, make up our minds on a standard notation for state-space models. Harvey (1989), Hamilton (1994), Harvey and Proietti (2005) and Pollock (1999) all use different conventions. The notation used here is based on James Hamilton's, with slight variations. A state-space model can be written as

  ξ_{t+1} = F_t ξ_t + v_t   (23.1)
  y_t = A_t′ x_t + H_t′ ξ_t + w_t   (23.2)

where (23.1) is the state transition equation and (23.2) is the observation or measurement equation. The state vector, ξ_t, is (r × 1) and the vector of observables, y_t, is (n × 1); x_t is a (k × 1) vector of exogenous variables. The (r × 1) vector v_t and the (n × 1) vector w_t are assumed to be vector white noise:

  E(v_t v_s′) = Q_t for t = s, otherwise 0
  E(w_t w_s′) = R_t for t = s, otherwise 0

The number of time-series observations will be denoted by T. In the special case when F_t = F, H_t = H, A_t = A, Q_t = Q and R_t = R, the model is said to be time-invariant.

The Kalman recursions

Using this notation, and assuming for the moment that v_t and w_t are mutually independent, the Kalman recursions can be written as follows. Initialization is via the unconditional mean and variance of ξ_1:

  ξ̂_{1|0} = E(ξ_1)
  P_{1|0} = E{ [ξ_1 − E(ξ_1)] [ξ_1 − E(ξ_1)]′ }

Usually these are given by ξ̂_{1|0} = 0 and

  vec(P_{1|0}) = [I_{r²} − F ⊗ F]⁻¹ vec(Q)   (23.3)

but see below for further discussion of the initial variance.

Iteration then proceeds in two steps.¹ First we update the estimate of the state,

  ξ̂_{t+1|t} = F_t ξ̂_{t|t−1} + K_t e_t   (23.4)

where e_t is the prediction error for the observable,

  e_t = y_t − A_t′ x_t − H_t′ ξ̂_{t|t−1}

and K_t is the gain matrix, given by

  K_t = F_t P_{t|t−1} H_t Σ_t⁻¹   (23.5)

with

  Σ_t = H_t′ P_{t|t−1} H_t + R_t

The second step then updates the estimate of the variance of the state using

  P_{t+1|t} = F_t P_{t|t−1} F_t′ − K_t Σ_t K_t′ + Q_t   (23.6)

Cross-correlated disturbances

The formulation given above assumes mutual independence of the disturbances in the state and observation equations, v_t and w_t. This assumption holds good in many practical applications, but a more general formulation allows for cross-correlation. In place of (23.1)–(23.2) we may write

  ξ_{t+1} = F_t ξ_t + B_t ε_t
  y_t = A_t′ x_t + H_t′ ξ_t + C_t ε_t

where ε_t is a (p × 1) disturbance vector, all the elements of which have unit variance, B_t is (r × p) and C_t is (n × p).

The no-correlation case is nested thus: define v*_t and w*_t as modified versions of v_t and w_t, scaled such that each element has unit variance, and let

  ε_t = [ v*_t ]
        [ w*_t ]

so that p = r + n. Then (suppressing time subscripts for simplicity) let

  B = [ Γ_{r×r} : 0_{r×n} ],   C = [ 0_{n×r} : Λ_{n×n} ]

where Γ and Λ are lower triangular matrices satisfying Q = ΓΓ′ and R = ΛΛ′ respectively. The zero sub-matrices in the above expressions for B and C produce the case of mutual independence; this corresponds to the condition BC′ = 0. In the general case p is not necessarily equal to r + n, and BC′ may be non-zero.
This means that the Kalman gain equation (23.5) must be modified as

  K_t = (F_t P_{t|t−1} H_t + B_t C_t′) Σ_t⁻¹   (23.7)

Otherwise, the equations given earlier hold good, if we write BB′ in place of Q and CC′ in place of R. In the account of gretl's Kalman facility below we take the uncorrelated case as the baseline, but add remarks on how to handle the correlated case where applicable.

1 For a justification of the following formulae see the classic book by Anderson and Moore (1979) or, for a more modern treatment, Pollock (1999) or Hamilton (1994). A transcription of R. E. Kalman's original paper of 1960 is available at http://www.cs.unc.edu/~welch/kalman/kalmanPaper.html.

23.3 Intended usage

The Kalman filter can be used in three ways: two of these are the classic forward and backward pass, or filtering and smoothing respectively; the third use is simulation. In the filtering/smoothing case you have the data y_t and you want to reconstruct the states ξ_t (and the forecast errors as a by-product); in the simulation case the apparatus is run in reverse: given artificially generated series v_t and w_t, it generates the states ξ_t (and the observables y_t as a by-product).

The usefulness of the classical filter is well known; the usefulness of the Kalman filter as a simulation tool may be huge too. Think for instance of Monte Carlo experiments, simulation-based inference — see Gourieroux and Monfort (1996) — or Bayesian methods, especially in the context of the estimation of DSGE models.

23.4 Overview of syntax

Using the Kalman filter in gretl is a two-step process. First you set up your filter, using a block of commands starting with kalman and ending with end kalman — much like the gmm command. Then you invoke the functions kfilter, ksmooth or ksimul to do the actual work. The next two sections expand on these points.

23.5 Defining the filter

Each line within the kalman . . . end kalman block takes the form

  keyword value

where keyword represents a matrix, as shown below.

  Keyword     Symbol      Dimensions
  obsy        y           T × n
  obsymat     H           r × n
  obsx        x           T × k
  obsxmat     A           k × n
  obsvar      R           n × n
  statemat    F           r × r
  statevar    Q           r × r
  inistate    ξ̂_{1|0}    r × 1
  inivar      P_{1|0}     r × r

For the data matrices y and x the corresponding value may be the name of a predefined matrix, the name of a data series, or the name of a list of series.²

For the other inputs, value may be the name of a predefined matrix or, if the input in question happens to be (1 × 1), the name of a scalar variable or a numerical constant. If the value of a coefficient matrix is given as the name of a matrix or scalar variable, the input is not "hard-wired" into the Kalman structure; rather, a record is made of the name of the variable and on each run of a Kalman function (as described below) its value is re-read. It is therefore possible to write one kalman block and then do several filtering or smoothing passes using different sets of coefficients.³ An example of this technique is provided later, in example scripts 23.1 and 23.2.

2 Note that the data matrices obsy and obsx have T rows. That is, the column vectors y_t and x_t in (23.1) and (23.2) are in fact the transposes of the t-dated rows of the full matrices.
3 Note, however, that the dimensions of the various input matrices are defined via the initial kalman set-up and it is an error if any of the matrices are changed in size.
This facility to alter the values of the coefficients between runs of the filter is to be distinguished from the case of time-varying matrices, which is discussed below.

Not all of the above-mentioned inputs need be specified in every case; some are optional. (In addition, you can specify the matrices in any order.) The mandatory elements are y, H, F and Q, so the minimal kalman block looks like this:

kalman
  obsy y
  obsymat H
  statemat F
  statevar Q
end kalman

The optional matrices are listed below, along with the implication of omitting the given matrix.

  Keyword     If omitted. . .
  obsx        no exogenous variables in observation equation
  obsxmat     no exogenous variables in observation equation
  obsvar      no disturbance term in observation equation
  inistate    ξ̂_{1|0} is set to a zero vector
  inivar      P_{1|0} is set automatically

It might appear that the obsx (x) and obsxmat (A) matrices must go together — either both are given or neither is given. But an exception is granted for convenience. If the observation equation includes a constant but no additional exogenous variables, you can give a (1 × n) value for A without having to specify obsx. More generally, if the row dimension of A is 1 greater than the column dimension of x, it is assumed that the first element of A is associated with an implicit column of 1s.

Regarding the automatic initialization of P_{1|0} (in case no inivar input is given): by default this is done as in equation (23.3). However, this method is applicable only if all the eigenvalues of F lie inside the unit circle. If this condition is not satisfied we instead apply a diffuse prior, setting P_{1|0} = κI_r with κ = 10⁷. If you wish to impose this diffuse prior from the outset, append the option flag --diffuse to the end kalman statement.⁴

Time-varying matrices

Any or all of the matrices obsymat, obsxmat, obsvar, statemat and statevar may be time-varying. In that case the value corresponding to the matrix keyword should be given in a special form: the name of an existing matrix plus a function call which modifies that matrix, separated by a semicolon. Note that in this case you must use a matrix variable, even if the matrix in question happens to be 1 × 1. For example, suppose the matrix H is time-varying. Then we might write

  obsymat H ; modify_H(&H, theta)

where modify_H is a user-defined function which modifies matrix H (and theta is a suitable additional argument to that function, if required).

The above is just an illustration: the matrix argument does not have to come first, and the function can have as many arguments as you like. The essential point is that the function must modify the specified matrix, which requires that it be given as an argument in "pointer" form (preceded by &). The function need not return any value directly; if it does, that value is ignored.

Such matrix-modifying functions will be called at each time-step of the filter operation, prior to performing any calculations. They have access to the current time-step of the Kalman filter via the internal variable $kalman_t, which has value 1 on the first step, 2 on the second, and so on, up to step T. They also have access to the previous n-vector of forecast errors, e_{t−1}, under the name $kalman_uhat. When t = 1 this will be a zero vector.

4 Initialization of the Kalman filter outside of the case where equation (23.3) applies has been the subject of much discussion in the literature — see for example de Jong (1991) and Koopman (1997). At present gretl does not implement any of the more elaborate proposals that have been made.
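As a concrete (hypothetical) illustration of the mechanism just described, the following sketch defines a matrix-modifying function which drives the second element of H from a pre-computed T × 1 path, using the $kalman_t accessor; the name modify_H and the argument theta are assumptions, not built-in names:

  function void modify_H (matrix *H, matrix theta)
      # set H's second element from a pre-computed path,
      # indexed by the current time-step of the filter
      H[2] = theta[$kalman_t]
  end function

With this definition in place, the block line shown above, obsymat H ; modify_H(&H, theta), causes H[2] to be reset at each step before any calculations are performed.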
Correlated disturbances

Defining a filter in which the disturbances v_t and w_t are correlated involves one modification to the account given above. If you append the --cross option flag to the end kalman statement, then the matrices corresponding to the keywords statevar and obsvar are interpreted not as Q and R but rather as B and C as discussed in section 23.2. Gretl then computes Q = BB′ and R = CC′ as well as the cross-product BC′ and utilizes the modified expression for the gain as given in equation (23.7). As mentioned above, B should be (r × p) and C should be (n × p), where p is the number of elements in the combined disturbance vector ε_t.

Handling of missing values

It is acceptable for the data matrices, obsy and obsx, to contain missing values. In this case the filtering operation will work around the missing values, and the ksmooth function can be used to obtain estimates of these values. However, there are two points to note. First, gretl's default behavior is to skip missing observations when constructing matrices from data series. To change this, use the set command thus:

set skip_missing off

Second, the handling of missing values is not yet quite right for the case where the observable vector y_t contains more than one element. At present, if any of the elements of y_t are missing the entire observation is ignored. Clearly it should be possible to make use of any non-missing elements; this is not very difficult in principle, just awkward, and it is not implemented yet.

Persistence and identity of the filter

At present there is no facility to create a "named filter". Only one filter can exist at any point in time, namely the one created by the last kalman block.⁵ If a filter is already defined, and you give a new kalman block, the old filter is over-written. Otherwise the existing filter persists (and remains available for the kfilter, ksmooth and ksimul functions) until either (a) the gretl session is terminated or (b) the command delete kalman is given.

5 This is not quite true: more precisely, there can be no more than one Kalman filter at each level of function execution. That is, if a gretl script creates a Kalman filter, a user-defined function called from that script may also create a filter, without interfering with the original one.

23.6 The kfilter function

Once a filter is established, as discussed in the previous section, kfilter can be used to run a forward, forecasting pass. This function returns a scalar code: 0 for successful completion, or 1 if numerical problems were encountered. On successful completion, two scalar accessor variables become available: $kalman_lnl, which gives the overall log-likelihood under the joint normality assumption,

  ℓ = −(1/2) [ nT log(2π) + Σ_{t=1}^T log|Σ_t| + Σ_{t=1}^T e_t′ Σ_t⁻¹ e_t ]

and $kalman_s2, which gives the estimated variance,

  σ̂² = (1/nT) Σ_{t=1}^T e_t′ Σ_t⁻¹ e_t

(but see below for modifications to these formulae for the case of a diffuse prior). In addition the accessor $kalman_llt gives a (T × 1) vector, element t of which is

  ℓ_t = −(1/2) [ n log(2π) + log|Σ_t| + e_t′ Σ_t⁻¹ e_t ]

The kfilter function does not require any arguments, but up to five matrix quantities may be retrieved via optional pointer arguments. Each of these matrices has T rows, one for each time-step; the contents of the rows are shown in the following listing.

1. Forecast errors for the observable variables: e_t′, n columns.
2. Variance matrix for the forecast errors: vech(Σ_t)′, n(n + 1)/2 columns.
3. Estimate of the state vector: ξ̂_{t|t−1}′, r columns.
4. MSE of estimate of the state vector: vech(P_{t|t−1})′, r(r + 1)/2 columns.
5. Kalman gain: vec(K_t)′, rn columns.

Unwanted trailing arguments can be omitted; otherwise unwanted arguments can be skipped by using the keyword null. For example, the following call retrieves the forecast errors in the matrix E and the estimate of the state vector in S:

matrix E S
kfilter(&E, null, &S)

Matrices given as pointer arguments do not have to be correctly dimensioned in advance; they will be resized to receive the specified content.

Further note: in general, the arguments to kfilter should all be matrix-pointers, but under two conditions you can give a pointer to a series variable instead. The conditions are: (i) the matrix in question has just one column in context (for example, the first two matrices will have a single column if the length of the observables vector, n, equals 1) and (ii) the time-series length of the filter is equal to the current gretl sample size.

Likelihood under the diffuse prior

There seems to be general agreement in the literature that the log-likelihood calculation should be modified in the case of a diffuse prior for P_{1|0}. However, it is not clear to us that there is a well-defined "correct" method for this. At present we emulate SsfPack (see Koopman et al. (1999) and section 23.1). In case P_{1|0} = κI_r, we set d = r and calculate

  ℓ = −(1/2) [ (nT − d) log(2π) + Σ_{t=1}^T log|Σ_t| + Σ_{t=1}^T e_t′ Σ_t⁻¹ e_t − d log(κ) ]

and

  σ̂² = (1/(nT − d)) Σ_{t=1}^T e_t′ Σ_t⁻¹ e_t

23.7 The ksmooth function

This function returns the (T × r) matrix of smoothed estimates of the state vector — that is, estimates based on all T observations: row t of this matrix holds ξ̂_{t|T}′. This function has no required arguments, but it offers one optional matrix-pointer argument, which retrieves the variance of the smoothed state estimate, P_{t|T}. The latter matrix is (T × r(r + 1)/2); each row is in transposed vech form. Examples:

matrix S = ksmooth()   # smoothed state only
matrix P
S = ksmooth(&P)        # the variance is wanted

These values are computed via a backward pass of the filter, from t = T to t = 1, as follows:

  L_t = F_t − K_t H_t′
  u_{t−1} = H_t Σ_t⁻¹ e_t + L_t′ u_t
  U_{t−1} = H_t Σ_t⁻¹ H_t′ + L_t′ U_t L_t
  ξ̂_{t|T} = ξ̂_{t|t−1} + P_{t|t−1} u_{t−1}
  P_{t|T} = P_{t|t−1} − P_{t|t−1} U_{t−1} P_{t|t−1}

with initial values u_T = 0 and U_T = 0.⁶ This iteration is preceded by a special forward pass in which the matrices K_t, Σ_t⁻¹, ξ̂_{t|t−1} and P_{t|t−1} are stored for all t. If F is time-varying, its values for all t are stored on the forward pass, and similarly for H.

6 See I. Karibzhanov's exposition at http://www.econ.umn.edu/~karib003/help/kalcvs.htm.

23.8 The ksimul function

This simulation function takes up to three arguments. The first, mandatory, argument is a (T × r) matrix containing artificial disturbances for the state transition equation: row t of this matrix represents v_t′. If the current filter has a non-null R (obsvar) matrix, then the second argument should be a (T × n) matrix containing artificial disturbances for the observation equation, on the same pattern. Otherwise the second argument should be given as null. If r = 1 you may give a series for the first argument, and if n = 1 a series is acceptable for the second argument.

Provided that the current filter does not include exogenous variables in the observation equation (obsx), the T for simulation need not equal that defined by the original obsy data matrix: in effect T is temporarily redefined by the row dimension of the first argument to ksimul. Once the simulation is completed, the T value associated with the original data is restored.
The value returned by ksimul is a (T × n) matrix holding simulated values for the observables at each time step. A third optional matrix-pointer argument allows you to retrieve a (T × r) matrix holding the simulated state vector. Examples:

matrix Y = ksimul(V)       # obsvar is null
Y = ksimul(V, W)           # obsvar is non-null
matrix S
Y = ksimul(V, null, &S)    # the simulated state is wanted

The initial value ξ_1 is calculated thus: we find the matrix T such that TT′ = P_{1|0} (as given by the inivar element in the kalman block), multiply it into v_1, and add the result to ξ_{1|0} (as given by inistate).

If the disturbances are correlated across the two equations, the arguments to ksimul must be revised: the first argument should be a (T × p) matrix, each row of which represents ε_t′ (see section 23.2), and the second argument should be given as null.

23.9 Example 1: ARMA estimation

As is well known, the Kalman filter provides a very efficient way to compute the likelihood of ARMA models; as an example, take an ARMA(1,1) model

  y_t = φ y_{t−1} + ε_t + θ ε_{t−1}

One of the ways the above equation can be cast in state-space form is by defining a latent process ξ_t = (1 − φL)⁻¹ ε_t. The observation equation corresponding to (23.2) is then

  y_t = ξ_t + θ ξ_{t−1}   (23.8)

and the state transition equation corresponding to (23.1) is

  [ ξ_t     ]   [ φ  0 ] [ ξ_{t−1} ]   [ ε_t ]
  [ ξ_{t−1} ] = [ 1  0 ] [ ξ_{t−2} ] + [  0  ]

The gretl syntax for a corresponding kalman block would be

matrix H = {1; theta}
matrix F = {phi, 0; 1, 0}
matrix Q = {s^2, 0; 0, 0}

kalman
  obsy y
  obsymat H
  statemat F
  statevar Q
end kalman

Note that the observation equation (23.8) does not include an "error term"; this is equivalent to saying that V(w_t) = 0 and, as a consequence, the kalman block does not include an obsvar keyword.

Once the filter is set up, all it takes to compute the log-likelihood for given values of φ, θ and σ² is to execute the kfilter() function and use the $kalman_lnl accessor (which returns the total log-likelihood) or, more appropriately if the likelihood has to be maximized through mle, the $kalman_llt accessor, which returns the series of individual contributions to the log-likelihood for each observation. An example is shown in script 23.1.

23.10 Example 2: local level model

Suppose we have a series y_t = µ_t + ε_t, where µ_t is a random walk with normal increments of variance σ₁² and ε_t is a normal white noise with variance σ₂², independent of µ_t. This is known as the "local level" model in Harvey's (1991) terminology, and it can be cast in state-space form as equations (23.1)–(23.2) with F = 1, v_t ∼ N(0, σ₁²), H = 1 and w_t ∼ N(0, σ₂²). The translation to a kalman block is

kalman
  obsy y
  obsymat 1
  statemat 1
  statevar s2
  obsvar s1
end kalman --diffuse

The two unknown parameters σ₁² and σ₂² can be estimated via maximum likelihood. Script 23.2 provides an example of simulation and estimation of such a model. For the sake of brevity, simulation is carried out via ordinary gretl commands, rather than the state-space apparatus described above. The example contains two functions: the first one carries out the estimation of the unknown parameters σ₁² and σ₂² via maximum likelihood; the second one uses these estimates to compute a smoothed estimate of the unobservable series µ_t, called muhat. A plot of µ_t and its estimate is presented in Figure 23.1.

By appending the following code snippet to Example 23.2, one may check the results against the R command StructTS.

foreign language=R --send-data
  y <- gretldata[,"y"]
  a <- StructTS(y, type="level")
  a
  StateFromR <- as.ts(tsSmooth(a))
  gretl.export(StateFromR)
end foreign

append @dotdir/StateFromR.csv
ols Uhat 0 StateFromR --simple
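As to the simulation step, the same kind of artificial data could in principle be generated via the state-space apparatus itself. The following hypothetical sketch mirrors the main script of Example 23.2; inistate and inivar are chosen here so that the initial state equals 2 plus the first state disturbance (recall from section 23.8 how ksimul forms ξ_1):

  nulldata 200
  setobs 1 1 --special
  series y = 0              # placeholder, needed for the obsy keyword
  scalar s1 = 0.25          # observation (obsvar) variance
  scalar s2 = 0.5           # state (statevar) variance
  kalman
    obsy y
    obsymat 1
    statemat 1
    statevar s2
    obsvar s1
    inistate 2
    inivar 1                # so that xi_1 = 2 + v_1
  end kalman
  matrix V = sqrt(s2) * mnormal(200, 1)  # state disturbances
  matrix W = sqrt(s1) * mnormal(200, 1)  # observation disturbances
  matrix S
  matrix Y = ksimul(V, W, &S)   # Y: simulated observables, S: states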
Example 23.1: ARMA estimation

function void arma11_via_kalman(series y)
    /* parameter initialization */
    phi = 0
    theta = 0
    sigma = 1

    /* Kalman filter setup */
    matrix H = {1; theta}
    matrix F = {phi, 0; 1, 0}
    matrix Q = {sigma^2, 0; 0, 0}

    kalman
      obsy y
      obsymat H
      statemat F
      statevar Q
    end kalman

    /* maximum likelihood estimation */
    mle logl = ERR ? NA : $kalman_llt
        H[2] = theta
        F[1,1] = phi
        Q[1,1] = sigma^2
        ERR = kfilter()
        params phi theta sigma
    end mle -h
end function

# ------------------------ main ---------------------------

open arma.gdt          # open the "arma" example dataset
arma11_via_kalman(y)   # estimate an arma(1,1) model
arma 1 1 ; y --nc      # check via native command

Example 23.2: Local level model

function matrix local_level (series y)
    /* starting values */
    scalar s1 = 1
    scalar s2 = 1

    /* Kalman filter set-up */
    kalman
      obsy y
      obsymat 1
      statemat 1
      statevar s2
      obsvar s1
    end kalman --diffuse

    /* ML estimation */
    mle ll = ERR ? NA : $kalman_llt
        ERR = kfilter()
        params s1 s2
    end mle

    return s1 ~ s2
end function

function series loclev_sm (series y, scalar s1, scalar s2)
    /* return the smoothed estimate of \mu_t */
    kalman
      obsy y
      obsymat 1
      statemat 1
      statevar s2
      obsvar s1
    end kalman --diffuse
    series ret = ksmooth()
    return ret
end function

/* -------------------- main script -------------------- */

nulldata 200
set seed 202020
setobs 1 1 --special
true_s1 = 0.25
true_s2 = 0.5
v = normal() * sqrt(true_s1)
w = normal() * sqrt(true_s2)
mu = 2 + cum(w)
y = mu + v
matrix Vars = local_level(y)            # estimate the variances
muhat = loclev_sm(y, Vars[1], Vars[2])  # compute the smoothed state

Figure 23.1: Local level model: µ_t and its smoothed estimate (time-series plot of mu and muhat; graphic not reproduced here)

Chapter 24
Discrete and censored dependent variables

24.1 Logit and probit models

It often happens that one wants to specify and estimate a model in which the dependent variable is not continuous, but discrete. A typical example is a model in which the dependent variable is the occupational status of an individual (1 = employed, 0 = unemployed). A convenient way of formalizing this situation is to consider the variable y_i as a Bernoulli random variable and analyze its distribution conditional on the explanatory variables x_i. That is,

  y_i = 1 with probability P_i
  y_i = 0 with probability 1 − P_i   (24.1)

where P_i = P(y_i = 1 | x_i) is a given function of the explanatory variables x_i.

In most cases, the function P_i is a cumulative distribution function F, applied to a linear combination of the x_i s. In the probit model, the normal cdf is used, while the logit model employs the logistic function Λ(·). Therefore, we have

  probit:  P_i = F(z_i) = Φ(z_i)   (24.2)
  logit:   P_i = F(z_i) = Λ(z_i) = 1 / (1 + e^{−z_i})   (24.3)

  z_i = Σ_{j=1}^k x_{ij} β_j   (24.4)

where z_i is commonly known as the index function.
Note that in this case the coefficients β_j cannot be interpreted as the partial derivatives of E(y_i | x_i) with respect to x_{ij}. However, for a given value of x_i it is possible to compute the vector of "slopes", that is

  slope_j(x̄) = ∂F(z)/∂x_j, evaluated at z = z̄

Gretl automatically computes the slopes, setting each explanatory variable at its sample mean.

Another, equivalent way of thinking about this model is in terms of an unobserved variable y*_i, which can be described thus:

  y*_i = Σ_{j=1}^k x_{ij} β_j + ε_i = z_i + ε_i   (24.5)

We observe y_i = 1 whenever y*_i > 0 and y_i = 0 otherwise. If ε_i is assumed to be normal, then we have the probit model. The logit model arises if we assume that the density function of ε_i is

  λ(ε_i) = ∂Λ(ε_i)/∂ε_i = e^{−ε_i} / (1 + e^{−ε_i})²

Both the probit and logit models are estimated in gretl via maximum likelihood, where the log-likelihood can be written as

  L(β) = Σ_{y_i=0} ln[1 − F(z_i)] + Σ_{y_i=1} ln F(z_i)   (24.6)

which is always negative, since 0 < F(·) < 1. Since the score equations do not have a closed-form solution, numerical optimization is used. However, in most cases this is totally transparent to the user, since usually only a few iterations are needed to ensure convergence. The --verbose switch can be used to track the maximization algorithm.

Example 24.1: Estimation of simple logit and probit models

open greene19_1
logit GRADE const GPA TUCE PSI
probit GRADE const GPA TUCE PSI

As an example, we reproduce the results given in Greene (2000), chapter 21, where the effectiveness of a program for teaching economics is evaluated by the improvement in students' grades. Running the code in Example 24.1 gives the following output:

Model 1: Logit estimates using the 32 observations 1-32
Dependent variable: GRADE

  VARIABLE   COEFFICIENT   STDERROR    T STAT   SLOPE (at mean)
  const      -13.0213      4.93132     -2.641
  GPA          2.82611     1.26294      2.238   0.533859
  TUCE         0.0951577   0.141554     0.672   0.0179755
  PSI          2.37869     1.06456      2.234   0.449339

Mean of GRADE = 0.344
Number of cases 'correctly predicted' = 26 (81.2%)
f(beta'x) at mean of independent vars = 0.189
McFadden's pseudo-R-squared = 0.374038
Log-likelihood = -12.8896
Likelihood ratio test: Chi-square(3) = 15.4042 (p-value 0.001502)
Akaike information criterion (AIC) = 33.7793
Schwarz Bayesian criterion (BIC) = 39.6422
Hannan-Quinn criterion (HQC) = 35.7227

           Predicted
             0   1
  Actual 0  18   3
         1   3   8

Model 2: Probit estimates using the 32 observations 1-32
Dependent variable: GRADE

  VARIABLE   COEFFICIENT   STDERROR    T STAT   SLOPE (at mean)
  const      -7.45232      2.54247     -2.931
  GPA         1.62581      0.693883     2.343   0.533347
  TUCE        0.0517288    0.0838903    0.617   0.0169697
  PSI         1.42633      0.595038     2.397   0.467908

Mean of GRADE = 0.344
Number of cases 'correctly predicted' = 26 (81.2%)
f(beta'x) at mean of independent vars = 0.328
McFadden's pseudo-R-squared = 0.377478
Log-likelihood = -12.8188
Likelihood ratio test: Chi-square(3) = 15.5459 (p-value 0.001405)
Akaike information criterion (AIC) = 33.6376
Schwarz Bayesian criterion (BIC) = 39.5006
Hannan-Quinn criterion (HQC) = 35.581

           Predicted
             0   1
  Actual 0  18   3
         1   3   8

In this context, the $uhat accessor takes a special meaning: it returns generalized residuals as defined in Gourieroux et al. (1987), which can be interpreted as unbiased estimators of the latent disturbances ε_i.
These are defined as

  û_i = y_i − P̂_i                                                for the logit model
  û_i = y_i · φ(ẑ_i)/Φ(ẑ_i) − (1 − y_i) · φ(ẑ_i)/[1 − Φ(ẑ_i)]    for the probit model
                                                                  (24.7)

Among other uses, generalized residuals are often used for diagnostic purposes. For example, it is very easy to set up an omitted-variables test equivalent to the familiar LM test in the context of a linear regression; Example 24.2 shows how to perform a variable addition test.

Example 24.2: Variable addition test in a probit model

open greene19_1
probit GRADE const GPA PSI
series u = $uhat
ols u const GPA PSI TUCE -q
printf "Variable addition test for TUCE:\n"
printf "Rsq * T = %g (p. val. = %g)\n", $trsq, pvalue(X,1,$trsq)

The perfect prediction problem

One curious characteristic of logit and probit models is that (quite paradoxically) estimation is not feasible if a model fits the data perfectly; this is called the perfect prediction problem. The reason why this problem arises is easy to see by considering equation (24.6): if for some vector β and scalar k it's the case that z_i < k whenever y_i = 0 and z_i > k whenever y_i = 1, the same thing is true for any multiple of β. Hence, L(β) can be made arbitrarily close to 0 simply by choosing enormous values for β. As a consequence, the log-likelihood has no maximum, despite being bounded.

Gretl has a mechanism for preventing the algorithm from iterating endlessly in search of a non-existent maximum. One sub-case of interest is when the perfect prediction problem arises because of a single binary explanatory variable. In this case, the offending variable is dropped from the model and estimation proceeds with the reduced specification. Nevertheless, it may happen that no single "perfect classifier" exists among the regressors, in which case estimation is simply impossible and the algorithm stops with an error. This behavior is triggered during the iteration process if

  max_{i: y_i=0} z_i < min_{i: y_i=1} z_i

If this happens, unless your model is trivially mis-specified (like predicting if a country is an oil exporter on the basis of oil revenues), it is normally a small-sample problem: you probably just don't have enough data to estimate your model. You may want to drop some of your explanatory variables. This problem is well analyzed in Stokes (2004); the results therein are replicated in the example script murder_rates.inp.

24.2 Ordered response models

These models constitute a simple variation on ordinary logit/probit models, and are usually applied when the dependent variable is a discrete and ordered measurement — not simply binary, but on an ordinal rather than an interval scale. For example, this sort of model may be applied when the dependent variable is a qualitative assessment such as "Good", "Average" and "Bad".

In the general case, consider an ordered response variable, y, that can take on any of the J + 1 values 0, 1, 2, . . . , J. We suppose, as before, that underlying the observed response is a latent variable,

  y* = Xβ + ε = z + ε

Now define "cut points", α₁ < α₂ < · · · < α_J, such that

  y = 0  if y* ≤ α₁
  y = 1  if α₁ < y* ≤ α₂
   ⋮
  y = J  if y* > α_J

For example, if the response takes on three values there will be two such cut points, α₁ and α₂.
The probability that individual i exhibits response j, conditional on the characteristics x_i, is then given by

  P(y_i = j | x_i) = P(y* ≤ α₁ | x_i) = F(α₁ − z_i)                              for j = 0
                   = P(α_j < y* ≤ α_{j+1} | x_i) = F(α_{j+1} − z_i) − F(α_j − z_i)  for 0 < j < J
                   = P(y* > α_J | x_i) = 1 − F(α_J − z_i)                        for j = J
                                                                                 (24.8)

The unknown parameters α_j are estimated jointly with the βs via maximum likelihood. The α̂_j estimates are reported by gretl as cut1, cut2 and so on.

In order to apply these models in gretl, the dependent variable must either take on only non-negative integer values, or be explicitly marked as discrete. (In case the variable has non-integer values, it will be recoded internally.) Note that gretl does not provide a separate command for ordered models: the logit and probit commands automatically estimate the ordered version if the dependent variable is acceptable, but not binary.

Example 24.3 reproduces the results presented in section 15.10 of Wooldridge (2002a). The question of interest in this analysis is what difference it makes, to the allocation of assets in pension funds, whether individual plan participants have a choice in the matter. The response variable is an ordinal measure of the weight of stocks in the pension portfolio. Having reported the results of estimation of the ordered model, Wooldridge illustrates the effect of the choice variable by reference to an "average" participant. The example script shows how one can compute this effect in gretl.

After estimating ordered models, the $uhat accessor yields generalized residuals, as in binary models; additionally, the $yhat accessor returns ẑ_i, so it is possible to compute an unbiased estimator of the latent variable y*_i simply by adding the two together (see the fragment following Example 24.3).

Example 24.3: Ordered probit model

/*
  Replicate the results in Wooldridge, Econometric Analysis of Cross
  Section and Panel Data, section 15.10, using pension-plan data from
  Papke (AER, 1998).

  The dependent variable, pctstck (percent stocks), codes the asset
  allocation responses of "mostly bonds", "mixed" and "mostly stocks"
  as {0, 50, 100}.

  The independent variable of interest is "choice", a dummy indicating
  whether individuals are able to choose their own asset allocations.
*/

open pension.gdt

# demographic characteristics of participant
list DEMOG = age educ female black married
# dummies coding for income level
list INCOME = finc25 finc35 finc50 finc75 finc100 finc101

# Papke's OLS approach
ols pctstck const choice DEMOG INCOME wealth89 prftshr
# save the OLS choice coefficient
choice_ols = $coeff(choice)

# estimate ordered probit
probit pctstck choice DEMOG INCOME wealth89 prftshr

k = $ncoeff
matrix b = $coeff[1:k-2]
a1 = $coeff[k-1]
a2 = $coeff[k]

/*
  Wooldridge illustrates the 'choice' effect in the ordered probit
  by reference to a single, non-black male aged 60, with 13.5 years
  of education, income in the range $50K - $75K and wealth of $200K,
  participating in a plan with profit sharing.
*/
matrix X = {60, 13.5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 200, 1}

# with 'choice' = 0
scalar Xb = (0 ~ X) * b
P0 = cdf(N, a1 - Xb)
P50 = cdf(N, a2 - Xb) - P0
P100 = 1 - cdf(N, a2 - Xb)
E0 = 50 * P50 + 100 * P100

# with 'choice' = 1
Xb = (1 ~ X) * b
P0 = cdf(N, a1 - Xb)
P50 = cdf(N, a2 - Xb) - P0
P100 = 1 - cdf(N, a2 - Xb)
E1 = 50 * P50 + 100 * P100

printf "\nWith choice, E(y) = %.2f, without E(y) = %.2f\n", E1, E0
printf "Estimated choice effect via ML = %.2f (OLS = %.2f)\n", \
  E1 - E0, choice_ols
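As anticipated above, following on from Example 24.3 an estimate of the latent variable can be reconstructed in two lines (a fragment, assuming the ordered probit has just been estimated):

  # after the ordered probit of Example 24.3:
  series ystar_hat = $yhat + $uhat   # estimate of the latent y*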
24.3 Multinomial logit

When the dependent variable is not binary and does not have a natural ordering, multinomial models are used. Multinomial logit is supported in gretl via the --multinomial option to the logit command. Simple models can also be handled via the mle command (see chapter 17). We give here an example of such a model. Let the dependent variable, y_i, take on integer values 0, 1, ..., p. The probability that y_i = k is given by

  P(y_i = k \mid x_i) = \frac{\exp(x_i \beta_k)}{\sum_{j=0}^{p} \exp(x_i \beta_j)}

For the purpose of identification one of the outcomes must be taken as the "baseline"; it is usually assumed that β_0 = 0, in which case

  P(y_i = k \mid x_i) = \frac{\exp(x_i \beta_k)}{1 + \sum_{j=1}^{p} \exp(x_i \beta_j)}

and

  P(y_i = 0 \mid x_i) = \frac{1}{1 + \sum_{j=1}^{p} \exp(x_i \beta_j)}.

Example 24.4 reproduces Table 15.2 in Wooldridge (2002a), based on data on career choice from Keane and Wolpin (1997). The dependent variable is the occupational status of an individual (0 = in school; 1 = not in school and not working; 2 = working), and the explanatory variables are education and work experience (linear and square) plus a "black" binary variable. The full data set is a panel; here the analysis is confined to a cross-section for 1987. For explanations of the matrix methods employed in the script, see chapter 12.

24.4 The Tobit model

The Tobit model is used when the dependent variable of a model is censored.¹ Assume a latent variable y^*_i can be described as

  y^*_i = \sum_{j=1}^{k} x_{ij} \beta_j + \varepsilon_i,

where ε_i ∼ N(0, σ²). If y^*_i were observable, the model's parameters could be estimated via ordinary least squares. Suppose, instead, that we observe y_i, defined as

  y_i =
    \begin{cases}
      y^*_i & \text{for } y^*_i > 0 \\
      0     & \text{for } y^*_i \le 0
    \end{cases}                                                    (24.9)

In this case, regressing y_i on the x_i s does not yield consistent estimates of the parameters β, because the conditional mean E(y_i | x_i) is not equal to \sum_{j=1}^{k} x_{ij} \beta_j. It can be shown that restricting the sample to non-zero observations would not yield consistent estimates either. The solution is to estimate the parameters via maximum likelihood. The syntax is simply

  tobit depvar indvars

As usual, progress of the maximization algorithm can be tracked via the --verbose switch, while $uhat returns the generalized residuals. Note that in this case the generalized residual is defined as \hat{u}_i = E(\varepsilon_i \mid y_i = 0) for censored observations, so the familiar equality \hat{u}_i = y_i - \hat{y}_i only holds for uncensored observations, that is, when y_i > 0.

¹ We assume here that censoring occurs from below at 0. Censoring from above, or at a point different from zero, can be rather easily handled by re-defining the dependent variable appropriately. For the more general case of two-sided censoring the intreg command may be used (see below).
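By way of illustration, here is a minimal sketch of tobit usage on artificial data (all names are illustrative): it generates a dependent variable censored from below at zero, as in equation (24.9), estimates the model and saves the generalized residuals.

  nulldata 200
  set seed 123
  x = normal()
  ystar = 1 + 2*x + normal()
  # censor from below at 0, as in (24.9)
  y = ystar * (ystar > 0)
  tobit y const x
  series u = $uhat   # generalized residuals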
Example 24.4: Multinomial logit

  function mlogitlogprobs(series y, matrix X, matrix theta)
    scalar n = max(y)
    scalar k = cols(X)
    matrix b = mshape(theta,k,n)
    matrix tmp = X*b
    series ret = -ln(1 + sumr(exp(tmp)))
    loop for i=1..n --quiet
      series x = tmp[,i]
      ret += (y=$i) ? x : 0
    endloop
    return ret
  end function

  open keane.gdt

  # for the manual mle variant the dep. var. must be 0-based
  status = status - 1
  # and we must exclude missing values
  smpl (year=87 && ok(status)) --restrict

  matrix X = { const, educ, exper, expersq, black }
  scalar k = cols(X)
  matrix theta = zeros(2*k, 1)

  mle loglik = mlogitlogprobs(status,X,theta)
    params theta
  end mle --hessian

  # Compare the built-in command (in this case we don't need
  # status to be 0-based, and NAs are handled correctly)
  smpl --full
  status = status + 1
  smpl (year=87) --restrict
  logit status 0 educ exper expersq black --multinomial

An important difference between the Tobit estimator and OLS is that the consequences of non-normality of the disturbance term are much more severe: non-normality implies inconsistency for the Tobit estimator. For this reason, the output for the tobit model includes the Chesher–Irish (1987) normality test by default.

24.5 Interval regression

The interval regression model arises when the dependent variable is unobserved for some (possibly all) observations; what we observe instead is an interval in which the dependent variable lies. In other words, the data generating process is assumed to be

  y^*_i = x_i \beta + \epsilon_i

but we only know that m_i \le y^*_i \le M_i, where the interval may be left- or right-unbounded (but not both). If m_i = M_i, we effectively observe y^*_i and no information loss occurs. In practice, each observation belongs to one of four categories:

1. left-unbounded, when m_i = -\infty;
2. right-unbounded, when M_i = \infty;
3. bounded, when -\infty < m_i < M_i < \infty; and
4. point observations, when m_i = M_i.

It is interesting to note that this model bears similarities to other models in several special cases:

• When all observations are point observations the model trivially reduces to the ordinary linear regression model.
• When m_i = M_i for y^*_i > 0, while m_i = -\infty and M_i = 0 otherwise, we have the Tobit model (see section 24.4).
• The interval model could be thought of as an ordered probit model (see section 24.2) in which the cut points (the α_j coefficients in eq. 24.8) are observed and don't need to be estimated.

The gretl command intreg estimates interval models by maximum likelihood, assuming normality of the disturbance term \epsilon_i. Its syntax is

  intreg minvar maxvar X

where minvar contains the m_i series, with NAs for left-unbounded observations, and maxvar contains M_i, with NAs for right-unbounded observations.

By default, standard errors are computed using the negative inverse of the Hessian. If the --robust flag is given, then QML or Huber–White standard errors are calculated instead. In this case the estimated covariance matrix is a "sandwich" of the inverse of the estimated Hessian and the outer product of the gradient.

If the model specification contains regressors other than just a constant, the output includes a chi-square statistic for testing the joint null hypothesis that none of these regressors has any effect on the outcome. This is a Wald statistic based on the estimated covariance matrix. If you wish to construct a likelihood ratio test, this is easily done by estimating both the full model and the null model (containing only the constant), saving the log-likelihood in both cases via the $lnl accessor, and then referring twice the difference between the two log-likelihoods to the chi-square distribution with k degrees of freedom, where k is the number of additional regressors (see the pvalue command in the Gretl Command Reference). An example is contained in the sample script wtp.inp, provided with the gretl distribution.
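As a minimal sketch of the likelihood ratio test just described (lo, hi, x1 and x2 are illustrative names):

  intreg lo hi const x1 x2
  scalar ll_full = $lnl
  intreg lo hi const
  scalar ll_null = $lnl
  # twice the log-likelihood difference is chi-square(k); here k = 2
  scalar LR = 2 * (ll_full - ll_null)
  pvalue X 2 LR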
As with the probit and Tobit models, after a model has been estimated the $uhat accessor returns the generalized residual, which is an estimate of \epsilon_i: more precisely, it equals y_i - x_i \hat{\beta} for point observations and E(\epsilon_i \mid m_i, M_i, x_i) otherwise. Note that it is possible to compute an unbiased predictor of y^*_i by summing this estimate to x_i \hat{\beta}. Example 24.5 shows an example. As a further similarity with Tobit, the interval regression model may deliver inconsistent estimates if the disturbances are non-normal; hence, the Chesher–Irish (1987) test for normality is included by default here too.

24.6 Sample selection model

In the sample selection model (also known as the "Tobit II" model), there are two latent variables:

  y^*_i = \sum_{j=1}^{k} x_{ij} \beta_j + \varepsilon_i            (24.10)

  s^*_i = \sum_{j=1}^{p} z_{ij} \gamma_j + \eta_i                  (24.11)

and the observation rule is given by

  y_i =
    \begin{cases}
      y^*_i & \text{for } s^*_i > 0 \\
      ♦     & \text{for } s^*_i \le 0
    \end{cases}                                                    (24.12)

In this context, the ♦ symbol indicates that for some observations we simply do not have data on y: y_i may be 0, or missing, or anything else. A dummy variable d_i is normally used to set censored observations apart.

One of the most popular applications of this model in econometrics is a wage equation coupled with a labor force participation equation: we only observe the wage for the employed. If y^*_i and s^*_i were (conditionally) independent, there would be no reason not to use OLS for estimating equation (24.10); otherwise, OLS does not yield consistent estimates of the parameters β_j.

Since conditional independence between y^*_i and s^*_i is equivalent to conditional independence between ε_i and η_i, one may model the co-dependence between ε_i and η_i as

  \varepsilon_i = \lambda \eta_i + v_i;

substituting the above expression in (24.10), you obtain the model that is actually estimated:

  y_i = \sum_{j=1}^{k} x_{ij} \beta_j + \lambda \hat{\eta}_i + v_i,

so the hypothesis that censoring does not matter is equivalent to the hypothesis H_0: λ = 0, which can be easily tested.

The parameters can be estimated via maximum likelihood under the assumption of joint normality of ε_i and η_i; however, a widely used alternative method yields the so-called Heckit estimator, named after Heckman (1979). The procedure can be briefly outlined as follows: first, a probit model is fit on equation (24.11); next, the generalized residuals are inserted in equation (24.10) to correct for the effect of sample selection.

Gretl provides the heckit command to carry out estimation; its syntax is

  heckit y X ; d Z

where y is the dependent variable, X is a list of regressors, d is a dummy variable holding 1 for uncensored observations and Z is a list of explanatory variables for the censoring equation.

Since in most cases maximum likelihood is the method of choice, by default gretl computes ML estimates. The 2-step Heckit estimates can be obtained by using the --two-step option. After estimation, the $uhat accessor contains the generalized residuals. As in the ordinary Tobit model, the residuals equal the difference between actual and fitted y_i only for uncensored observations (those for which d_i = 1).

Example 24.6 shows two estimates from the dataset used in Mroz (1987): the first one replicates Table 22.7 in Greene (2003),² while the second one replicates table 17.1 in Wooldridge (2002a).

² Note that the estimates given by gretl do not coincide with those found in the printed volume. They do, however, match those found on the errata web page for Greene's book: http://pages.stern.nyu.edu/~wgreene/Text/Errata/ERRATA5.htm.
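For intuition, the two-step procedure outlined above can be mimicked "by hand" — a rough sketch only, since the heckit command does all of this internally and computes the appropriate standard errors (y, X, d and Z are as in the syntax above, with Z assumed to include the constant):

  # selection equation, fit by probit
  probit d Z
  series eta_hat = $uhat       # generalized residuals
  # outcome equation on the uncensored observations only
  smpl (d=1) --restrict
  ols y X eta_hat              # the coefficient on eta_hat estimates lambda
  smpl --full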
Example 24.5: Interval model on artificial data

Input:

  nulldata 100
  # generate artificial data
  set seed 201449
  x = normal()
  epsilon = 0.2*normal()
  ystar = 1 + x + epsilon
  lo_bound = floor(ystar)
  hi_bound = ceil(ystar)

  # run the interval model
  intreg lo_bound hi_bound const x

  # estimate ystar
  gen_resid = $uhat
  yhat = $yhat + gen_resid
  corr ystar yhat

Output (selected portions):

  Model 1: Interval estimates using the 100 observations 1-100
  Lower limit: lo_bound, Upper limit: hi_bound

             coefficient   std. error   t-ratio    p-value
  ---------------------------------------------------------
   const      0.993762     0.0338325     29.37    1.22e-189 ***
   x          0.986662     0.0319959     30.84    8.34e-209 ***

  Chi-square(1)       950.9270    p-value             8.3e-209
  Log-likelihood     -44.21258    Akaike criterion    94.42517
  Schwarz criterion   102.2407    Hannan-Quinn        97.58824

  sigma = 0.223273
  Left-unbounded observations: 0
  Right-unbounded observations: 0
  Bounded observations: 100
  Point observations: 0

  ...

  corr(ystar, yhat) = 0.98960092

  Under the null hypothesis of no correlation:
   t(98) = 68.1071, with two-tailed p-value 0.0000

Example 24.6: Heckit model

  open mroz87.gdt
  genr EXP2 = AX^2
  genr WA2 = WA^2
  genr KIDS = (KL6+K618)>0

  # Greene's specification
  list X = const AX EXP2 WE CIT
  list Z = const WA WA2 FAMINC KIDS WE
  heckit WW X ; LFP Z --two-step
  heckit WW X ; LFP Z

  # Wooldridge's specification
  series NWINC = FAMINC - WW*WHRS
  series lww = log(WW)
  list X = const WE AX EXP2
  list Z = X NWINC WA KL6 K618
  heckit lww X ; LFP Z --two-step

Chapter 25
Quantile regression

25.1 Introduction

In Ordinary Least Squares (OLS) regression, the fitted values, \hat{y}_i = X_i \hat{\beta}, represent the conditional mean of the dependent variable — conditional, that is, on the regression function and the values of the independent variables. In median regression, by contrast and as the name implies, fitted values represent the conditional median of the dependent variable. It turns out that the principle of estimation for median regression is easily stated (though not so easily computed), namely, choose \hat{\beta} so as to minimize the sum of absolute residuals. Hence the method is known as Least Absolute Deviations or LAD. While the OLS problem has a straightforward analytical solution, LAD is a linear programming problem.

Quantile regression is a generalization of median regression: the regression function predicts the conditional τ-quantile of the dependent variable — for example the first quartile (τ = .25) or the ninth decile (τ = .90).

If the classical conditions for the validity of OLS are satisfied — that is, if the error term is independently and identically distributed, conditional on X — then quantile regression is redundant: all the conditional quantiles of the dependent variable will march in lockstep with the conditional mean. Conversely, if quantile regression reveals that the conditional quantiles behave in a manner quite distinct from the conditional mean, this suggests that OLS estimation is problematic.
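To see the mean/median contrast concretely, here is a minimal sketch comparing the two fits on one of the sample datasets supplied with gretl (data4-1, also used in chapter 27):

  open data4-1
  ols price const sqft   # conditional mean
  lad price const sqft   # conditional median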
As of version 1.7.5, gretl offers quantile regression functionality (in addition to basic LAD regression, which has been available since early in gretl's history via the lad command).¹

¹ We gratefully acknowledge our borrowing from the quantreg package for GNU R (version 4.17). The core of the quantreg package is composed of Fortran code written by Roger Koenker; this is accompanied by various driver and auxiliary functions written in the R language by Koenker and Martin Mächler. The latter functions have been re-worked in C for gretl. We have added some guards against potential numerical problems in small samples.

25.2 Basic syntax

The basic invocation of quantile regression is

  quantreg tau reglist

where

• reglist is a standard gretl regression list (dependent variable followed by regressors, including the constant if an intercept is wanted); and
• tau is the desired conditional quantile, in the range 0.01 to 0.99, given either as a numerical value or the name of a pre-defined scalar variable (but see below for a further option).

Estimation is via the Frisch–Newton interior point solver (Portnoy and Koenker, 1997), which is substantially faster than the "traditional" Barrodale–Roberts (1974) simplex approach for large problems.

By default, standard errors are computed according to the asymptotic formula given by Koenker and Bassett (1978). Alternatively, if the --robust option is given, we use the sandwich estimator developed in Koenker and Zhao (1994).²

² These correspond to the iid and nid options in R's quantreg package, respectively.

25.3 Confidence intervals

An option --intervals is available. When this is given we print confidence intervals for the parameter estimates instead of standard errors. These intervals are computed using the rank inversion method and in general they are asymmetrical about the point estimates — that is, they are not simply "plus or minus so many standard errors". The specifics of the calculation are inflected by the --robust option: without this, the intervals are computed on the assumption of IID errors (Koenker, 1994); with it, they use the heteroskedasticity-robust estimator developed by Koenker and Machado (1999).

By default, 90 percent intervals are produced. You can change this by appending a confidence value (expressed as a decimal fraction) to the intervals option, as in

  quantreg tau reglist --intervals=.95

When the confidence intervals option is selected, the parameter estimates are calculated using the Barrodale–Roberts method. This is simply because the Frisch–Newton code does not currently support the calculation of confidence intervals.

Two further details. First, the mechanisms for generating confidence intervals for quantile estimates require that the model has at least two regressors (including the constant). If the --intervals option is given for a model containing only one regressor, an error is flagged. Second, when a model is estimated in this mode, you can retrieve the confidence intervals using the accessor $coeff_ci. This produces a k × 2 matrix, where k is the number of regressors. The lower bounds are in the first column, the upper bounds in the second. See also section 25.5 below.
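A minimal sketch of the $coeff_ci accessor (assuming a dataset containing the Engel series foodexp and income, as in the example of the next section, is loaded):

  quantreg .25 foodexp const income --intervals
  matrix ci = $coeff_ci   # k x 2: lower bounds, upper bounds
  print ci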
25.4 Multiple quantiles

As a further option, you can give tau as a matrix — either the name of a predefined matrix or in numerical form, as in {.05, .25, .5, .75, .95}. The given model is estimated for all the τ values and the results are printed in a special form, as shown below (in this case the --intervals option was also given).

  Model 1: Quantile estimates using the 235 observations 1-235
  Dependent variable: foodexp
  With 90 percent confidence intervals

  VARIABLE    TAU    COEFFICIENT      LOWER        UPPER

  const       0.05    124.880        98.3021      130.517
              0.25     95.4835       73.7861      120.098
              0.50     81.4822       53.2592      114.012
              0.75     62.3966       32.7449      107.314
              0.95     64.1040       46.2649       83.5790

  income      0.05      0.343361      0.343327      0.389750
              0.25      0.474103      0.420330      0.494329
              0.50      0.560181      0.487022      0.601989
              0.75      0.644014      0.580155      0.690413
              0.95      0.709069      0.673900      0.734441

[Figure 25.1: Regression of food expenditure on income; Engel's data. The coefficient on income is plotted against tau: quantile estimates with 90% band versus the OLS estimate with 90% band.]

The gretl GUI has an entry for Quantile Regression (under /Model/Robust estimation), and you can select multiple quantiles there too. In that context, just give space-separated numerical values (as per the predefined options, shown in a drop-down list).

When you estimate a model in this way most of the standard menu items in the model window are disabled, but one extra item is available — graphs showing the τ sequence for a given coefficient in comparison with the OLS coefficient. An example is shown in Figure 25.1. This sort of graph provides a simple means of judging whether quantile regression is redundant (OLS is fine) or informative.

In the example shown — based on data on household income and food expenditure gathered by Ernst Engel (1821–1896) — it seems clear that simple OLS regression is potentially misleading. The "crossing" of the OLS estimate by the quantile estimates is very marked. However, it is not always clear what implications should be drawn from this sort of conflict. With the Engel data there are two issues to consider. First, Engel's famous "law" claims an income-elasticity of food consumption that is less than one, and talk of elasticities suggests a logarithmic formulation of the model. Second, there are two apparently anomalous observations in the data set: household 105 has the third-highest income but unexpectedly low expenditure on food (as judged from a simple scatter plot), while household 138 (which also has unexpectedly low food consumption) has much the highest income, almost twice that of the next highest.

With n = 235 it seems reasonable to consider dropping these observations. If we do so, and adopt a log–log formulation, we get the plot shown in Figure 25.2. The quantile estimates still cross the OLS estimate, but the "evidence against OLS" is much less compelling: the 90 percent confidence bands of the respective estimates overlap at all the quantiles considered.

[Figure 25.2: Log–log regression; 2 observations dropped from full Engel data set. The coefficient on log(income) is plotted against tau: quantile estimates with 90% band versus the OLS estimate with 90% band.]

25.5 Large datasets

As noted above, when you give the --intervals option with the quantreg command, which calls for estimation of confidence intervals via rank inversion, gretl switches from the default Frisch–Newton algorithm to the Barrodale–Roberts simplex method. This is OK for moderately large datasets (up to, say, a few thousand observations) but on very large problems the simplex algorithm may become seriously bogged down.
For example, Koenker and Hallock (2001) present an analysis of the determinants of birth weights, using 198377 observations and with 15 regressors. Generating confidence intervals via Barrodale–Roberts for a single value of τ took about half an hour on a Lenovo Thinkpad T60p with 1.83GHz Intel Core 2 processor.

If you want confidence intervals in such cases, you are advised not to use the --intervals option, but to compute them using the method of "plus or minus so many standard errors". (One Frisch–Newton run took about 8 seconds on the same machine, showing the superiority of the interior point method.) The script below illustrates:

  quantreg .10 y 0 xlist
  scalar crit = qnorm(.95)
  matrix ci = $coeff - crit * $stderr
  ci = ci ~ ($coeff + crit * $stderr)
  print ci

The matrix ci will contain the lower and upper bounds of the (symmetrical) 90 percent confidence intervals.

To avoid a situation where gretl becomes unresponsive for a very long time we have set the maximum number of iterations for the Barrodale–Roberts algorithm to the (somewhat arbitrary) value of 1000. We will experiment further with this, but for the meantime if you really want to use this method on a large dataset, and don't mind waiting for the results, you can increase the limit using the set command with parameter rq_maxiter, as in

  set rq_maxiter 5000

Part III
Technical details

Chapter 26
Gretl and TeX

26.1 Introduction

TeX — initially developed by Donald Knuth of Stanford University and since enhanced by hundreds of contributors around the world — is the gold standard of scientific typesetting. Gretl provides various hooks that enable you to preview and print econometric results using the TeX engine, and to save output in a form suitable for further processing with TeX.

This chapter explains the finer points of gretl's TeX-related functionality. The next section describes the relevant menu items; section 26.3 discusses ways of fine-tuning TeX output; section 26.4 explains how to handle the encoding of characters not found in English; and section 26.5 gives some pointers on installing (and learning) TeX if you do not already have it on your computer. (Just to be clear: TeX is not included with the gretl distribution; it is a separate package, including several programs and a large number of supporting files.)

Before proceeding, however, it may be useful to set out briefly the stages of production of a final document using TeX. For the most part you don't have to worry about these details, since, in regard to previewing at any rate, gretl handles them for you. But having some grasp of what is going on behind the scenes will enable you to understand your options better.

The first step is the creation of a plain text "source" file, containing the text or mathematics to be typeset, interspersed with mark-up that defines how it should be formatted. The second step is to run the source through a processing engine that does the actual formatting. Typically this is either:

• a program called latex that generates so-called DVI (device-independent) output, or
• a program called pdflatex that generates PDF output.¹

For previewing, one uses either a DVI viewer (typically xdvi on GNU/Linux systems) or a PDF viewer (for example, Adobe's Acrobat Reader or xpdf), depending on how the source was processed. If the DVI route is taken, there's then a third step to produce printable output, typically using the program dvips to generate a PostScript file. If the PDF route is taken, the output is ready for printing without any further processing.

¹ Experts will be aware of something called "plain TeX", which is processed using the program tex. The great majority of TeX users, however, use the LaTeX macros, initially developed by Leslie Lamport. Gretl does not support plain TeX.
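By way of illustration, the two routes just described amount to command sequences along the following lines (mymodel.tex is a hypothetical source file):

  # the DVI route
  latex mymodel.tex                  # produces mymodel.dvi
  xdvi mymodel.dvi                   # preview
  dvips -o mymodel.ps mymodel.dvi    # printable PostScript

  # the PDF route
  pdflatex mymodel.tex               # produces mymodel.pdf, ready to print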
On the MS Windows and Mac OS X platforms, gretl calls pdflatex to process the source file, and expects the operating system to be able to find the default viewer for PDF output; DVI is not supported. On GNU/Linux the default is to take the DVI route, but if you prefer to use PDF you can do the following: select the menu item "Tools, Preferences, General" then the "Programs" tab. Find the item titled "Command to compile TeX files", and set this to pdflatex. Make sure the "Command to view PDF files" is set to something appropriate.

26.2 TeX-related menu items

The model window

The fullest TeX support in gretl is found in the GUI model window. This has a menu item titled "LaTeX" with sub-items "View", "Copy", "Save" and "Equation options" (see Figure 26.1).

[Figure 26.1: LaTeX menu in model window]

The first three sub-items have branches titled "Tabular" and "Equation". By "Tabular" we mean that the model is represented in the form of a table; this is the fullest and most explicit presentation of the results. See Table 26.1 for an example; this was pasted into the manual after using the "Copy, Tabular" item in gretl (a few lines were edited out for brevity).

Table 26.1: Example of LaTeX tabular output

  Model 1: OLS estimates using the 51 observations 1–51
  Dependent variable: ENROLL

  Variable    Coefficient    Std. Error    t-statistic    p-value

  const        0.241105      0.0660225       3.6519       0.0007
  CATHOL       0.223530      0.0459701       4.8625       0.0000
  PUPIL       −0.00338200    0.00271962     −1.2436       0.2198
  WHITE       −0.152643      0.0407064      −3.7499       0.0005

  Mean of dependent variable              0.0955686
  S.D. of dependent variable              0.0522150
  Sum of squared residuals                0.0709594
  Standard error of residuals (\hat{\sigma})   0.0388558
  Unadjusted R²                           0.479466
  Adjusted R̄²                             0.446241
  F(3, 47)                                14.4306

The "Equation" option is fairly self-explanatory — the results are written across the page in equation format, as below:

  ENROLL = 0.241105 + 0.223530 CATHOL − 0.00338200 PUPIL − 0.152643 WHITE
           (0.066022)  (0.04597)         (0.0027196)       (0.040706)

  T = 51   R̄² = 0.4462   F(3,47) = 14.431   \hat{\sigma} = 0.038856

  (standard errors in parentheses)

The distinction between the "Copy" and "Save" options (for both tabular and equation) is twofold. First, "Copy" puts the TeX source on the clipboard while with "Save" you are prompted for the name of a file into which the source should be saved. Second, with "Copy" the material is copied as a "fragment" while with "Save" it is written as a complete file. The point is that a well-formed TeX source file must have a header that defines the documentclass (article, report, book or whatever) and tags that say \begin{document} and \end{document}. This material is included when you do "Save" but not when you do "Copy", since in the latter case the expectation is that you will paste the data into an existing TeX source file that already has the relevant apparatus in place.

The items under "Equation options" should be self-explanatory: when printing the model in equation form, do you want standard errors or t-ratios displayed in parentheses under the parameter estimates? The default is to show standard errors; if you want t-ratios, select that item.

Other windows

Several other sorts of output windows also have TeX preview, copy and save enabled.
In the case of windows having a graphical toolbar, look for the TeX button. Figure 26.2 shows this icon (second from the right on the toolbar) along with the dialog that appears when you press the button.

[Figure 26.2: TeX icon and dialog]

One aspect of gretl's TeX support that is likely to be particularly useful for publication purposes is the ability to produce a typeset version of the "model table" (see section 3.4). An example of this is shown in Table 26.2.

Table 26.2: Example of model table output

  OLS estimates
  Dependent variable: ENROLL

               Model 1       Model 2       Model 3

  const        0.2907**      0.2411**      0.08557
              (0.07853)     (0.06602)     (0.05794)

  CATHOL       0.2216**      0.2235**      0.2065**
              (0.04584)     (0.04597)     (0.05160)

  PUPIL       −0.003035     −0.003382     −0.001697
              (0.002727)    (0.002720)    (0.003025)

  WHITE       −0.1482**     −0.1526**
              (0.04074)     (0.04071)

  ADMEXP                                  −0.1551
                                          (0.1342)

  n            51            51            51
  R̄²           0.4502        0.4462        0.2956
  ℓ            96.09         95.36         88.69

  Standard errors in parentheses
  * indicates significance at the 10 percent level
  ** indicates significance at the 5 percent level

26.3 Fine-tuning typeset output

There are three aspects to this: adjusting the appearance of the output produced by gretl in LaTeX preview mode; adjusting the formatting of gretl's tabular output for models when using the tabprint command; and incorporating gretl's output into your own TeX files.

Previewing in the GUI

As regards preview mode, you can control the appearance of gretl's output using a file named gretlpre.tex, which should be placed in your gretl user directory (see the Gretl Command Reference). If such a file is found, its contents will be used as the "preamble" to the TeX source. The default value of the preamble is as follows:

  \documentclass[11pt]{article}
  \usepackage[latin1]{inputenc}  %% but see below
  \usepackage{amsmath}
  \usepackage{dcolumn,longtable}
  \begin{document}
  \thispagestyle{empty}

Note that the amsmath and dcolumn packages are required. (For some sorts of output the longtable package is also needed.) Beyond that you can, for instance, change the type size or the font by altering the documentclass declaration or including an alternative font package.

The line

  \usepackage[latin1]{inputenc}

is automatically changed if gretl finds itself running on a system where UTF-8 is the default character encoding — see section 26.4 below.

In addition, if you should wish to typeset gretl output in more than one language, you can set up per-language preamble files. A "localized" preamble file is identified by a name of the form gretlpre_xx.tex, where xx is replaced by the first two letters of the current setting of the LANG environment variable. For example, if you are running the program in Polish, using LANG=pl_PL, then gretl will do the following when writing the preamble for a TeX source file.

1. Look for a file named gretlpre_pl.tex in the gretl user directory. If this is not found, then
2. look for a file named gretlpre.tex in the gretl user directory. If this is not found, then
3. use the default preamble.

Conversely, suppose you usually run gretl in a language other than English, and have a suitable gretlpre.tex file in place for your native language. If on some occasions you want to produce TeX output in English, then you could create an additional file gretlpre_en.tex: this file will be used for the preamble when gretl is run with a language setting of, say, en_US.
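For instance, a customized gretlpre.tex along the following lines would switch the preview output to 12-point type in the Palatino font (a sketch only; the mathpazo package is assumed to be present in your TeX installation):

  \documentclass[12pt]{article}
  \usepackage[latin1]{inputenc}
  \usepackage{amsmath}
  \usepackage{dcolumn,longtable}
  \usepackage{mathpazo}  % Palatino text and math fonts
  \begin{document}
  \thispagestyle{empty}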
Command-line options

After estimating a model via a script — or interactively via the gretl console or using the command-line program gretlcli — you can use the commands tabprint or eqnprint to print the model to file in tabular format or equation format respectively. These options are explained in the Gretl Command Reference.

If you wish to alter the appearance of gretl's tabular output for models in the context of the tabprint command, you can specify a custom row format using the --format flag. The format string must be enclosed in double quotes and must be tied to the flag with an equals sign. The pattern for the format string is as follows. There are four fields, representing the coefficient, standard error, t-ratio and p-value respectively. These fields should be separated by vertical bars; they may contain a printf-type specification for the formatting of the numeric value in question, or may be left blank to suppress the printing of that column (subject to the constraint that you can't leave all the columns blank). Here are a few examples:

  --format="%.4f|%.4f|%.4f|%.4f"
  --format="%.4f|%.4f|%.3f|"
  --format="%.5f|%.4f||%.4f"
  --format="%.8g|%.8g||%.4f"

The first of these specifications prints the values in all columns using 4 decimal places. The second suppresses the p-value and prints the t-ratio to 3 places. The third omits the t-ratio. The last one again omits the t, and prints both coefficient and standard error to 8 significant figures. Once you set a custom format in this way, it is remembered and used for the duration of the gretl session. To revert to the default formatting you can use the special variant --format=default.

Further editing

Once you have pasted gretl's TeX output into your own document, or saved it to file and opened it in an editor, you can of course modify the material in any way you wish. In some cases, machine-generated TeX is hard to understand, but gretl's output is intended to be human-readable and -editable. In addition, it does not use any non-standard style packages. Besides the standard LaTeX document classes, the only files needed are, as noted above, the amsmath, dcolumn and longtable packages. These should be included in any reasonably full TeX implementation.

26.4 Character encodings

People using gretl in English-speaking locales are unlikely to have a problem with this, but if you're generating TeX output in a locale where accented characters (not in the ASCII character set) are employed, you may want to pay attention here.

Gretl generates TeX output using whatever character encoding is standard on the local system. If the system encoding is in the ISO-8859 family, this will probably be OK without any special effort on the part of the user. Newer GNU/Linux systems, however, typically use Unicode (UTF-8). This is also OK so long as your TeX system can handle UTF-8 input, which requires use of the latex-ucs package. So: if you are using gretl to generate TeX in a non-English locale, where the system encoding is UTF-8, you will need to ensure that the latex-ucs package is installed. This package may or may not be installed by default when you install TeX.

For reference, if gretl detects a UTF-8 environment, the following lines are used in the TeX preamble:

  \usepackage{ucs}
  \usepackage[utf8x]{inputenc}

26.5 Installing and learning TeX

This is not the place for a detailed exposition of these matters, but here are a few pointers.
So far as we know, every GNU/Linux distribution has a package or set of packages for TeX, and in fact these are likely to be installed by default. Check the documentation for your distribution. For MS Windows, several packaged versions of TeX are available: one of the most popular is MiKTeX at http://www.miktex.org/. For Mac OS X a nice implementation is iTeXMac, at http://itexmac.sourceforge.net/. An essential starting point for online TeX resources is the Comprehensive TeX Archive Network (CTAN) at http://www.ctan.org/.

As for learning TeX, many useful resources are available both online and in print. Among online guides, Tony Roberts' "LaTeX: from quick and dirty to style and finesse" is very helpful, at

  http://www.sci.usq.edu.au/staff/robertsa/LaTeX/latexintro.html

An excellent source for advanced material is The LaTeX Companion (Goossens et al., 2004).

Chapter 27
Gretl and R

27.1 Introduction

R is, by far, the largest free statistical project.¹ Like gretl, it is a GNU project and the two have a lot in common; however, gretl's approach focuses on ease of use much more than R, which instead aims to encompass the widest possible range of statistical procedures.

As is natural in the free software ecosystem, we don't view ourselves as competitors to R,² but rather as projects sharing a common goal who should support each other whenever possible. For this reason, gretl provides a way to interact with R and thus enable users to pool the capabilities of the two packages.

In this chapter, we will explain how to exploit R's power from within gretl. We assume that the reader has a working installation of R available and a basic grasp of R's syntax.³

Despite several valiant attempts, no graphical shell has gained wide acceptance in the R community: by and large, the standard method of working with R is by writing scripts, or by typing commands at the R prompt, much in the same way as one would write gretl scripts or work with the gretl console. In this chapter, the focus will be on the methods available to execute R commands without leaving gretl.

¹ R's homepage is at http://www.r-project.org/.
² OK, who are we kidding? But it's friendly competition!
³ The main reference for R documentation is http://cran.r-project.org/manuals.html. In addition, R tutorials abound on the Net; as always, Google is your friend.

27.2 Starting an interactive R session

The easiest way to use R from gretl is in interactive mode. Once you have your data loaded in gretl, you can select the menu item "Tools, Start GNU R" and an interactive R session will be started, with your dataset automatically pre-loaded.

A simple example: OLS on cross-section data

For this example we use Ramanathan's dataset data4-1, one of the sample files supplied with gretl. We first run, in gretl, an OLS regression of price on sqft, bedrms and baths. The basic results are shown in Table 27.1.

Table 27.1: OLS house price regression via gretl

  Variable    Coefficient    Std. Error    t-statistic    p-value

  const       129.062        88.3033        1.4616        0.1746
  sqft          0.154800      0.0319404     4.8465        0.0007
  bedrms      −21.587        27.0293       −0.7987        0.4430
  baths       −12.192        43.2500       −0.2819        0.7838

We will now replicate the above results using R. Select the menu item "Tools, Start GNU R". A window similar to the one shown in Figure 27.1 should appear.

[Figure 27.1: R window]

The actual look of the R window may be somewhat different from what you see in Figure 27.1 (especially for Windows users), but this is immaterial.
The important point is that you have a window where you can type commands to R. If the above procedure doesn't work and no R window opens, it means that gretl was unable to launch R. You should ensure that R is installed and working on your system and that gretl knows where it is. The relevant settings can be found by selecting the "Tools, Preferences, General" menu entry, under the "Programs" tab.

Assuming R was launched successfully, you will notice that two commands have been executed automatically:

  gretldata <- read.table("/home/jack/.gretl/Rdata.tmp", header=TRUE)
  attach(gretldata)

These commands have the effect of loading our dataset into the R workspace in the form of a data frame (one of several forms in which R can store data). Use of a data frame enables the subsequent attach() command, which sets things up so that the variable names defined in the gretl workspace are available as valid identifiers within R.

In order to replicate gretl's OLS estimation, go into the R window and type at the prompt

  model <- lm(price ~ sqft + bedrms + baths)
  summary(model)

You should see something similar to Figure 27.2. Surprise — the estimates coincide! To get out, just close the R window or type q() at the R prompt.

[Figure 27.2: OLS regression on house prices via R]

Time series data

We now turn to an example which uses time series data: we will compare gretl's and R's estimates of Box and Jenkins' immortal "airline" model. The data are contained in the bjg sample dataset. The following gretl code

  open bjg
  arima 0 1 1 ; 0 1 1 ; lg --nc

produces the estimates shown in Table 27.2.

Table 27.2: Airline model from Box and Jenkins (1976) — selected portion of gretl's estimates

  Variable    Coefficient    Std. Error    t-statistic    p-value

  θ1          −0.401824      0.0896421     −4.4825        0.0000
  Θ1          −0.556936      0.0731044     −7.6184        0.0000

  Variance of innovations           0.00134810
  Log-likelihood                  244.696
  Akaike information criterion   −483.39

If we now open an R session as described in the previous subsection, the data-passing mechanism is slightly different. The R commands that read the data from gretl are in this case

  # load data from gretl
  gretldata <- read.table("/home/jack/.gretl/Rdata.tmp", header=TRUE)
  gretldata <- ts(gretldata, start=c(1949, 1), frequency = 12)

Since our data were defined in gretl as time series, we use an R time-series object (ts for short) for the transfer. In this way we can retain in R useful information such as the periodicity of the data and the sample limits. The downside is that the names of individual series, as defined in gretl, are not valid identifiers. In order to extract the variable lg, one needs to use the syntax lg <- gretldata[, "lg"].

ARIMA estimation can be carried out by issuing the following two R commands:

  lg <- gretldata[, "lg"]
  arima(lg, c(0,1,1), seasonal=c(0,1,1))

which yield

  Coefficients:
            ma1     sma1
        -0.4018  -0.5569
  s.e.   0.0896   0.0731

  sigma^2 estimated as 0.001348:  log likelihood = 244.7,  aic = -483.4

Happily, the estimates again coincide.

27.3 Running an R script

Opening an R window and keying in commands is a convenient method when the job is small. In some cases, however, it would be preferable to have R execute a script prepared in advance. One way to do this is via the source() command in R. Alternatively, gretl offers the facility to edit an R script and run it, having the current dataset pre-loaded automatically. This feature can be accessed via the "File, Script Files" menu entry.
By selecting "User file", one can load a pre-existing R script; if you want to create a new script instead, select the "New script, R script" menu entry.

In either case, you are presented with a window very similar to the editor window used for ordinary gretl scripts, as in Figure 27.3.

[Figure 27.3: Editing window for R scripts]

There are two main differences. First, you get syntax highlighting for R's syntax instead of gretl's. Second, clicking on the Execute button (the gears icon) launches an instance of R in which your commands are executed. Before R is actually run, you are asked if you want to run R interactively or not (see Figure 27.4).

[Figure 27.4: the dialog offering the choice between an interactive and a non-interactive R run]

An interactive run opens an R instance similar to the one seen in the previous section: your data will be pre-loaded (if the "pre-load data" box is checked) and your commands will be executed. Once this is done, you will find yourself at the R prompt, where you can enter more commands. A non-interactive run, on the other hand, will execute your script, collect the output from R and present it to you in an output window; R will be run in the background. If, for example, the script in Figure 27.3 is run non-interactively, a window similar to Figure 27.5 will appear.

[Figure 27.5: Output from a non-interactive R run]

27.4 Taking stuff back and forth

As regards the passing of data between the two programs, so far we have only considered passing series from gretl to R. In order to achieve a satisfactory degree of interoperability, more is needed. In the following sub-sections we see how matrices can be exchanged, and how data can be passed from R back to gretl.

Passing matrices from gretl to R

For passing matrices from gretl to R, you can use the mwrite matrix function described in section 12.6. For example, the following gretl code fragment generates the matrix

  A = \begin{pmatrix} 3 & 7 & 11 \\ 4 & 8 & 12 \\ 5 & 9 & 13 \\ 6 & 10 & 14 \end{pmatrix}

and stores it into the file mymatfile.mat.

  matrix A = mshape(seq(3,14),4,3)
  err = mwrite(A, "mymatfile.mat")

In order to retrieve this matrix from R, all you have to do is

  A <- as.matrix(read.table("mymatfile.mat", skip=1))

Although in principle you can give your matrix file any valid filename, a couple of conventions may prove useful. First, you may want to use an informative file suffix such as ".mat", but this is a matter of taste. More importantly, the exact location of the file created by mwrite could be an issue. By default, if no path is specified in the file name, gretl stores matrix files in the current work directory. However, it may be wise for the purpose at hand to use the directory in which gretl stores all its temporary files, whose name is stored in the built-in string dotdir (see section 11.2). The value of this string is automatically passed to R as the string variable gretl.dotdir, so the above example may be rewritten more cleanly as

Gretl side:

  matrix A = mshape(seq(3,14),4,3)
  err = mwrite(A, "@dotdir/mymatfile.mat")

R side:

  fname <- paste(gretl.dotdir, "mymatfile.mat", sep="")
  A <- as.matrix(read.table(fname, skip=1))

Passing data from R to gretl

For passing data in the opposite direction, gretl defines a special function that can be used in the R environment. An R object will be written as a temporary file in gretl's dotdir directory, from where it can be easily retrieved from within gretl. The name of this function is gretl.export(), and it accepts one argument, the object to be exported.
At present, the objects that can be exported with this method are matrices, data frames and time-series objects. The function creates a text file, with the same name as the exported object, in gretl's temporary directory. Data frames and time-series objects are stored as CSV files, and can be retrieved by using gretl's append command. Matrices are stored in a special text format that is understood by gretl (see section 12.6); the file suffix is in this case .mat, and to read the matrix in gretl you must use the mread() function.

As an example, we take the airline data and use them to estimate a structural time series model à la Harvey (1989). The model we will use is the Basic Structural Model (BSM), in which a time series is decomposed into three terms:

  y_t = \mu_t + \gamma_t + \varepsilon_t

where μ_t is a trend component, γ_t is a seasonal component and ε_t is a noise term. In turn, the following is assumed to hold:

  \Delta \mu_t = \beta_{t-1} + \eta_t
  \Delta \beta_t = \zeta_t
  \Delta_s \gamma_t = \Delta \omega_t

where Δ_s is the seasonal differencing operator, (1 − L^s), and η_t, ζ_t and ω_t are mutually uncorrelated white noise processes. The object of the analysis is to estimate the variances of the noise components (which may be zero) and to recover estimates of the latent processes μ_t (the "level"), β_t (the "slope") and γ_t.

Gretl does not provide (yet) a command for estimating this class of models, so we will use R's StructTS command and import the results back into gretl. Once the bjg dataset is loaded in gretl, we pass the data to R and execute the following script:

  # extract the log series
  y <- gretldata[, "lg"]
  # estimate the model
  strmod <- StructTS(y)
  # save the fitted components (smoothed)
  compon <- as.ts(tsSmooth(strmod))
  # save the estimated variances
  vars <- as.matrix(strmod$coef)
  # export into gretl's temp dir
  gretl.export(compon)
  gretl.export(vars)

In this case, running the above in R produces nothing more than the echoing of commands:

  > # load data from gretl
  > gretldata <- read.table("/home/jack/.gretl/Rdata.tmp", header=TRUE)
  > gretldata <- ts(gretldata, start=c(1949, 1), frequency = 12)
  > # load script from gretl
  > # extract the log series
  > y <- gretldata[, "lg"]
  > # estimate the model
  > strmod <- StructTS(y)
  > # save the fitted components (smoothed)
  > compon <- as.ts(tsSmooth(strmod))
  > # save the estimated variances
  > vars <- as.matrix(strmod$coef)
  > # export into gretl's temp dir
  > gretl.export(compon)
  > gretl.export(vars)

However, we see from the output that the two gretl.export() commands ran without errors. Hence, we are ready to pull the results back into gretl by executing the following commands, either from the console or by creating a small script:⁴

  append @dotdir/compon.csv
  vars = mread("@dotdir/vars.mat")

⁴ This example will work on Linux and presumably on OS X without modifications. On the Windows platform, you may have to substitute the "/" character with "\".

The first command reads the estimated time-series components from a CSV file, which is the format that the passing mechanism employs for series. The matrix vars is read from the file vars.mat.

After the above commands have been executed, three new series will have appeared in the gretl workspace, namely the estimates of the three components; by plotting them together with the original data, you should get a graph similar to Figure 27.6.

[Figure 27.6: Estimated components from BSM — four panels showing lg and the estimated level, slope and sea (seasonal) components]

The estimates of the variances can be seen by printing the vars matrix, as in

  ? print vars
  vars (4 x 1)

    0.00077185
    0.0000
    0.0013969
    0.0000

That is,

  \hat{\sigma}^2_\eta = 0.00077185, \quad
  \hat{\sigma}^2_\zeta = 0, \quad
  \hat{\sigma}^2_\omega = 0.0013969, \quad
  \hat{\sigma}^2_\varepsilon = 0

Notice that, since \hat{\sigma}^2_\zeta = 0, the estimate for β_t is constant and the level component is simply a random walk with a drift.

27.5 Interacting with R from the command line

Up to this point we have spoken only of interaction with R via the GUI program. In order to do the same from the command line interface, gretl provides the foreign command. This enables you to embed non-native commands within a gretl script.

A "foreign" block takes the form

  foreign language=R [--send-data] [--quiet]
      ... R commands ...
  end foreign

and achieves the same effect as submitting the enclosed R commands via the GUI in the non-interactive mode (see section 27.3 above). The --send-data option arranges for auto-loading of the data present in the gretl session. The --quiet option prevents the output from R from being echoed in the gretl output.

Using this method, replicating the example in the previous subsection is rather easy: basically, all it takes is encapsulating the content of the R script in a foreign...end foreign block; see Example 27.1.

Example 27.1: Estimation of the Basic Structural Model — simple

  open bjg.gdt

  foreign language=R --send-data
    y <- gretldata[, "lg"]
    strmod <- StructTS(y)
    compon <- as.ts(tsSmooth(strmod))
    vars <- as.matrix(strmod$coef)
    gretl.export(compon)
    gretl.export(vars)
  end foreign

  append @dotdir/compon.csv
  rename level lg_level
  rename slope lg_slope
  rename sea lg_seas

  vars = mread("@dotdir/vars.mat")

Example 27.2: Estimation of the Basic Structural Model — via a function

  function list RStructTS(series myseries)
    smpl ok(myseries) --restrict
    sx = argname(myseries)

    foreign language=R --send-data --quiet
      @sx <- gretldata[, "myseries"]
      strmod <- StructTS(@sx)
      compon <- as.ts(tsSmooth(strmod))
      gretl.export(compon)
    end foreign

    append @dotdir/compon.csv
    rename level @sx_level
    rename slope @sx_slope
    rename sea @sx_seas

    list ret = @sx_level @sx_slope @sx_seas
    return ret
  end function

  # ------------ main -------------------------

  open bjg.gdt
  list X = RStructTS(lg)

The above syntax, despite being already quite useful by itself, shows its full power when it is used in conjunction with user-written functions. Example 27.2 shows how to define a gretl function that calls R internally.

27.6 Performance issues with R

R is a large and complex program, which takes an appreciable time to initialize itself.⁵ In interactive use this is not a significant problem, but if you have a gretl script that calls R repeatedly the cumulated start-up costs can become bothersome. To get around this, gretl calls the R shared library by preference; in this case the start-up cost is borne only once, on the first invocation of R code from within gretl.

⁵ About one third of a second on an Intel Core Duo machine of 2009 vintage.

Support for the R shared library is built into the gretl packages for MS Windows and OS X — but the advantage is realized only if the library is in fact available at run time.
If you are building gretl yourself on Linux and wish to make use of the R library, you should ensure (a) that R has been built with the shared library enabled (specify --enable-R-shlib when configuring your build of R), and (b) that the pkg-config program is able to detect your R installation. We do not link to the R library at build time, rather we open it dynamically on demand. The gretl GUI has an item under the Tools/Preferences menu which enables you to select the path to the library, if it is not detected automatically.

If you have the R shared library installed but want to force gretl to call the R executable instead, you can do

  set R_lib off

27.7 Further use of the R library

Besides improving performance, as noted above, use of the R shared library makes possible a further refinement. That is, you can define functions in R, within a foreign block, then call those functions later in your script much as if they were gretl functions. This is illustrated below.

  set R_functions on
  foreign language=R
    plus_one <- function(q) {
       z = q+1
       invisible(z)
    }
  end foreign

  scalar b=R.plus_one(2)

The R function plus_one is obviously trivial in itself, but the example shows a couple of points. First, for this mechanism to work you need to enable R_functions via the set command. Second, to avoid collision with the gretl function namespace, calls to functions defined in this way must be prefixed with "R.", as in R.plus_one.

Built-in R functions may also be called in this way, once R_functions is set on. For example one can invoke R's choose function, which computes binomial coefficients:

  set R_functions on
  scalar b=R.choose(10,4)

Note, however, that the possibilities for use of built-in R functions are limited; only functions whose arguments and return values are sufficiently generic (basically scalars or matrices) will work.

Chapter 28
Gretl and Ox

28.1 Introduction

Ox, written by Jurgen A. Doornik (see Doornik, 2007), is described by its author as "an object-oriented statistical system. At its core is a powerful matrix language, which is complemented by a comprehensive statistical library. Among the special features of Ox are its speed [and] well-designed syntax. . . . Ox comes in two versions: Ox Professional and Ox Console. Ox is available for Windows, Linux, Mac (OS X), and several Unix platforms." (www.doornik.com)

Ox is proprietary, closed-source software. The command-line version of the program is, however, available free of charge for academic users. Quoting again from Doornik's website: "The Console (command line) versions may be used freely for academic research and teaching purposes only. . . . The Ox syntax is public, and, of course, you may do with your own Ox code whatever you wish." If you wish to use Ox in conjunction with gretl please refer to doornik.com for further details on licensing.

As the reader will no doubt have noticed, all the other software that we discuss in this Guide is open-source and freely available for all users. We make an exception for Ox on the grounds that it is indeed fast and well designed, and that its statistical library — along with various add-on packages that are also available — has exceptional coverage of cutting-edge techniques in econometrics. The gretl authors have used Ox for benchmarking some of gretl's more advanced features such as dynamic panel models and state space models.¹

¹ For a review of Ox, see Cribari-Neto and Zarkos (2003) and for a (somewhat dated) comparison of Ox with other matrix-oriented packages such as GAUSS, see Steinhaus (1999).

28.2 Ox support in gretl
The gretl authors have used Ox for benchmarking some of gretl’s more advanced features such as dynamic panel models and the state space models.1 28.2 Ox support in gretl The support oﬀered for Ox in gretl is similar to that oﬀered for R, as discussed in chapter 27, but with a few diﬀerences. The ﬁrst diﬀerence to note is that Ox support is not on by default; it must be enabled explicitly. To enable support for Ox, go to the Tools/Preferences/General menu item and check the box labeled “Enable Ox support”. Click “OK” in the preferences dialog, then quit and restart gretl. You will now ﬁnd, under the Programs tab in the Tools/Preferences/General dialog, an entry for specifying the path to the oxl executable, that is, the program that runs Ox ﬁles (on MS Windows it is called oxl.exe). Make sure that path is right, and you’re ready to go. With support enabled, you can open and edit Ox programs in the gretl GUI. Clicking the “execute” icon in the editor window will send your code to Ox for execution. Figures 28.1 and Figure 28.2 show an Ox program and part of its output. In addition you can embed Ox code within a gretl script using a foreign block, as described in connection with R. A trivial example, which simply prints the gretl data matrix within Ox, is shown below: open data4-1 matrix m = { dataset } mwrite(m, "@dotdir/gretl.mat") 1 For a review of Ox, see Cribari-Neto and Zarkos (2003) and for a (somewhat dated) comparison of Ox with other matrix-oriented packages such as GAUSS, see Steinhaus (1999). 227 Chapter 28. Gretl and Ox 228 Figure 28.1: Ox editing window Figure 28.2: Output from Ox Chapter 28. Gretl and Ox foreign language=Ox #include <oxstd.h> main() { decl gmat = gretl_loadmat("gretl.mat"); print(gmat); } end foreign 229 The above example illustrates how a matrix can be passed from gretl to Ox. We use the mwrite function to write a matrix into the user’s “dotdir” (see section 11.2), then in Ox we use the function gretl_loadmat to retrieve the matrix. How does gretl_loadmat come to be deﬁned? When gretl writes out the Ox program corresponding to your foreign block it does two things in addition. First, it writes a small utility ﬁle named gretl_io.ox into your dotdir. This contains a deﬁnition for gretl_loadmat and also for the function gretl_export (see below). Second, gretl interpolates into your Ox code a line which includes this utility ﬁle (it is inserted right after the inclusion of oxstd.h, which is needed in all Ox programs). Note that gretl_loadmat expects to ﬁnd the named ﬁle in the user’s dotdir. 28.3 Illustration: replication of DPD model Example 28.1 shows a more ambitious case. This script replicates one of the dynamic panel data models in Arellano and Bond (1991), ﬁrst using gretl and then using Ox; we then check the relative diﬀerences between the parameter estimates produced by the two programs (which turn out to be reassuringly small). Unlike the previous example, in this case we pass the dataset from gretl to Ox as a CSV ﬁle in order to preserve the variable names. Note the use of the internal variable csv_na to get the right representation of missing values for use with Ox — and also note that the --send-data option for the foreign command is not available in connection with Ox. We get the parameter estimates back from Ox using gretl_export on the Ox side and mread on the gretl side. The gretl_export function takes two arguments, a matrix and a ﬁle name. The ﬁle is written into the user’s dotdir, from where it can be picked up using mread. 
The final portion of the output from Example 28.1 is shown below:

    ? matrix oxparm = mread("/home/cottrell/.gretl/oxparm.mat")
    Generated matrix oxparm
    ? eval abs((parm - oxparm) ./ oxparm)
      1.4578e-13
      3.5642e-13
      5.0672e-15
      1.6091e-13
      8.9808e-15
      2.0450e-14
      1.0218e-13
      2.1048e-13
      9.5898e-15
      1.8658e-14
      2.1852e-14
      2.9451e-13
      1.9398e-13

Example 28.1: Estimation of dynamic panel data model via gretl and Ox

    open abdata.gdt
    # Take first differences of the independent variables
    genr Dw = diff(w)
    genr Dk = diff(k)
    genr Dys = diff(ys)
    # 1-step GMM estimation
    arbond 2 ; n Dw Dw(-1) Dk Dys Dys(-1) 0 --time-dummies
    matrix parm = $coeff
    # Write CSV file for Ox
    set csv_na .NaN
    store @dotdir/abdata.csv
    # Replicate using the Ox DPD package
    foreign language=Ox
        #include <oxstd.h>
        #import <packages/dpd/dpd>
        main ()
        {
            decl dpd = new DPD();
            dpd.Load("@dotdir/abdata.csv");
            dpd.SetYear("YEAR");
            dpd.Select(Y_VAR, {"n", 0, 2});
            dpd.Select(X_VAR, {"w", 0, 1, "k", 0, 0, "ys", 0, 1});
            dpd.Select(I_VAR, {"w", 0, 1, "k", 0, 0, "ys", 0, 1});
            dpd.Gmm("n", 2, 99);              // GMM-type instrument
            dpd.SetDummies(D_CONSTANT + D_TIME);
            dpd.SetTest(2, 2);                // Sargan, AR 1-2 tests
            dpd.Estimate();                   // 1-step estimation
            decl parm = dpd.GetPar();
            gretl_export(parm, "oxparm.mat");
            delete dpd;
        }
    end foreign
    # Compare the results
    matrix oxparm = mread("@dotdir/oxparm.mat")
    eval abs((parm - oxparm) ./ oxparm)

Chapter 29
Troubleshooting gretl

29.1 Bug reports

Bug reports are welcome. Hopefully, you are unlikely to find bugs in the actual calculations done by gretl (although this statement does not constitute any sort of warranty). You may, however, come across bugs or oddities in the behavior of the graphical interface. Please remember that the usefulness of bug reports is greatly enhanced if you can be as specific as possible: what exactly went wrong, under what conditions, and on what operating system? If you saw an error message, what precisely did it say?

29.2 Auxiliary programs

As mentioned above, gretl calls some other programs to accomplish certain tasks (gnuplot for graphing, LaTeX for high-quality typesetting of regression output, GNU R). If something goes wrong with such external links, it is not always easy for gretl to produce an informative error message. If such a link fails when accessed from the gretl graphical interface, you may be able to get more information by starting gretl from the command prompt rather than via a desktop menu entry or icon. On the X window system, start gretl from the shell prompt in an xterm; on MS Windows, start the program gretlw32.exe from a console window or "DOS box" using the -g or --debug option flag. Additional error messages may be displayed on the terminal window.

Also please note that for most external calls, gretl assumes that the programs in question are available in your "path", that is, that they can be invoked simply via the name of the program, without supplying the program's full location.1 Thus if a given program fails, try the experiment of typing the program name at the command prompt, as shown below.

                 X window system    MS Windows
    Graphing     gnuplot            wgnuplot.exe
    Typesetting  latex, xdvi        pdflatex
    GNU R        R                  RGui.exe

If the program fails to start from the prompt, it's not a gretl issue but rather that the program's home directory is not in your path, or the program is not installed (properly). For details on modifying your path please see the documentation or online help for your operating system or shell.

1. The exception to this rule is the invocation of gnuplot under MS Windows, where a full path to the program is given.
Chapter 30
The command line interface

The gretl package includes the command-line program gretlcli. On Linux it can be run from a terminal window (xterm, rxvt, or similar), or at the text console. Under MS Windows it can be run in a console window (sometimes inaccurately called a "DOS box"). gretlcli has its own help file, which may be accessed by typing "help" at the prompt. It can be run in batch mode, sending output directly to a file (see also the Gretl Command Reference).

If gretlcli is linked to the readline library (this is automatically the case in the MS Windows version; also see Appendix C), the command line is recallable and editable, and offers command completion. You can use the Up and Down arrow keys to cycle through previously typed commands. On a given command line, you can use the arrow keys to move around, in conjunction with Emacs editing keystrokes.1 The most common of these are:

    Keystroke    Effect
    Ctrl-a       go to start of line
    Ctrl-e       go to end of line
    Ctrl-d       delete character to right

where "Ctrl-a" means press the "a" key while the "Ctrl" key is also depressed. Thus if you want to change something at the beginning of a command, you don't have to backspace over the whole line, erasing as you go. Just hop to the start and add or delete characters. If you type the first letters of a command name then press the Tab key, readline will attempt to complete the command name for you. If there's a unique completion it will be put in place automatically. If there's more than one completion, pressing Tab a second time brings up a list.

Probably the most useful mode for heavy-duty work with gretlcli is batch (non-interactive) mode, in which the program reads and processes a script, and sends the output to file. For example

    gretlcli -b scriptfile > outputfile

Note that scriptfile is treated as a program argument; only the output file requires redirection (>). Don't forget the -b (batch) switch, otherwise the program will wait for user input after executing the script (and if output is redirected, the program will appear to "hang").

1. Actually, the key bindings shown above are only the defaults; they can be customized. See the readline manual.

Part IV: Appendices

Appendix A
Data file details

A.1 Basic native format

In gretl's native data format, a data set is stored in XML (extensible mark-up language). Data files correspond to the simple DTD (document type definition) given in gretldata.dtd, which is supplied with the gretl distribution and is installed in the system data directory (e.g. /usr/share/gretl/data on Linux). Data files may be plain text or gzipped. They contain the actual data values plus additional information such as the names and descriptions of variables, the frequency of the data, and so on.

Most users will probably not need to read or write such files other than via gretl itself, but if you want to manipulate them using other software tools you should examine the DTD and also take a look at a few of the supplied practice data files: data4-1.gdt gives a simple example; data4-10.gdt is an example where observation labels are included.

A.2 Traditional ESL format

For backward compatibility, gretl can also handle data files in the "traditional" format inherited from Ramanathan's ESL program. In this format (which was the default in gretl prior to version 0.98) a data set is represented by two files.
One contains the actual data and the other information on how the data should be read. To be more specific:

1. Actual data: A rectangular matrix of white-space separated numbers. Each column represents a variable, each row an observation on each of the variables (spreadsheet style). Data columns can be separated by spaces or tabs. The filename should have the suffix .gdt. By default the data file is ASCII (plain text). Optionally it can be gzip-compressed to save disk space. You can insert comments into a data file: if a line begins with the hash mark (#) the entire line is ignored. This is consistent with gnuplot and octave data files.

2. Header: The data file must be accompanied by a header file which has the same basename as the data file plus the suffix .hdr. This file contains, in order:

   • (Optional) comments on the data, set off by the opening string (* and the closing string *), each of these strings to occur on lines by themselves.
   • (Required) list of white-space separated names of the variables in the data file. Names are limited to 8 characters, must start with a letter, and are limited to alphanumeric characters plus the underscore. The list may continue over more than one line; it is terminated with a semicolon, ;.
   • (Required) observations line of the form 1 1 85. The first element gives the data frequency (1 for undated or annual data, 4 for quarterly, 12 for monthly). The second and third elements give the starting and ending observations. Generally these will be 1 and the number of observations respectively, for undated data. For time-series data one can use dates of the form 1959.1 (quarterly, one digit after the point) or 1967.03 (monthly, two digits after the point). See Chapter 15 for special use of this line in the case of panel data.
   • The keyword BYOBS.

Here is an example of a well-formed data header file:

    (*
    DATA9-6:
    Data on log(money), log(income) and interest rate from US.
    Source: Stock and Watson (1993) Econometrica (unsmoothed data)
    Period is 1900-1989 (annual data). Data compiled by Graham Elliott.
    *)
    lmoney lincome intrate ;
    1 1900 1989 BYOBS

The corresponding data file contains three columns of data, each having 90 entries. Three further features of the "traditional" data format may be noted.

1. If the BYOBS keyword is replaced by BYVAR, and followed by the keyword BINARY, this indicates that the corresponding data file is in binary format. Such data files can be written from gretlcli using the store command with the -s flag (single precision) or the -o flag (double precision).

2. If BYOBS is followed by the keyword MARKERS, gretl expects a data file in which the first column contains strings (8 characters maximum) used to identify the observations. This may be handy in the case of cross-sectional data where the units of observation are identifiable: countries, states, cities or whatever. It can also be useful for irregular time series data, such as daily stock price data where some days are not trading days; in this case the observations can be marked with a date string such as 10/01/98. (Remember the 8-character maximum.) Note that BINARY and MARKERS are mutually exclusive flags. Also note that the "markers" are not considered to be a variable: this column does not have a corresponding entry in the list of variable names in the header file.

3. If a file with the same base name as the data file and header files, but with the suffix .lbl, is found, it is read to fill out the descriptive labels for the data series.
The format of the label file is simple: each line contains the name of one variable (as found in the header file), followed by one or more spaces, followed by the descriptive label. Here is an example:

    price New car price index, 1982 base year

If you want to save data in traditional format, use the -t flag with the store command, either in the command-line program or in the console window of the GUI program.

A.3 Binary database details

A gretl database consists of two parts: an ASCII index file (with filename suffix .idx) containing information on the series, and a binary file (suffix .bin) containing the actual data. Two examples of the format for an entry in the idx file are shown below:

    G0M910 Composite index of 11 leading indicators (1987=100)
    M 1948.01 - 1995.11 n = 575

    currbal Balance of Payments: Balance on Current Account; SA
    Q 1960.1 - 1999.4 n = 160

The first field is the series name. The second is a description of the series (maximum 128 characters). On the second line the first field is a frequency code: M for monthly, Q for quarterly, A for annual, B for business-daily (daily with five days per week) and D for daily (seven days per week). No other frequencies are accepted at present. Then comes the starting date (N.B. with two digits following the point for monthly data, one for quarterly data, none for annual), a space, a hyphen, another space, the ending date, the string "n = " and the integer number of observations. In the case of daily data the starting and ending dates should be given in the form YYYY/MM/DD. This format must be respected exactly.

Optionally, the first line of the index file may contain a short comment (up to 64 characters) on the source and nature of the data, following a hash mark. For example:

    # Federal Reserve Board (interest rates)

The corresponding binary database file holds the data values, represented as "floats", that is, single-precision floating-point numbers, typically taking four bytes apiece. The numbers are packed "by variable", so that the first n numbers are the observations of variable 1, the next m the observations on variable 2, and so on.

Appendix B
Data import via ODBC

Since version 1.7.5, gretl provides a method for retrieving data from databases which support the ODBC standard. Most users won't be interested in this, but there may be some for whom this feature matters a lot: typically, those who work in an environment where huge data collections are accessible via a Data Base Management System (DBMS). ODBC is the de facto standard for interacting with such systems.

In the next section we provide some background information on how ODBC works. What you actually need to do to have gretl retrieve data from a database is explained in section B.2.

B.1 ODBC base concepts

ODBC is short for Open DataBase Connectivity, a group of software methods that enable a client to interact with a database server. The most common operation is when the client fetches some data from the server. ODBC acts as an intermediate layer between client and server, so the client "talks" to ODBC rather than accessing the server directly (see Figure B.1).

Figure B.1: Retrieving data via ODBC

For the above mechanism to work, it is necessary that the relevant ODBC software is installed and working on the client machine (contact your DB administrator for details).
At this point, the database (or databases) that the server provides will be accessible to the client as a data source with a specific identifier (a Data Source Name or DSN); in most cases, a username and a password are required to connect to the data source. Once the connection is established, the user sends a query to ODBC, which contacts the database manager, collects the results and sends them back to the user. The query is almost invariably formulated in a special language used for the purpose, namely SQL.1 We will not provide here an SQL tutorial: there are many such tutorials on the Net; besides, each database manager tends to support its own SQL dialect, so the precise form of an SQL query may vary slightly if the DBMS on the other end is Oracle, MySQL, PostgreSQL or something else.

1. See http://en.wikipedia.org/wiki/SQL.

Suffice it to say that the main statement for retrieving data is the SELECT statement. Within a DBMS, data are organized in tables, which are roughly equivalent to spreadsheets. The SELECT statement returns a subset of a table, which is itself a table. For example, imagine that the database holds a table called "NatAccounts", containing the data shown in Table B.1. The SQL statement

    SELECT qtr, tradebal, gdp FROM NatAccounts WHERE year=1970;

produces the subset of the original data shown in Table B.2.

    year  qtr  gdp     consump   tradebal
    1970  1    584763  344746.9  -5891.01
    1970  2    597746  350176.9  -7068.71
    1970  3    604270  355249.7  -8379.27
    1970  4    609706  361794.7  -7917.61
    1971  1    609597  362490    -6274.3
    1971  2    617002  368313.6  -6658.76
    1971  3    625536  372605    -4795.89
    1971  4    630047  377033.9  -6498.13

    Table B.1: The "NatAccounts" table

    qtr  tradebal  gdp
    1    -5891.01  584763
    2    -7068.71  597746
    3    -8379.27  604270
    4    -7917.61  609706

    Table B.2: Result of a SELECT statement

Gretl provides a mechanism for forwarding your query to the DBMS via ODBC and including the results in your currently open dataset.

B.2 Syntax

At present gretl does not offer a graphical interface for ODBC import; this must be done via the command line interface. The two commands used for fetching data via an ODBC connection are open and data.

The open command is used for connecting to a DBMS: its syntax is

    open dsn=database [user=username] [password=password] --odbc

The user and password items are optional; the effect of this command is to initiate an ODBC connection. It is assumed that the machine gretl runs on has a working ODBC client installed.

In order to actually retrieve the data, the data command is used. Its syntax is:

    data series [obs-format=format-string] query=query-string --odbc

where:

• series is a list of names of gretl series to contain the incoming data, separated by spaces. Note that these series need not exist prior to the ODBC import.
• format-string is an optional parameter, used to handle cases when a "rectangular" organisation of the database cannot be assumed (more on this later);
• query-string is a string containing the SQL statement used to extract the data.2

2. Prior to gretl 1.8.8, the tag "query=" was not required (or accepted) before the query string, and only one series could be imported at a time. This variant is still accepted for the sake of backward compatibility.

There should be no spaces around the equals signs in the obs-format and query fields in the data command.

The query-string can, in principle, contain any valid SQL statement which results in a table.
This string may be specified directly within the command, as in

    data x query="SELECT foo FROM bar" --odbc

which will store into the gretl variable x the content of the column foo from the table bar. However, since in a real-life situation the string containing the SQL statement may be rather long, it may be best to store it in a string variable. For example:

    string SqlQry = "SELECT foo1, foo2 FROM bar"
    data x y query=SqlQry --odbc

The observation format specifier

If the optional parameter obs-format is absent, as in the above example, the SQL query should return k columns of data, where k is the number of series names listed in the data command. It may be necessary to include a smpl command before the data command to set up the right "window" for the incoming data; a short sketch of this is given at the end of this section. In addition, if one cannot assume that the data will be delivered in the correct order (typically, chronological order), the SQL query should contain an appropriate ORDER BY clause.

The optional format string is used for those cases when there is no certainty that the data from the query will arrive in the same order as the gretl dataset. This may happen when missing values are interspersed within a column, or with data that do not have a natural ordering, e.g. cross-sectional data. In this case, the SQL statement should return a table with m + k columns, where the first m columns are used to identify the observation or row in the gretl dataset into which the actual data values in the final k columns should be placed. The obs-format string is used to translate the first m fields into a string which matches the string gretl uses to identify observations in the currently open dataset. Up to three columns can be used for this purpose (m ≤ 3).

Note that the strings gretl uses to identify observations can be seen by printing any variable "by observation", as in

    print index --byobs

(The series named index is automatically added to a dataset created via the nulldata command.)

The format specifiers available for use with obs-format are as follows:

    %d   print an integer value
    %s   print a string value
    %g   print a floating-point value

In addition the format can include literal characters to be passed through, such as slashes or colons, to make the resulting string compatible with gretl's observation identifiers. For example, consider the following fictitious case: we have a 5-days-per-week dataset, to which we want to add the stock index for the Verdurian market;3 it so happens that in Verduria Saturdays are working days but Wednesdays are not. We want a column which does not contain data on Saturdays, because we wouldn't know where to put them, but at the same time we want to place missing values on all the Wednesdays. In this case, the following syntax could be used

    string QRY="SELECT year,month,day,VerdSE FROM AlmeaIndexes"
    data y obs-format="%d/%d/%d" query=QRY --odbc

The column VerdSE holds the data to be fetched, which will go into the gretl series y. The first three columns are used to construct a string which identifies the day. Daily dates take the form YYYY/MM/DD in gretl. If a row from the DBMS produces the observation string 2008/04/01 this will match OK (it's a Tuesday), but 2008/04/05 will not match since it is a Saturday; the corresponding row will therefore be discarded. On the other hand, since no string 2008/04/23 will be found in the data coming from the DBMS (it's a Wednesday), that entry is left blank in our series y.

3. See http://www.almeopedia.com/index.php/Verduria.
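As promised above, here is a minimal sketch of using smpl to set the window before an ODBC import. Everything specific in it is hypothetical: the DSN, the credentials, the table Macro and the series infl are illustrative names, not part of any real installation.

    nulldata 160
    setobs 4 1970:1 --time-series
    # connect to a hypothetical data source
    open dsn=MyDSN user=me password=secret --odbc
    # restrict the range that the incoming data should fill
    smpl 1980:1 1989:4
    # the ORDER BY clause guards against out-of-order delivery
    data infl query="SELECT infl FROM Macro ORDER BY year, qtr" --odbc

For this to work as intended, the query must return exactly one value for each quarter in the restricted range, in chronological order; hence the ORDER BY clause.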
B.3 Examples

In the following examples, we will assume that access is available to a database known to ODBC with the data source name "AWM", with username "Otto" and password "Bingo". The database "AWM" contains quarterly data in two tables (see Tables B.3 and B.4):

    Table Consump
      Field    Type
      time     decimal(7,2)
      income   decimal(16,6)
      consump  decimal(16,6)

    Table DATA
      Field    Type
      year     decimal(4,0)
      qtr      decimal(1,0)
      varname  varchar(16)
      xval     decimal(20,10)

    Table B.3: Example AWM database – structure

    Table Consump
      time     income         consump
      1970.00  424278.975500  344746.944000
      1970.25  433218.709400  350176.890400
      1970.50  440954.219100  355249.672300
      1970.75  446278.664700  361794.719900
      1971.00  447752.681800  362489.970500
      1971.25  453553.860100  368313.558500
      1971.50  460115.133100  372605.015300
      ...

    Table DATA
      year  qtr  varname  xval
      1970  1    CAN       -517.9085000000
      1970  2    CAN        662.5996000000
      1970  3    CAN       1130.4155000000
      1970  4    CAN        467.2508000000
      1970  1    COMPR       18.4000000000
      1970  2    COMPR       18.6341000000
      1970  3    COMPR       18.3000000000
      1970  4    COMPR       18.2663000000
      1970  1    D1           1.0000000000
      1970  2    D1           0.0000000000
      ...

    Table B.4: Example AWM database – data

The table Consump is the classic "rectangular" dataset; that is, its internal organization is the same as in a spreadsheet or econometrics package: each row is a data point and each column is a variable. The structure of the DATA table is different: each record is one figure, stored in the column xval, and the other fields keep track of which variable it belongs to, for which date.

Example B.1 shows a query for two series: first we set up an empty quarterly dataset. Then we connect to the database using the open statement. Once the connection is established we retrieve two columns from the Consump table. No observation string is required because the data already have a suitable structure; we need only import the relevant columns.

Example B.1: Simple query from a rectangular table

    nulldata 160
    setobs 4 1970:1 --time-series
    open dsn=AWM user=Otto password=Bingo --odbc
    string Qry = "SELECT consump, income FROM Consump"
    data cons inc query=Qry --odbc

In example B.2, by contrast, we make use of the observation string since we are drawing from the DATA table, which is not rectangular. The SQL statement stored in the string S produces a table with three columns. The ORDER BY clause ensures that the rows will be in chronological order, although this is not strictly necessary in this case.

Example B.2: Simple query from a non-rectangular table

    string S = "select year, qtr, xval from DATA \
      where varname='WLN' ORDER BY year, qtr"
    data wln obs-format="%d:%d" query=S --odbc

Example B.3: Handling of missing values for a non-rectangular table

    string foo = "select year, qtr, xval from DATA \
      where varname='STN' AND qtr>1"
    data bar obs-format="%d:%d" query=foo --odbc
    print bar --byobs

Example B.3 shows what happens if the rows in the outcome from the SELECT statement do not match the observations in the currently open gretl dataset. The query includes a condition which filters out all the data from the first quarter. The query result (invisible to the user) would be something like

    +------+------+---------------+
    | year | qtr  | xval          |
    +------+------+---------------+
    | 1970 |    2 |  7.8705000000 |
    | 1970 |    3 |  7.5600000000 |
    | 1970 |    4 |  7.1892000000 |
    | 1971 |    2 |  5.8679000000 |
    | 1971 |    3 |  6.2442000000 |
    | 1971 |    4 |  5.9811000000 |
    | 1972 |    2 |  4.6883000000 |
    | 1972 |    3 |  4.6302000000 |
    ...
Internally, gretl fills the variable bar with the corresponding value if it finds a match; otherwise, NA is used. Printing out the variable bar thus produces

    Obs          bar
    1970:1
    1970:2    7.8705
    1970:3    7.5600
    1970:4    7.1892
    1971:1
    1971:2    5.8679
    1971:3    6.2442
    1971:4    5.9811
    1972:1
    1972:2    4.6883
    1972:3    4.6302
    ...

Appendix C
Building gretl

C.1 Requirements

Gretl is written in the C programming language, abiding as far as possible by the ISO/ANSI C Standard (C90), although the graphical user interface and some other components necessarily make use of platform-specific extensions. The program was developed under Linux. The shared library and command-line client should compile and run on any platform that supports ISO/ANSI C and has the libraries listed in Table C.1. If the GNU readline library is found on the host system this will be used for gretlcli, providing a much enhanced editable command line. See the readline homepage.

    Library   purpose                 website
    zlib      data compression        info-zip.org
    libxml2   XML manipulation        xmlsoft.org
    LAPACK    linear algebra          netlib.org
    FFTW3     Fast Fourier Transform  fftw.org
    glib-2.0  numerous utilities      gtk.org

    Table C.1: Libraries required for building gretl

The graphical client program should compile and run on any system that, in addition to the above requirements, offers GTK version 2.4.0 or higher (see gtk.org).1

Gretl calls gnuplot for graphing. You can find gnuplot at gnuplot.info. As of this writing the most recent official release is 4.2.6 (of September, 2009). The gretl packages for MS Windows and Mac OS X come with current CVS gnuplot (version 4.5), and the gretl website offers information on building or installing gnuplot 4.5 on Linux.

Some features of gretl make use of portions of Adrian Feguin's gtkextra library. The relevant parts of this package are included (in slightly modified form) with the gretl source distribution.

A binary version of the program is available for the Microsoft Windows platform (Windows 2000 or higher). This version was cross-compiled under Linux using mingw (the GNU C compiler, gcc, ported for use with win32) and linked against the Microsoft C library, msvcrt.dll. The (free, open-source) Windows installer program is courtesy of Jordan Russell (jrsoftware.org).

1. Up till version 1.5.1, gretl could also be built using GTK 1.2. Support for this was dropped at version 1.6.0 of gretl.

C.2 Build instructions: a step-by-step guide

In this section we give instructions detailed enough to allow a user with only a basic knowledge of a Unix-type system to build gretl. These steps were tested on a fresh installation of Debian Etch. For other Linux distributions (especially Debian-based ones, like Ubuntu and its derivatives) little should change. Other Unix-like operating systems such as Mac OS X and BSD would probably require more substantial adjustments.

In this guided example, we will build gretl complete with documentation. This introduces a few more requirements, but gives you the ability to modify the documentation files as well, like the help files or the manuals.

Installing the prerequisites

We assume that the basic GNU utilities are already installed on the system, together with these other programs:

• some TeX/LaTeX system (texlive will do beautifully)
• Gnuplot
• ImageMagick

We also assume that the user has administrative privileges and knows how to install packages.
The examples below are carried out using the apt-get shell command, but they can be performed with menu-based utilities like aptitude, dselect or the GUI-based program synaptic. Users of Linux distributions which employ rpm packages (e.g. Red Hat/Fedora, Mandriva, SuSE) may want to refer to the dependencies page on the gretl website.

The first step is installing the C compiler and related basic utilities, if these are not already in place. On a Debian system, these are contained in a bunch of packages that can be installed via the command

    apt-get install gcc autoconf automake1.9 libtool flex bison gcc-doc \
      libc6-dev libc-dev gfortran gettext pkg-config

Then it is necessary to install the "development" (dev) packages for the libraries that gretl uses:

    Library   command
    GLIB      apt-get install libglib2.0-dev
    GTK 2.0   apt-get install libgtk2.0-dev
    PNG       apt-get install libpng12-dev
    XSLT      apt-get install libxslt1-dev
    LAPACK    apt-get install liblapack-dev
    FFTW      apt-get install libfftw3-dev
    READLINE  apt-get install libreadline-dev
    ZLIB      apt-get install zlib1g-dev
    XML       apt-get install libxml2-dev
    GMP       apt-get install libgmp3-dev
    MPFR      apt-get install libmpfr-dev

(GMP and MPFR are optional, but recommended.) The dev packages for these libraries are necessary to compile gretl; you'll also need the plain, non-dev library packages to run gretl, but most of these should already be part of a standard installation. In order to enable other optional features, like audio support, you may need to install more libraries.

Note! The above steps can be much simplified on Linux systems that provide deb-based package managers, such as Debian and its derivatives (Ubuntu, Knoppix and other distributions). The command

    apt-get build-dep gretl

will download and install all the necessary packages for building the version of gretl that is currently present in your APT sources. Technically, this does not guarantee that all the software necessary to build the CVS version is included, because the version of gretl on your repository may be quite old and build requirements may have changed in the meantime. However, the chances of a mismatch are rather remote for a reasonably up-to-date system, so the above command should in most cases take care of everything correctly.

Getting the source: release or CVS

At this point, it is possible to build from the source. You have two options here: obtain the latest released source package, or retrieve the current CVS version of gretl (CVS = Concurrent Versions System). The usual caveat applies to the CVS version, namely, that it may not build correctly and may contain "experimental" code; on the other hand, CVS often contains bug-fixes relative to the released version. If you want to help with testing and to contribute bug reports, we recommend using CVS gretl.

To work with the released source:

1. Download the gretl source package from gretl.sourceforge.net.
2. Unzip and untar the package. On a system with the GNU utilities available, the command would be tar xvfz gretl-N.tar.gz (replace N with the specific version number of the file you downloaded at step 1).
3. Change directory to the gretl source directory created at step 2 (e.g. gretl-1.6.6).
4. Proceed to the next section, "Configure and make".

To work with CVS you'll first need to install the cvs client program if it's not already on your system.
Relevant resources you may wish to consult include the CVS website at www.nongnu.org/cvs, general information on sourceforge CVS on the SourceForge CVS page, and instructions specific to gretl at the SF gretl CVS page.

When grabbing the CVS sources for the first time, you should first decide where you want to store the code. For example, you might create a directory called cvs under your home directory. Open a terminal window, cd into this directory, and type the following commands:

    cvs -d:pserver:anonymous@gretl.cvs.sourceforge.net:/cvsroot/gretl login
    cvs -z3 -d:pserver:anonymous@gretl.cvs.sourceforge.net:/cvsroot/gretl co -P gretl

After the first command you will be prompted for a password: just hit the Enter key. After the second command, cvs should create a subdirectory named gretl and fill it with the current sources.

When you want to update the source, this is very simple: just move into the gretl directory and type

    cvs update -d -P

Assuming you're now in the CVS gretl directory, you can proceed in the same manner as with the released source package.

Configure the source

The next command you need is ./configure; this is a complex script that detects which tools you have on your system and sets things up. The configure command accepts many options; you may want to run

    ./configure --help

first to see what options are available. One option you may wish to tweak is --prefix. By default the installation goes under /usr/local but you can change this. For example

    ./configure --prefix=/usr

will put everything under the /usr tree. Another useful option refers to the fact that, by default, gretl offers support for the gnome desktop. If you want to suppress the gnome-specific features you can pass the option --without-gnome to configure.

In order to have the documentation built, we need to pass the relevant option to configure, as in

    ./configure --enable-build-doc

But please note that this option will work only if you are using the CVS source.

You will see a number of checks being run, and if everything goes according to plan, you should see a summary similar to that displayed in Example C.1.

Example C.1: Output from ./configure --enable-build-doc

    Configuration:

      Installation path:                /usr/local
      Use readline library:             yes
      Use gnuplot for graphs:           yes
      Use PNG for gnuplot graphs:       yes
      Use LaTeX for typesetting output: yes
      Gnu Multiple Precision support:   yes
      MPFR support:                     no
      LAPACK support:                   yes
      FFTW3 support:                    yes
      Build with GTK version:           2.0
      Script syntax highlighting:       yes
      Use installed gtksourceview:      yes
      Build with gnome support:         no
      Build gretl documentation:        yes
      Build message catalogs:           yes
      Gnome installation prefix:        NA
      X-12-ARIMA support:               yes
      TRAMO/SEATS support:              yes
      Experimental audio support:       no

    Now type 'make' to build gretl.

If you're using CVS, it's a good idea to re-run the configure script after doing an update. This is not always necessary, but sometimes it is, and it never does any harm. For this purpose, you may want to write a little shell script that calls configure with any options you want to use.

Build and install

We are now ready to undertake the compilation proper: this is done by running the make command, which takes care of compiling all the necessary source files in the correct order. All you need to do is type

    make

This step will likely take several minutes to complete; a lot of output will be produced on screen.
Once this is done, you can install your freshly baked copy of gretl on your system via

    make install

On most systems, the make install command requires you to have administrative privileges. Hence, either you log in as root before launching make install or you may want to use the sudo utility:

    sudo make install

Appendix D
Numerical accuracy

Gretl uses double-precision arithmetic throughout, except for the multiple-precision plugin invoked by the menu item "Model, Other linear models, High precision OLS", which represents floating-point values using a number of bits given by the environment variable GRETL_MP_BITS (default value 256).

The normal equations of Least Squares are by default solved via Cholesky decomposition, which is highly accurate provided the matrix of cross-products of the regressors, X'X, is not very ill conditioned. If this problem is detected, gretl automatically switches to use QR decomposition.

The program has been tested rather thoroughly on the statistical reference datasets provided by NIST (the U.S. National Institute of Standards and Technology) and a full account of the results may be found on the gretl website (follow the link "Numerical accuracy").

To date, two published reviews have discussed gretl's accuracy: Giovanni Baiocchi and Walter Distaso (2003), and Talha Yalta and Yasemin Yalta (2007). We are grateful to these authors for their careful examination of the program. Their comments have prompted several modifications including the use of Stephen Moshier's cephes code for computing p-values and other quantities relating to probability distributions (see netlib.org), changes to the formatting of regression output to ensure that the program displays a consistent number of significant digits, and attention to compiler issues in producing the MS Windows version of gretl (which at one time was slightly less accurate than the Linux version).

Gretl now includes a "plugin" that runs the NIST linear regression test suite. You can find this under the "Tools" menu in the main window. When you run this test, the introductory text explains the expected result. If you run this test and see anything other than the expected result, please send a bug report to cottrell@wfu.edu.

All regression statistics are printed to 6 significant figures in the current version of gretl (except when the multiple-precision plugin is used, in which case results are given to 12 figures). If you want to examine a particular value more closely, first save it (for example, using the genr command) then print it using printf, to as many digits as you like (see the Gretl Command Reference).

Appendix E
Related free software

Gretl's capabilities are substantial, and are expanding. Nonetheless you may find there are some things you can't do in gretl, or you may wish to compare results with other programs. If you are looking for complementary functionality in the realm of free, open-source software we recommend the following programs. The self-description of each program is taken from its website.

• GNU R r-project.org: "R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. . . It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS." Comment: There are numerous add-on packages for R covering most areas of statistical work.
• GNU Octave www.octave.org: "GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language."

• JMulTi www.jmulti.de: "JMulTi was originally designed as a tool for certain econometric procedures in time series analysis that are especially difficult to use and that are not available in other packages, like Impulse Response Analysis with bootstrapped confidence intervals for VAR/VEC modelling. Now many other features have been integrated as well to make it possible to convey a comprehensive analysis." Comment: JMulTi is a java GUI program: you need a java run-time environment to make use of it.

As mentioned above, gretl offers the facility of exporting data in the formats of both Octave and R. In the case of Octave, the gretl data set is saved as a single matrix, X. You can pull the X matrix apart if you wish, once the data are loaded in Octave; see the Octave manual for details. As for R, the exported data file preserves any time series structure that is apparent to gretl. The series are saved as individual structures. The data should be brought into R using the source() command.

In addition, gretl has a convenience function for moving data quickly into R. Under gretl's "Tools" menu, you will find the entry "Start GNU R". This writes out an R version of the current gretl data set (in the user's gretl directory), and sources it into a new R session. The particular way R is invoked depends on the internal gretl variable Rcommand, whose value may be set under the "Tools, Preferences" menu. The default command is RGui.exe under MS Windows. Under X it is xterm -e R. Please note that at most three space-separated elements in this command string will be processed; any extra elements are ignored.

Appendix F
Listing of URLs

Below is a listing of the full URLs of websites mentioned in the text.

    Estima (RATS)                          http://www.estima.com/
    FFTW3                                  http://www.fftw.org/
    Gnome desktop homepage                 http://www.gnome.org/
    GNU Multiple Precision (GMP) library   http://gmplib.org/
    GNU Octave homepage                    http://www.octave.org/
    GNU R homepage                         http://www.r-project.org/
    GNU R manual                           http://cran.r-project.org/doc/manuals/R-intro.pdf
    Gnuplot homepage                       http://www.gnuplot.info/
    Gnuplot manual                         http://ricardo.ecn.wfu.edu/gnuplot.html
    Gretl data page                        http://gretl.sourceforge.net/gretl_data.html
    Gretl homepage                         http://gretl.sourceforge.net/
    GTK+ homepage                          http://www.gtk.org/
    GTK+ port for win32                    http://www.gimp.org/~tml/gimp/win32/
    Gtkextra homepage                      http://gtkextra.sourceforge.net/
    InfoZip homepage                       http://www.info-zip.org/pub/infozip/zlib/
    JMulTi homepage                        http://www.jmulti.de/
    JRSoftware                             http://www.jrsoftware.org/
    Mingw (gcc for win32) homepage         http://www.mingw.org/
    Minpack                                http://www.netlib.org/minpack/
    Penn World Table                       http://pwt.econ.upenn.edu/
    Readline homepage                      http://cnswww.cns.cwru.edu/~chet/readline/rltop.html
    Readline manual                        http://cnswww.cns.cwru.edu/~chet/readline/readline.html
    Xmlsoft homepage                       http://xmlsoft.org/

Bibliography

Agresti, A. (1992) "A Survey of Exact Inference for Contingency Tables", Statistical Science, 7, pp. 131–53.

Akaike, H. (1974) "A New Look at the Statistical Model Identification", IEEE Transactions on Automatic Control, AC-19, pp. 716–23.
Anderson, T. W. and Hsiao, C. (1981) "Estimation of Dynamic Models with Error Components", Journal of the American Statistical Association, 76, pp. 598–606.

Anderson, B. and Moore, J. (1979) Optimal Filtering, Upper Saddle River, NJ: Prentice-Hall.

Andrews, D. W. K. and Monahan, J. C. (1992) "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator", Econometrica, 60, pp. 953–66.

Arellano, M. (2003) Panel Data Econometrics, Oxford: Oxford University Press.

Arellano, M. and Bond, S. (1991) "Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations", The Review of Economic Studies, 58, pp. 277–97.

Baiocchi, G. and Distaso, W. (2003) "GRETL: Econometric software for the GNU generation", Journal of Applied Econometrics, 18, pp. 105–10.

Baltagi, B. H. (1995) Econometric Analysis of Panel Data, New York: Wiley.

Barrodale, I. and Roberts, F. D. K. (1974) "Solution of an overdetermined system of equations in the l1 norm", Communications of the ACM, 17, pp. 319–320.

Baxter, M. and King, R. G. (1999) "Measuring Business Cycles: Approximate Band-Pass Filters for Economic Time Series", The Review of Economics and Statistics, 81(4), pp. 575–593.

Beck, N. and Katz, J. N. (1995) "What to do (and not to do) with Time-Series Cross-Section Data", The American Political Science Review, 89, pp. 634–47.

Belsley, D., Kuh, E. and Welsch, R. (1980) Regression Diagnostics, New York: Wiley.

Berndt, E., Hall, B., Hall, R. and Hausman, J. (1974) "Estimation and Inference in Nonlinear Structural Models", Annals of Economic and Social Measurement, 3/4, pp. 653–65.

Blundell, R. and Bond, S. (1998) "Initial Conditions and Moment Restrictions in Dynamic Panel Data Models", Journal of Econometrics, 87, pp. 115–43.

Bollerslev, T. and Ghysels, E. (1996) "Periodic Autoregressive Conditional Heteroscedasticity", Journal of Business and Economic Statistics, 14, pp. 139–51.

Boswijk, H. Peter (1995) "Identifiability of Cointegrated Systems", Tinbergen Institute Discussion Paper 95-78, http://www.ase.uva.nl/pp/bin/258fulltext.pdf

Boswijk, H. Peter and Doornik, Jurgen A. (2004) "Identifying, estimating and testing restricted cointegrated systems: An overview", Statistica Neerlandica, 58/4, pp. 440–465.

Box, G. E. P. and Jenkins, G. (1976) Time Series Analysis: Forecasting and Control, San Francisco: Holden-Day.

Box, G. E. P. and Muller, M. E. (1958) "A Note on the Generation of Random Normal Deviates", Annals of Mathematical Statistics, 29, pp. 610–11.

Brand, C. and Cassola, N. (2004) "A money demand system for euro area M3", Applied Economics, 36/8, pp. 817–838.

Breusch, T. S. and Pagan, A. R. (1979) "A Simple Test for Heteroscedasticity and Random Coefficient Variation", Econometrica, 47, pp. 1287–94.

Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995) "A Limited Memory Algorithm for Bound Constrained Optimization", SIAM Journal on Scientific Computing, 16, pp. 1190–1208.

Cameron, A. C. and Trivedi, P. K. (2005) Microeconometrics, Methods and Applications, Cambridge: Cambridge University Press.

Chesher, A. and Irish, M. (1987) "Residual Analysis in the Grouped and Censored Normal Linear Model", Journal of Econometrics, 34, pp. 33–61.

Cribari-Neto, F. and Zarkos, S. G. (2003) "Econometric and Statistical Computing Using Ox", Computational Economics, 21, pp. 277–95.

Cureton, E. (1967) "The Normal Approximation to the Signed-Rank Sampling Distribution when Zero Differences are Present", Journal of the American Statistical Association, 62, pp. 1068–1069.
Davidson, R. and MacKinnon, J. G. (1993) Estimation and Inference in Econometrics, New York: Oxford University Press.

Davidson, R. and MacKinnon, J. G. (2004) Econometric Theory and Methods, New York: Oxford University Press.

de Jong, P. (1991) "The Diffuse Kalman Filter", The Annals of Statistics, 19, pp. 1073–83.

Doornik, J. A. (1995) "Testing general restrictions on the cointegrating space", Discussion Paper, Nuffield College, http://www.doornik.com/research/coigen.pdf

Doornik, J. A. (1998) "Approximations to the Asymptotic Distribution of Cointegration Tests", Journal of Economic Surveys, 12, pp. 573–93. Reprinted with corrections in M. McAleer and L. Oxley, Practical Issues in Cointegration Analysis, Oxford: Blackwell, 1999.

Doornik, J. A. (2007) Object-Oriented Matrix Programming Using Ox, 3rd edition, London: Timberlake Consultants Press and Oxford: www.doornik.com.

Doornik, J. A. and Hansen, H. (1994) "An Omnibus Test for Univariate and Multivariate Normality", working paper, Nuffield College, Oxford.

Edgerton, D. and Wells, C. (1994) "Critical Values for the Cusumsq Statistic in Medium and Large Sized Samples", Oxford Bulletin of Economics and Statistics, 56, pp. 355–65.

Elliott, G., Rothenberg, T. J. and Stock, J. H. (1996) "Efficient Tests for an Autoregressive Unit Root", Econometrica, 64, pp. 813–36.

Fiorentini, G., Calzolari, G. and Panattoni, L. (1996) "Analytic Derivatives and the Computation of GARCH Estimates", Journal of Applied Econometrics, 11, pp. 399–417.

Frigo, M. and Johnson, S. G. (2005) "The Design and Implementation of FFTW3", Proceedings of the IEEE, 93/2, pp. 216–231. Invited paper, Special Issue on Program Generation, Optimization, and Platform Adaptation.

Godfrey, L. G. (1994) "Testing for Serial Correlation by Variable Addition in Dynamic Models Estimated by Instrumental Variables", The Review of Economics and Statistics, 76/3, pp. 550–59.

Golub, G. H. and Van Loan, C. F. (1996) Matrix Computations, 3rd edition, Baltimore and London: The Johns Hopkins University Press.

Goossens, M., Mittelbach, F. and Samarin, A. (2004) The LaTeX Companion, 2nd edition, Boston: Addison-Wesley.

Gourieroux, C. and Monfort, A. (1996) Simulation-Based Econometric Methods, Oxford: Oxford University Press.

Gourieroux, C., Monfort, A., Renault, E. and Trognon, A. (1987) "Generalized Residuals", Journal of Econometrics, 34, pp. 5–32.

Greene, William H. (2000) Econometric Analysis, 4th edition, Upper Saddle River, NJ: Prentice-Hall.

Greene, William H. (2003) Econometric Analysis, 5th edition, Upper Saddle River, NJ: Prentice-Hall.

Gujarati, Damodar N. (2003) Basic Econometrics, 4th edition, Boston, MA: McGraw-Hill.

Hall, Alastair D. (2005) Generalized Method of Moments, Oxford: Oxford University Press.

Hamilton, James D. (1994) Time Series Analysis, Princeton, NJ: Princeton University Press.

Hannan, E. J. and Quinn, B. G. (1979) "The Determination of the Order of an Autoregression", Journal of the Royal Statistical Society, B, 41, pp. 190–95.

Hansen, L. P. (1982) "Large Sample Properties of Generalized Method of Moments Estimation", Econometrica, 50, pp. 1029–1054.

Hansen, L. P. and Singleton, K. J. (1982) "Generalized Instrumental Variables Estimation of Nonlinear Rational Expectations Models", Econometrica, 50, pp. 1269–86.

Harvey, Andrew C. (1989) Forecasting Structural Time Series Models and the Kalman Filter, Cambridge: Cambridge University Press.

Harvey, Andrew C. and Proietti, T. (2005) Readings in Unobserved Component Models, Oxford: Oxford University Press.
Hausman, J. A. (1978) "Specification Tests in Econometrics", Econometrica, 46, pp. 1251–71.

Heckman, J. (1979) "Sample Selection Bias as a Specification Error", Econometrica, 47, pp. 153–61.

Hodrick, Robert and Prescott, Edward C. (1997) "Postwar U.S. Business Cycles: An Empirical Investigation", Journal of Money, Credit and Banking, 29, pp. 1–16.

Imhof, J. P. (1961) "Computing the Distribution of Quadratic Forms in Normal Variables", Biometrika, 48, pp. 419–26.

Johansen, Søren (1995) Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, Oxford: Oxford University Press.

Keane, Michael P. and Wolpin, Kenneth I. (1997) "The Career Decisions of Young Men", Journal of Political Economy, 105, pp. 473–522.

Kiviet, J. F. (1986) "On the Rigour of Some Misspecification Tests for Modelling Dynamic Relationships", Review of Economic Studies, 53, pp. 241–61.

Koenker, R. (1981) "A Note on Studentizing a Test for Heteroscedasticity", Journal of Econometrics, 17, pp. 107–12.

Koenker, R. (1994) "Confidence Intervals for regression quantiles", in P. Mandl and M. Huskova (eds), Asymptotic Statistics, pp. 349–359, New York: Springer-Verlag.

Koenker, R. and Bassett, G. (1978) "Regression quantiles", Econometrica, 46, pp. 33–50.

Koenker, R. and Hallock, K. (2001) "Quantile Regression", Journal of Economic Perspectives, 15/4, pp. 143–56.

Koenker, R. and Machado, J. (1999) "Goodness of fit and related inference processes for quantile regression", Journal of the American Statistical Association, 94, pp. 1296–1310.

Koenker, R. and Zhao, Q. (1994) "L-estimation for linear heteroscedastic models", Journal of Nonparametric Statistics, 3, pp. 223–235.

Koopman, S. J. (1997) "Exact Initial Kalman Filtering and Smoothing for Nonstationary Time Series Models", Journal of the American Statistical Association, 92, pp. 1630–38.

Koopman, S. J., Shephard, N. and Doornik, J. A. (1999) "Statistical algorithms for models in state space using SsfPack 2.2", Econometrics Journal, 2, pp. 113–66.

Kwiatkowski, D., Phillips, P. C. B., Schmidt, P. and Shin, Y. (1992) "Testing the Null of Stationarity Against the Alternative of a Unit Root: How Sure Are We That Economic Time Series Have a Unit Root?", Journal of Econometrics, 54, pp. 159–78.

Locke, C. (1976) "A Test for the Composite Hypothesis that a Population has a Gamma Distribution", Communications in Statistics - Theory and Methods, A5(4), pp. 351–64.

Lucchetti, R., Papi, L. and Zazzaro, A. (2001) "Banks' Inefficiency and Economic Growth: A Micro-Macro Approach", Scottish Journal of Political Economy, 48, pp. 400–424.

McCullough, B. D. and Renfro, Charles G. (1998) "Benchmarks and software standards: A case study of GARCH procedures", Journal of Economic and Social Measurement, 25, pp. 59–71.

MacKinnon, J. G. (1996) "Numerical Distribution Functions for Unit Root and Cointegration Tests", Journal of Applied Econometrics, 11, pp. 601–18.

MacKinnon, J. G. and White, H. (1985) "Some Heteroskedasticity-Consistent Covariance Matrix Estimators with Improved Finite Sample Properties", Journal of Econometrics, 29, pp. 305–25.

Maddala, G. S. (1992) Introduction to Econometrics, 2nd edition, Englewood Cliffs, NJ: Prentice-Hall.

Marsaglia, G. and Tsang, W. W. (2000) "The Ziggurat Method for Generating Random Variables", Journal of Statistical Software, 5, pp. 1–7.

Matsumoto, M. and Nishimura, T. (1998) "Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator", ACM Transactions on Modeling and Computer Simulation, 8, pp. 3–30.
Mroz, T. (1987) "The Sensitivity of an Empirical Model of Married Women's Hours of Work to Economic and Statistical Assumptions", Econometrica, 55, pp. 765–99.

Nash, J. C. (1990) Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation, 2nd edition, Bristol: Adam Hilger.

Nerlove, M. (1999) "Properties of Alternative Estimators of Dynamic Panel Models: An Empirical Analysis of Cross-Country Data for the Study of Economic Growth", in Hsiao, C., Lahiri, K., Lee, L.-F. and Pesaran, M. H. (eds) Analysis of Panels and Limited Dependent Variable Models, Cambridge: Cambridge University Press.

Neter, J., Wasserman, W. and Kutner, M. H. (1990) Applied Linear Statistical Models, 3rd edition, Boston, MA: Irwin.

Newey, W. K. and West, K. D. (1987) "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix", Econometrica, 55, pp. 703–8.

Newey, W. K. and West, K. D. (1994) "Automatic Lag Selection in Covariance Matrix Estimation", Review of Economic Studies, 61, pp. 631–53.

Pesaran, M. H. and Taylor, L. W. (1999) "Diagnostics for IV Regressions", Oxford Bulletin of Economics and Statistics, 61/2, pp. 255–81.

Pollock, D. S. G. (1999) A Handbook of Time-Series Analysis, Signal Processing and Dynamics, New York: Academic Press.

Portnoy, S. and Koenker, R. (1997) "The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators", Statistical Science, 12/4, pp. 279–300.

R Core Development Team (2000) An Introduction to R, version 1.1.1.

Ramanathan, Ramu (2002) Introductory Econometrics with Applications, 5th edition, Fort Worth: Harcourt.

Schwarz, G. (1978) "Estimating the dimension of a model", Annals of Statistics, 6, pp. 461–64.

Shapiro, S. and Chen, L. (2001) "Composite Tests for the Gamma Distribution", Journal of Quality Technology, 33, pp. 47–59.

Silverman, B. W. (1986) Density Estimation for Statistics and Data Analysis, London: Chapman and Hall.

Steinhaus, Stefan (1999) "Comparison of mathematical programs for data analysis" (Edition 3), University of Frankfurt, http://www.informatik.uni-frankfurt.de/~stst/ncrunch/.

Stock, James H. and Watson, Mark W. (2003) Introduction to Econometrics, Boston, MA: Addison-Wesley.

Stock, James H., Wright, Jonathan H. and Yogo, Motohiro (2002) "A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments", Journal of Business & Economic Statistics, 20(4), pp. 518–29.

Stock, James H. and Yogo, Motohiro (2003) "Testing for Weak Instruments in Linear IV Regression", revised version of NBER Technical Working Paper 284, available at http://ksghome.harvard.edu/~JStock/pdf/rfa_6.pdf.

Stokes, Houston H. (2004) "On the advantage of using two or more econometric software systems to solve the same problem", Journal of Economic and Social Measurement, 29, pp. 307–20.

Swamy, P. A. V. B. and Arora, S. S. (1972) "The Exact Finite Sample Properties of the Estimators of Coefficients in the Error Components Regression Models", Econometrica, 40, pp. 261–75.

Theil, H. (1961) Economic Forecasting and Policy, Amsterdam: North-Holland.

Theil, H. (1966) Applied Economic Forecasting, Amsterdam: North-Holland.

Verbeek, Marno (2004) A Guide to Modern Econometrics, 2nd edition, New York: Wiley.

White, H. (1980) "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity", Econometrica, 48, pp. 817–38.
Windmeijer, F. (2005) "A Finite Sample Correction for the Variance of Linear Efficient Two-step GMM Estimators", Journal of Econometrics, 126, pp. 25–51.

Wooldridge, Jeffrey M. (2002a) Econometric Analysis of Cross Section and Panel Data, Cambridge, Mass.: MIT Press.

Wooldridge, Jeffrey M. (2002b) Introductory Econometrics, A Modern Approach, 2nd edition, Mason, Ohio: South-Western.

Yalta, A. Talha and Yalta, A. Yasemin (2007) "GRETL 1.6.0 and its numerical accuracy", Journal of Applied Econometrics, 22, pp. 849–54.