Week 11 Tutorial Exercises

Review Questions (these may or may not be discussed in tutorial classes)

What are the main features of time series data? How do time series data differ from cross-sectional data?

The main features include the following. First, time series data have a temporal ordering, which matters in regression analysis because of the second feature. Second, many economic time series are serially correlated (or autocorrelated), meaning that future observations depend on present and past observations. Third, many economic time series contain trends and seasonality. In comparison to cross-sectional data, the major difference is that time series data are not a random sample: for a random sample, observations are required to be independent of one another.

What is a stochastic process and its realisation?

A stochastic process (SP) is a random variable that depends on the time index. At any fixed point in time, the SP is a random variable (and hence has a distribution). The SP can be viewed as a "random curve" with the horizontal axis being the time index, where the outcome of the underlying experiment is a "curve" (a time series plot). The time series we observe is viewed as a realisation (or outcome) of this "random curve", that is, of the SP.

What is serial correlation or autocorrelation?

The serial (or auto) correlation of a time series variable is the correlation between the variable at one point in time and the same variable at another point in time. The serial correlation of yt at lag h is usually written as Corr(yt, yt-h) = Cov(yt, yt-h)/[Var(yt)Var(yt-h)]^(1/2).

What is a finite distributed lag model? What is the long-run propensity (LRP)? How would you estimate the LRP and the associated standard error (say, in STATA)?

Read ie_Slides10 pages 5-7. Read Section 10.2. Try Wooldridge 10.3.

What are TS1-6 (assumptions about time series regression)? How do they differ from the assumptions in MLR1-6?
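The autocorrelation formula above is easy to check numerically. Below is a minimal Python sketch (numpy only); the AR(1) process and its 0.8 coefficient are illustrative assumptions, not part of the tutorial material:

```python
import numpy as np

def sample_autocorr(y, h):
    """Sample version of Corr(y_t, y_{t-h}) = Cov/sqrt(Var*Var)."""
    y = np.asarray(y, dtype=float)
    y0, yh = y[h:], y[:-h]          # pairs (y_t, y_{t-h})
    num = np.cov(y0, yh, ddof=1)[0, 1]
    den = np.sqrt(np.var(y0, ddof=1) * np.var(yh, ddof=1))
    return num / den

# A serially correlated series: y_t = 0.8 y_{t-1} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + e[t]

print(sample_autocorr(y, 1))  # should be in the neighbourhood of 0.8
```

The first-order sample autocorrelation lands near the true AR(1) coefficient, which is exactly what "future observations depend on present and past observations" means in practice.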
TS1-6 include: (1) linearity in parameters; (2) no perfect collinearity; (3) strict zero conditional mean (strict exogeneity); (4) homoskedasticity; (5) no serial correlation; (6) normality. These differ from MLR1-6 mainly in that MLR2 (random sampling) does not hold for time series data. To ensure the unbiasedness of the OLS estimators, TS3 is needed. To ensure the validity of the OLS standard errors and of the t and F statistics for inference, TS3-TS6 are needed. These are very strong assumptions and can be relaxed when the sample size is large.

What are "strictly exogenous" regressors and "contemporaneously exogenous" regressors?

Strictly exogenous regressors satisfy assumption TS3, whereas contemporaneously exogenous regressors satisfy only condition (10.10) (or (z10) of the Slides), which is weaker than TS3.

What is a trending time series? What is a time trend?

A time series is trending if it has a tendency to grow (or shrink) over time. A time trend is a function of the time index (e.g., linear, quadratic), which can be used to mimic (or model) the trending component of a time series.

Why may a regression with trending time series produce "spurious" results?

First, two time trends are always "correlated" because they grow in the same (or opposite) direction. Now consider two unrelated time series, each of which contains a time trend. Since the time trends in the two series are "correlated", regressing one series on the other will produce a significant slope coefficient. Such "statistical significance" is spurious: it is purely induced by the time trends and says little about the true relationship between the two series.

Why would you include a time trend in regressions with trending variables?

Including a time trend allows one to study the relationship among the time series variables that is not induced by their time trends.

What is seasonality in a time series? Give an example of a time series variable with seasonality.
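The spurious-regression point above can be illustrated by simulation. A minimal sketch, assuming two unrelated series built around made-up linear trends (slopes 0.5 and 0.3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n, dtype=float)

# Two unrelated series; each contains a linear time trend.
y = 0.5 * t + rng.standard_normal(n)
x = 0.3 * t + rng.standard_normal(n)

def slope_tstat(y, X):
    """OLS coefficient on the last column of X and its t statistic."""
    b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    s2 = res[0] / (len(y) - X.shape[1])          # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
    return b[-1], b[-1] / se

# Regressing y on x alone: the shared trends make x look "significant".
_, t_spurious = slope_tstat(y, np.column_stack([np.ones(n), x]))

# Including the time trend as a regressor removes the spurious link.
_, t_detrended = slope_tstat(y, np.column_stack([np.ones(n), t, x]))

print(abs(t_spurious), abs(t_detrended))  # huge vs. small
```

The t statistic on x collapses once the trend is controlled for, which is precisely why a time trend belongs in regressions with trending variables.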
Seasonality in a time series is the fluctuation that repeats itself every year (or every week, or every day). An example would be the daily revenue of a pub, which is likely to have day-of-week seasonality.

For quarterly data, how would you define seasonal dummy variables for a regression model?

We need to define three dummies. For instance, taking the first quarter as the base, we define dummy variables Q2, Q3 and Q4 for the second, third and fourth quarters and include them in the regression model.

Problem Set

Q1. Wooldridge 10.1

(i) Disagree. Most time series processes are correlated over time, many of them strongly so. This means they cannot be independent across observations, which simply represent different time periods. Even series that do appear to be roughly uncorrelated, such as stock returns, do not appear to be independently distributed, as they exhibit dynamic forms of heteroskedasticity.

(ii) Agree. This follows immediately from Theorem 10.1. In particular, we do not need the homoskedasticity and no-serial-correlation assumptions.

(iii) Disagree. Trending variables are used all the time as dependent variables in regression models. We do need to be careful in interpreting the results, because we may simply find a spurious association between yt and trending explanatory variables. Including a trend in the regression is a good idea with trending dependent or independent variables. As discussed in Section 10.5, the usual R-squared can be misleading when the dependent variable is trending.

(iv) Agree. With annual data, each time period represents a year and is not associated with any season.

Q2. Wooldridge 10.2

We follow the hint and write gGDPt-1 = α0 + δ0 intt-1 + δ1 intt-2 + ut-1, and plug this into the right-hand side of the intt equation:

intt = γ0 + γ1(α0 + δ0 intt-1 + δ1 intt-2 + ut-1 − 3) + vt
     = (γ0 + γ1α0 − 3γ1) + γ1δ0 intt-1 + γ1δ1 intt-2 + γ1 ut-1 + vt.
Now, by assumption, ut-1 has zero mean and is uncorrelated with all right-hand-side variables in the previous equation, except itself of course. So Cov(intt, ut-1) = E(intt ut-1) = γ1 E(ut-1^2) > 0, because γ1 > 0. If σ^2 = E(ut^2) for all t, then Cov(intt, ut-1) = γ1σ^2. This violates the strict exogeneity assumption TS3: while ut is uncorrelated with intt, intt-1, and so on, ut is correlated with intt+1.

Q3. Wooldridge 10.7

(i) pet-1 and pet-2 must be increasing by the same amount as pet.

(ii) The long-run effect, by definition, is the change in gfr when pe increases permanently. A permanent increase means the level of pe rises and stays at the new level, which is achieved by increasing pet, pet-1, and pet-2 by the same amount.

Q4. Wooldridge C10.10 (intdef_c10_10.do)

(i) The sample correlation between inf and def is only about .098, which is pretty small. Perhaps surprisingly, inflation and the deficit rate are practically uncorrelated over this period. Of course, this is a good thing for estimating the effects of each variable on i3, as it implies almost no multicollinearity.

(ii) The equation with the lags is

i3t = 1.61 + .343 inft + .382 inft-1 − .190 deft + .569 deft-1
     (0.40)  (.125)     (.134)       (.221)      (.197)
n = 55, R2 = .685, adj. R2 = .660.

(iii) The estimated LRP of i3 with respect to inf is .343 + .382 = .725, which is somewhat larger than the .606 obtained from the static model in (10.15). But the estimates are fairly close, considering the size and significance of the coefficient on inft-1.

(iv) The F statistic for the joint significance of inft-1 and deft-1 is about 5.22, with p-value ≈ .009. So they are jointly significant at the 1% level; it seems that both lags belong in the model.
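As the review questions ask, the LRP in a finite distributed lag model is the sum of the lag coefficients, and its standard error can be obtained by reparameterising the model so that the LRP appears as a single coefficient (in STATA, lincom does the same calculation). A minimal numpy sketch on simulated data; the coefficients 0.5 and 0.3 (true LRP 0.8) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)
u = 0.5 * rng.standard_normal(n)
# Simulated FDL model: y_t = 1 + 0.5 x_t + 0.3 x_{t-1} + u_t
y = 1 + 0.5 * x + 0.3 * np.roll(x, 1) + u
y, xt, xl = y[1:], x[1:], x[:-1]    # drop t = 0, where the lag is undefined

# Direct regression of y on (1, x_t, x_{t-1}); LRP = d0 + d1.
X = np.column_stack([np.ones(n - 1), xt, xl])
b = np.linalg.lstsq(X, y, rcond=None)[0]
lrp_direct = b[1] + b[2]

# Reparameterised regression of y on (1, x_t, x_{t-1} - x_t):
# substituting d0 = LRP - d1 shows the coefficient on x_t is now the
# LRP itself, so its usual OLS standard error is the LRP's standard error.
Z = np.column_stack([np.ones(n - 1), xt, xl - xt])
g = np.linalg.lstsq(Z, y, rcond=None)[0]
lrp_reparam = g[1]

print(lrp_direct, lrp_reparam)  # identical up to rounding, near 0.8
```

Both regressions span the same column space, so they return the same LRP; the reparameterisation is purely a device for reading off the standard error.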