How to tackle heteroscedasticity? Our ability to tackle the problem depends upon the assumptions we can make about the error variance. The following situations may emerge.

i) When σᵢ² is known: In this situation the CLRM

Yᵢ = β₁ + β₂Xᵢ + uᵢ

can be transformed by dividing each term by the corresponding σᵢ; thus

Yᵢ/σᵢ = β₁(1/σᵢ) + β₂(Xᵢ/σᵢ) + (uᵢ/σᵢ)

This effectively transforms the error term to uᵢ/σᵢ, which is homoscedastic (its variance is 1 for every observation), and therefore the OLS estimators of the transformed model are free of heteroscedasticity. The estimators of β₁ and β₂ in this case are called the Weighted Least Squares (WLS) estimators.

ii) When σᵢ² is unknown: We make an assumption about the error variance:

a) Error variance proportional to Xᵢ², such that E(uᵢ²) = σ²Xᵢ². Then, dividing by Xᵢ, the transformed regression model is

Yᵢ/Xᵢ = β₁(1/Xᵢ) + β₂ + (uᵢ/Xᵢ)
Research Methodology (ECO 484): Bachelor of Social Sciences, Level 4 Semester 2 Dr. Mohammad Sadiqunnabi Choudhury

Here, β₂, the coefficient on Xᵢ in the original model, becomes the constant term. The error term uᵢ/Xᵢ has homoscedastic variance, since E(uᵢ/Xᵢ)² = E(uᵢ²)/Xᵢ² = σ².

b) Error variance proportional to Xᵢ, such that E(uᵢ²) = σ²Xᵢ. Then we divide by √Xᵢ to get the transformed regression model

Yᵢ/√Xᵢ = β₁(1/√Xᵢ) + β₂√Xᵢ + (uᵢ/√Xᵢ)

Here the transformed model has no constant term. The error term uᵢ/√Xᵢ will be free of heteroscedasticity, since E(uᵢ/√Xᵢ)² = E(uᵢ²)/Xᵢ = σ².

The problem of autocorrelation: The classical regression model also assumes that the disturbance terms uᵢ have no serial correlation. Symbolically, no autocorrelation means E(uᵢuⱼ) = 0 for i ≠ j. But in many situations this assumption may not hold. Autocorrelation can arise in economic data on account of many factors:

i) Business cycle: cyclical ups and downs in economic time series continue until something happens that reverses the situation.
ii) Misspecification of the model: too few variables in the model leave large systematic components to be clubbed with the errors.
iii) Cobweb phenomenon: certain types of economic time series (especially agricultural output) in which the supply-demand interaction either converges to equilibrium or diverges from it.

The consequences of autocorrelation are similar to those of heteroscedasticity listed in the previous section. Here too the OLS estimators, though still unbiased, are no longer BLUE, and the t and F tests are no longer reliable. Therefore, the computed value of R² is not a reliable estimate of the true goodness of fit.

There are many tests for detecting autocorrelation, ranging from visual inspection of error plots to the Runs Test and the Swed-Eisenhart critical runs test. But the most commonly used is the Durbin-Watson d test, defined on the OLS residuals eₜ as

d = Σₜ₌₂ⁿ (eₜ − eₜ₋₁)² / Σₜ₌₁ⁿ eₜ²

However, again, we are holding back the practical details of detecting and correcting autocorrelation for reasons of limited space here.
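As an illustration of the two devices above, the following Python sketch applies the case ii(a) WLS transformation (dividing through by Xᵢ) to simulated data, and then computes the Durbin-Watson d statistic from OLS residuals. The data, seed, and coefficient values are hypothetical, chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data with error variance proportional to X_i^2 (case ii-a):
n = 200
X = rng.uniform(1.0, 10.0, n)
u = rng.normal(0.0, 0.5 * X)            # sd of u_i proportional to X_i
Y = 2.0 + 3.0 * X + u                   # true beta1 = 2, beta2 = 3

# WLS via the transformed model: regress Y/X on (1/X, 1).
# The coefficient on 1/X estimates beta1; the constant term estimates beta2.
Z = np.column_stack([1.0 / X, np.ones(n)])
coef, *_ = np.linalg.lstsq(Z, Y / X, rcond=None)
beta1_hat, beta2_hat = coef

# Durbin-Watson d from the OLS residuals of the untransformed model:
def durbin_watson(e):
    """d = sum_{t=2}^n (e_t - e_{t-1})^2 / sum_{t=1}^n e_t^2."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

W = np.column_stack([np.ones(n), X])
b_ols, *_ = np.linalg.lstsq(W, Y, rcond=None)
resid = Y - W @ b_ols
d = durbin_watson(resid)                # near 2 when there is no autocorrelation
```

Since these simulated errors are independent across observations, d should fall near 2; values approaching 0 signal positive autocorrelation and values approaching 4 negative autocorrelation.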
6.4.3 Maximum likelihood estimation

Let y be an n-vector of sample values, dependent on some k-vector of unknown parameters, θ. Let the joint density function be f(y; θ), which indicates the dependence on θ. This density may be interpreted in two ways. For a given θ it indicates the probability of a set of sample outcomes. Alternatively, it may be interpreted as a function of θ conditional on a set of sample outcomes. The latter interpretation is referred to as the likelihood function:

L(θ; y) = f(y; θ)
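To make the two readings of f(y; θ) concrete, here is a minimal Python sketch assuming an i.i.d. normal sample, so that θ = (μ, σ²). The sample and parameter values are hypothetical; the closed-form maximum likelihood estimators shown are the standard ones for the normal model.

```python
import numpy as np

def log_likelihood(theta, y):
    """Log of the joint normal density f(y; theta), read as a function of
    theta = (mu, sigma2) with the sample y held fixed."""
    mu, sigma2 = theta
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((y - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(1)
y = rng.normal(5.0, 2.0, size=500)      # hypothetical sample: mu = 5, sigma = 2

# Closed-form MLEs for the normal model:
mu_hat = y.mean()
sigma2_hat = ((y - mu_hat) ** 2).mean() # note: divides by n, not n - 1

# The MLE attains a (weakly) higher likelihood than any other candidate theta:
assert log_likelihood((mu_hat, sigma2_hat), y) >= log_likelihood((4.0, 4.0), y)
```

The defining idea is visible in the last line: among all candidate values of θ, the maximum likelihood estimate is the one that makes the observed sample most probable.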