# Ref1 (File: C:\WINWORD\MSCBE\Ref1.DOC) - University of Strathclyde


UNIVERSITY OF STRATHCLYDE

## ESTIMATION AND STATISTICAL INFERENCE UNDER A VARIETY OF DIFFERENT ASSUMPTIONS

### Aims

In these notes, we examine the properties of parameter estimators and of several regression-based statistics under a variety of assumptions about the (observable and unobservable) variables entering the regression model. Our focus is on one particular estimator: the Ordinary Least Squares (OLS) estimator.

## CLASS 1: NON-STOCHASTIC REGRESSORS

### CASE 1.1: THE CLASSICAL LINEAR REGRESSION MODEL WITH NON-STOCHASTIC REGRESSOR VARIABLES AND NORMALLY DISTRIBUTED DISTURBANCES

### TABLE 1: REGRESSION MODEL ASSUMPTIONS

The k-variable regression model is

    Y_t = β_1 + β_2 X_2t + ... + β_k X_kt + u_t,   t = 1,...,T    (1)

(1) The dependent variable is a linear function of the set of non-stochastic regressor variables and a random disturbance term, as specified in Equation (1). No variables which influence Y are omitted from the regressor set X (where X is taken here to mean the set of variables X_j, j = 1,...,k), nor are any variables which do not influence Y included in the regressor set. In other words, the model specification is correct.

(2) The set of regressors is not perfectly collinear. This means that no regressor variable can be obtained as an exact linear combination of any subset of the other regressor variables.

(3) The disturbance process has zero mean. That is, E(u_t) = 0 for all t.

(4) The disturbance terms u_t, t = 1,...,T, are serially uncorrelated. That is, Cov(u_t, u_s) = 0 for all s ≠ t.

(5) The disturbances have a constant variance. That is, Var(u_t) = σ² for all t.

(6) The equation disturbances u_t are normally distributed, for all t.

These assumptions are taken to hold for all subsequent models discussed in these notes unless stated otherwise.
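Assumptions (3) to (6) describe the disturbance process. As a minimal illustration (the sample size and σ below are assumed for the example, not taken from the notes), the following Python sketch draws disturbances satisfying those assumptions and checks the implied sample moments:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 100_000          # large T so sample moments sit close to population values
sigma = 2.0          # assumed disturbance standard deviation (illustrative)

# Normally distributed disturbances with zero mean, constant variance,
# and no serial correlation (independent draws)
u = rng.normal(loc=0.0, scale=sigma, size=T)

print(u.mean())                          # assumption (3): E(u_t) = 0
print(u.var())                           # assumption (5): Var(u_t) = sigma^2 = 4
print(np.corrcoef(u[:-1], u[1:])[0, 1])  # assumption (4): Cov(u_t, u_s) = 0 for s != t
```

With independent draws, the lag-1 sample correlation is close to zero but not exactly zero in any finite sample; the assumptions are statements about the population process, not about any one realisation.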
## THE LINEAR REGRESSION MODEL IN MATRIX NOTATION

In ordinary algebra, the k-variable linear regression model is

    Y_t = β_1 + β_2 X_2t + ... + β_k X_kt + u_t,   t = 1,...,T    (1)

For notational convenience, we can reverse the variable subscript notation to

    Y_t = β_1 + β_2 X_t2 + ... + β_k X_tk + u_t,   t = 1,...,T

Now define the following vectors: first, a k × 1 column vector of parameters

    β = [β_1, β_2, ..., β_k]'

and secondly, a 1 × k row vector of the t-th observation on each of the k variables (the first element is 1, corresponding to the intercept):

    x_t = (1, X_t2, X_t3, ..., X_tk)

Equation (1) may now be written as

    Y_t = x_t β + u_t,   t = 1,...,T    (1b)

You should now convince yourself that equations (1) and (1b) are identical.

Now define the following vectors or matrices: firstly, Y, a T × 1 vector of all T observations on Y_t:

    Y = [Y_1, Y_2, ..., Y_T]'
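The equivalence of (1) and (1b) is easy to check numerically. The following Python sketch uses illustrative values (k = 3 and the particular numbers are assumptions for the example, not from the notes): the explicit sum of equation (1) and the inner product x_t β of equation (1b) give the same number.

```python
import numpy as np

# Illustrative values: k = 3 parameters and one observation t
beta = np.array([1.0, 2.0, -0.5])   # parameter vector (beta_1, beta_2, beta_3)
x_t = np.array([1.0, 4.0, 7.0])     # row vector (1, X_t2, X_t3)

# Equation (1): the explicit sum beta_1 + beta_2*X_t2 + beta_3*X_t3
rhs_scalar = beta[0] + beta[1] * x_t[1] + beta[2] * x_t[2]

# Equation (1b): the inner product x_t * beta
rhs_vector = x_t @ beta

print(rhs_scalar, rhs_vector)   # both equal 1 + 8 - 3.5 = 5.5
```

The leading 1 in x_t is what lets the intercept β_1 be absorbed into the inner product.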

Secondly, u, a T × 1 vector of disturbances:

    u = [u_1, u_2, ..., u_T]'

And finally X, a T × k matrix of T observations on each of the k explanatory variables:

    X = [ 1  X_12  X_13  ...  X_1k ]
        [ 1  X_22  X_23  ...  X_2k ]
        [ :   :     :           :  ]
        [ 1  X_T2  X_T3  ...  X_Tk ]
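With these stacked objects, all T equations of form (1b) can be written at once as Y = Xβ + u. The Python sketch below (dimensions and parameter values are illustrative assumptions) builds the objects, verifies that row t of the matrix product reproduces the single-observation form (1b), and then computes the OLS estimator; the formula b = (X'X)⁻¹X'Y is the standard least-squares result, though its derivation lies beyond this excerpt of the notes.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 50, 3

# X: T x k non-stochastic design matrix, first column of ones for the intercept
t_grid = np.arange(T, dtype=float)
X = np.column_stack([np.ones(T), t_grid, t_grid ** 2])

beta = np.array([1.0, 0.5, -0.02])   # k x 1 true parameter vector (illustrative)
u = rng.normal(0.0, 1.0, size=T)     # T x 1 disturbance vector

# All T equations Y_t = x_t beta + u_t stacked at once: Y = X beta + u
Y = X @ beta + u

# Row t of the stacked system equals the single-observation form (1b)
t = 7
assert np.isclose(Y[t], X[t] @ beta + u[t])

# OLS estimator (standard result): b = (X'X)^{-1} X'Y, computed via a linear solve
b = np.linalg.solve(X.T @ X, X.T @ Y)
print(b)   # close to, but not exactly, the true beta
```

Solving the normal equations with `np.linalg.solve` rather than explicitly inverting X'X is the usual numerically safer choice.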