File: C:\WINWORD\MSCBE\Ref1.DOC

UNIVERSITY OF STRATHCLYDE

ESTIMATION AND STATISTICAL INFERENCE UNDER A VARIETY OF DIFFERENT ASSUMPTIONS

Aims

In these notes, we examine the properties of parameter estimators and of several regression-based statistics under a variety of assumptions about the (observable and unobservable) variables entering the regression model. Our focus is on one particular estimator: the Ordinary Least Squares (OLS) estimator.

CLASS 1: NON-STOCHASTIC REGRESSORS

CASE 1.1: THE CLASSICAL LINEAR REGRESSION MODEL WITH NON-STOCHASTIC REGRESSOR VARIABLES AND NORMALLY DISTRIBUTED DISTURBANCES
TABLE 1: REGRESSION MODEL ASSUMPTIONS

The k-variable regression model is

    Y_t = β_1 + β_2 X_2t + ... + β_k X_kt + u_t,    t = 1, ..., T        (1)

(1) The dependent variable is a linear function of the set of non-stochastic regressor variables and a random disturbance term, as specified in Equation (1). No variables which influence Y are omitted from the regressor set X (where X is taken here to mean the set of variables X_j, j = 1, ..., k), nor are any variables which do not influence Y included in the regressor set. In other words, the model specification is correct.

(2) The set of regressors is not perfectly collinear. This means that no regressor variable can be obtained as an exact linear combination of any subset of the other regressor variables.

(3) The disturbance process has zero mean. That is, E(u_t) = 0 for all t.

(4) The disturbance terms u_t, t = 1, ..., T, are serially uncorrelated. That is, Cov(u_t, u_s) = 0 for all s ≠ t.

(5) The disturbances have a constant variance. That is, Var(u_t) = σ² for all t.

(6) The equation disturbances are normally distributed, for all t.

These assumptions are taken to hold for all subsequent models discussed in this paper unless stated otherwise.
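As an illustrative sketch (not part of the original notes), the fragment below simulates a data set satisfying assumptions (1)–(6) — a fixed regressor matrix and i.i.d. normal disturbances with zero mean and constant variance — and then computes the OLS estimates. All specific numbers (T, k, β, σ) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

T, k = 200, 3                      # T observations, k regressors (incl. intercept)
beta = np.array([1.0, 2.0, -0.5])  # hypothetical true parameter vector
sigma = 0.5                        # constant disturbance s.d. (assumption 5)

# Non-stochastic regressors: a fixed design whose first column is the intercept.
trend = np.linspace(0.0, 10.0, T)
X = np.column_stack([np.ones(T), trend, np.cos(trend)])

# Disturbances: i.i.d. normal with zero mean (assumptions 3, 4, 5, 6).
u = rng.normal(loc=0.0, scale=sigma, size=T)
Y = X @ beta + u                   # equation (1), written in matrix form

# OLS estimator: b solves (X'X) b = X'Y, i.e. b = (X'X)^{-1} X'Y.
b = np.linalg.solve(X.T @ X, X.T @ Y)
print(b)                           # lies close to beta for this sample size
```

Re-running with different seeds shows the estimates scattering around the true β, which is the sampling behaviour the later results in these notes characterise formally.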
THE LINEAR REGRESSION MODEL IN MATRIX NOTATION

In ordinary algebra, the k-variable linear regression model is

    Y_t = β_1 + β_2 X_2t + ... + β_k X_kt + u_t,    t = 1, ..., T        (1)

For notational convenience, we can reverse the variable subscript notation to

    Y_t = β_1 + β_2 X_t2 + ... + β_k X_tk + u_t,    t = 1, ..., T

Now define the following vectors: first, β, a k × 1 column vector of parameters,

    β = (β_1, β_2, ..., β_k)′

and secondly, x_t, a 1 × k row vector of the t-th observation on each of the k variables:

    x_t = (1  X_t2  X_t3  ...  X_tk)

Equation (1) may now be written as

    Y_t = x_t β + u_t,    t = 1, ..., T        (1b)

You should now convince yourself that equations (1) and (1b) are identical.

Now define the following vectors and matrices: first, Y, a T × 1 vector of all T observations on Y_t:

    Y = (Y_1, Y_2, ..., Y_T)′
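The identity of (1) and (1b) can be checked numerically. The sketch below (with made-up numbers, not from the notes) evaluates the scalar sum β_1 + β_2 X_t2 + β_3 X_t3 and the row-vector product x_t β for a single observation t:

```python
import numpy as np

beta = np.array([1.0, 0.5, -2.0])   # k = 3 parameter vector (illustrative values)
x_t = np.array([1.0, 3.0, 7.0])     # t-th row vector: (1, X_t2, X_t3)

# Equation (1): write out the sum term by term.
scalar_form = beta[0] + beta[1] * x_t[1] + beta[2] * x_t[2]

# Equation (1b): the same quantity as an inner product x_t @ beta.
vector_form = x_t @ beta

print(scalar_form, vector_form)     # both equal -11.5
```

The inner product simply bundles the k multiplications and additions of (1) into one operation, which is what makes the matrix form convenient for all T equations at once.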
Secondly, u, a T × 1 vector of disturbances:

    u = (u_1, u_2, ..., u_T)′

And finally X, a T × k matrix of T observations on each of the k explanatory variables, whose t-th row is the row vector x_t = (1  X_t2  ...  X_tk) defined above.
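Stacking the T row vectors x_t into the matrix X collapses the T separate equations Y_t = x_t β + u_t into the single matrix equation Y = Xβ + u. The sketch below (illustrative numbers only) builds a small X and checks that each element of Y matches its row-by-row counterpart:

```python
import numpy as np

T, k = 5, 3
rng = np.random.default_rng(1)

# Row t of X is x_t = (1, X_t2, X_t3); here the regressors are a fixed design.
X = np.column_stack([np.ones(T), np.arange(T), np.arange(T) ** 2])
beta = np.array([2.0, 1.0, 0.5])
u = rng.normal(size=T)              # T x 1 disturbance vector

Y = X @ beta + u                    # matrix form: all T equations at once

# Element t of Y should equal the scalar equation Y_t = x_t @ beta + u_t.
row_by_row = np.array([X[t] @ beta + u[t] for t in range(T)])
print(np.allclose(Y, row_by_row))   # True
```

This is the form in which the OLS estimator and its properties are most easily derived in the classes that follow.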

This note was uploaded on 03/01/2012 for the course EC 408, taught by Professor Roger Perman during the Fall '07 term at the University of Strathclyde.
