1 Introduction to linear models

1.1 Data and variates
We will be concerned with designed experiments and observational studies yielding data of the form

    (y_i, x_i^T) : i = 1, ..., n

(x_i^T a row vector of dimension k), i.e. there are n experimental units or subjects.

2 Least squares estimation

2.1 Some preliminary facts
Recall:

1. (AB)^T = B^T A^T;

2. for a scalar function f of x = (x_1, ..., x_p)^T, the derivative of f with respect to x is the vector

       df/dx = (df/dx_1, ..., df/dx_p)^T;

3. d(x^T a)/dx = a;

4. d(x^T A x)/dx = (A + A^T) x = 2 A x when A is symmetric.
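These derivative facts can be checked numerically. The sketch below (my own illustration, not part of the notes) compares each closed-form gradient against a central finite-difference approximation; the function `grad` and all data are invented for this check.

```python
import numpy as np

# Numerical sanity check of facts 3 and 4 above (illustrative sketch).
rng = np.random.default_rng(0)
p = 4
a = rng.normal(size=p)
A = rng.normal(size=(p, p))
x = rng.normal(size=p)

def grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Fact 3: d(x^T a)/dx = a
print(np.allclose(grad(lambda v: v @ a, x), a))

# Fact 4: d(x^T A x)/dx = (A + A^T) x
print(np.allclose(grad(lambda v: v @ A @ v, x), (A + A.T) @ x))

# ... which reduces to 2 A x when A is symmetric
S = (A + A.T) / 2
print(np.allclose(grad(lambda v: v @ S @ v, x), 2 * S @ x))
```

All three checks print `True`.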
2.2 Deriving the least squares estimator

Consider the linear model y = Zβ + ε, with E[ε] = 0.
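Minimising the residual sum of squares with the derivative facts above leads to the normal equations Z^T Z β̂ = Z^T y. The following sketch (simulated data; all names are my own, not from the notes) solves them directly and checks the answer against NumPy's built-in least squares solver.

```python
import numpy as np

# Sketch: solve the normal equations Z^T Z beta = Z^T y on simulated data.
rng = np.random.default_rng(1)
n, p = 50, 3
Z = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept
beta_true = np.array([2.0, -1.0, 0.5])
y = Z @ beta_true + rng.normal(scale=0.1, size=n)

# Least squares estimate via the normal equations ...
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)

# ... agrees with the library solver.
beta_lstsq, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```

In practice `np.linalg.lstsq` (or a QR decomposition) is preferred to forming Z^T Z explicitly, since the normal equations square the condition number of Z.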

3 Residuals and the hat matrix

3.1 The hat matrix
Recall that the fitted values (denoted by ŷ) are the estimated values of each observation:

    ŷ = Z β̂
      = Z (Z^T Z)^{-1} Z^T y
      = P y,

where P = Z (Z^T Z)^{-1} Z^T, say, is called the hat matrix. Therefore P is a linear map taking the observations y to the fitted values ŷ.
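A quick numerical illustration of the hat matrix (my own sketch, on simulated data): P is symmetric and idempotent, its trace equals the number of parameters p (a standard fact, stated here for orientation), and P y reproduces the fitted values Z β̂.

```python
import numpy as np

# Sketch: form P = Z (Z^T Z)^{-1} Z^T and check its key properties.
rng = np.random.default_rng(2)
n, p = 30, 3
Z = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = rng.normal(size=n)

P = Z @ np.linalg.inv(Z.T @ Z) @ Z.T

print(np.allclose(P, P.T))         # P is symmetric
print(np.allclose(P @ P, P))       # P is idempotent: P^2 = P
print(np.isclose(np.trace(P), p))  # trace(P) = p, the number of parameters

# Fitted values: y_hat = P y, equivalently Z beta_hat
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
print(np.allclose(P @ y, Z @ beta_hat))
```

All four checks print `True`.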

4 Optimality of the least squares estimator

4.1 Gauss-Markov Theorem
Let y be a random vector with

    E[y] = Zβ,    Var(y) = σ² I_n,

where Z is (n × p) with rank p. Then a^T β̂ is the unique linear unbiased estimator of a^T β with minimum variance.

Proof:
(i) a^T β̂ = a^T (Z^T Z)^{-1} Z^T y
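The theorem can be illustrated numerically. The sketch below (my own construction, not from the notes) pits a^T β̂ against an alternative linear unbiased estimator a^T β̃, where β̃ = (W^T Z)^{-1} W^T y for an arbitrary weight matrix W; β̃ is unbiased since E[β̃] = (W^T Z)^{-1} W^T Zβ = β. Comparing the exact variances confirms that least squares does no worse.

```python
import numpy as np

# Sketch illustrating Gauss-Markov: any other linear unbiased estimator of
# a^T beta has variance at least that of a^T beta_hat. The competitor
# beta_tilde = (W^T Z)^{-1} W^T y is an illustrative choice of mine.
rng = np.random.default_rng(3)
n, p = 40, 3
Z = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
W = Z + 0.3 * rng.normal(size=(n, p))  # weights defining a rival linear estimator
a = np.array([0.0, 1.0, -1.0])
sigma2 = 1.0

# Exact variances under Var(y) = sigma^2 I:
#   least squares: Var(a^T beta_hat)   = sigma^2 a^T (Z^T Z)^{-1} a
#   competitor:    Var(a^T beta_tilde) = sigma^2 a^T B B^T a, B = (W^T Z)^{-1} W^T
var_ls = sigma2 * a @ np.linalg.inv(Z.T @ Z) @ a
B = np.linalg.inv(W.T @ Z) @ W.T
var_alt = sigma2 * a @ B @ B.T @ a

print(var_ls <= var_alt + 1e-12)  # Gauss-Markov: the LS variance is minimal
```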

5 Normal linear models
All of the results that we have looked at so far can be derived without assuming an explicit distribution for the errors. In this section we will investigate the consequences of assuming a Normal error distribution, as a preliminary to inference.
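One immediate consequence of Normal errors is that maximising the likelihood over β is the same as minimising the residual sum of squares, so the MLE of β is the least squares estimator. A quick numerical check of this (my own sketch, on simulated data):

```python
import numpy as np

# Sketch: under Normal errors the negative log-likelihood in beta is a
# monotone function of the residual sum of squares, so it is minimised at
# the least squares estimate. Data and perturbations are illustrative.
rng = np.random.default_rng(4)
n, p = 60, 3
Z = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = Z @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

def neg_loglik(beta, sigma2=1.0):
    """Negative Normal log-likelihood of beta at fixed sigma^2."""
    r = y - Z @ beta
    return 0.5 * n * np.log(2 * np.pi * sigma2) + 0.5 * r @ r / sigma2

beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)

# The negative log-likelihood at beta_hat is no larger than at nearby points.
checks = [neg_loglik(beta_hat) <= neg_loglik(beta_hat + d)
          for d in 0.1 * rng.normal(size=(20, p))]
print(all(checks))
```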

ST221: Exercises 1
Linear Statistical Modelling
Instructions
The problem class will be held in room C0.01; the access code for the door is 2357. As soon as you arrive, log on to a computer using the following details:

Username: st221usr
Password: Ready2G0!

ST221: Exercises 2
Linear Statistical Modelling