STA131C-Handout-Projections - Handout, STA131C-S07, W. Polonik

Handout, STA131C-S07, W. Polonik

On some applications of linear transformations, in particular projections and rotations, in statistics

Some notation: Vectors in $\mathbb{R}^d$ are denoted by bold symbols, such as $\mathbf{a}, \mathbf{b}, \mathbf{e}, \mathbf{x}, \mathbf{y}$, etc. For a vector $\mathbf{a}$ we denote by $a_i$, $i = 1, \ldots, d$, its coordinates, i.e. $\mathbf{a} = (a_1, \ldots, a_d)'$. Symbols like $X$ or $O$ denote matrices, and $I$ denotes the identity matrix. $X'$ denotes the transpose of a matrix $X$. For two vectors $\mathbf{a}, \mathbf{b} \in \mathbb{R}^d$ the dot product between $\mathbf{a}$ and $\mathbf{b}$ is denoted by $\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{d} a_i b_i$.

0. General remarks

On various occasions, linear transformations play a central role in statistics. This applies in particular to hypothesis testing in the linear regression model. In this context, the (null) hypothesis can often be thought of as a linear subspace (of an appropriate space, usually $\mathbb{R}^n$). This subspace is then also called the null space. The alternative hypothesis (or the unconstrained model) often also defines a linear space itself, the full model. Corresponding test statistics can be thought of as measuring the distance of the projection of the observations onto this null space relative to the distance of this projection to the full model (for an example see Section 6 below). In order to better understand these relationships and the corresponding distribution theory (under normality assumptions), one needs some understanding of orthogonal matrices, orthogonal projections, and how they relate to the spherical symmetry of iid normal models. The following attempts to give a brief introduction to these topics.

To give a more explicit motivation, we first consider the empirical variance
$$S^2 = \frac{1}{n} \sum_{i=1}^{n} (Y_i - \bar{Y})^2,$$
where as usual $\bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i$ and the $Y_i$, $i = 1, \ldots, n$, represent the observations.
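As a small numerical sketch (not part of the handout), the dot product and the empirical variance $S^2$ with the $1/n$ normalization can be computed directly from their definitions; the helper names `dot` and `emp_var` are illustrative.

```python
def dot(a, b):
    """Dot product a . b = sum_i a_i * b_i of two vectors in R^d."""
    return sum(ai * bi for ai, bi in zip(a, b))

def emp_var(y):
    """Empirical variance S^2 = (1/n) * sum_i (Y_i - Ybar)^2."""
    n = len(y)
    ybar = sum(y) / n
    return sum((yi - ybar) ** 2 for yi in y) / n

Y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(dot([1.0, 2.0], [3.0, 4.0]))  # 1*3 + 2*4 = 11.0
print(emp_var(Y))                   # Ybar = 5.0, so S^2 = 32/8 = 4.0
```

Note that this $1/n$ normalization differs from the $1/(n-1)$ used for the unbiased sample variance.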
We will offer a geometric interpretation of $S^2$ via projections, and the same interpretation will help us obtain a deeper understanding of the independence of $S^2$ and $\bar{Y}$ in the normal case. We will first see that $S^2$ is closely connected to orthogonal projections. To see this, observe that we can write
$$S^2 = \frac{1}{n} \, \| \mathbf{Y} - \bar{Y} \mathbf{1} \|^2,$$
where $\mathbf{Y} = (Y_1, \ldots, Y_n)'$ and $\mathbf{1} = (1, \ldots, 1)' \in \mathbb{R}^n$. (Also notice that $\bar{Y}$ is a scalar, and we have $\bar{Y} \mathbf{1} = (\bar{Y}, \ldots, \bar{Y})'$.) In other words, up to the factor $1/n$, $S^2$ is the squared length of the difference between the two vectors $\mathbf{Y}$ and $\bar{Y} \mathbf{1}$. The connection to projections comes in once we realize that the vector $\bar{Y} \mathbf{1}$ is the orthogonal projection of the vector $\mathbf{Y}$ onto the subspace $E = \{ c \mathbf{1} = (c, \ldots, c)' ;\ c \in \mathbb{R} \} \subset \mathbb{R}^n$. We say that $E$ is the space of all constant vectors. How can we actually see that $\bar{Y} \mathbf{1}$ is the orthogonal projection of $\mathbf{Y}$ onto $E$? Let $P\mathbf{Y}$ denote this projection onto $E$. By definition of an orthogonal projection, we have that $P\mathbf{Y} \in E$ and
$$\| \mathbf{Y} - P\mathbf{Y} \|^2 = \min_{\mathbf{c} \in E} \| \mathbf{Y} - \mathbf{c} \|^2,$$
which means that $P\mathbf{Y}$ is the vector in $E$ with the smallest (squared) distance to $\mathbf{Y}$. Now, since...
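The minimizing property above can be checked numerically: among all constant vectors $c\mathbf{1}$, the squared distance $\|\mathbf{Y} - c\mathbf{1}\|^2$ is smallest at $c = \bar{Y}$, and $\frac{1}{n}\|\mathbf{Y} - \bar{Y}\mathbf{1}\|^2$ reproduces $S^2$. This is an illustrative sketch (a crude grid search, not from the handout); the name `sq_dist_to_const` is made up for this example.

```python
def sq_dist_to_const(y, c):
    """Squared distance ||Y - c*1||^2 from Y to the constant vector (c, ..., c)."""
    return sum((yi - c) ** 2 for yi in y)

Y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(Y)
ybar = sum(Y) / n  # 5.0

# Grid search over candidate constants c in [0, 10]:
# the minimum lands at c = Ybar, as the projection characterization predicts.
candidates = [c / 100 for c in range(0, 1001)]
best_c = min(candidates, key=lambda c: sq_dist_to_const(Y, c))
print(best_c)  # 5.0, which equals Ybar

# (1/n) * ||Y - Ybar*1||^2 equals the empirical variance S^2.
S2 = sq_dist_to_const(Y, ybar) / n
print(S2)      # 4.0

# The residual Y - Ybar*1 is orthogonal to 1, i.e. its coordinates sum to 0.
print(sum(yi - ybar for yi in Y))  # 0.0
```

The last line previews the orthogonality condition $(\mathbf{Y} - \bar{Y}\mathbf{1}) \cdot \mathbf{1} = 0$ that characterizes the projection.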

This note was uploaded on 05/15/2008 for the course STATS 130 taught by Professor Samaniego during the Spring '08 term at UC Davis.
