# Chapter 2 Supplemental Text Material


## S2-1. Models for the Data and the t-Test

The model presented in the text, equation (2-23), is more properly called a *means model*. Since the mean is a *location parameter*, this type of model is also sometimes called a *location model*. There are other ways to write the model for a t-test. One possibility is

$$
y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad i = 1, 2, \quad j = 1, 2, \ldots, n_i
$$

where $\mu$ is a parameter that is common to all observed responses (an overall mean) and $\tau_i$ is a parameter that is unique to the $i$th factor level. Sometimes we call $\tau_i$ the $i$th *treatment effect*. This model is usually called the *effects model*. Since the means model is

$$
y_{ij} = \mu_i + \varepsilon_{ij}, \qquad i = 1, 2, \quad j = 1, 2, \ldots, n_i
$$

we see that the $i$th treatment or factor level mean is $\mu_i = \mu + \tau_i$; that is, the mean response at factor level $i$ is equal to an overall mean plus the effect of the $i$th factor. We will use both types of models to represent data from designed experiments. Most of the time we will work with effects models, because that is the traditional way to present much of this material. However, there are situations where the means model is useful, and even more natural.

## S2-2. Estimating the Model Parameters

Because models arise naturally in examining data from designed experiments, we frequently need to estimate the model parameters. We often use the method of *least squares* for parameter estimation. This procedure chooses values for the model parameters that minimize the sum of the squares of the errors $\varepsilon_{ij}$. We will illustrate this procedure for the means model. For simplicity, assume that the sample sizes for the two factor levels are equal; that is, $n_1 = n_2 = n$.
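The relationship between the two models can be sketched numerically. In this minimal example, the overall mean, treatment effects, and sample size are invented values for illustration only (they do not come from the text); it simulates data under the effects model and shows that the means-model parameters are $\mu_i = \mu + \tau_i$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameter values, chosen only for illustration.
mu = 10.0                     # overall mean (common to all responses)
tau = np.array([-2.0, 2.0])   # treatment effects for factor levels 1 and 2
n = 5                         # observations per factor level

# Effects model: y_ij = mu + tau_i + eps_ij
eps = rng.normal(0.0, 1.0, size=(2, n))
y = mu + tau[:, None] + eps

# Equivalent means-model parameters: mu_i = mu + tau_i
mu_i = mu + tau
print(mu_i)  # [ 8. 12.]
```

Each row of `y` holds the $n$ responses at one factor level; the same data could be described either by $(\mu, \tau_1, \tau_2)$ or by $(\mu_1, \mu_2)$.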
The least squares function that must be minimized is

$$
L = \sum_{i=1}^{2}\sum_{j=1}^{n} \varepsilon_{ij}^{2} = \sum_{i=1}^{2}\sum_{j=1}^{n} \left( y_{ij} - \mu_i \right)^{2}
$$

Now

$$
\frac{\partial L}{\partial \mu_1} = -2\sum_{j=1}^{n}\left( y_{1j} - \mu_1 \right)
\quad\text{and}\quad
\frac{\partial L}{\partial \mu_2} = -2\sum_{j=1}^{n}\left( y_{2j} - \mu_2 \right)
$$

and equating these partial derivatives to zero yields the least squares normal equations

$$
n\hat{\mu}_1 = \sum_{j=1}^{n} y_{1j}, \qquad n\hat{\mu}_2 = \sum_{j=1}^{n} y_{2j}
$$

The solution to these equations gives the least squares estimators of the factor level means,

$$
\hat{\mu}_1 = \bar{y}_1 \quad\text{and}\quad \hat{\mu}_2 = \bar{y}_2
$$

that is, the sample averages at each factor level are the estimators of the factor level means. This result should be intuitive, as we learn early on in basic statistics courses that the sample average usually provides a reasonable estimate of the population mean. However, as we have just seen, this result can be derived easily from a simple location model using least squares. It also turns out that if we assume the model errors are normally and independently distributed, the sample averages are also the maximum likelihood estimators of the factor level means. That is, if the observations are normally distributed, least squares and maximum likelihood produce exactly the same estimators of the factor level means. Maximum likelihood is a more general method of parameter estimation that...
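The normal equations above can be checked numerically. This sketch uses a small invented data set (the numbers are not from the text): it builds the means-model design matrix, solves the normal equations $(X'X)\hat{\mu} = X'y$ with numpy, and confirms that the solution is simply the sample average at each factor level.

```python
import numpy as np

# Hypothetical data: n = 4 observations at each of the two factor levels.
y = np.array([9.1, 10.4, 8.7, 9.8,      # level 1
              12.3, 11.6, 13.0, 12.1])  # level 2
level = np.repeat([0, 1], 4)

# Design matrix for the means model: column i is the indicator of level i.
X = np.column_stack([(level == 0).astype(float),
                     (level == 1).astype(float)])

# Least squares normal equations: (X'X) mu_hat = X'y.
# Here X'X = diag(n, n), so each equation reads n * mu_hat_i = sum_j y_ij.
mu_hat = np.linalg.solve(X.T @ X, X.T @ y)

print(mu_hat)                        # [ 9.5  12.25]
print([y[:4].mean(), y[4:].mean()])  # [9.5, 12.25] -- the sample averages
```

Because the two indicator columns are orthogonal, $X'X$ is diagonal and the system decouples into the two scalar normal equations derived above.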

## This note was uploaded on 03/20/2011 for the course STATISTIC 101 taught by Professor Fandia during the Spring '10 term at UCLA.


