Section 3 - Econ 140
GSIs: Hedvig, Tarso, Xiaoyu*

1 Review - Linear regression with one regressor

1.1 Motivation

So far we have been using our sample to draw inferences about the population mean. An example would be estimating the mean standardized test score of elementary school students in California from a random sample of 100 California schools. Now we turn to estimating relationships between two population variables. For the time being, we will assume that the functional form of this relationship is linear. The leading example from the book explores how class size relates to student performance in California schools.

1.2 Terminology

The linear regression model is

    Y_i = β_0 + β_1 X_i + u_i,    (1)

where

• the subscript i runs over observations, i = 1, ..., N
• Y_i is the dependent variable, the endogenous variable, the regressand, or simply the left-hand variable
• X_i is the independent variable, the exogenous variable, the regressor, or simply the right-hand variable
• β_0 + β_1 X is the population regression line or population regression function
• β_0 is the intercept of the population regression line
• β_1 is the slope of the population regression line
• u_i is the error term

We observe N pairs of data (Y_i, X_i), i = 1, ..., N: the average standardized test score (measuring performance) of students in school i (Y_i), and the average student-teacher ratio (measuring class size) in school i (X_i). We assume (Y_i, X_i) is a random (i.i.d.) sample from the joint distribution of (Y, X).

* Many thanks to previous GSIs, Edson Severnini and Raymundo M. Campos-Vazquez, as this note is based on theirs. All errors are ours.

1.3 Digression: Joint distributions

Let Y and X be two random variables (RVs).
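To make the model in equation (1) concrete, here is a minimal sketch in Python that simulates data from a known population regression line and recovers the slope and intercept by ordinary least squares. The numbers (β_0 = 700, β_1 = −2, the range of student-teacher ratios, the error standard deviation) are made up for illustration and are not the actual California figures from the book.

```python
import numpy as np

# Hypothetical simulated data: the "true" population line is
# Y = beta_0 + beta_1 * X + u, with beta_0 = 700 and beta_1 = -2
# (loosely mimicking test scores vs. student-teacher ratios).
rng = np.random.default_rng(seed=42)
N = 100
X = rng.uniform(15, 25, size=N)   # student-teacher ratios (class size)
u = rng.normal(0, 10, size=N)     # error term u_i
Y = 700 - 2 * X + u               # test scores

# OLS estimates: the slope is the sample covariance of (X, Y)
# divided by the sample variance of X; the intercept makes the
# fitted line pass through the point of sample means.
beta_1_hat = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)
beta_0_hat = Y.mean() - beta_1_hat * X.mean()

print(beta_0_hat, beta_1_hat)  # should be close to 700 and -2
```

Because the sample is random, the estimates will not exactly equal the population parameters; they get closer as N grows, which is the sampling-variability idea from the earlier part of the course carried over to regression.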
• Joint distribution: the list of probabilities of Y and X together, for each combination of values of these RVs
  - discrete RVs: P(Y = y, X = x)
  - continuous RVs: f_{Y,X}(y, x) = lim_{h→0} P(y − h ≤ Y ≤ y + h, x − h ≤ X ≤ x + h) / (2h)²
• Marginal distribution (here, of X):
  - discrete RVs: P(X = x) = Σ_{i=1}^{N} P(Y = y_i, X = x), summing over all possible values y_i of Y
  - continuous RVs: f_X(x) = ∫_{−∞}^{+∞} f_{Y,X}(y, x) dy
• Conditional distribution (here, of Y given X, denoted Y | X):
  - discrete RVs: P(Y = y | X = x) = P(Y = y, X = x) / P(X = x)
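The discrete case above can be sketched numerically: store the joint distribution as a table, sum across one variable to get a marginal, and divide by that marginal to get a conditional. The 2×2 joint probabilities below are hypothetical numbers chosen only so the table sums to one.

```python
import numpy as np

# Hypothetical joint distribution of two binary RVs Y and X,
# stored as a table: joint[i, j] = P(Y = y_i, X = x_j).
joint = np.array([[0.1, 0.3],
                  [0.2, 0.4]])
assert np.isclose(joint.sum(), 1.0)  # probabilities must sum to 1

# Marginal of X: sum the joint over all values of Y (down each column).
marginal_X = joint.sum(axis=0)       # P(X = x_j) = [0.3, 0.7]

# Conditional of Y given X = x_j: each joint column divided by
# the corresponding marginal probability of X.
cond_Y_given_X = joint / marginal_X  # cond_Y_given_X[i, j] = P(Y = y_i | X = x_j)

print(marginal_X)
print(cond_Y_given_X)
```

Note that each column of the conditional table sums to one: conditioning on X = x_j produces a proper probability distribution over the values of Y.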
This note was uploaded on 02/02/2012 for the course ECON 140 taught by Professor Duncan during the Spring '08 term at Berkeley.