Economics 140
Professor Enrico Moretti
03/01/10
Lecture 5

ASUC Lecture Notes Online is the only authorized note-taking service at UC Berkeley. Do not share, copy, or illegally distribute (electronically or otherwise) these notes. Our student-run program depends on your individual subscription for its continued existence. These notes are copyrighted by the University of California and are for your personal use only. DO NOT COPY. Sharing or copying these notes is illegal and could end note taking for this course.

ANNOUNCEMENTS

The midterm is next Monday during regular lecture time. Today we will finish chapter 9 and cover chapter 10. Skip sections 7.4, 7.5, 9.4.2, and 9.4.3 in the textbook. Also skip the proof in section 10.7, but know the theorem.

LECTURE

[Figure: scatter plot of y against x with three candidate fitted lines.]

Given the above data set with three fitted lines, ordinary least squares (OLS) would pick L2 as the best-fitting line: L2 minimizes the sum of the squared deviations of the line from the data points.

If the fitted line that represents our best guess of the relationship between variables y and x is written as

    y_i hat = (beta hat) + (beta_1 hat) * x_i

then OLS chooses the estimates so as to minimize

    sum over i of (y_i - beta hat - (beta_1 hat) * x_i)^2.

Here beta hat and beta_1 hat are the estimates of the true beta and beta_1, and they are given by

    beta_1 hat = sum[(x_i - x bar)(y_i - y bar)] / sum[(x_i - x bar)^2]
    beta hat   = y bar - (beta_1 hat)(x bar)

You are responsible for the proof of the formula for beta_1 hat.

An example: suppose a supermarket chain on the east coast is trying to predict how well it will do on the west coast. The chain believes that sales in a city are a function of the city's average income, so it gathers information on the sales and average income of major cities: ...
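The closed-form OLS formulas above can be sketched in a few lines of code. This is a minimal illustration, not part of the lecture; the city income/sales numbers below are made up, since the lecture's actual table is not reproduced here.

```python
def ols_fit(x, y):
    """Return (beta_hat, beta1_hat) minimizing sum of (y_i - b - b1*x_i)^2,
    using the closed-form formulas from lecture."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # beta_1 hat = sum[(x_i - x bar)(y_i - y bar)] / sum[(x_i - x bar)^2]
    beta1_hat = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
                 / sum((xi - x_bar) ** 2 for xi in x))
    # beta hat = y bar - (beta_1 hat)(x bar)
    beta_hat = y_bar - beta1_hat * x_bar
    return beta_hat, beta1_hat

# Hypothetical data: average city income (x) and supermarket sales (y).
income = [40, 50, 60, 70]
sales = [100, 120, 140, 160]
b0, b1 = ols_fit(income, sales)
print(b0, b1)  # prints 20.0 2.0 for this made-up data
```

Running the fit on the made-up data recovers an intercept of 20 and a slope of 2, so the fitted line would predict sales of 20 + 2 * income for a new west-coast city.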