Chapter 2. Simple Linear Regression Model

Model Specification

The simple linear regression model is

$$ y = \beta_0 + \beta_1 x + u $$

where

$y$ : dependent variable, regressand, explained variable
$x$ : independent variable, regressor, explanatory variable, covariate
$u$ : error term, disturbance term (captures the effects of all other unobservable factors)
$\beta_0$ : intercept parameter (coefficient)
$\beta_1$ : slope parameter (coefficient)

Objective:
Find the estimates of the intercept and slope parameters from data on $x$ and $y$.
Test hypotheses about the parameters (for example, $H_0 : \beta_1 = 0$).

Methods of Estimation

There are many alternative ways to find estimates of the parameters, such as the least squares estimator, the maximum likelihood estimator, the method of moments estimator, the instrumental variables estimator, etc.

What is the intuition behind each estimation method?
Statistical properties of alternative estimators: which one is a good estimator, and what do we mean by a good estimator?
The properties depend on the stochastic nature of the variables $x$, $u$, and $y$.

Ordinary Least Squares Estimator

This is one of the oldest and most widely used estimation methods, and the idea behind it is very simple. Let the observed data on the dependent and independent variables be denoted $(x_i, y_i)$, $i = 1, \dots, n$, where $n$ is the number of individuals, which is called the sample size. A graph of the pairs $(x_i, y_i)$ is called a scatterplot, like the one below (round dots).

[Figure: scatterplot of the observations (round dots) together with a candidate estimation line.]

We wish to draw a straight line that approximates (explains, estimates) these dots the best. Note that the value of $y_5$ is a little higher than 10, but the estimation line we drew indicates a value around 6: our estimation line underestimates $y_5$. Similarly, $y_4$ is around 2 while our approximation line indicates a value around 5: our estimation line overestimates $y_4$. The differences between the data values and the estimated values, $(y_5 - \hat{y}_5)$ and $(y_4 - \hat{y}_4)$, are called the residuals. The residual for the 5th observation is positive and the residual for the 4th observation is negative.

Now, how do we find the best estimation line, and what do we mean by "the best"? The idea of the least squares principle is to find the estimation line that minimizes the sum of squared residuals.

Let $\hat{\beta}_0$ and $\hat{\beta}_1$ be the estimates of $\beta_0$ and $\beta_1$, respectively, and let $\hat{y}_i$ denote the estimated (or predicted) value of $y_i$ for a given independent variable $x_i$:

$$ \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i $$

Then the residuals are

$$ \hat{u}_i = y_i - \hat{y}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i $$

The sum of squared residuals (SSR) is

$$ SSR = \sum_{i=1}^{n} \hat{u}_i^2 = \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right)^2 $$

The ordinary least squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ are found by minimizing this SSR. How do we find them? By calculus: take the partial derivatives of SSR with respect to $\hat{\beta}_0$ and $\hat{\beta}_1$, set them equal to zero, and then solve for $\hat{\beta}_0$ and $\hat{\beta}_1$:

$$ \frac{\partial SSR}{\partial \hat{\beta}_0} = -2 \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) = 0 \quad (1) $$

$$ \frac{\partial SSR}{\partial \hat{\beta}_1} = -2 \sum_{i=1}^{n} x_i \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i \right) = 0 \quad (2) $$

These equations are called the first order conditions. The solutions are given by

$$ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} $$

where $\bar{x}$ and $\bar{y}$ are the sample means of $x_i$ and $y_i$, respectively:

$$ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i $$
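To make the least squares principle concrete, here is a minimal Python sketch that computes the residuals and the SSR for one candidate line. The data values and the candidate coefficients are made up for illustration; they are not the points from the scatterplot above.

```python
# Hypothetical data, for illustration only (not the points in the figure above).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 2.0, 10.4]

# A candidate estimation line: y_hat = b0 + b1 * x.
b0, b1 = 1.0, 1.5

# Residual for each observation: data value minus fitted value.
# Positive means the line underestimates that observation; negative means it overestimates it.
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# Sum of squared residuals for this candidate line; OLS picks the
# (b0, b1) pair that makes this quantity as small as possible.
ssr = sum(u ** 2 for u in residuals)
print(residuals, ssr)
```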
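And here is a sketch of the closed-form solutions themselves, again on made-up data, with a check that the first order conditions (1) and (2) hold at the minimizing values. Variable names such as b1_hat are my own, not notation from the notes.

```python
# Hypothetical data, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 2.0, 10.4]
n = len(x)

# Sample means.
x_bar = sum(x) / n
y_bar = sum(y) / n

# Slope: sum of cross products of deviations over sum of squared deviations of x.
b1_hat = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
          / sum((xi - x_bar) ** 2 for xi in x))

# Intercept: the OLS line passes through the point of sample means.
b0_hat = y_bar - b1_hat * x_bar

# First order conditions (1) and (2): at the OLS estimates, the residuals
# sum to zero and are orthogonal to x (up to floating-point error).
u_hat = [yi - b0_hat - b1_hat * xi for xi, yi in zip(x, y)]
assert abs(sum(u_hat)) < 1e-9
assert abs(sum(xi * ui for xi, ui in zip(x, u_hat))) < 1e-9

print(b0_hat, b1_hat)
```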