#### Economics.3640.Lecture.29

Utah, ECON 3640
Excerpt: ... Economics 3640-001 Lecture 29 * Reading Assignment: Ch. 9 Simple Linear Regression (9.1 & 9.3). Instructor: Sanghoon Lee. So far, we have studied: - Basic principles (describing data, probability, random variables, probability distributions) - Methods for estimating and testing population parameters for a single sample. 9.1 Probabilistic Models. Simple Linear Regression Model: y = β0 + β1x + ε, where y: dependent variable; x: independent variable; β0: y-intercept; β1: slope; ε (epsilon): random error. Eg. Consumption and Income (not in the textbook). Suppose that we are interested in studying the relationship between consumption and income: as income increases, consumption increases. We can gather data from different households. Let's look at the data: Econ.3640.Spring.2005.Simple.Linear.Regression.01.xls, Lecture29. On the graph, draw a line that represents the linear relationship between the two variables (C and I). How are we going to conclude which of the lines we drew is the correct one? Are there any criteria that we can follow ...

#### Homework06

San Jose State, MATH 161
Excerpt: ... unners with stride rates of 3.3 that run faster than 19.2 m/s? 2. A number of studies have shown lichens (certain plants composed of an alga and a fungus) to be excellent bioindicators of air pollution. The file Lichen.txt contains data (read from a graph) on x = NO3 wet deposition (in g N/m², a measure for air pollution) and y = lichen N (in % dry weight). (a) Use SPSS to draw a scatter plot of Lichen N against NO3 wet deposition. Is it reasonable to fit a simple linear regression model to this data? Explain. Eyeball from the graph what you think the intercept and slope of the regression line should be (rough approximations are ok). (b) Use the facts that n = 13, Σxi² = 3.8114, Σxi = 5.92, Σyi² = 9.8857, Σyi = 10.47, Σxiyi = 5.8464 and the formulas provided in the lecture notes to compute the intercept and slope of the least squares regression line for the data. (c) Use SPSS to fit a simple linear regression model and compare the results to the ones you have computed by hand (Hint: they should be th ...
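Part (b) needs only the quoted summary statistics. A minimal sketch, assuming the garbled values are read as Σxi², Σxi, Σyi, and Σxiyi (the Lichen.txt data themselves are not reproduced here):

```python
# Least-squares slope and intercept from summary statistics alone.
# The values below are quoted from the problem; their interpretation as
# sum(x_i^2), sum(x_i), sum(y_i), sum(x_i*y_i) is an assumption.
n = 13
sum_x2 = 3.8114   # sum of x_i^2
sum_x = 5.92      # sum of x_i
sum_y = 10.47     # sum of y_i
sum_xy = 5.8464   # sum of x_i * y_i

Sxx = sum_x2 - sum_x ** 2 / n
Sxy = sum_xy - sum_x * sum_y / n

b1 = Sxy / Sxx                    # slope
b0 = sum_y / n - b1 * sum_x / n   # intercept

print(f"slope b1 = {b1:.4f}, intercept b0 = {b0:.4f}")
```

Under that reading, the slope comes out near 0.97 and the intercept near 0.37; SPSS should reproduce whatever the hand computation gives, as part (c) hints.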

#### Assignment 2

Waterloo, STAT 331
Excerpt: ... STAT 331/361/SYDE 334 Assignment 2. Due: Thursday, February 14, 2008 in class. Note: Question 5 requires using R. Computing outputs need to be integrated into your answers at appropriate places. Stacking your computing output all together and putting it at the back is NOT acceptable! 1. If y = (y1, ..., yn)′ and x = (x1, ..., xn)′, then the sample correlation coefficient between y and x is defined as r(y, x) = sxy/√(syy·sxx). Let e = (e1, ..., en)′ be the residuals from fitting the simple linear regression model yi = β0 + β1xi + εi using the least squares method. Show that r(e, x) = 0. 2. Suppose that the simple linear regression model with the assumptions given in Section 2.1.1 holds. Show that E(Σ ei²) = (n − 2)σ². (Hint: First show Σ ei² = syy − sxy²/sxx, then show E(syy) = (n − 1)σ² + β1²sxx and E(sxy²) = β1²sxx² + σ²sxx.) 3. Consider the simple linear regression model yi = β0 + β1xi + εi, i = 1, ..., n, where E(εi) = 0, V(εi) = σ² and ε1, ε2 ...

#### homework6

UPenn, STAT 112
Excerpt: ... Homework 6, Statistics 112, Fall 2005. This homework is due Thursday, November 3rd at the beginning of class. 1. This problem is based on Dielman, Problem 5.8. The data set MPGWT5.JMP contains data on the number of miles per gallon obtained by a car in city driving (CITYMPG) and the weight of a car in pounds (WEIGHT) for 147 cars listed in the Road and Track October 2002 issue. We would like to model E(CITYMPG|WEIGHT). (a) Fit a simple linear regression model of Y = CITYMPG on X = WEIGHT. Construct a residual plot. What is the most obvious problem you see with the residual plot compared to what you would expect to see if the ideal simple linear regression model holds? (b) Using Tukey's Bulging Rule, try three appropriate transformations to try to achieve a better fit. Which transformation is best in terms of maximizing the R² (equivalently, minimizing the root mean square error)? Does the transformation improve on the simple linear regression model? 2. Problem 1 continued. (a) Use polynomial regressions to model E(CITYMPG|WE ...
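The logic of part (b) can be sketched on synthetic data (not the MPGWT5 file): when the true relationship is curved, a transformation that linearizes it shows up as a higher R² and a lower root mean square error. A reciprocal transform, one rung on Tukey's ladder, is used here purely for illustration:

```python
# Compare a straight-line fit of raw y vs a fit of a transformed y.
# Data are synthetic: mpg is constructed as exactly reciprocal in weight.
def fit_r2(x, y):
    """R^2 of the least-squares line of y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

weight = [2000.0, 2500.0, 3000.0, 3500.0, 4000.0, 4500.0]
mpg = [100000.0 / w for w in weight]          # curved relationship

r2_raw = fit_r2(weight, mpg)                  # line misses the curvature
r2_recip = fit_r2(weight, [1 / m for m in mpg])  # 1/y is linear in weight
print(r2_raw, r2_recip)  # the transformed fit has the higher R^2
```

With real data no transformation is exact; the homework's point is to rank the candidates by this fit criterion.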

#### FinalStudyNotes

UNC Wilmington, QMM 280
Excerpt: ... ce level?" How does sample size affect the sampling distribution of the means? When is it appropriate to use the z-distribution in the estimating process? The t-distribution? What is the difference, if any, between the terms "standard deviation" and "standard error"? How can you control the width of a confidence interval for μ? How do you compute the sample size needed to produce a given margin of error? How do you decide when to use a one-tailed versus a two-tailed hypothesis test involving μ? How do you determine the attributes of the appropriate sampling distribution to use for a hypothesis test? What is meant by "level of significance"? How do you determine the critical point(s) for a given level of significance? How do you calculate the observed level of significance? What is meant by the terms "Type I error" and "Type II error"? Describe the characteristics of a simple linear regression model - that is, what do the terms "simple" and "linear" have to do with it? What does the term "least squares criter ...

#### Assignment 2

Waterloo, STAT 331
Excerpt: ... Assignment #2, STAT331/361/SYDE334, Winter 2007. This assignment is to be handed in at the start of the lecture of Thursday 8th February, 07. Some problems require the use of R software; you need to cut and paste the results (either numeric or graphic) into Word, where they can then be edited, commented on, and handed in. DO NOT hand in R code, or a printout of the R session, unless asked for. Problem 1: Let u = (u1, ..., un)′ and v = (v1, ..., vn)′ be two n-dimensional vectors. Vectors u and v are said to be orthogonal, denoted u ⊥ v, if the inner product u′v = Σ uivi = 0. Consider a simple linear regression model whose residual vector is e = (e1, ..., en)′ and explanatory variable vector is x = (x1, ..., xn)′, respectively. (a) Let 1 = (1, ..., 1)′ be an n-element vector with all elements equal to 1. Show that e ⊥ 1. (b) Show that e ⊥ x. (c) Show that the vector of fitted values ŷ = (ŷ1, ..., ŷn)′ is orthogonal to the residual vector e, namely ŷ ⊥ e. Hint: Express the vector of fitted ...
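The three orthogonality identities hold exactly for any least-squares fit, so they can be checked numerically before being proved. A minimal sketch (the course uses R; this Python version with made-up data is only a stand-in):

```python
# Fit a simple linear regression by least squares and verify that the
# residual vector e is orthogonal to 1, to x, and to the fitted values.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]   # arbitrary illustrative data
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

yhat = [b0 + b1 * xi for xi in x]
e = [yi - yh for yi, yh in zip(y, yhat)]

dot_e_1 = sum(e)                                      # e . 1
dot_e_x = sum(ei * xi for ei, xi in zip(e, x))        # e . x
dot_e_yhat = sum(ei * yh for ei, yh in zip(e, yhat))  # e . yhat

print(dot_e_1, dot_e_x, dot_e_yhat)  # all zero up to floating-point error
```

Part (c) follows from (a) and (b) because ŷ = b0·1 + b1·x is a linear combination of the two vectors e is already orthogonal to, which is the direction the hint points in.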

#### 350W1

Sveriges lantbruksuniversitet, STAT 350
Excerpt: ... ed to discover associations between variables. E.g., a group of pregnant women were surveyed about their daily milk intake. The birth weights of their babies were then recorded. It was found that women who drank more milk had larger babies. However, we cannot conclude that milk causes larger babies! (Perhaps it is the protein or calcium contained in milk which is the true cause.) In a designed experiment (where we control the values of x), we can establish causation. E.g., if we randomly assign various doses of a new blood pressure drug to a group of patients, then we can conclude that increased dose causes lower blood pressure. Notation: We use Y1, ..., Yn to denote the responses in a data set of size n. Each response Yi has associated predictors xi1, ..., xip. Linear Models. Definition: A model is linear if E[Yi] is linear in the unknown regression coefficients. Example 1: Simple linear regression model. Let Yi ~ N(μi, σ²), where μi = β0 + β1xi. This model is linear since E[ ...

#### NA387(3)lecture21

Michigan, NA 387
Excerpt: ... Lecture 21, Devore Chapter 12, Simple Linear Regression and Correlation. 12.1 The Simple Linear Regression Model. Linear Relationship: The simplest deterministic mathematical relationship between two variables x and y is a linear relationship y = β0 + β1x. Terminology: The variable whose value is fixed by the experimenter, denoted x, is the independent (predictor, explanatory) variable. For a fixed x, the second variable will be a random variable Y with observed value y, referred to as the dependent (response) variable. The Simple Linear Regression Model: There exist parameters β0, β1 and σ² such that for any fixed value of x, the dependent variable is related to x through the model equation y = β0 + β1x + ε, where ε is a random variable (called the random deviation) with E(ε) = 0 and V(ε) = σ². [Figures: the true regression line y = β0 + β1x with a sample point (x1, y1); the normal distribution of ε around the line; the distribution of Y for different values of x] ...
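The model statement above can be sketched by simulation: at a fixed x, repeated draws of Y = β0 + β1x + ε average out to the point on the true regression line, exactly because E(ε) = 0. Parameter values here are illustrative only:

```python
import random

# Simulate the probabilistic model y = beta0 + beta1*x + eps,
# with eps ~ Normal(0, sigma), at one fixed value of x.
random.seed(1)
beta0, beta1, sigma = 2.0, 0.5, 1.0   # illustrative parameters

x_fixed = 4.0
ys = [beta0 + beta1 * x_fixed + random.gauss(0.0, sigma)
      for _ in range(200_000)]

mean_y = sum(ys) / len(ys)
# The average of Y at a fixed x recovers E(Y|x) = beta0 + beta1*x.
print(mean_y, beta0 + beta1 * x_fixed)
```

Repeating this at several x values traces out the true regression line, which is what the figures in the original slides depict.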

#### notes3

UPenn, STAT 112
Excerpt: ... Statistics 112 Notes 3. I will be posting the first homework on our web site tonight www-stat.wharton.upenn.edu/~dsmall/stat112-f05 ; it will be due next Thursday at the beginning of class. Reading: Chapter 3.3. I. Simple Regression Analysis, Another Example. Francis Galton was interested in the relationship between X = father's height and Y = son's height in 19th century England. He surveyed 952 father-son pairs. [Scatter plot: bivariate fit of son's height by father's height] Simple regression analysis seeks to estimate the mean of son's height given father's height, E(Y|x). The simple linear regression model is E(Y|x) = β0 + β1x. To estimate β0, β1, we use the least squares method. Using our sample of data, we estimate β0, β1 by b0, b1, where b0, b1 are chosen to minimize the sum of squared prediction errors in the data (the least squares method). [Scatter plot: bivariate fit of son's height by father's height] ...
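The least-squares criterion can be checked numerically: the closed-form b0, b1 give a smaller sum of squared prediction errors than any nearby alternative line. The heights below are invented for illustration, not Galton's data:

```python
# Verify that the closed-form least-squares coefficients minimize the
# sum of squared prediction errors, by comparing against perturbed lines.
father = [64.0, 66.0, 68.0, 70.0, 72.0, 74.0]   # invented heights (inches)
son = [66.5, 67.0, 68.2, 69.5, 70.1, 71.8]

def sse(b0, b1):
    """Sum of squared prediction errors for the line b0 + b1*x."""
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(father, son))

n = len(father)
xbar, ybar = sum(father) / n, sum(son) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(father, son)) / \
     sum((xi - xbar) ** 2 for xi in father)
b0 = ybar - b1 * xbar

best = sse(b0, b1)
# Perturbing either coefficient can only increase the criterion.
assert all(sse(b0 + d0, b1 + d1) >= best
           for d0 in (-0.5, 0.0, 0.5) for d1 in (-0.05, 0.0, 0.05))
print(b0, b1, best)
```

The grid of perturbations is only a spot check; the calculus argument in the notes shows the closed-form solution is the global minimizer.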

#### reviewch3

UPenn, STAT 112
Excerpt: ... Stat 112 Review Notes for Chapter 3, Lecture Notes 1-5. 1. Simple Linear Regression Model: The simple linear regression model for the mean of Y given X is E(Y|X) = β0 + β1X, where β1 = slope = change in the mean of Y for each one-unit change in X; β0 = intercept = mean of Y given X = 0. The disturbance ei for the simple linear regression model is the difference between the actual Yi and the mean of Yi given Xi for observation i: ei = Yi − E(Yi|Xi) = Yi − (β0 + β1Xi). In addition, the simple linear regression model makes the following assumptions about the disturbances ei: (i) Linearity assumption: E(ei) = 0. This implies that the linear model for the mean of Y given X is the correct model for the mean. (ii) Constant variance assumption: The disturbances ei are assumed to all have the same variance σ². (iii) Normality assumption: The disturbances ei are assumed to have a normal distribution. (iv) Independence assumption: The disturbances ei are assumed to be independent. 2. Least Squares Estimat ...

#### homework3

UPenn, STAT 112
Excerpt: ... Homework 3, Statistics 112, Fall 2005 This homework is due Thursday, October 6th at the beginning of class. Notes: (1) All data sets mentioned can be found under the data sets link on our web site; (2) Use JMP for all problems. 1. Dielman, Problem 3.11, page 102. Read the background on the problem on page 92. The data set is in sales_advertising.JMP . 2. Dielman, Problem 3.15, page 111. 3. In most jurisdictions, driving an automobile with a blood alcohol level in excess of .08 is a felony. Because of a number of factors, it is difficult to provide guidelines on when it is safe for someone who has consumed alcohol to drive a car. In an experiment to examine the relationship between blood alcohol level and the weight of a drinker, 50 men of varying weights were each given three beers to drink and 1 hour later their blood alcohol level was measured. The data are stored in bloodalcohol.JMP on the web site. (a) Fit a simple linear regression model to predict blood alcohol level based on weight. Check the assumptio ...

#### Economics.3640.Lecture.34.Exam.6.Solution

Utah, ECON 3640
Excerpt: ... Economics 3640-001 6th Exam. True or False? If false, explain why. Instructor: Sanghoon Lee. Answer: All True. 1. Suppose that we have a Simple Linear Regression Model: y = β0 + β1x + ε. We assume that the variance of the probability distribution of the random error ε is constant for all settings of the independent variable. This is equivalent to assuming that the variance of the random error is equal to a constant, say σ², for all values of the independent variable. 2. One way to decide quantitatively how well a straight line fits a set of data is to note the extent to which the data points deviate from the line. The sum of squares of the errors (SSE) is a minimum for the least squares line (or, the regression line). 3. Suppose that we estimate the values of β0 and β1 by using the method of least squares: ŷ = β̂0 + β̂1x = 45.357 + 0.675x. The slope of the least squares line, β̂1 = 0.675, implies that for every unit increase of the independent variable, the mean value of the dependent var ...

#### Economics.3640.Lecture.34.Exam.6

Utah, ECON 3640
Excerpt: ... Economics 3640-001 6th Exam. True or False? If false, explain why. Instructor: Sanghoon Lee. 1. Suppose that we have a Simple Linear Regression Model: y = β0 + β1x + ε. We assume that the variance of the probability distribution of the random error ε is constant for all settings of the independent variable. This is equivalent to assuming that the variance of the random error is equal to a constant, say σ², for all values of the independent variable. 2. One way to decide quantitatively how well a straight line fits a set of data is to note the extent to which the data points deviate from the line. The sum of squares of the errors (SSE) is a minimum for the least squares line (or, the regression line). 3. Suppose that we estimate the values of β0 and β1 by using the method of least squares: ŷ = β̂0 + β̂1x = 45.357 + 0.675x. The slope of the least squares line, β̂1 = 0.675, implies that for every unit increase of the independent variable, the mean value of the dependent variable is estimated to increase by .67 ...
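The slope interpretation in statement 3 can be verified directly from the fitted line quoted in the exam: the estimated mean of y changes by exactly the slope for each one-unit increase in x.

```python
# The fitted least-squares line quoted in the exam.
def yhat(x):
    return 45.357 + 0.675 * x

# A one-unit increase in x changes the estimated mean of y by b1 = 0.675,
# regardless of where the increase occurs.
increase = yhat(11) - yhat(10)
print(increase)
```

This constancy of the per-unit change is what "linear" buys: the same 0.675 applies between any pair of x values one unit apart.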

#### spring09_3500_exam2_review_problems

Maryville MO, DKS 3500
Excerpt: ... 1. Suppose a dataset with five observations is given below: [table of five (x, y) observations; the values and the handwritten solution work are garbled in extraction] (a) Find the least squares line of a simple linear regression model that attempts to study the relationship between X and Y above. (b) Circle the correct answer for the question below using only the five given observations in problem 1: A 95% confidence interval for the prediction of an individual value of Y given xp = 6 would be: wider than / narrower than / of equal width to a 95% confidence interval for the prediction of an individual value of Y given xp = 3. (c) Circle the corr ...
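Part (b) turns on the fact that the standard error for predicting an individual Y grows with the distance of xp from x̄. Since the original five observations are garbled, a sketch with invented data shows the formula at work:

```python
import math

# Half-width of a prediction interval at x = xp is proportional to
# s * sqrt(1 + 1/n + (xp - xbar)^2 / Sxx), so it grows with |xp - xbar|.
# The data below are invented for illustration.
x = [2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.0, 3.0, 2.0, 5.0, 4.0]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
b0 = ybar - b1 * xbar
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))   # residual standard error

def pred_se(xp):
    """Standard error for predicting an individual Y at xp."""
    return s * math.sqrt(1 + 1 / n + (xp - xbar) ** 2 / Sxx)

# Here xbar = 4, so xp = 6 lies farther from the mean than xp = 3:
print(pred_se(6.0), pred_se(3.0))  # the first is larger
```

Whether 6 or 3 is farther from x̄ in the actual exam data depends on the garbled table; the general principle is that the interval widens as xp moves away from x̄.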

#### hw2-fall08

N.C. State, ST 512
Excerpt: ... ST 512 Homework assignment #2, Dr. Jason A. Osborne. 1. Rao 12.3a 2. Rao 12.5ab; also, estimate the regression parameters using Table 8.1 3. Refer to Rao 12.6b; estimate the contrasts described in parts i and ii. 4. Rao 12.10bdef 5. Rao 12.11d 6. Rao 12.18abcd(try with 90%)fgh(obtain a 95% prediction INTERVAL) (see the code in oysters.sas on the website to read in the data) 7. Rao 9.17ac (Use 1 − α = 0.95 and also obtain Scheffé and Bonferroni intervals for all pairwise differences.) 8. Using the data from Rao 10.2, test the adequacy of the simple linear regression model in which the mean soil water content is linear in depth. Which model would you select, the linear regression model or the one-factor ANOVA model with 4 treatment means? 9. Consider designing an experiment to evaluate the potential effectiveness of t = 5 weight-reducing agents. Suppose that n subjects are to be assigned at random to each of t = 5 treatment groups. Suppose that the smallest meaningful effect on weight loss that researchers ...

#### ms230c1a

East Los Angeles College, MS 230
Excerpt: ... MS230 Coursework 1. Hand in at lecture 18th October 2004. 1. A marketing researcher studied annual sales of a product that had been introduced 10 years ago. The data were as follows, where x is the year and Y is sales in thousands of units: x = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; Y = 98, 135, 162, 178, 221, 232, 283, 300, 374, 395. Perform a full analysis of these data including: (a) a fit of a simple linear regression model with an appropriate plot (b) calculation of the standardised residuals and plots with comments on the model assumptions of i. homogeneity of variance ii. normality iii. the linearity of the model in x. If you find evidence that a transformation of Y might be necessary, try taking Y′ = √Y and repeat the above. For your chosen model find: (c) 95% confidence intervals for the slope and intercept parameters (Note: you cannot get these directly from R, you have to do some work) (d) a plot showing the fitted line and 90% confidence bands for it. Write up your conclusions in a short, mainly verbal, one or two page report supple ...
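Part (a) can be done by hand with the closed-form least-squares formulas. The course uses R; this Python sketch on the sales series above is only a cross-check:

```python
# Least-squares fit of the ten-year sales series:
# x = year, Y = sales in thousands of units.
x = list(range(1, 11))
y = [98, 135, 162, 178, 221, 232, 283, 300, 374, 395]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b1 = Sxy / Sxx           # slope: average yearly sales growth
b0 = ybar - b1 * xbar    # intercept

print(f"fitted line: yhat = {b0:.2f} + {b1:.2f} x")
```

The slope estimate is roughly 32.5 thousand units per year; R's `lm(y ~ x)` should agree, and the hand computation of Sxx and the residual variance is exactly what part (c)'s "do some work" requires for the confidence intervals.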

#### Quiz03

San Jose State, MATH 161
Excerpt: ... Math 161B - Spring 2009 Quiz 3 Fourteen observations on a quantitative predictor X and a quantitative response Y are related via a simple linear regression model . The SPSS output of the regression analysis can be found below. (Note that some of the output values are missing.) (a) (3 points) Fill in the missing value of MSE in the ANOVA table above. (b) (3 points) Use the values from the ANOVA table to compute the coefficient of determination. (c) (6 points) Write down the fitted regression model (with numbers wherever possible). (d) (4 points) Assuming that the estimated model parameters are exact, find a 95% confidence interval for the response Y if x = 10 (recall that z0.025 = 1.96). (e) (4 points) What percentage of variation in Y can be explained through the regression on X? 1 ...
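Since the SPSS output itself is not reproduced in the excerpt, a sketch with invented ANOVA entries shows how parts (a), (b), and (e) fit together arithmetically:

```python
# Hypothetical ANOVA table entries (the quiz's actual SPSS values are
# not shown in the excerpt); n = 14 observations, so df_error = n - 2.
ss_regression = 120.0   # hypothetical SSR
ss_error = 30.0         # hypothetical SSE
df_error = 12

mse = ss_error / df_error                 # (a) fills in the missing MSE
ss_total = ss_regression + ss_error
r_squared = ss_regression / ss_total      # (b) coefficient of determination

print(mse, r_squared)
# (e): 100 * r_squared percent of the variation in Y is explained
# by the regression on X.
```

The same identities (SST = SSR + SSE, MSE = SSE/(n − 2), R² = SSR/SST) apply to whatever values the real output contains.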

#### assignment02

Mich Tech, FW 5411
Excerpt: ... FW5411 Applied Regression Analysis, 07 February 2009. Assignment 2: Simple Linear Regression. Objectives: Practice fitting and evaluating simple linear regression models. Instructions: Complete problems number 2.4, 2.5, 2.9, 2.13 and 2.19 in Montgomery et al. (2006). Note that problem 2.9 asks that you use data in Table 2.9; this is an error and the data are actually in Table 2.11 on p. 53. Product: Summarize your answers briefly and include output from the computer software where appropriate that illustrates your work. Professionalism counts, even in an applied statistics class. Due Date: 23 February 2009. Original content Copyright 2009 by Robert Froese http://www.biometrics.mtu.edu ...

#### Solution06

San Jose State, MATH 161
Excerpt: ... at is the approximate percentage of female runners with stride rates of 3.3 that run faster than 19.2 m/s? A runner with a stride rate of 3.3 has an average speed of ŷ = −21.786 + 12.406 × 3.3 = 19.153. The variance of the speed is 0.007. Thus, according to the assumptions of the regression model, the distribution of the speed is Normal(μ = 19.153, σ² = 0.007). Hence P(Speed > 19.2) = P(Z > (19.2 − 19.153)/√0.007) = P(Z > 0.562) = 0.287. About 28.7% of runners with stride rate of 3.3 run faster than 19.2 m/s. 2. A number of studies have shown lichens (certain plants composed of an alga and a fungus) to be excellent bioindicators of air pollution. The file Lichen.txt contains data (read from a graph) on x = NO3 wet deposition (in g N/m², a measure for air pollution) and y = lichen N (in % dry weight). (a) Use SPSS to draw a scatter plot of Lichen N against NO3 wet deposition. Is it reasonable to fit a simple linear regression model to this data? Explain. Eyeball from the graph what ...
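The tail probability P(Z > 0.562) = 0.287 in the solution can be reproduced with the standard normal CDF written via the complementary error function:

```python
import math

# For speed ~ Normal(mu = 19.153, sigma^2 = 0.007), find P(speed > 19.2)
# by standardizing and computing a standard normal upper-tail probability.
mu, var = 19.153, 0.007
z = (19.2 - mu) / math.sqrt(var)

# P(Z > z) = 0.5 * erfc(z / sqrt(2)) for Z ~ Normal(0, 1).
p = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.3f}, P(Z > z) = {p:.3f}")  # about 0.562 and 0.287
```

The same value comes from a z-table or from `statistics.NormalDist().cdf`; the solution's 28.7% matches to three decimals.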

#### assn1-333

Wisconsin, STAT 333
Excerpt: ... Statistics 333, Assignment 1, Due Sept. 19, 2003. 1. For the simple linear regression model Yi = β0 + β1Xi + εi, i = 1, ..., n, consider the least squares fits Ŷi = b0 + b1Xi and residuals given by ei = Yi − Ŷi = Yi − (b0 + b1Xi), i = 1, ..., n. Since b0 = Ȳ − b1X̄, it is easy to see that (1/n)Σ Ŷi = Ȳ and hence also that Σ(Yi − Ŷi) = 0. Similarly, also show that the residuals ei satisfy the additional `orthogonality' constraint Σ Xi(Yi − Ŷi) = 0. 2. For the simple linear regression model in Exercise 1, under the standard model assumptions for the random errors εi, show that the least squares estimator b1 = Sxy/Sxx and Ȳ = (1/n)Σ Yi have zero covariance, i.e., Cov(b1, Ȳ) = 0, where Sxy = Σ(Xi − X̄)Yi. 3. For the simple linear regression model, the estimator of the error variance σ² = Var(εi) is given by S² = (1/(n − 2))Σ(Yi − Ŷi)² = SSE/(n − 2). Show that this estimator S² is unbiased for σ², i.e., prove that E(S²) = σ². ...
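Problem 3's claim can be checked by Monte Carlo before proving it: averaging S² = SSE/(n − 2) over many simulated datasets recovers σ². Parameter values and the design points below are illustrative only:

```python
import random

# Monte Carlo check that S^2 = SSE / (n - 2) is unbiased for sigma^2.
random.seed(0)
beta0, beta1, sigma = 1.0, 2.0, 1.5          # illustrative true parameters
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
n = len(x)
xbar = sum(x) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)

reps = 20000
total = 0.0
for _ in range(reps):
    y = [beta0 + beta1 * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    total += sse / (n - 2)

print(total / reps, sigma ** 2)  # both close to 2.25
```

The n − 2 divisor is exactly what makes the average land on σ²; dividing SSE by n instead would bias the estimate downward, since two degrees of freedom are spent estimating b0 and b1.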

#### Lecture_8

UPenn, ESE 302
Excerpt: ... SYSTEMS 302, LECTURE 8. BIASED ESTIMATORS: Jensen's Inequality. EFFICIENT ESTIMATION: Efficiency of the Sample Mean. REGRESSION ANALYSIS: Simple Linear Regression Model, Estimation of Parameters. For next time: Devore, Section 12.1-12.2. BEST LINEAR UNBIASED ESTIMATORS: For any random sample (X1, ..., Xn) from a distribution with parameter θ, each unbiased estimator of θ which is of the form (1) θ̂ = Σ aiXi is called a linear unbiased estimator of θ. With this definition we have the following fundamental property of the sample mean: THEOREM. The sample mean X̄n is the unique linear unbiased estimator of the mean μ with smallest variance. Hence X̄n is called a Best Linear Unbiased (BLU) estimator of μ. SIMPLE LINEAR REGRESSION MODEL: There exist parameters β0, β1 and σ² such that for any fixed value of the independent (explanatory) variable, x, the dependent variable, Y, is related to x by the equation (1) Y = β0 + β1x + ε. The quanti ...
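The BLU theorem can be illustrated numerically. For i.i.d. Xi with variance σ², any weights ai summing to 1 give an unbiased linear estimator Σ aiXi with variance σ²·Σ ai²; equal weights 1/n (the sample mean) minimize that factor. The unequal weighting below is a made-up contrast case:

```python
# Compare the variance factor sum(a_i^2) of two linear unbiased
# estimators of the mean: the sample mean vs an unequal weighting.
n = 5
equal = [1.0 / n] * n                      # the sample mean's weights
unequal = [0.4, 0.3, 0.15, 0.1, 0.05]      # hypothetical alternative

# Both weight vectors sum to 1, so both estimators are unbiased.
assert abs(sum(equal) - 1.0) < 1e-12
assert abs(sum(unequal) - 1.0) < 1e-12

var_factor_equal = sum(a * a for a in equal)      # = 1/n
var_factor_unequal = sum(a * a for a in unequal)

print(var_factor_equal, var_factor_unequal)  # the sample mean's is smaller
```

This is a single comparison, not a proof; the theorem's "unique ... with smallest variance" follows from minimizing Σ ai² subject to Σ ai = 1, which forces ai = 1/n.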

#### lecture12

Uni. Worcester, MA 2612
Excerpt: ... Lecture 12. Administrative: MA 2612 - Applied Statistics II, D 2004. 1. HW 2 due tomorrow. Today: 1. The simple linear regression model (PNC 7.3) 2. Interpreting the fitted simple linear regression model 3. Assessing the quality of model fit. Simple linear regression: In the previous lecture we discussed methods for describing the relationship between two quantitative variables. Scatterplots were introduced as a means of displaying the relationship between two quantitative variables. The scatterplot reveals the general pattern of the relationship as well as any deviations from the pattern. Correlation, r, was introduced as a numerical measure of the direction and strength of a linear relationship between two variables. In this lecture we will introduce the simple linear regression model, which is used to represent one of the variables as a function of the other, usually for the purpose of prediction. The response variable is the variable we are interested in predicting. We denote ...