Week 2 - More on the Reliability, Precision, and Performance of the Regression Model and Its Estimated Parameters

Because the least-squares coefficient estimates ($\hat{\beta}_j$) and the SRF's ability to explain variation in the dependent variable (Y) can vary from sample to sample, we need measures of reliability and precision. Let us review some of the more useful indices and tests.

1. Standard error of $\hat{\beta}_j$ and related statistics

The standard error of $\hat{\beta}_j$ is the standard deviation of the sampling distribution of $\hat{\beta}_j$, based on sample estimates from repeated samples of a given size n. It is generally denoted $se(\hat{\beta}_j)$ for any given j (where j = 0, 1, ..., k):

$se(\hat{\beta}_0)$ -- standard error of the estimated coefficient for the y-intercept or constant term;
$se(\hat{\beta}_j)$ -- standard error of the estimated slope coefficient associated with the j-th regressor (j = 1, ..., k).

Hypothesis testing for estimated regression coefficients

We want to assess whether an estimate $\hat{\beta}_j$ differs significantly from a hypothesized value of $\beta_j$, as designated under a stated null hypothesis. For example, consider the two-tailed test:

$H_0: \beta_j = \beta_{j,H_0}$
$H_a: \beta_j \neq \beta_{j,H_0}$,  (j = 0, 1, ..., k)

Typically, the default test procedure assumes $\beta_{j,H_0} = 0$.

Test statistic (t-statistic):

$t = \dfrac{\hat{\beta}_j - \beta_{j,H_0}}{se(\hat{\beta}_j)}$

or, when we are testing whether the estimated coefficient is significantly different from zero:

$t = \dfrac{\hat{\beta}_j}{se(\hat{\beta}_j)}$

distributed as a t-distribution with $n - k^*$ degrees of freedom.

[Figure: the t probability density, with a central non-rejection region (fail to reject $H_0$) between $-t_{\alpha/2}$ and $+t_{\alpha/2}$, and a rejection region of area $\alpha/2$ in each tail (reject $H_0$).]

Two-tailed test criterion: if $|t| > t_{\alpha/2}$, we reject $H_0$ at the $(1 - \alpha) \times 100\%$ level of confidence.

Note: critical t-values are found for $(n - k^*)$ degrees of freedom, where n = sample size; k = number of regressors; and $k^* = k + 1$ = number of regression coefficients to be estimated, including the intercept or constant term.
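The two-tailed criterion above can be sketched numerically. In this example the coefficient estimate and its standard error are made-up illustrative numbers; the critical value 2.086 is the standard t-table value of $t_{0.025}$ for 20 degrees of freedom:

```python
# Two-tailed t-test on a single regression coefficient.
# beta_hat and se_beta are hypothetical values for illustration.
beta_hat = 0.75    # estimated coefficient, beta_j hat
se_beta = 0.25     # its standard error, se(beta_j hat)
beta_null = 0.0    # hypothesized value under H0 (the usual default)

# t = (beta_hat - beta_null) / se(beta_hat)
t_stat = (beta_hat - beta_null) / se_beta

# Critical value t_{alpha/2} for alpha = 0.05 and n - k* = 20
# degrees of freedom (standard table value).
t_crit = 2.086

# Reject H0 when |t| exceeds the critical value.
reject_h0 = abs(t_stat) > t_crit
print(f"t = {t_stat:.3f}, reject H0: {reject_h0}")  # t = 3.000, reject H0: True
```

With a package such as SciPy available, the hard-coded critical value could instead be computed with `scipy.stats.t.ppf(1 - 0.05/2, 20)`.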
In general, the t-tests on the individual $\hat{\beta}_j$'s allow us to evaluate the explanatory power and/or statistical significance of each individual explanatory variable in the model. Rule of thumb: the higher the absolute t-value, the greater the contribution of a variable (X) to explaining variation in the dependent variable Y.

For the bivariate model, it can be shown that the standard error of the estimated slope parameter $\hat{\beta}_1$ is

$se(\hat{\beta}_1) = \sqrt{\dfrac{\hat{\sigma}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}}$,  where  $\hat{\sigma}^2 = \dfrac{\sum_{i=1}^{n}\hat{\varepsilon}_i^2}{n - k^*}$

is the error variance (with $k^* = 2$). Recall that the square root of the error variance is the standard error of the estimate, or root mean square error (RMSE).
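The slope, its standard error, and the RMSE can be computed from first principles for a bivariate model. The data below are made up purely for illustration; everything else follows the formulas above:

```python
import math

# Hypothetical bivariate data (illustrative only).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)
k_star = 2                      # intercept + one slope coefficient

x_bar = sum(x) / n
y_bar = sum(y) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)                       # sum (X_i - X_bar)^2
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

b1 = sxy / sxx                  # OLS slope estimate, beta_1 hat
b0 = y_bar - b1 * x_bar         # OLS intercept estimate, beta_0 hat

# Residuals and error variance: sigma^2 hat = SSE / (n - k*)
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
sigma2_hat = sum(e ** 2 for e in residuals) / (n - k_star)

rmse = math.sqrt(sigma2_hat)                 # root mean square error
se_b1 = math.sqrt(sigma2_hat / sxx)          # se(beta_1 hat)
t_b1 = b1 / se_b1                            # t-statistic for H0: beta_1 = 0

print(f"b1 = {b1:.3f}, RMSE = {rmse:.4f}, se(b1) = {se_b1:.4f}, t = {t_b1:.3f}")
```

For this toy sample the slope is 0.6, the error variance is 0.8, and the RMSE is simply the square root of that error variance, exactly as stated above.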