(6) t = b_i / s_{b_i}

Just as with the t statistic for a t test, t for a regression coefficient tells us how large that coefficient is relative to how large it would be expected to be by chance (i.e., under the null hypothesis that its true value is zero). If t is large (either positive or negative), then b_i is larger than would be expected by chance, so chance is not a good explanation for the result that we got. In this case, we reject the null hypothesis and adopt the alternative hypothesis that b_i ≠ 0 (i.e., that X_i has a real effect on Y). If t is close to zero, then b_i fits with what we'd expect by chance, so we retain the null hypothesis that b_i = 0 (i.e., X_i has no real effect on Y).

The t statistic from a regression is used in the same way as in a t test. If t is greater than t_crit, then we reject the null hypothesis. The alternative (and equivalent) approach is to compute a p-value, which is the probability of a result as extreme as or more extreme than t: p(|t_df| > |t|). This is the formula for a two-tailed test, but we can also compute a one-tailed p-value if the direction of the effect (i.e., the sign of b_i) was predicted in advance. In either case, we reject the null hypothesis if p < α.

The only remaining information needed to find t_crit or p is the degrees of freedom. As usual, the degrees of freedom for t equal the degrees of freedom for the standard error used to compute t. The standard error of a regression coefficient, s_{b_i}, comes from SS_residual, which, as is explained below, has n – m – 1 degrees of freedom (n observations, m predictors). Therefore, a t test for a regression coefficient uses df = n – m – 1.
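The steps above can be sketched numerically. The snippet below is a minimal illustration, not taken from the handout: it fits a one-predictor regression to synthetic data (all variable names and the simulated data are our own), then computes the coefficient's standard error, the t statistic t = b_i / s_{b_i}, the degrees of freedom df = n – m – 1, and the two-tailed p-value.

```python
# Illustrative sketch of the t test for a regression coefficient.
# Synthetic data and variable names are assumptions for this example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m = 30, 1                       # n observations, m predictors
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # true slope is 2, so H0: b1 = 0 is false

# Fit Y = b0 + b1*X by least squares.
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals and their degrees of freedom, df = n - m - 1.
resid = y - X @ b
df = n - m - 1
ms_resid = resid @ resid / df      # mean squared residual

# Standard error of b1 and the t statistic t = b1 / s_b1.
cov_b = ms_resid * np.linalg.inv(X.T @ X)
se_b1 = np.sqrt(cov_b[1, 1])
t = b[1] / se_b1

# Two-tailed p-value: probability of a result as or more extreme than t.
p = 2 * stats.t.sf(abs(t), df)
print(t, p)
```

With a true slope of 2 and only modest noise, |t| comes out large and p far below α = .05, so the null hypothesis b_1 = 0 is rejected, matching the decision rule described above.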
This document was uploaded on 02/25/2014 for the course PSYC 3101 (Psychology) at Colorado, Spring '08, taught by MARTICHUSKI.
