
# Economics 20 Lecture #8: Multivariate Regression Statistical Inference



## Outline

- Last Thursday: we started talking about sampling variability. Our estimates are themselves random variables: they depend on which data we happen to get in our random sample.
- This time: hypothesis testing and confidence intervals.
## Wednesday…

Under MLR.1–MLR.5 (plus no error autocorrelation):

$$\mathrm{sd}(\hat\beta_j) = \sigma \left[ N \,\mathrm{Var}(x_j)\, \left(1 - R_j^2\right) \right]^{-1/2}$$

where $R_j^2$ is the $R^2$ from a regression of $x_j$ on the rest of the $x$'s in the regression. We estimate the standard error by replacing $\sigma$ with $s$, the root mean squared error (the standard deviation of the residuals), which is our estimate of $\sigma$:

$$\mathrm{se}(\hat\beta_j) = s \left[ N \,\mathrm{Var}(x_j)\, \left(1 - R_j^2\right) \right]^{-1/2}$$
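The standard-error formula can be checked numerically. The sketch below (all numbers are arbitrary illustrations, assuming NumPy) computes $\mathrm{se}(\hat\beta_j)$ from the slide's formula, with $R_j^2$ obtained by regressing $x_j$ on the other regressors, and compares it against the usual matrix formula $s^2 (X'X)^{-1}$:

```python
import numpy as np

# Simulated data (all parameter values are made up for illustration)
rng = np.random.default_rng(0)
N, K = 500, 2
X = rng.normal(size=(N, K))
X[:, 1] += 0.5 * X[:, 0]                  # make the regressors correlated
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=N)

Xd = np.column_stack([np.ones(N), X])     # add an intercept
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta_hat
s2 = resid @ resid / (N - K - 1)          # s^2 = SSR / (N - K - 1)

# se(beta_1) via the slide's formula, for j = first slope
others = np.column_stack([np.ones(N), X[:, 1]])        # the other x's
g, *_ = np.linalg.lstsq(others, X[:, 0], rcond=None)
r_j = X[:, 0] - others @ g                             # residual of x_j on the rest
R2_j = 1 - (r_j @ r_j) / np.sum((X[:, 0] - X[:, 0].mean()) ** 2)
SST_j = N * np.var(X[:, 0])               # N * Var(x_j) = sum of squared deviations
se_formula = np.sqrt(s2 / (SST_j * (1 - R2_j)))

# Same quantity from the covariance matrix s^2 (X'X)^{-1}
se_matrix = np.sqrt(s2 * np.linalg.inv(Xd.T @ Xd)[1, 1])

print(se_formula, se_matrix)              # the two agree
```

The agreement is exact (up to floating point): by the Frisch–Waugh result, the $(j,j)$ element of $(X'X)^{-1}$ equals one over the residual sum of squares from regressing $x_j$ on the other regressors, which is exactly $N\,\mathrm{Var}(x_j)(1-R_j^2)$.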

## More than just variance

- For hypothesis testing and confidence intervals, we also want to say something about the whole distribution of the estimate. Loosely speaking: what is the chance that our estimate is close to the truth?
- So how do we determine the distribution of our estimates?
- One approach: assume something about the distribution of the errors, namely that they are normal, $u \sim N(0, \sigma^2)$, i.e. normally distributed with mean 0 and variance $\sigma^2$.
## Example: the homoskedastic normal distribution with a single explanatory variable

[Figure: the regression line $E(y|x) = \beta_0 + \beta_1 x$, with identical normal conditional densities $f(y|x)$ centered on the line at two values $x_1$ and $x_2$.]

## Normal Sampling Distributions

Under MLR.1–MLR.6, $\hat\beta_j$ is normally distributed because it is a linear combination of the (assumed) normally distributed errors:

$$\hat\beta_j \sim N\!\left(\beta_j,\ \mathrm{Var}(\hat\beta_j)\right)$$

Standardized version:

$$\frac{\hat\beta_j - \beta_j}{\mathrm{sd}(\hat\beta_j)} \sim N(0, 1)$$

However, $\mathrm{sd}(\hat\beta_j)$ is unknown (it involves $\sigma$), so…
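A small Monte Carlo makes the standardized result concrete: drawing many samples with normal errors and standardizing the slope estimate by its true standard deviation should give something close to $N(0,1)$. A minimal sketch (all parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 50, 5000
beta0, beta1, sigma = 1.0, 2.0, 1.5      # made-up "true" values

x = rng.normal(size=N)                   # one fixed design, reused each replication
X = np.column_stack([np.ones(N), x])
XtX_inv = np.linalg.inv(X.T @ X)
sd_b1 = sigma * np.sqrt(XtX_inv[1, 1])   # true sd(beta1_hat) for this design

z = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + sigma * rng.normal(size=N)   # normal errors (MLR.6)
    b = XtX_inv @ (X.T @ y)                              # OLS estimates
    z[r] = (b[1] - beta1) / sd_b1                        # standardized slope

print(z.mean(), z.std())   # close to 0 and 1, as N(0,1) predicts
```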
## Normal Sampling Distributions (continued)

Under MLR.1–MLR.6:

$$\frac{\hat\beta_j - \beta_j}{\mathrm{se}(\hat\beta_j)} \sim t_{N-K-1}$$

a $t$-distribution with $N-K-1$ "degrees of freedom."

- A $t$-distribution is like a $N(0,1)$ but with "fatter tails," to account for the fact that $\sigma$ has been replaced with a noisy estimate, $s$.
- How well $s$ is estimated is measured by its degrees of freedom.
- As $N-K-1 \to \infty$, $s \to \sigma$ and $t_{N-K-1} \to N(0,1)$.
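The convergence of $t_{N-K-1}$ to $N(0,1)$ is easy to see from critical values. A quick check (assuming SciPy) prints the two-sided 5% critical value, the 97.5th percentile, for increasing degrees of freedom:

```python
from scipy import stats

# Two-sided 5% critical values shrink toward the N(0,1) value (about 1.96)
# as the degrees of freedom grow.
for df in (5, 30, 120, 1000):
    print(df, stats.t.ppf(0.975, df))
print("N(0,1):", stats.norm.ppf(0.975))
```

With only 5 degrees of freedom the critical value is noticeably larger than 1.96 (the fat tails); by 120 it is already very close.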

## Hypothesis testing: the t-Test

- Knowing the sampling distribution of the estimator lets us carry out hypothesis tests.
- Start with a null hypothesis: what we assume the true value of the parameter to be in the absence of counter-evidence. For example, $H_0\!: \beta_j = 0$.
- Why is this a reasonable null? What does it say? That $x_j$ has no relationship with $y$ (controlling for the other $x$'s).
## t-Test (continued)

Besides our null, $H_0$, we need an alternative hypothesis, $H_1$, and a significance level, $\alpha$. $H_1$ may be one-sided or two-sided:

- $H_1\!: \beta_j > 0$ and $H_1\!: \beta_j < 0$ are one-sided alternatives.
- $H_1\!: \beta_j \neq 0$ is a two-sided alternative.
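The mechanics of the one- and two-sided alternatives can be sketched in a few lines (assuming SciPy; the estimate, standard error, and degrees of freedom below are hypothetical numbers made up for illustration):

```python
from scipy import stats

# Hypothetical inputs: an estimated slope, its standard error,
# and degrees of freedom N - K - 1.
beta_hat, se, df = 0.42, 0.19, 96

t_stat = beta_hat / se                       # t statistic for H0: beta_j = 0
p_two = 2 * stats.t.sf(abs(t_stat), df)      # H1: beta_j != 0 (two-sided)
p_right = stats.t.sf(t_stat, df)             # H1: beta_j > 0  (one-sided)
p_left = stats.t.cdf(t_stat, df)             # H1: beta_j < 0  (one-sided)

print(t_stat, p_two, p_right, p_left)
```

Note the bookkeeping this makes explicit: the two one-sided p-values sum to one, and for a positive t statistic the two-sided p-value is twice the right-tail one.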


## This document was uploaded on 01/31/2011.
