# Single sample hypothesis testing (9.07, 3/02/2004)
## Statistical inference

We generally make two kinds of statistical inference:

1. We estimate some population parameter, using confidence intervals and margins of error.
2. We evaluate data to determine whether it provides evidence for some claim about the population. This is *significance testing*.
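As a quick illustration of the first kind of inference (my own sketch, not from the notes), here is the usual normal-approximation confidence interval, where the margin of error is a z-multiple of the standard error:

```python
import math

def confidence_interval(mean, sd, n, z=1.96):
    """Approximate 95% CI for a population mean from sample statistics.

    Normal approximation: margin of error = z * sd / sqrt(n).
    """
    se = sd / math.sqrt(n)   # standard error of the sample mean
    margin = z * se          # margin of error
    return (mean - margin, mean + margin)

# Hypothetical sample: n = 100, mean = 50, SD = 10
lo, hi = confidence_interval(50, 10, 100)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # 95% CI: (48.04, 51.96)
```

The numbers here are made up; the point is just that the interval's width shrinks with the square root of the sample size.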
## Chance vs. systematic factors (recall from our first class)

- A *systematic factor* is an influence that has a predictable effect on (a subgroup of) our observations. E.g. a longevity gain for elderly people who remain active; a health benefit for people who take a new drug.
- A *chance factor* is an influence that contributes haphazardly (randomly) to each observation, and is unpredictable. E.g. measurement error.

## What can observed effects be due to?

A. Systematic effects alone (no chance variation). We're interested in systematic effects, but this almost never happens!
B. Chance effects alone (all chance variation). Often occurs, and often boring, because it suggests the effects we're seeing are just random.
C. Systematic effects plus chance. Often occurs, and interesting, because there's at least some systematic factor.

An important part of statistics is determining whether we've got case B or case C.

## Tests of significance

Tests of significance were invented to deal with exactly this question: is there a systematic effect, or just noise (chance)?
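To make the B-vs-C distinction concrete, here is a small simulation (my illustration, not from the notes): one sample is pure chance variation, the other adds a systematic shift to the same kind of noise. Both samples wobble, which is why eyeballing the means is not enough and a formal test is needed.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is reproducible

n = 100
# Case B: chance alone -- Gaussian noise around a true mean of 0
noise_only = [random.gauss(0, 10) for _ in range(n)]
# Case C: a systematic shift of +5, plus the same kind of chance noise
systematic = [5 + random.gauss(0, 10) for _ in range(n)]

# Each sample mean is off from its true value by chance variation,
# so the question "is there a systematic effect?" needs a test.
print(f"mean, chance only:         {statistics.mean(noise_only):+.2f}")
print(f"mean, systematic + chance: {statistics.mean(systematic):+.2f}")
```

The shift size (+5) and noise scale (10) are arbitrary choices for the demo.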

## Example (from your book): evaluating a new tax bill

A senator introduces a bill to simplify the tax code. He claims the bill is revenue-neutral, i.e. on balance, tax revenues for the government will stay the same. To evaluate his claim, the Treasury Department will compare, for 100,000 representative tax returns, the amount of tax paid under the new bill vs. under the old tax law. Define

d = tax under the new bill − tax under the old rules

(this d is going to be our random variable).

First, though, just to get a hint of how the results might turn out (with less work), they run a pilot study: they look at the difference d for 100 returns randomly chosen from the 100,000 representative returns. Results from this sample of 100 returns:

- m(d) = −\$219
- s(d) = \$725

This is a pretty big standard deviation relative to a mean of −\$219. How much tax people pay is highly variable, and it's not surprising that a new bill would have a big effect on some returns and very little on others.
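Given these pilot statistics, the natural first computation (a sketch using the usual standard-error formula; the notes presumably do this next) is the standard error of the sample mean of d:

```python
import math

n = 100          # pilot sample size
mean_d = -219.0  # sample mean of d, in dollars
sd_d = 725.0     # sample standard deviation of d, in dollars

se = sd_d / math.sqrt(n)  # standard error of the sample mean
print(f"SE of m(d): ${se:.2f}")  # SE of m(d): $72.50
```

So even though \$725 looks huge next to −\$219, the relevant yardstick for the *mean* is \$72.50, not \$725.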

## Initial impressions

If, under the new law, the government really loses an average of \$219 per tax return, that could add up to a lot of money:

- \$200/return × 100 million returns = \$20 billion!

But this was just a pilot with 100 returns, and there's a very large standard deviation. Do we expect this result of m(d) = −\$219 to generalize, or is it different from \$0 just by chance?
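The notes leave this question open here; a minimal sketch of the one-sample z-test they are leading up to (my computation, assuming the null hypothesis that the true mean of d is \$0) looks like this:

```python
import math

n = 100
mean_d = -219.0  # pilot sample mean of d
sd_d = 725.0     # pilot sample standard deviation of d

se = sd_d / math.sqrt(n)   # standard error = 72.5
z = (mean_d - 0.0) / se    # test statistic under H0: mean(d) = 0

# Two-sided p-value from the standard normal distribution
p = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, p = {p:.4f}")  # z = -3.02, p = 0.0025
```

A sample mean more than 3 standard errors below zero is hard to attribute to chance alone, which is exactly the kind of conclusion a significance test formalizes.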