Single sample hypothesis testing 9.07 3/02/2004
Statistical Inference
We generally make two kinds of statistical inference:
1. We estimate some population parameter, using confidence intervals and margins of error.
2. We evaluate data to determine whether it provides evidence for some claim about the population. This is significance testing.
Recall from our first class: chance vs. systematic factors
• A systematic factor is an influence that has a predictable effect on (a subgroup of) our observations.
  – E.g., a longevity gain to elderly people who remain active.
  – E.g., a health benefit to people who take a new drug.
• A chance factor is an influence that contributes haphazardly (randomly) to each observation, and is unpredictable.
  – E.g., measurement error.
Observed effects can be due to:
A. Systematic effects alone (no chance variation). We're interested in systematic effects, but this almost never happens!
B. Chance effects alone (all chance variation). Often occurs; often boring, because it suggests the effects we're seeing are just random.
C. Systematic effects plus chance. Often occurs; interesting, because there's at least some systematic factor at work.
An important part of statistics is determining whether we've got case B or case C.
Tests of significance
• Invented to deal with this question of whether there's a systematic effect, or just noise (chance).
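A minimal simulation (not from the slides) makes the B-vs-C question concrete: sample means bounce around even when only chance is at work, so an observed effect alone doesn't tell us which case we're in. The $725 spread and −$219 shift below are assumptions borrowed from the tax example later in the deck.

```python
import random
import statistics

random.seed(0)  # make the sketch reproducible

def sample_mean(shift, sd=725.0, n=100):
    """Mean of n observations = systematic shift + Gaussian chance noise."""
    return statistics.mean(shift + random.gauss(0, sd) for _ in range(n))

# Case B: chance alone (true effect is zero).
chance_only = [sample_mean(shift=0) for _ in range(5)]
# Case C: a real systematic shift plus the same chance noise.
with_effect = [sample_mean(shift=-219) for _ in range(5)]

print("chance alone:       ", [round(m) for m in chance_only])
print("systematic + chance:", [round(m) for m in with_effect])
```

The chance-only means scatter around $0, while the means with a systematic factor scatter around −$219; a significance test formalizes how far from zero a sample mean must fall before chance alone becomes an implausible explanation.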
Example (from your book)
• A senator introduces a bill to simplify the tax code. He claims the bill is revenue-neutral, i.e. on balance, tax revenues for the government will stay the same.
• To evaluate his claim, the Treasury Department will compare, for 100,000 representative tax returns, the amount of tax paid under the new bill vs. under the old tax law.
• d = tax under the new bill − tax under the old rules (this d is going to be our random variable).
Evaluating the new tax bill
First, though, to get a hint of how the results might turn out (with less work), they run a pilot study: they look at the difference d between the old and new rules for 100 returns randomly chosen from the 100,000 representative returns. Results from this sample of 100 returns:
  m(d) = −$219, s(d) = $725
• This is a pretty big standard deviation relative to a mean of −$219.
• How much tax people pay is highly variable, and it's not surprising that a new bill would have a big effect on some returns and very little effect on others.
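A quick sanity check (not in the slides): the reported summary statistics already determine how much the pilot's sample mean would wobble from one random sample of 100 to another, via the standard error of the mean.

```python
import math

# Summary statistics reported for the pilot sample (from the slide above).
n = 100          # returns in the pilot
mean_d = -219.0  # average difference, new bill minus old rules ($)
sd_d = 725.0     # standard deviation of the differences ($)

# Standard error of the mean: typical chance variation in the sample
# average of d, from pilot to pilot.
se = sd_d / math.sqrt(n)
print(f"SE of mean = ${se:.2f}")  # → SE of mean = $72.50
```

So even though $725 dwarfs $219 observation-by-observation, the average over 100 returns is only uncertain by about $72.50.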
• If, under the new law, the government really loses an average of $219 per tax return, that could add up to a lot of money: roughly $200/return × 100 million returns = $20 billion!
• But this was just a pilot with 100 returns, and there's a very large standard deviation.
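The preview cuts off here, but the natural next step for a single-sample test is to compare the observed mean to the senator's revenue-neutral claim (a true mean difference of $0). As a sketch using the numbers above (the normal/z approximation is an assumption; the course may instead use a t statistic at this sample size):

```python
import math

n, mean_d, sd_d = 100, -219.0, 725.0
claimed_mean = 0.0  # revenue-neutral: the average difference should be $0

se = sd_d / math.sqrt(n)              # standard error, $72.50
z = (mean_d - claimed_mean) / se      # about -3.02 standard errors from the claim
# Two-sided p-value under the normal approximation.
p = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

An observed mean about three standard errors below $0 would be very unlikely if the bill were truly revenue-neutral, which is exactly the kind of chance-vs-systematic judgment a significance test formalizes.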

This note was uploaded on 11/11/2011 for the course BIO 9.07 taught by Professor Ruthrosenholtz during the Spring '04 term at MIT.
