A Quick Review of Hypothesis Testing

In this lecture we will quickly review the following:
• The basic one-sample t test as an example
• The decision procedure
• Type I and Type II error
• OC curves and sample size selection
• Practical vs. statistical significance
• The relationship between confidence intervals and hypothesis tests

A hypothesis test has two basic components:

1. Hypotheses:
   Null hypothesis H0
   Alternative hypothesis H1 (essentially "not H0")
   e.g.  H0: mu = 10    H1: mu != 10
2. A decision criterion.

It works like this:

   sample data  ->  criterion  ->  reject or fail to reject H0

There are four possible outcomes from a hypothesis test:
1. Fail to reject H0 when H0 is true (we made the correct decision)
2. Reject H0 when H0 is indeed false (right again)
3. Reject H0 when H0 is true (a Type I error)
4. Fail to reject H0 when H0 is false (a Type II error)

Define:
   alpha = Pr{we make a Type I error}  = Pr{reject H0 | H0 true}
   beta  = Pr{we make a Type II error} = Pr{fail to reject H0 | H0 false}

How good a test is is determined by these two probabilities.

Example: recall the one-sample t test.

Assumptions:
1. The population is NID(mu, sigma^2)
2. mu and sigma are unknown population parameters

   H0: mu = mu0    H1: mu != mu0    (mu0 is some specified constant)

Decision criterion: compute the test statistic from the data:

   T0 = (Xbar - mu0) / (S / sqrt(n))

If |T0| > Tc, then reject H0 (Tc is a "critical" value from a t table).

Why does this work?
1. The test statistic "measures something significant" about H0: if the sample average is far from the hypothesized value, H0 is likely to be false.
2. We know the distribution of T0 when H0 is true.

In this example, the test statistic T0 will be close to zero when H0 is true, and far from zero when H0 is false; thus it measures something significant about H0. If T0 is far enough away from zero, we can reject H0. But how far away from zero is far enough? That is why we need the distribution of T0 when H0 is true.
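The decision procedure above can be sketched in a few lines of Python. The sample data and the hypothesized mean mu0 = 10 are made-up illustrations, and the critical value 2.228 is the standard t-table entry t(0.025, 10) for a two-sided test at alpha = 0.05 with n = 11 observations:

```python
# A minimal sketch of the one-sample t test described above.
# The data and mu0 are hypothetical; the critical value is t(0.025, 10) = 2.228
# from a standard t table (two-sided test, alpha = 0.05, n - 1 = 10 df).
import math
from statistics import mean, stdev


def one_sample_t(data, mu0):
    """Return the T0 statistic for H0: mu = mu0."""
    n = len(data)
    xbar = mean(data)   # sample average
    s = stdev(data)     # sample standard deviation (n - 1 divisor)
    return (xbar - mu0) / (s / math.sqrt(n))


# n = 11 hypothetical observations
data = [9.8, 10.2, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3, 10.0, 10.6, 9.6]
t0 = one_sample_t(data, mu0=10.0)

t_crit = 2.228                 # t(alpha/2, n-1) for alpha = 0.05, 10 df
reject = abs(t0) > t_crit      # the decision rule: |T0| > Tc  ->  reject H0
```

For this particular sample T0 works out to 1.0, which is well inside the critical value, so we fail to reject H0.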
[picture of the t distribution]

For example, when H0 is true and we have n = 11 observations (10 degrees of freedom), then from a t table

   Pr{-3.169 < T0 < +3.169} = 0.99, i.e. Pr{|T0| > 3.169} = 0.01

Thus I set my criterion to be: "reject H0 if |T0| > 3.169." Then I have only a 0.01 probability of a Type I error.

To summarize the test procedure:
1. Take a sample of n observations X1, X2, ..., Xn
2. Compute the sample average Xbar
3. Compute the sample standard deviation S
4. Compute T0
5. If |T0| > t(alpha/2, n-1), then reject H0

"P-values" of tests

We can actually report results two ways:
1. State alpha ahead of time, and report whether we reject H0 or not.
2. After the analysis, state the value of alpha which is on the border between reject and do not reject....
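The second reporting style is the p-value: the probability, computed under H0, of seeing a test statistic at least as extreme as the one observed. Without a t table we can estimate it by simulation: repeatedly draw samples from a population where H0 is true, compute T0 for each, and count how often |T0| exceeds the observed value. Everything here (n = 11, mu0 = 10, an observed t0 of 1.0) is an illustrative assumption, not from the lecture:

```python
# A sketch of estimating a two-sided p-value by Monte Carlo simulation.
# We simulate the world where H0 is true (sampling from a normal population
# with mean mu0) and estimate Pr{|T0| > observed t0}. All parameters are
# hypothetical; a t table or software would give the exact value.
import math
import random
from statistics import mean, stdev


def t_stat(data, mu0):
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / math.sqrt(n))


random.seed(0)                 # fixed seed so the sketch is reproducible
n, mu0, t_obs = 11, 10.0, 1.0  # hypothetical observed statistic
reps = 20000

exceed = 0
for _ in range(reps):
    # Draw a sample with H0 true (the population sd is irrelevant to T0's
    # distribution, so 1.0 is fine).
    sample = [random.gauss(mu0, 1.0) for _ in range(n)]
    if abs(t_stat(sample, mu0)) > abs(t_obs):
        exceed += 1

p_value = exceed / reps  # should land near the exact Pr{|t_10| > 1.0}, about 0.34
```

A p-value around 0.34 is far above any conventional alpha, so with this observed statistic we would fail to reject H0 at alpha = 0.05 or 0.01.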
This note was uploaded on 08/06/2008 for the course IE 410 taught by Professor Storer during the Fall '04 term at Lehigh University.