Chapter 8  Statistical Power

8.1 Tests of Hypotheses Revisited

As discussed earlier, a standard practice in research consists of:

1. Select null and alternative hypotheses.
2. Specify the significance level of the test, α. (Remember that the significance level is the probability that the test rejects a true null hypothesis.)
3. Collect and analyze data; decide whether or not to reject the null.

In this chapter we will consider the following two questions. In practice, the first question is hugely more applicable than the second.

1. Just because we fail to reject the null, how good should we feel about continuing to assume it is true?
2. Just because we reject the null, does it really mean the null is no good?

Rather than try to explore these questions in a general mathematical way, I will begin with several examples. To fix ideas, consider our Fisher's test for investigating whether p is constant in a sequence of trials that might be BT. Suppose we obtain the following data.

Table 1
               Success   Failure   Total     p̂
First Half         4         1        5    0.80
Second Half        1         4        5    0.20
Total              5         5       10

The P-value for Fisher's test is 0.2063, and with any popular choice of α the decision would be to fail to reject. But note that the p̂'s are very different! Why did we get such a large P-value when the p̂'s are so different? Well, because we do not have much data. In fact, if the amount of data is small, it can be difficult or impossible to reject the null. For example, the following table gives the largest possible difference between the p̂'s, yet the P-value is 0.1000, too large to reject for any α < 0.10.

Table 2
               Success   Failure   Total     p̂
First Half         3         0        3    1.00
Second Half        0         3        3    0.00
Total              3         3        6

At the other extreme (and note that these data are a bit silly in practice), consider the following table.

Table 3
               Success   Failure     Total      p̂
First Half      50,500    49,500   100,000   0.505
Second Half     50,000    50,000   100,000   0.500
Total          100,500    99,500   200,000

This table gives a P-value of 0.0256, which would lead to rejecting the null for α = 0.05. But I cannot think of any scientific problem for which I would want to conclude that p has changed! The p̂'s are so very close that I believe the assumption of constant p would be scientifically useful.
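The P-values quoted above can be checked numerically. The following is a minimal sketch, not part of the original notes, assuming that SciPy's fisher_exact (two-sided) is an acceptable stand-in for the Fisher's test used in earlier chapters.

# Sketch (not from the original notes): reproduce the two-sided P-values
# quoted for Tables 1-3 using SciPy's Fisher's exact test.
from scipy.stats import fisher_exact

# Rows: first half, second half; columns: successes, failures.
tables = {
    "Table 1": [[4, 1], [1, 4]],
    "Table 2": [[3, 0], [0, 3]],
    "Table 3": [[50_500, 49_500], [50_000, 50_000]],
}

for name, counts in tables.items():
    _, p_value = fisher_exact(counts, alternative="two-sided")
    print(f"{name}: two-sided P-value = {p_value:.4f}")

# Tables 1 and 2 give 0.2063 and 0.1000, as in the text; Table 3 should come
# out near the quoted 0.0256 (exact digits may depend on the SciPy version).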