Understanding the Statistical Power of a Test

Hun Myoung Park
Software Consultant
UITS Center for Statistical and Mathematical Computing
© 2004 by Jeeshim and KUCC625 (11/28/2004), http://mypage.iu.edu/~kucc625

How powerful is my study (test)? How many observations do I need in order to get what I want from the study? Statistical power analysis estimates the power of a test to detect a meaningful effect, given the sample size, the test size (significance level), and the standardized effect size. Sample size analysis determines the sample size required to obtain a significant result, given the statistical power, test size, and standardized effect size. These analyses examine how sensitive statistical power and sample size are to the other components, enabling researchers to use research resources efficiently.

1. What Is a Hypothesis?

A hypothesis is a specific conjecture (statement) about a property of a population. There is a null hypothesis and an alternative (or research) hypothesis. Researchers often expect the evidence to support the alternative hypothesis. The null hypothesis, the specific baseline statement to be tested, usually takes a form such as "no effect" or "no difference." [1] A hypothesis is either two-tailed (e.g., H0: μ = 0) or one-tailed (e.g., H0: μ ≤ 0 or H0: μ ≥ 0). [2]

Three points should be kept in mind when formulating a hypothesis. First, a hypothesis should be specific enough to be falsifiable; otherwise it cannot be tested successfully. Second, a hypothesis is a conjecture about a population (parameter), not about a sample (statistic). Thus H0: x̄ = 0 is not a valid hypothesis, because the sample mean x̄ can be computed and known directly from the sample. Finally, a valid hypothesis is not based on the very sample that will be used to test it; such tautological logic does not generate any productive information. [3]

[1] Because it is easy to calculate test statistics (standardized effect sizes) and interpret the test results (Murphy 1998).
[2] μ (mu) represents the population mean, while x̄ denotes the sample mean.
[3] This behavior, often called "data fishing," merely hunts for the model that best fits the sample, not the population.

2. Size and Power of a Test

The size of a test, often called the significance level, is the probability of a Type I error. A Type I error occurs when the null hypothesis is rejected even though it is true (Table 1). The test size is denoted by α (alpha), and 1 − α is called the confidence level.
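To make the definition of test size concrete, the short simulation below (an illustration added here, not part of the original text) repeatedly draws samples from a population for which H0: μ = 0 is true and counts how often a two-tailed z-test rejects H0. The rejection rate, that is, the Type I error rate, settles near the chosen α. NumPy and SciPy are assumed; the sample size and number of replications are arbitrary.

```python
# A minimal simulation sketch (not from the original paper): when H0: mu = 0 is
# true, a two-tailed z-test of size alpha = .05 commits a Type I error (rejects
# the true H0) in roughly 5 percent of repeated samples.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 50, 100_000
z_crit = norm.ppf(1 - alpha / 2)                 # two-tailed critical value, about 1.96

samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))   # H0 is true: mu = 0, sigma = 1
z = samples.mean(axis=1) * np.sqrt(n)                      # z statistic with known sigma = 1
type_i_rate = np.mean(np.abs(z) > z_crit)

print(type_i_rate)                               # close to 0.05
```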
In a two-tailed test, the test size (significance level) is the sum of the two symmetric areas at the tails of the probability distribution; see the shaded areas of the two standard normal distributions in Figure 1. These areas are called null hypothesis rejection regions in the sense that we reject the null hypothesis if the test statistic falls into one of them. The test size is a subjective criterion, although the .10, .05, and .01 levels are conventionally used.
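As a quick numerical check (added here as an illustration, with SciPy assumed), the two symmetric tail areas beyond the standard normal critical values add up exactly to the test size; the .05 level used below is one of the conventional choices just mentioned.

```python
# A minimal sketch, assuming SciPy: the two symmetric rejection regions of a
# two-tailed test under the standard normal distribution together have
# probability alpha.  The .05 level is one of the conventional choices.
from scipy.stats import norm

alpha = 0.05
upper = norm.ppf(1 - alpha / 2)          # right-hand critical value, about  1.96
lower = -upper                           # left-hand critical value,  about -1.96

rejection_area = norm.cdf(lower) + (1 - norm.cdf(upper))   # sum of the two shaded tails
print(round(lower, 2), round(upper, 2), round(rejection_area, 3))   # -1.96 1.96 0.05
```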
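Returning to the two questions posed in the introduction, the sketch below (added as an illustration, not taken from the paper) computes the power of a two-tailed one-sample z-test from the standardized effect size δ, the sample size n, and the test size α, using the standard large-sample approximation power = Φ(−z(α/2) − δ√n) + 1 − Φ(z(α/2) − δ√n), where Φ is the standard normal distribution function and z(α/2) is the two-tailed critical value. It then searches for the smallest n that reaches a target power. The effect size 0.5, the .05 test size, and the .80 target power are illustrative values only; SciPy is assumed.

```python
# A minimal sketch, assuming SciPy, of the two power-analysis questions in the
# introduction: (1) given effect size, sample size, and test size, how powerful
# is the test?  (2) given a target power, how many observations are needed?
# Effect size 0.5, alpha .05, and target power .80 are illustrative values only.
from scipy.stats import norm

def ztest_power(effect_size, n, alpha=0.05):
    """Power of a two-tailed one-sample z-test (large-sample approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)       # critical value of the rejection region
    shift = effect_size * n ** 0.5         # where the statistic is centered when H0 is false
    return norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)

def required_n(effect_size, target_power=0.80, alpha=0.05, max_n=100_000):
    """Smallest sample size whose power reaches target_power (simple search)."""
    for n in range(2, max_n + 1):
        if ztest_power(effect_size, n, alpha) >= target_power:
            return n
    return None

print(ztest_power(effect_size=0.5, n=30))                # about 0.78
print(required_n(effect_size=0.5, target_power=0.80))    # 32
```

Statistical packages provide analogous power and sample size routines that use exact t or other distributions rather than this normal approximation.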