Sample Size Estimation

1. Continuous response variable
   – Parallel-group comparisons
     • Comparison of response after a specified period of follow-up
     • Comparison of changes from baseline
   – Crossover study
2. Success/failure response variable (dichotomous response)
   – Impact of noncompliance and lag
   – Realistic estimates of the control group event rate (Pc) and of the event rate pattern
   – Use of epidemiological data to obtain realistic estimates of the experimental group event rate (Pe)
3. Time-to-event designs and variable follow-up

Survey of 71 "Negative" Trials (Freiman et al., NEJM, 299:690-694, 1978)
• Authors stated "no difference"
• P > 0.10 (2-sided)
• Success/failure endpoint
• Expected number of events > 5 in control and experimental groups

Using the stated Type I error and control group event rate, power was determined corresponding to:
• a 25% difference between groups
• a 50% difference between groups

[Figure: Frequency distribution of power (1 - β) estimates for the 71 "negative" trials, 25% reduction; power binned in deciles (0-9% through 90-99%), frequency axis 5-25. Value labeled in figure: 5.63%. Reference: Freiman et al., NEJM 1978.]

[Figure: Frequency distribution of power (1 - β) estimates for the 71 "negative" trials, 50% reduction; same decile bins. Value labeled in figure: 29.58%. Reference: Freiman et al., NEJM 1978.]

Implications of Review by Freiman et al.
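The power calculation behind this kind of review can be sketched with the normal approximation to the two-sample test of proportions (the same setup as the derivation later in these notes). A minimal sketch in Python; the control rate (0.40) and sample size (50 per group) below are illustrative choices, not values from Freiman et al.:

```python
from statistics import NormalDist

def power_two_proportions(p_c, reduction, n, alpha=0.05):
    """Approximate power of a two-sided two-sample test of proportions
    (normal approximation) to detect a given relative reduction in the
    control event rate, with n subjects per group."""
    z = NormalDist()
    p_e = p_c * (1 - reduction)               # experimental event rate
    delta = p_c - p_e                         # true difference
    p_bar = (p_c + p_e) / 2                   # pooled rate under H0
    sd0 = (2 * p_bar * (1 - p_bar) / n) ** 0.5            # SD under H0
    sdA = ((p_c*(1-p_c) + p_e*(1-p_e)) / n) ** 0.5        # SD under HA
    crit = z.inv_cdf(1 - alpha / 2) * sd0     # critical value A
    return z.cdf((delta - crit) / sdA)        # P(statistic exceeds A | HA)

# Illustrative: control rate 0.40, n = 50 per group, alpha = 0.05
print(round(power_two_proportions(0.40, 0.25, 50), 2))  # 25% reduction
print(round(power_two_proportions(0.40, 0.50, 50), 2))  # 50% reduction
```

With these illustrative numbers the power to detect a 25% reduction is well under 20%, which mirrors the pattern Freiman et al. found: most "negative" trials had little chance of detecting a clinically meaningful difference.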
• Many investigators do not estimate sample size in advance
• Many studies should never have been initiated; some were stopped too soon
• A "nonsignificant" difference does not mean there is no important difference
• Design estimates (in Methods) are important for interpreting study findings
• Confidence intervals should be used to summarize treatment differences

Studies with Power to Detect 25% and 50% Differences

[Figure: Percent of studies with at least 80% power to detect a 25% difference and a 50% difference, plotted over 1975-1990. Moher et al., JAMA, 272:122-124, 1994.]

Comparison of Sample Size Formulae for Means and Proportions (n per group)

For proportions:

  n = [z_{1-α/2} √(2·P̄(1-P̄)) + z_{1-β} √(P_c(1-P_c) + P_e(1-P_e))]² / (P_c - P_e)²

A simpler version uses a single variance term with both z values:

  n = [P_c(1-P_c) + P_e(1-P_e)] · [z_{1-α/2} + z_{1-β}]² / (P_c - P_e)²

where
  P_c = control group event rate
  P_e = experimental group event rate
  P̄ = (P_c + P_e)/2
  Δ = P_c - P_e

For means:

  n = 2σ² (z_{1-α/2} + z_{1-β})² / Δ²

Derivation of Sample Size for Comparing 2 Proportions is Similar to that for Comparing 2 Means

[Figure: Two normal curves for p̂_c - p̂_e: curve A centered at 0 (under H₀) with upper-tail area α beyond the critical value A = z_{1-α/2}·σ₀, and curve B centered at Δ (under H_A) with lower-tail area β below A, where Δ - A = z_{1-β}·σ_A. Decision rule: if p̂_c - p̂_e > A, reject H₀; if p̂_c - p̂_e ≤ A, accept H₀.]

Derivation of Sample Size for Comparing Two Proportions

Under H₀:  p̂_c - p̂_e ~ N(0, 2·p̄q̄/n),  where p̄ = (p_c + p_e)/2 and q̄ = 1 - p̄
Under H_A: p̂_c - p̂_e ~ N(p_c - p_e, (p_c·q_c + p_e·q_e)/n),  where q = 1 - p

The critical value A must satisfy both

  A = z_{1-α/2} √(2·p̄q̄/n)   and   A = (p_c - p_e) - z_{1-β} √((p_c·q_c + p_e·q_e)/n)

Equating the two expressions for A and solving for n gives

  n = [z_{1-α/2} √(2·p̄q̄) + z_{1-β} √(p_c·q_c + p_e·q_e)]² / (p_c - p_e)²

which is the first proportions formula above; the simpler version replaces both variance terms with p_c·q_c + p_e·q_e.
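The formulae above can be checked numerically. A minimal Python sketch, assuming the function names are mine and `statistics.NormalDist` supplies the z quantiles; n is rounded up to the next whole subject per group:

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # z(p) = p-th quantile of the standard normal

def n_proportions(pc, pe, alpha=0.05, power=0.90):
    """n per group: H0 and HA variance terms kept separate."""
    pbar = (pc + pe) / 2
    num = (z(1 - alpha / 2) * sqrt(2 * pbar * (1 - pbar))
           + z(power) * sqrt(pc * (1 - pc) + pe * (1 - pe)))
    return ceil(num ** 2 / (pc - pe) ** 2)

def n_proportions_simple(pc, pe, alpha=0.05, power=0.90):
    """n per group: simpler version with a single variance term."""
    var = pc * (1 - pc) + pe * (1 - pe)
    return ceil(var * (z(1 - alpha / 2) + z(power)) ** 2 / (pc - pe) ** 2)

def n_means(sigma, delta, alpha=0.05, power=0.90):
    """n per group for comparing two means with common SD sigma."""
    return ceil(2 * sigma ** 2 * (z(1 - alpha / 2) + z(power)) ** 2
                / delta ** 2)

# Example: Pc = 0.40, Pe = 0.30, two-sided alpha = 0.05, power = 0.90
print(n_proportions(0.40, 0.30))         # 477
print(n_proportions_simple(0.40, 0.30))  # 473
print(n_means(1.0, 0.5))                 # 85
```

Note that the two proportion formulae agree closely (477 vs. 473 per group here), which is why the simpler version is often used in practice.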
This note was uploaded on 11/21/2011 for the course PUBH 7420 taught by Professor Ph7420 during the Spring '07 term at Minnesota.