IE 410 Lecture 4: Details of the ANOVA

In this lecture we will:
- Discuss power and the F statistic
- Learn about SS and degrees of freedom
- Learn about decomposition of SS's
- Try to understand Cochran's theorem
Recall

The F statistic:

F_0 = \frac{MS_{trt}}{MS_E} = \frac{n \sum_{i=1}^{a} (\bar{y}_{i.} - \bar{y}_{..})^2 / (a-1)}{\sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{i.})^2 / (N-a)}

It compares variation in treatment averages to variation within treatments.
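As an illustration (not part of the original slides), here is a minimal sketch of computing MS_trt, MS_E, and F_0 directly for a balanced one-way layout; the data values and array shape are invented for the example.

import numpy as np

# Hypothetical balanced one-way layout: a = 3 treatments, n = 4 observations each
y = np.array([
    [9.8, 10.2, 10.0,  9.9],   # treatment 1
    [10.5, 10.9, 10.7, 10.6],  # treatment 2
    [9.5,  9.7,  9.6,  9.4],   # treatment 3
])
a, n = y.shape
N = a * n

ybar_i = y.mean(axis=1)   # treatment averages  ȳ_i.
ybar = y.mean()           # grand average       ȳ..

ms_trt = n * np.sum((ybar_i - ybar) ** 2) / (a - 1)    # between-treatment mean square
ms_e = np.sum((y - ybar_i[:, None]) ** 2) / (N - a)    # within-treatment mean square
f0 = ms_trt / ms_e
print(ms_trt, ms_e, f0)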
Under H_0, F_0 follows the F distribution with ν_1 = a - 1 and ν_2 = N - a "degrees of freedom".

- ν_1 and ν_2 are parameters of the F distribution
- F is derived as the ratio of two independent χ² random variables
- E(MS_E) = E(MS_trt) = σ²
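A sketch (not from the slides) of using this reference distribution with scipy; the observed F_0 value and the a = 3, n = 4 layout are assumptions carried over from the example above.

from scipy import stats

a, N, alpha = 3, 12, 0.05
nu1, nu2 = a - 1, N - a                     # ν1 = a - 1, ν2 = N - a

f_crit = stats.f.ppf(1 - alpha, nu1, nu2)   # reject H0 when F0 exceeds this cutoff
f0 = 5.1                                    # hypothetical observed F0
p_value = stats.f.sf(f0, nu1, nu2)          # upper-tail area beyond the observed F0
print(f_crit, p_value)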
Is F the best test statistic? What are the alternatives? How do we define best?

By best we generally mean that we seek the most powerful test: if H_0 is "just a little bit false", we will still be able to reject H_0.
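The slides do not show this calculation, but as a sketch, the power of the F test at a specified alternative can be computed from the noncentral F distribution; the treatment effects, σ, and sample sizes below are invented for illustration.

import numpy as np
from scipy import stats

a, n = 3, 4
N = a * n
nu1, nu2 = a - 1, N - a
alpha = 0.05

tau = np.array([-0.5, 0.0, 0.5])          # hypothetical treatment effects under H1
sigma = 1.0                               # hypothetical error standard deviation
nc = n * np.sum(tau ** 2) / sigma ** 2    # noncentrality parameter

f_crit = stats.f.ppf(1 - alpha, nu1, nu2)
power = stats.ncf.sf(f_crit, nu1, nu2, nc)   # P(F0 > f_crit | H1)
print(power)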
Alternatives to the F test:
- Randomization test
- Kruskal-Wallis test
- Throw away some data, then do the F test

Among all tests with specified α, we want the most powerful (smallest β).
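As an aside not on the slides, both the one-way ANOVA F test and the Kruskal-Wallis test are available in scipy; a minimal sketch on made-up data:

from scipy import stats

# Hypothetical samples from a = 3 treatments
t1 = [9.8, 10.2, 10.0, 9.9]
t2 = [10.5, 10.9, 10.7, 10.6]
t3 = [9.5, 9.7, 9.6, 9.4]

f_stat, f_p = stats.f_oneway(t1, t2, t3)   # one-way ANOVA F test
h_stat, h_p = stats.kruskal(t1, t2, t3)    # rank-based Kruskal-Wallis test
print(f_stat, f_p, h_stat, h_p)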
Example

Method 1: Let a = 3, n = 4, N = 12. Compute F_0.

Method 2: Throw away 2 randomly chosen observations from each treatment. Compute F_0 from the remaining data.
Method 1: F_0 ~ F_{2,9}. Method 2: F_0 ~ F_{2,3}.

At α = 0.05 the Method 1 statistic can reject H_0 while the Method 2 statistic can't. Thus the Method 1 (full-data) test is more powerful.
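A quick check of the two cutoffs (a sketch using scipy, not shown on the slide):

from scipy import stats

alpha = 0.05
crit_full = stats.f.ppf(1 - alpha, 2, 9)      # Method 1 cutoff: F_{0.05, 2, 9} ≈ 4.26
crit_reduced = stats.f.ppf(1 - alpha, 2, 3)   # Method 2 cutoff: F_{0.05, 2, 3} ≈ 9.55
print(crit_full, crit_reduced)                # larger cutoff => harder to reject => less power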
Under our assumptions (statistical model) the F-test has been proven to be the most powerful test. The randomization and Kruskal-Wallis tests are useful when our assumptions don’t seem to apply.
Degrees of Freedom and SS's

Recall two measures of sample variance:

S^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1} \quad\text{and}\quad \hat{\sigma}^2_n = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n}

Why do we traditionally use S²? Why do we divide by n - 1?
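As a side note not on the slides, numpy computes both estimators through its ddof argument; a minimal sketch on an invented sample:

import numpy as np

x = np.array([4.1, 5.3, 4.8, 5.0, 4.6])   # hypothetical sample

s2 = np.var(x, ddof=1)          # divide by n - 1  (S^2)
sigma2_hat = np.var(x, ddof=0)  # divide by n      (sigma-hat^2_n)
print(s2, sigma2_hat)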
The traditional answer is:

E(S^2) = \sigma^2 \quad\text{whereas}\quad E(\hat{\sigma}^2_n) < \sigma^2

That is, S² is an unbiased estimate of σ². But why is this true?
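A simulation sketch (not in the slides) illustrating the bias; the sample size, normal population, and replication count are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 5, 4.0, 100_000

samples = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(reps, n))
s2 = samples.var(axis=1, ddof=1)          # divide by n - 1
sigma2_hat = samples.var(axis=1, ddof=0)  # divide by n

print(s2.mean())          # close to 4.0: unbiased
print(sigma2_hat.mean())  # close to 4.0 * (n - 1) / n = 3.2: biased low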
Suppose the true population mean μ was known. Then we would estimate σ² using

\hat{\sigma}^2 = \frac{\sum_{i=1}^{n} (x_i - \mu)^2}{n} \quad\text{and}\quad E(\hat{\sigma}^2) = \sigma^2

i.e., the second central moment.
E\left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2\right]
= \frac{1}{n}\sum_{i=1}^{n} E\left[(x_i - \mu)^2\right]
= \frac{1}{n}\sum_{i=1}^{n} E\left[x_i^2 - 2\mu x_i + \mu^2\right]
= \frac{1}{n}\sum_{i=1}^{n}\left(E[x_i^2] - 2\mu E[x_i] + \mu^2\right)
= \frac{1}{n}\sum_{i=1}^{n}\left(\sigma^2 + \mu^2 - 2\mu^2 + \mu^2\right)
= \frac{1}{n}\,n\sigma^2
= \sigma^2

(using E[x_i] = μ and E[x_i²] = σ² + μ²)
Why is \sum_{i=1}^{n}(x_i - \mu)^2 / n unbiased while \sum_{i=1}^{n}(x_i - \bar{x})^2 / n is biased?

After all, E(x̄) = μ.
Suppose n = 2 and μ is known. Take 4 sets of samples of size n = 2.

Squared deviations from x̄ are smaller than squared deviations from μ.
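A small sketch (not from the slide) of this experiment; the normal population and seed are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 2.0

for _ in range(4):                      # 4 sets of samples of size n = 2
    x = rng.normal(mu, sigma, size=2)
    xbar = x.mean()
    ss_xbar = np.sum((x - xbar) ** 2)   # squared deviations about the sample mean
    ss_mu = np.sum((x - mu) ** 2)       # squared deviations about the known mean
    print(ss_xbar, ss_mu)               # ss_xbar <= ss_mu in every set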
In fact, given a data set, among all possible values k, k = x̄ minimizes \sum_{i=1}^{n}(x_i - k)^2. And thus we expect that

\hat{\sigma}^2_n = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n} \le \frac{\sum_{i=1}^{n}(x_i - \mu)^2}{n}
i.e., \hat{\sigma}^2_n will be too small.

(Simple proof that k = x̄ is the minimizer:)

\frac{d}{dk}\sum_{i=1}^{n}(x_i - k)^2 = -2\sum_{i=1}^{n}(x_i - k) = -2\sum_{i=1}^{n} x_i + 2nk

Setting this equal to 0 gives nk = \sum_{i=1}^{n} x_i, so k = \bar{x}.
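As a quick numerical check (not in the slides) that minimizing \sum (x_i - k)^2 over k returns the sample mean; the data are invented:

import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([3.0, 7.0, 5.0, 9.0])   # hypothetical data, x̄ = 6.0

res = minimize_scalar(lambda k: np.sum((x - k) ** 2))
print(res.x, x.mean())               # the numerical minimizer matches x̄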
Something happens when we replace μ with its estimate x̄.