Statistical Methods I (EXST 7005) Page 80

Chi Square Goodness of Fit
This test is similar to the Chi square test of independence, but instead of deriving the expected values from the row and column sums, the expected values are derived from some theoretical ratio.

In a flowering plant with incomplete dominance, a cross between a red flowered parent (RR) and a white flowered parent (rr) is expected to yield offspring with pink flowers (Rr). If the offspring are then crossed with each other, a ratio of 9:6:1 of red to pink to white flowered offspring should result. In one particular experiment the offspring produced the observed results of 153 red, 72 pink and 17 white plants. Does this result conform to the expected ratio?
1) H0: Results follow the expected proportions
2) H1: Results do not follow the expected proportions
3) Assume IID r.v.
4) Set α (at say 0.05 or 0.01 as before). Calculate the critical value, where the chi square results for this test have c−1 = 2 degrees of freedom (where c is the number of categories, 3 in this case). The critical value for α = 0.05 is 5.991 and for α = 0.01 it is 9.210.
5) Conduct an experiment to obtain results. We got 242 flowers from our experiment.
             Red        Pink      White     Total
Observed     153        72        17        242
Expected     136.125    90.75     15.125    242
Chi square   2.09       3.87      0.23      6.2

6) Compare the calculated chi square value to the critical value. The calculated value (6.2) exceeds the critical value for α = 0.05, but not for α = 0.01.
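For readers who want to check this arithmetic outside of SAS, the computation can be sketched in Python (a sketch, not the course's code; note that for 2 d.f. the upper-tail chi square probability has the closed form exp(−χ²/2), so no table lookup is needed):

```python
import math

# Observed counts from the flower cross (red, pink, white)
observed = [153, 72, 17]
# Expected proportions under the hypothesized 9:6:1 ratio
ratio = [9, 6, 1]
total = sum(observed)                   # 242 flowers
expected = [total * r / sum(ratio) for r in ratio]

# Pearson chi square statistic: sum of (O - E)^2 / E over categories
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 2 d.f. the upper-tail probability is exactly exp(-x / 2)
p_value = math.exp(-chi_sq / 2)

print(round(chi_sq, 1), round(p_value, 4))   # 6.2 0.0451
```

This reproduces the calculated chi square of 6.2 and the P value of 0.0451 used below.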
Tabular Chi square values (d.f. = 2):
α = 0.050: χ² = 5.991
α = 0.010: χ² = 9.210
α = 0.001: χ² = 13.815
The actual P(>χ²) for the calculated value of 6.2 is 0.0451.

7) In this case we reject the null hypothesis and might term our result a “statistically significant” departure from the expected result under the null hypothesis. As a published result we may wish to state that the P value was 0.0451, clearly indicating that our result was significant at the 0.05 level of α, but not at the 0.01 level.

SAS example 3b of the Chi square Goodness of Fit test in SAS
See computer output

SAS example 3c – testing a simple ratio with Chi square
See computer output

James P. Geaghan Copyright 2010

A final note on the Chi square tests of hypothesis.
Although tests of hypothesis about variances may be either directional or non-directional, the chi square tests of independence and goodness of fit are directional. Small values of the chi square statistic for these tests indicate that the null hypothesis is met very well, since the observed values are very close to the expected values. It is only with larger departures of the observed values from the expected values that the null hypothesis should be rejected. Therefore, it is only excessively large chi square values that cause the null hypothesis to be rejected, and this would be a one tailed test.

Summary
The Chi square distribution
The Chi square distribution can be derived as the square of the Z distribution.
Sample variances are Chi square distributed
The Chi square distribution can be used to test hypotheses about variances.
The distribution has only one parameter, γ (and is different for every γ). The distribution is non-negative and asymmetrical. The variance of the distribution is 2γ.
Hypothesis testing employs the form of the distribution χ² = SS/σ0².
For testing variance we assume the variable Yi is a Normally and Independently distributed random variable (NID r.v.).

In the Chi square tables
Degrees of freedom are on the left and a different distribution is given in each row.
Selected probabilities in the upper TAIL of the distribution are given in the row at the top of the table.
The distribution is NOT symmetric, so the probabilities at the top must be used for both upper and lower limits.

In addition to tests of variances, the Chi square can be used to do
Test of Independence
Test of Goodness of Fit
For these tests we assume Yi is an Identically and Independently distributed random variable
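As a sketch of the variance test listed in this summary (the data here are hypothetical, invented for illustration; the critical value 16.919 is the tabular chi square for α = 0.05 with 9 d.f.):

```python
# Hypothetical sample: n = 10 observations with sample variance S^2 = 8.0.
# Test H0: sigma^2 = 4.0 against the one-tailed alternative sigma^2 > 4.0.
n = 10
s_sq = 8.0            # sample variance (hypothetical)
sigma0_sq = 4.0       # hypothesized population variance

df = n - 1                      # 9 degrees of freedom
ss = df * s_sq                  # SS = (n - 1) * S^2 = 72.0
chi_sq = ss / sigma0_sq         # chi square = SS / sigma0^2 = 18.0

critical = 16.919               # tabular chi square, alpha = 0.05, 9 d.f.
reject = chi_sq > critical
print(chi_sq, reject)           # 18.0 True -> reject H0
```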
In SAS the Chi square test of independence can be done with PROC FREQ.

The F test
This test can be used to either:
test the equality of population variances – our present topic
test the equality of population means (this will be discussed later under ANOVA)
The F test is the ratio of two variances (the ratio of two chi square distributions).

Given two populations, draw a sample from each:

                     Population 1    Population 2
Population mean      μ1              μ2
Population variance  σ1²             σ2²
Sample size          n1              n2
Sample mean          Y̅1             Y̅2
Sample variance      S1²             S2²

To test the hypothesis
H0: σ1² = σ2²
H1: σ1² ≠ σ2² (directional, or one sided, hypotheses are also possible)

the test statistic is F = S1²/S2², which has an expected value of 1 under the null hypothesis. In practice there will be some variability, so we need to define some reasonable limits and
this will require another statistical distribution.

The F distribution
1) The F distribution is another family of distributions, each specified by a PAIR of degrees of
freedom, γ1 and γ2.
γ1 is the d. f. for the numerator
γ2 is the d. f. for the denominator
Note: the two samples do not have to be of the same size and usually are not of the same size.
2) The F distribution is an asymmetrical distribution with values ranging from 0 to ∞, so 0 ≤ F ≤ ∞.
3) There is a different F distribution for every possible pair of degrees of freedom.
4) In general, an F value with γ1 and γ2 d.f. is not the same as an F value with γ2 and γ1 d.f., so
order is important. i.e. Fγ1, γ2 ≠ F γ2, γ1 usually
5) The expected value of any F distribution is 1 if the null hypothesis is true.

[Figure: F distributions with (1, 5) d.f., (5, 10) d.f., and (100, 100) d.f.]

The F tables
• The numerator d.f. (γ1) are given along the top of the page, and the denominator d.f. (γ2) are given along the left side of the page.
• Some tables give only one F value at each intersection of γ1 and γ2. The whole page would be for a single α value, and usually several pages would be given.
• Our tables will give four values at the intersection of each γ1 and γ2, each for a different α value. These α values are given in the second column from the left.
• Our tables will have two pages.
• Only a very few probabilities will be available, usually 0.05, 0.025, 0.01 and 0.005, and sometimes 0.100.
• Only the upper tail of the distribution is given.

Working with F tables
The F tables are used in a fashion similar to other statistical tables.
Select a probability value at the intersection of the two degrees of freedom.
Be sure to keep track of which one is the numerator degrees of freedom (top of table) and
which is the denominator degrees of freedom (left side of table).
At the intersection of the degrees of freedom there are four F values corresponding to α values of 0.05, 0.025, 0.010 and 0.005.
Find the corresponding F value for the desired α.

Example of F table use, one tailed example:
Find F with (5,10) d.f. for α = 0.05
find F0.05 such that P[F ≥ F0.05, 5, 10 d.f.] = 0.050 where γ1 = 5 and γ2 = 10
For F5, 10 d.f. The tabular values are listed as 3.33, 4.24, 5.64, 6.87
P[F5, 10 d.f. ≥ 2.52] = 0.100 (Not in your table)
P[F5, 10 d.f. ≥ 3.33] = 0.050
P[F5, 10 d.f. ≥ 4.24] = 0.025
P[F5, 10 d.f. ≥ 5.64] = 0.010
P[F5, 10 d.f. ≥ 6.87] = 0.005
so the value we are looking for is 3.33, for this value P[F5, 10 d.f. ≥ 3.33] = 0.050
note that since this is a 1 tailed value, then
P[F5, 10 d.f. ≤ 3.33] = 0.950
so the two sides sum to 1
Also note that if we reverse the d.f., we find that P[F10, 5 d.f. ≥ 4.74] = 0.050, so the F values generally differ when the d.f. are reversed.

More working with F tables
Only the upper tail of the distribution is given. There are three reasons for this.
• Most F tests, including those for Analysis of Variance (ANOVA), are one tailed tests, where the lower tail is not needed.
• The need to calculate the lower tail can be eliminated in some two-tailed cases.
• The value of F for the lower tail can be found by transformation of values from the upper tail.

Calculating lower tail values for the F distribution
To obtain the lower tail value of F γ1, γ2 for a particular value of α:
First obtain the value in the upper tail for F γ2, γ1 for the same value of α (note the change in order of the d.f.).
Then calculate 1/F γ2, γ1 to get the lower tail.
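The reciprocal rule can be checked numerically; the sketch below assumes SciPy is available (`f.isf` returns the upper-tail critical value and `f.ppf` the lower-tail one):

```python
from scipy.stats import f

alpha = 0.025      # probability in the tail of interest
g1, g2 = 8, 10     # numerator and denominator degrees of freedom

upper = f.isf(alpha, g1, g2)       # upper-tail value for F(8, 10), about 3.85
upper_rev = f.isf(alpha, g2, g1)   # upper-tail value for F(10, 8), about 4.30
lower = 1.0 / upper_rev            # lower-tail value for F(8, 10) by the rule

# SciPy can also give the lower tail directly; the two agree.
print(round(lower, 4), round(f.ppf(alpha, g1, g2), 4))
```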
F table: two-tailed example
Find both upper and lower limits for F with (8, 10) d.f. for α = 0.05.
Find the lower and upper values F0.975, 8, 10 and F0.025, 8, 10 such that
P[F0.975, 8, 10 ≤ F ≤ F0.025, 8, 10] = 0.950, where γ1 = 8 and γ2 = 10.
For the upper tail, the value we are looking for can be read directly from the table. It is
P[F8, 10 ≥ 3.85] = 0.025
note that we use only α/2 as the probability for one of the two tails
To find the lower tail, we reverse the d.f., we find that
P[F10, 8 ≥ 4.30] = 0.025 and then calculate
F8, 10 d.f. lower limit = 1/F10,8 = 1/4.30 = 0.2326
Note the reversal of the order of the degrees of freedom Numerical example of an F-test of hypothesis
The concentration of blue green algae was obtained for 7 phytoplankton-density samples taken
from each of two lake habitats. Determine if there is a difference in the variability of
phytoplankton density between the two habitats.
1) H 0 : σ 12 = σ 22
2) H 1 : σ 12 ≠ σ 22
3) Assume: Independence (randomly selected samples) and that BOTH populations are normally distributed.
4) α = 0.05 and the critical limit is
P[Flower ≤ F ≤ Fupper] = 1 – P[Flower ≤ F] – P[F ≥ Fupper] = 1 – 0.025 – 0.025 = 0.95
P[F ≥ Fupper] = 0.025 we can get directly from the table, Fα=0.025, 6, 6 = 5.82
P[Flower ≤ F] = 0.025, where Flower is calculated as
Flower = 1/Fupper = 1/5.82 = 0.1718
P[0.1718 ≤ F ≤ 5.82] = 0.95
This case is uncommon because d.f. upper = d.f. lower, so the F values are the same.
5) Draw a sample of 7 from each habitat; calculate the variances and the F ratio.

[Sample data table: only observation 7 is recoverable here — Habitat 1: 4.7, Habitat 2: 7.6]

Summary statistics
Statistic    Habitat 1    Habitat 2
n            7            7
ΣYi          27.6         76.4
ΣYi²         150.52       1074.6
Mean (Y̅)    3.94         10.91
SS           41.70        240.75
S²           6.95         40.12

Then calculate the F value as F = S1²/S2² = 6.95/40.12 = 0.1732.

6) Compare the calculated value (0.1732) to the critical region. Given α = 0.05 and a TWO
TAILED alternative, and knowing that the degrees of freedom are γ1 =6 and γ2=6, (note
that both are equal), the critical limits are P[0.1718 ≤ F ≤ 5.82] = 0.95. Since our
calculated F value is between these limit values we would fail to reject the null hypothesis,
concluding that the data is consistent with the null hypothesis.
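Steps 5 and 6 can be sketched in Python from the summary statistics (the limits 0.1718 and 5.82 are the tabular values for α/2 = 0.025 with 6 and 6 d.f., as derived above):

```python
# Summary statistics for the two habitats (n = 7 each)
n1, sum_y1, sum_y1_sq = 7, 27.6, 150.52     # Habitat 1
n2, sum_y2, sum_y2_sq = 7, 76.4, 1074.6     # Habitat 2

# Sample variances via SS = sum(Y^2) - (sum Y)^2 / n, then S^2 = SS / (n - 1)
s1_sq = (sum_y1_sq - sum_y1 ** 2 / n1) / (n1 - 1)   # about 6.95
s2_sq = (sum_y2_sq - sum_y2 ** 2 / n2) / (n2 - 1)   # about 40.12

f_calc = s1_sq / s2_sq                               # about 0.1732

# Two-tailed critical region for alpha = 0.05 with (6, 6) d.f.
f_lower, f_upper = 0.1718, 5.82
reject = not (f_lower <= f_calc <= f_upper)
print(round(f_calc, 4), reject)   # 0.1732 False -> fail to reject H0
```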
But it was close. Maybe there is a difference and we did not have enough power.

Some notes on F tests
NOTE that in this example the smaller value fell in the numerator. As a result, we were
comparing the F value to the lower limit.
However, for two tailed tests, it makes no difference which falls in the numerator, and which in
the denominator. As a result, we can ARBITRARILY decide to place the larger value in the
numerator, and compare the F value to the upper limit.
The need to calculate the lower limit can be eliminated if we calculate F = S²larger / S²smaller. However, don't forget that this arbitrary placing of the larger variance estimate in the numerator is done for TWO TAILED TESTS ONLY, and therefore we want to test against Fα/2.
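That convention can be captured in a small helper function (a sketch; `f_upper_crit` would come from a table or software for α/2 with the appropriate d.f.):

```python
def two_tailed_f(var_a, var_b, f_upper_crit):
    """Place the larger variance in the numerator and compare the ratio
    to the upper alpha/2 critical value only (two-tailed test)."""
    f_calc = max(var_a, var_b) / min(var_a, var_b)
    return f_calc, f_calc > f_upper_crit     # (statistic, reject H0?)

# Habitat example from above: S^2 values 6.95 and 40.12, F(0.025; 6, 6) = 5.82
f_calc, reject = two_tailed_f(6.95, 40.12, 5.82)
print(round(f_calc, 3), reject)   # 5.773 False -> same conclusion as before
```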
There are three common cases in F testing (actually two common and one not so common).

1) Frequently, particularly in ANOVA (to be covered later), we will test H0: σ1² = σ2² against the alternative H1: σ1² > σ2². In this case we ALWAYS form the F value as F = S1²/S2². We put the variance that is expected to be larger in the numerator for a one tailed test! Don't forget that this is one tailed and all of α is placed in the upper tail. In the event that F < 1 we don't even need to look up a value in the table; it cannot be “significant”.
2) Normal 2 tailed tests (used in 2 sample t-tests to be covered later) will test H0: σ1² = σ2² against the alternative H1: σ1² ≠ σ2². Here we can form the F value as F = S²larger / S²smaller. Don't forget that this is a 2-tailed test and it is tested against the upper tail with only half of α (i.e. α/2) in the upper tail. When the larger value is placed in the numerator there is no way that we can get a calculated F value less than 1.
3) If both the upper and lower bounds are required (not common, found mostly on EXAMS in basic statistics) then we will be testing H0: σ1² = σ2² against the alternative H1: σ1² ≠ σ2². We can form the F value any way we want, with either the larger or smaller variance in the numerator. This is a 2 tailed test with α/2 in each tail, and F can assume any positive value (0 to ∞).

Summary
The F distribution is the ratio of two variances (i.e. two Chi square distributions) and is used to test two variances for equality. The null hypothesis is H0: σ1² = σ2².
The distribution is an asymmetrical distribution with values ranging from 0 to ∞, and an expected value of 1 under the null hypothesis.
The F tables require two d.f. (numerator and denominator) and give only a very few critical values.
Many, perhaps most, F tests will be directional. For the tests the variance that is expected to be
larger and hypothesized to be larger goes in the numerator whether it is actually larger or not.
This value is tested against the upper tail with a probability equal to α.
For the non-directional alternative we may arbitrarily place the larger variance in the numerator
and test against the upper tail, but don't forget to test against α/2.

Probability Distribution interrelationships
The probability tables that we have been examining are interrelated. One of these
interrelationships is actually pretty important!
If you examine the F table it turns out that, in addition to F values, the first column (γ1 = 1) gives values of t² (two tailed), the last value in the first column corresponds to Z², and the last row (γ2 = ∞) is a Chi square value divided by its degrees of freedom.
[Illustration: a section of the F table showing that the γ1 = 1 column contains t² values (two tailed), the γ2 = ∞ row contains χ²/γ values, and their intersection is Z² = 3.84; e.g. F with γ1 = 10 and γ2 = ∞ is 1.83.]

The distributions and relationships we have discussed are:

1) Z = (Yi − μ)/σ for observations, and Z = (Y̅ − μ0)/σY̅ = (Y̅ − μ0)/(σ/√n) for testing hypotheses about means.

2) χ² = Z² with 1 d.f.; χ² = ΣZ² with n d.f.; χ² = SS/σ² with n−1 d.f.

3) t = (Y̅ − μ)/(S/√n) with n−1 d.f.

4) F = S1²/S2² with n1−1, n2−1 d.f.
1) χ²/γ with γ d.f. = F with γ, ∞ d.f.

χ²/γ = (SS/σ²)/γ = (SS/γ)(1/σ²) = S²/σ²

which follows an F distribution with γ, ∞ d.f.

2) F = (χ1²/γ1) / (χ2²/γ2) with γ1, γ2 d.f. if H0 is true.

Given χ²/γ = S²/σ² from part 1 above, then χ² = γS²/σ² and S² = σ²χ²/γ with γ d.f.

Therefore, F = S1²/S2² = (σ1²χ1²/γ1) / (σ2²χ2²/γ2).

If H0 is true, then σ1² = σ2², so F = (χ1²/γ1) / (χ2²/γ2) with γ1, γ2 or n1 − 1, n2 − 1 d.f.

3) t with γ = ∞ follows a Z distribution, since as γ increases the sample variance (S²) approaches the population variance (σ²). That is, as the sample size approaches infinity the t distribution, t = (Yi − Y̅)/S, approaches the Z distribution, Z = (Yi − μ)/σ.

4) Z² = F with 1, ∞ d.f.
We saw that Z² = χ² with 1 d.f.
We saw that χ²/γ = F with γ, ∞ d.f.
Then F = χ²/γ = Z²/1 = Z².
5) t² with γ d.f. = F with 1, γ d.f.
This can be shown in several ways. First, we just saw that Z² = F with 1, ∞ d.f. This suggests that t² = F with 1, γ d.f. Another type of proof is given below.

F = S1²/S2² with γ1, γ2 d.f.

Recall, SS = Σ(Yi − μ)² with n d.f. (or N d.f. for a population), since μ is known. This can be partitioned into two parts for a sample:

SS = Σ(Yi − μ)² = Σ(Yi − Y̅)² + n(Y̅ − μ)²

where the d.f. are n = (n − 1) + 1.

Let S1² = n(Y̅ − μ)² with 1 d.f.
Let S2² = Σ(Yi − Y̅)²/(n − 1) with n − 1 d.f.

Both of which are unbiased estimates, so

F = S1²/S2² = n(Y̅ − μ)²/S² = ((Y̅ − μ)/(S/√n))² = t² with 1, n − 1 d.f.

Summary
1) χ²/γ with γ d.f. = F with γ, ∞ d.f.
2) F = (χ1²/γ1) / (χ2²/γ2) with γ1, γ2 d.f. if H0 is true
3) t with γ = ∞ follows a Z distribution
4) Z² = F with 1, ∞ d.f.
5) t² with γ d.f. = F with 1, γ d.f.
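These relationships can be verified numerically; the sketch below assumes SciPy is available and approximates infinite denominator d.f. with a very large number:

```python
from scipy.stats import chi2, f, norm, t

BIG = 10 ** 7   # stand-in for infinite degrees of freedom

# t^2 with 10 d.f. = F with 1, 10 d.f. (two tails for t, so alpha/2 = 0.025)
t_sq = t.isf(0.025, 10) ** 2            # about 4.96
f_1_10 = f.isf(0.05, 1, 10)             # about 4.96

# Z^2 = F with 1, infinity d.f.
z_sq = norm.isf(0.025) ** 2             # about 3.84
f_1_inf = f.isf(0.05, 1, BIG)           # about 3.84

# chi^2 / gamma with 10 d.f. = F with 10, infinity d.f.
chi_over_df = chi2.isf(0.05, 10) / 10   # about 1.83
f_10_inf = f.isf(0.05, 10, BIG)         # about 1.83

print(round(t_sq, 2), round(z_sq, 2), round(chi_over_df, 2))
```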
Some Examples in the F tables (all α = 0.05; two tails for Z and t values, since the sign is lost in squaring):
F with 1, 10 d.f. = 4.96 = t² with 10 d.f. = (2.228)² = 4.96
F with 1, ∞ d.f. = 3.84 = Z² = (1.96)² = 3.84
F with 10, ∞ d.f. = 1.83 = χ²/γ = 18.3/10 = 1.83 with 10 d.f.

Confidence intervals and margin of error
A confidence interval is an expression of what we believe to be a range of values that is likely to contain the true value of some parameter. The width of this interval above and below the parameter estimate is called the margin of error.
We can calculate confidence intervals for means (μ) and variances (σ²).

Confidence intervals for t and Z distributions
t and Z distribution confidence intervals start with a t or Z probability statement.

P(−tα/2 ≤ t ≤ tα/2) = 1 − α

can also be written

P(−tα/2 ≤ (Y̅ − μ)/SY̅ ≤ tα/2) = 1 − α

which is modified to express an interval about μ instead of t (or Z):

P(−tα/2 SY̅ ≤ Y̅ − μ ≤ tα/2 SY̅) = 1 − α
P(−Y̅ + tα/2 SY̅ ≥ −μ ≥ −Y̅ − tα/2 SY̅) = 1 − α

The final form is given below.

P(Y̅ − tα/2 SY̅ ≤ μ ≤ Y̅ + tα/2 SY̅) = 1 − α

The expression for Z has an identical derivation.

P(Y̅ − Zα/2 σY̅ ≤ μ ≤ Y̅ + Zα/2 σY̅) = 1 − α

A common short notation for the interval in the probability statement is given as Y̅ ± tα/2 SY̅, but the probability statement is preferable as a final result. The value tα/2 SY̅ for intervals on means, and tα/2 S for intervals on individual observations, is half of the interval width from the lower limit to the upper limit and is called the margin of error.

Confidence intervals for variance
Variances follow a Chi square distribution, so the confidence interval for a variance is based on the Chi square distribution.

P(χ²lower ≤ χ² ≤ χ²upper) = 1 − α, or
P(χ²lower ≤ SS/σ² ≤ χ²upper) = 1 − α

which is solved to isolate σ²:

P(1/χ²lower ≥ σ²/SS ≥ 1/χ²upper) = 1 − α

giving the expression

P(SS/χ²upper ≤ σ² ≤ SS/χ²lower) = 1 − α

Notice that the upper tabular Chi square value comes out in the lower bound and the lower Chi square in the upper bound.
χlower ) = 1−α Notes on confidence intervals
One sided intervals are possible, but uncommon.
Confidence intervals are one of the most common expressions in statistics, frequently occurring in published results.
Margins of error and confidence intervals are not always calculated in statistical software programs, but they can easily be done by hand.

From the previous SAS Example 2c
We receive a shipment of apples that are supposed to be “premium apples”, with a diameter of at
least 2.5 inches. We will take a sample of 12 apples, and place a confidence interval on the
mean. The sample values for the 12 apples are:
2.9, 2.1, 2.4, 2.8, 3.1, 2.8, 2.7, 3.0, 2.4, 3.2, 2.3, 3.4
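The interval can be sketched in Python (the value 2.201 is the tabular t for α/2 = 0.025 with 11 d.f.; the SAS output itself is not reproduced here):

```python
import math

# Apple diameters (inches) from the sample of 12
apples = [2.9, 2.1, 2.4, 2.8, 3.1, 2.8, 2.7, 3.0, 2.4, 3.2, 2.3, 3.4]

n = len(apples)
mean = sum(apples) / n                          # sample mean
ss = sum((y - mean) ** 2 for y in apples)       # sum of squared deviations
s = math.sqrt(ss / (n - 1))                     # sample standard deviation
se = s / math.sqrt(n)                           # standard error of the mean

t_crit = 2.201                                  # tabular t, alpha/2 = 0.025, 11 d.f.
margin = t_crit * se                            # margin of error

print(round(mean, 3), round(margin, 3))         # about 2.758 and 0.25
print(round(mean - margin, 3), round(mean + margin, 3))
```

Since the lower limit of this interval is above 2.5 inches, the sample appears consistent with the claim that these are premium apples.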
This note was uploaded on 12/29/2011 for the course EXST 7005 taught by Professor J. Geaghan during the Fall '08 term at LSU.