# HANDOUT — Hypothesis Testing or Inference

Econ 321 – Applied Econometrics, Fall 2007, Cornell University, Prof. Molinari

## 1 Basic Ideas and Definitions

In many cases we want to determine whether a parameter is equal to a particular value, or lies in some range. For example, in the case of the Florida ballots we would like to determine whether recounting the ballots would change the result of the election, i.e., we would like to know whether $E[X_i] > \frac{537}{18{,}406}$. We do not know the exact answer because we can only estimate the parameter from a random sample (e.g., 1,000 ballots). Here is what we can do:

1. Compute an estimate;
2. Construct a TEST STATISTIC using the estimate;
3. Determine the sampling distribution of the test statistic under the NULL HYPOTHESIS;
4. Use a DECISION RULE to either reject or fail to reject the null hypothesis.

What are the new concepts introduced?

(a) A TEST STATISTIC is a function of the sample observations $X_1, \ldots, X_n$ that is used as evidence for testing the hypothesis. The test procedure partitions the possible values of the test statistic into two subsets: an ACCEPTANCE REGION for the hypothesis to be tested, and a REJECTION REGION for the hypothesis to be tested.

(b) The NULL HYPOTHESIS is the hypothesized value, or range of values, for the parameter. The name comes from the fact that often the hypothesis we want to test is of the form $\mu_1 = \mu_2$, i.e., NO DIFFERENCE. It is denoted $H_0$.

(c) A DECISION RULE tells us what decision to take for each possible outcome in the sample space of the test statistic.

(d) To construct suitable criteria for hypothesis testing, we must also formulate an ALTERNATIVE HYPOTHESIS, denoted $H_A$. In the Florida ballot example:
$$H_0 : E[X_i] > \frac{537}{18{,}406}, \qquad H_A : E[X_i] \le \frac{537}{18{,}406}.$$

(e) A hypothesis is SIMPLE if it COMPLETELY SPECIFIES the distribution of the random variable under analysis (e.g., pinning down $p$ exactly in the Bernoulli case), and COMPOSITE if it does not.
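The four-step procedure above can be sketched in code. Here is a minimal illustration for a two-sided test of a population mean with known variance (the function name, defaults, and use of Python's stdlib `statistics.NormalDist` are my choices, not the handout's):

```python
from statistics import NormalDist, fmean

def z_test(sample, mu0, sigma2, alpha=0.05):
    """Two-sided test of H0: mu = mu0 when the variance sigma2 is known."""
    n = len(sample)
    xbar = fmean(sample)                       # step 1: compute an estimate
    t = (xbar - mu0) / (sigma2 / n) ** 0.5     # step 2: construct a test statistic
    z = NormalDist().inv_cdf(1 - alpha / 2)    # step 3: N(0,1) critical value under H0
    return abs(t) > z                          # step 4: decision rule (True = reject H0)

# A sample of n = 25 values averaging 0.4 gives t ~ 0.4 / sqrt(1/25) = 2.0 > 1.96,
# so H0: mu = 0 is rejected at the 5% level.
print(z_test([0.4] * 25, mu0=0.0, sigma2=1.0))
```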
In the Florida ballot example, both $H_0$ and $H_A$ are composite.

How do we proceed in practice? We ask whether the realization of the test statistic is likely if the null hypothesis is true, using the sampling distribution of the test statistic.

**Example 1**
$$H_0 : \mu = \mu_0, \qquad H_A : \mu = \mu_A.$$
Suppose $H_0$ is true. We can estimate $\mu$ using $\bar{X}$ from a sample with $n$ observations. Assume that
$$X_i \sim N(\mu, \sigma^2).$$
We need a statistic — a function of the data, in terms of $\bar{X}$ — whose sampling distribution under $H_0$ is "well known" to us. It is essential that this sampling distribution not depend on any unknown parameter (e.g., $\sigma^2$). Let's look at $\bar{X}$, assuming $\mu = \mu_0$:
$$t = \frac{\bar{X} - \mu_0}{\sqrt{\sigma^2/n}},$$
where $t$ is the so-called "t-STATISTIC". Suppose for now that $\sigma^2$ is known. If $H_0$ is true, we know that $t$ is distributed $N(0,1)$. The question is:

- Is the estimated value of $t$ likely to be from a $N(0,1)$ (in which case $H_0$ looks true), or is it more likely to have come from another distribution (in which case $H_0$ looks false)?

The problem here is that in principle ANY realized value of $t$ could come from a $N(0,1)$, because a random variable from $N(0,1)$ takes on all values in $(-\infty, +\infty)$. So how can we decide? We will have to choose something! We can make two types of errors:

- TYPE I ERROR: Reject $H_0$ when $H_0$ is true.
- TYPE II ERROR: Fail to reject $H_0$ when $H_A$ is true.

It turns out that we can directly control Type I errors, and then search for the test statistic that reduces Type II errors as much as possible. Consider the following REJECTION RULE:

- Reject $H_0$ if $|t| > z_{\alpha/2}$, where $z_{\alpha/2}$ satisfies $\Pr\{Z > z_{\alpha/2}\} = \alpha/2$.
- $|t| > z_{\alpha/2}$ is equivalent to: $t > z_{\alpha/2}$ if $t > 0$, and $t < -z_{\alpha/2}$ if $t < 0$.
- FAIL to reject $H_0$ if $|t| \le z_{\alpha/2}$, i.e., if $-z_{\alpha/2} \le t \le z_{\alpha/2}$.

Remark: by construction, this rule gives an $\alpha$ probability of a Type I error. Why is that so? When $H_0$ is true, $t \sim N(0,1)$.
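The cutoff $z_{\alpha/2}$ in the rejection rule can be computed with the standard normal inverse CDF. A quick check (using Python's stdlib `statistics.NormalDist`, my choice of tool) that the rule's total rejection probability under $H_0$ — the two tails combined — equals $\alpha$:

```python
from statistics import NormalDist

std = NormalDist()  # the standard normal, Z ~ N(0, 1)
for alpha in (0.05, 0.01):
    z = std.inv_cdf(1 - alpha / 2)   # z_{alpha/2}: Pr{Z > z} = alpha/2
    size = 2 * (1 - std.cdf(z))      # Pr{|t| > z_{alpha/2}} when t ~ N(0,1)
    print(f"alpha={alpha}: z_alpha/2={z:.3f}, size={size:.3f}")
```

This prints the familiar critical values 1.960 and 2.576 used later in the handout.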
Now:
$$\Pr\{\text{TYPE I ERROR}\} = \Pr\{\text{reject } H_0 \mid H_0 \text{ is true}\} = \Pr\{|t| > z_{\alpha/2} \mid t \sim N(0,1)\} = \alpha$$
$$\Rightarrow \Pr\{\text{fail to reject } H_0 \mid H_0 \text{ is true}\} = 1 - \alpha.$$
$\alpha$ is often called the SIGNIFICANCE LEVEL of the test, or the SIZE OF THE CRITICAL/REJECTION REGION. We usually want $\alpha$ to be small, e.g., $\alpha = 0.05, 0.01$, etc.

What about the TYPE II error?
$$\Pr\{\text{reject } H_0 \mid H_A \text{ is true}\} = 1 - \Pr\{\text{fail to reject } H_0 \mid H_A \text{ is true}\} = 1 - \Pr\{\text{TYPE II ERROR}\} = 1 - \beta = \text{power of the test}.$$
$\beta$ is called the probability (or size) of a Type II error. The power of a test measures the ability of the test to detect that $H_0$ is not true, when it is not. The power of a test depends on what $\mu$ is equal to: if $\mu \ne \mu_0$ but $\mu$ is close to $\mu_0$, the power will be low; if $\mu$ is far from $\mu_0$, the power will be high.

Notice also that under the same $H_0$, if $\alpha \downarrow$ then $\beta \uparrow$, so $1 - \beta \downarrow$ and the power necessarily drops. Intuitively, this is because a rejection becomes less likely under $H_0$ or $H_A$. Mathematically, letting $R$ denote the rejection region,
$$\alpha = \Pr\{R \mid H_0\}: \quad \alpha_1 \le \alpha_2 \Rightarrow R_1 \subset R_2,$$
$$\beta = \Pr\{R^c \mid H_A\}: \quad R_2^c \subset R_1^c \Rightarrow \beta_2 \le \beta_1.$$

## 2 Hypothesis Testing for the Mean of a Population

Suppose we have a random sample $X_1, X_2, \ldots, X_n$ from a population with mean $\mu$ and variance $\sigma^2$. For illustration, assume that we know $\sigma^2$ but do not know $\mu$. As we have seen in lecture, $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ provides an unbiased estimator of $\mu$. Given what we know, we would like to make decisions about what the true value of $\mu$ might be. Suppose we want to evaluate the claim that $\mu = \mu_0$, as opposed to $\mu \ne \mu_0$, which is written as:
$$H_0 : \mu = \mu_0 \;\text{(null hypothesis)}, \qquad H_A : \mu \ne \mu_0 \;\text{(alternative hypothesis)}.$$
We need a method that tells us how likely it is that $\mu = \mu_0$, given that we know $X_1, X_2, \ldots, X_n$ and $\sigma^2$. Suppose $X_1, X_2, \ldots, X_n$ come from a population that is $N(\mu, \sigma^2)$. Then each $X_i$ is normally distributed with mean $\mu$ and variance $\sigma^2$, and thus $\bar{X} \sim N(\mu, \sigma^2/n)$, as shown in lecture. Since we are trying to evaluate how likely it is that $\mu = \mu_0$, we form a test statistic under $H_0$.
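The claim that the rejection rule has Type I error probability $\alpha$ can be checked by Monte Carlo simulation in the known-variance setting just described. A sketch (the sample size, seed, and number of simulations are my choices for illustration):

```python
import random
from statistics import NormalDist, fmean

random.seed(0)
mu0, sigma, n, alpha = 0.0, 1.0, 25, 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)

n_sims = 4000
rejections = 0
for _ in range(n_sims):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]   # H0 is true by construction
    t = (fmean(sample) - mu0) / (sigma / n ** 0.5)
    rejections += abs(t) > z                                # count Type I errors
print(rejections / n_sims)   # rejection frequency, close to alpha = 0.05
```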
If $\mu = \mu_0$, then $\bar{X} \sim N(\mu_0, \sigma^2/n)$, and
$$t = \frac{\bar{X} - \mu_0}{\sqrt{\sigma^2/n}} \sim N(0,1).$$
The closer $t$ is to its expected value under $H_0$, which is zero, the more likely it is that $\mu = \mu_0$ holds. But how close is "close enough"? To answer this question, we use the fact that under $H_0$, $t \sim N(0,1)$. Observing a value of $t$ in the far tails of the standard normal is less likely than observing a value close to its mean, zero. Thus, if we get a value of $t$ that is not very likely (out in the tails), we can reject the null hypothesis that $\mu = \mu_0$. This is because we constructed $t$ under the null hypothesis. However, given a value of $t$, how far out in the tails is far enough to reject the null? We need a cutoff point, which we label $z_{\alpha/2}$. The subscript reflects the fact that there is area $\alpha/2$ under the standard normal density to the right of $z_{\alpha/2}$, as well as to the left of $-z_{\alpha/2}$. Then our decision rule is simple: reject $H_0$ if $|t| > z_{\alpha/2}$, and do not reject $H_0$ otherwise. Note that
$$\Pr\left(|t| > z_{\alpha/2} \mid H_0 \text{ is true}\right) = \frac{\alpha}{2} + \frac{\alpha}{2} = \alpha.$$
As discussed before, $\alpha$ is the probability of rejecting $H_0$ given that $H_0$ is true; we say that $\alpha$ is the probability of making a Type I error. We can control $\alpha$ by choosing it to be smaller or bigger — in other words, by choosing our cutoff point $z_{\alpha/2}$ to be bigger or smaller, respectively. Another kind of error we can make is to fail to reject $H_0$ when $H_0$ is not true. This is called a Type II error. Notice that we cannot directly control the Type II error.

## 3 Trade-off between $\alpha$ and the power of the test

Remember that the power of the test is $\Pr(\text{rejecting } H_0 \mid H_0 \text{ is not true})$. When $\mu \ne \mu_0$ (i.e., $H_0$ is not true), $E[\bar{X}] = \mu \ne \mu_0$ and $\operatorname{Var}(\bar{X}) = \sigma^2/n$. Then
$$E(t) = E\left(\frac{\bar{X} - \mu_0}{\sqrt{\sigma^2/n}}\right) = \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}},$$
$$\operatorname{Var}(t) = \operatorname{Var}\left(\frac{\bar{X} - \mu_0}{\sqrt{\sigma^2/n}}\right) = \frac{\operatorname{Var}(\bar{X})}{\sigma^2/n} = \frac{\sigma^2/n}{\sigma^2/n} = 1.$$
Consequently, under $H_A$ it does not hold that $t \sim N(0,1)$. In fact, if $\mu \ne \mu_0$, then
$$t \sim N\!\left(\frac{\mu - \mu_0}{\sqrt{\sigma^2/n}},\, 1\right).$$
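This shifted distribution of $t$ under $H_A$ can also be seen by simulation. A sketch using the numbers from the numerical example below ($\mu_0 = 0$, $\sigma^2 = 1$, $n = 25$, $\mu = 0.4$, so $E(t) = 0.4/\sqrt{1/25} = 2$; the seed and simulation count are my choices):

```python
import random
from statistics import fmean, pvariance

random.seed(1)
mu, mu0, sigma, n = 0.4, 0.0, 1.0, 25      # H_A holds: mu != mu0, and E(t) = 2
ts = []
for _ in range(4000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    ts.append((fmean(sample) - mu0) / (sigma / n ** 0.5))
# Sample mean and variance of the simulated t's: near E(t) = 2 and Var(t) = 1.
print(round(fmean(ts), 2), round(pvariance(ts), 2))
```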
Therefore,
$$1 - \beta = \text{power of test} = \Pr(\text{rejecting } H_0 \mid H_0 \text{ is not true}) = \Pr\left(|t| > z_{\alpha/2} \mid \mu \ne \mu_0\right)$$
$$= \Pr\left(t - E(t) > z_{\alpha/2} - E(t)\right) + \Pr\left(t - E(t) < -z_{\alpha/2} - E(t)\right)$$
$$= \Pr\left(t - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}} > z_{\alpha/2} - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}}\right) + \Pr\left(t - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}} < -z_{\alpha/2} - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}}\right)$$
$$= \Pr\left(Z > z_{\alpha/2} - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}}\right) + \Pr\left(Z < -z_{\alpha/2} - \frac{\mu - \mu_0}{\sqrt{\sigma^2/n}}\right).$$

Now, let's illustrate power graphically:

[Figure: p.d.f. of $t$ under $H_0$ (dot-dashed) and under $H_A$ (solid).]

Under $H_0$, the density of $t$ is given by the dot-dashed, bell-shaped curve. We reject $H_0$ for $|t| > z_{\alpha/2}$ (i.e., in the tails of the dot-dashed p.d.f., the dotted areas). But what happens if $H_0$ is not true and $\mu \ne \mu_0$? Then the p.d.f. of $t$ is given by the solid, bell-shaped curve. Consequently, since we always reject whenever $|t| > z_{\alpha/2}$, the power is given by the sum of the crossed-out areas in each tail of the solid, bell-shaped curve.

## 4 A Numerical Example

Let $\mu_0 = 0$, $\sigma^2 = 1$, $n = 25$, and $\alpha = 0.05$. Suppose $\mu = 0.4$. Then under $H_0 : \mu = \mu_0$, $t \sim N(0,1)$, and under $H_A : \mu \ne \mu_0$,
$$t \sim N\!\left(\frac{\mu - \mu_0}{\sqrt{\sigma^2/n}},\, 1\right) = N\!\left(\frac{0.4 - 0}{\sqrt{1/25}},\, 1\right) = N(2, 1).$$
Hence,
$$1 - \beta = \text{power} = \Pr(Z < -z_{0.025} - 2) + \Pr(Z > z_{0.025} - 2) = \Pr(Z < -1.96 - 2) + \Pr(Z > 1.96 - 2) \approx 0.51599.$$
So, we get the following picture:

[Figure. Handwritten note: the area in the left tail, $\Pr(Z < -3.96)$, is $\approx 0$.]

What if we had $\alpha = 0.01$? Then $z_{0.005} = 2.575$, and our picture becomes:

[Figure.]

In this case,
$$\text{power} = \Pr(Z < -z_{0.005} - 2) + \Pr(Z > z_{0.005} - 2) = \Pr(Z < -2.575 - 2) + \Pr(Z > 2.575 - 2) \approx 0.28265.$$
Note that, as we expected, the power declined when $\alpha$ decreased from 0.05 to 0.01 (recall: we showed that if $\alpha \downarrow$ then $\beta \uparrow$, which implies $1 - \beta \downarrow$). We could repeat this experiment for various values of $\mu$. The following tables contain the results, in terms of power, for different values of $\mu$ and $\alpha$.
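These power numbers follow directly from the closed-form expression derived above. A sketch that reproduces them (the function name and defaults are mine; small last-digit differences arise because the exact critical values 1.95996 and 2.57583 are used rather than the rounded 1.96 and 2.575):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal N(0, 1)

def power(alpha, mu, mu0=0.0, sigma2=1.0, n=25):
    """1 - beta = Pr(Z > z_{a/2} - E(t)) + Pr(Z < -z_{a/2} - E(t)),
    where E(t) = (mu - mu0) / sqrt(sigma2 / n)."""
    d = (mu - mu0) / (sigma2 / n) ** 0.5
    z = std.inv_cdf(1 - alpha / 2)
    return (1 - std.cdf(z - d)) + std.cdf(-z - d)

print(round(power(0.05, 0.4), 5))   # close to the handout's 0.51599
print(round(power(0.01, 0.4), 5))   # close to the handout's 0.28265
```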
In the tables, $E(t) = (\mu - \mu_0)/\sqrt{\sigma^2/n} = 5\mu$.

**$\alpha = 0.05$:**

| | $\mu = 0.1$ | $\mu = 0.2$ | $\mu = 0.3$ | $\mu = 0.4$ | $\mu = 0.5$ |
|---|---|---|---|---|---|
| $\Pr(Z < -z_{0.025} - E(t))$ | 0.00695 | 0.00154 | 0.00027 | 0.00004 | 0.00001 |
| $\Pr(Z > z_{0.025} - E(t))$ | 0.07215 | 0.16853 | 0.32276 | 0.51595 | 0.70540 |
| power | 0.07909 | 0.17007 | 0.32303 | 0.51599 | 0.70541 |

**$\alpha = 0.01$:**

| | $\mu = 0.1$ | $\mu = 0.2$ | $\mu = 0.3$ | $\mu = 0.4$ | $\mu = 0.5$ |
|---|---|---|---|---|---|
| $\Pr(Z < -z_{0.005} - E(t))$ | 0.00105 | 0.00018 | 0.00002 | 0.00000 | 0.00000 |
| $\Pr(Z > z_{0.005} - E(t))$ | 0.01899 | 0.05763 | 0.14119 | 0.28265 | 0.47011 |
| power | 0.02005 | 0.05780 | 0.14121 | 0.28265 | 0.47011 |

We see that for any $\mu > \mu_0 = 0$, as $\alpha$ goes down the power of the test decreases and, vice versa, as $\alpha$ goes up the power increases. There is therefore a clear trade-off between a lower probability of Type I error (i.e., lower $\alpha$) and higher power of the test.
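The tables can be regenerated from the same closed-form power expression (a sketch; the helper below recomputes each cell with exact critical values, so small last-digit differences from the handout's rounding are expected):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal N(0, 1)

def power(alpha, mu, mu0=0.0, sigma2=1.0, n=25):
    """Pr(Z > z_{a/2} - E(t)) + Pr(Z < -z_{a/2} - E(t)), E(t) = (mu-mu0)/sqrt(sigma2/n)."""
    d = (mu - mu0) / (sigma2 / n) ** 0.5
    z = std.inv_cdf(1 - alpha / 2)
    return (1 - std.cdf(z - d)) + std.cdf(-z - d)

for alpha in (0.05, 0.01):
    row = {mu: round(power(alpha, mu), 5) for mu in (0.1, 0.2, 0.3, 0.4, 0.5)}
    print(f"alpha={alpha}:", row)
```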