Lecture 28: Chi-square tests and goodness of fit tests
Let $p = (p_1, \ldots, p_k)$, write $\sqrt{p} = (\sqrt{p_1}, \ldots, \sqrt{p_k})$, and let $\Lambda$ be a $k \times k$ projection matrix.
(i) If $\Lambda \sqrt{p} = a \sqrt{p}$, then
$$[Z_n(p)]^{\top} D(p)\, \Lambda\, D(p) Z_n(p) \to_d \chi^2_r,$$
where $\chi^2_r$ has the chi-square distribution with $r = \mathrm{tr}(\Lambda) - a$ degrees of freedom.
(ii) The same result holds …
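As a concrete illustration of the limiting chi-square distribution, here is a minimal sketch of Pearson's goodness-of-fit statistic in Python (the counts and cell probabilities are invented for illustration):

```python
import numpy as np
from scipy import stats

# Observed counts in k = 4 categories and hypothesized cell probabilities.
observed = np.array([18, 22, 31, 29])
p0 = np.array([0.2, 0.2, 0.3, 0.3])
n = observed.sum()
expected = n * p0

# Pearson's chi-square statistic: sum over cells of (O - E)^2 / E.
chi2_stat = ((observed - expected) ** 2 / expected).sum()

# Under H0, the statistic is asymptotically chi-square with k - 1 = 3 df.
p_value = stats.chi2.sf(chi2_stat, df=len(observed) - 1)
```

The same statistic is available as `scipy.stats.chisquare`, which this manual computation should reproduce.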
Lecture 24: UMPU tests in binomial, Poisson, and one-sample normal problems
A problem arising in many different contexts is the comparison of two …
If the observations are integer-valued, the problem often reduces to testing the equality …
Lecture 26: Likelihood ratio tests
When both H0 and H1 are simple (i.e., Θ0 = {θ0} and Θ1 = {θ1}), Theorem 6.1 applies and a UMP test rejects H0 when
$$\frac{f_1(X)}{f_0(X)} > c_0$$
for some c0 > 0.
The following definition is a natural extension of …
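The simple-vs-simple rejection rule f1(X)/f0(X) > c0 can be sketched numerically. The example below (hypothetical data, testing N(0,1) against N(1,1)) uses the fact that for this pair the likelihood ratio is monotone in the sample mean, so the size-0.05 test reduces to comparing x̄ with a normal critical value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50)  # data generated under H0

# Simple H0: N(0,1) versus simple H1: N(1,1); the likelihood ratio statistic.
ratio = np.prod(stats.norm.pdf(x, loc=1.0) / stats.norm.pdf(x, loc=0.0))

# Here log(f1/f0) = n*xbar - n/2, increasing in xbar, so f1(X)/f0(X) > c0
# is equivalent to xbar > c; a size-0.05 test takes c = z_{0.95}/sqrt(n).
c = stats.norm.ppf(0.95) / np.sqrt(len(x))
reject = x.mean() > c
```

The equivalence between the two forms of the rejection region can be checked directly on the computed values.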
Lecture 25: UMPU tests in two-sample normal problems and linear models
The problem of comparing the parameters of two normal distributions arises in the comparison of two treatments, products, and so on.
Suppose that we have two independent …
Lecture 37: Simultaneous confidence intervals
So far we have studied confidence sets for a real-valued or a vector-valued θ with a finite dimension k.
In some applications, we need a confidence set for real-valued θ_t with t ∈ T, where T is an index set that may contain …
Lecture 36: Asymptotic confidence sets and quantiles
We consider another example of asymptotic confidence sets based on likelihood discussed in the last lecture.
Let X1, …, Xn be i.i.d. from N(μ, σ²) with unknown θ = (μ, σ²).
Consider the problem of c…
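A minimal sketch of an asymptotic confidence interval of this kind, assuming the usual CLT-based (Wald) interval for μ and simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=200)  # simulated N(mu, sigma^2) sample

alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)
se = x.std(ddof=1) / np.sqrt(len(x))

# By the CLT, sqrt(n)(xbar - mu)/S -> N(0,1), so xbar +/- z_{0.975} * S/sqrt(n)
# is a confidence interval for mu with limiting coverage 1 - alpha.
lower, upper = x.mean() - z * se, x.mean() + z * se
```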
Lecture 35: Asymptotic confidence sets and …
In some problems, especially in nonparametric problems, it is difficult to find a reasonable confidence set with a given confidence coefficient or confidence level 1 − α.
A common approach is to find …
Lecture 32: Lengths of confidence intervals
For confidence intervals of a real-valued θ with the same confidence coefficient, an apparent measure of their performance is the interval length.
Shorter confidence intervals are preferred, since they are …
Lecture 33: UMA and UMAU confidence sets
Confidence sets related to optimal tests
For a confidence set obtained by inverting the acceptance regions of some UMP or UMPU tests, it is expected that the confidence set inherits some optimality property.
Chapter 7. Confidence Sets
Lecture 30: Pivotal quantities and confidence sets
X: a sample from a population P ∈ P.
θ = θ(P): a functional from P to ℝ^k for a fixed integer k.
C(X): a confidence set for θ, a set in B_Θ (the class of Borel sets on Θ).
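As a sketch of the pivotal-quantity approach: for a normal sample, √n(X̄ − μ)/S has a t(n−1) distribution free of (μ, σ²), so inverting the pivot gives an exact interval. The data below are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=25)  # simulated normal sample
n = len(x)

# sqrt(n)(xbar - mu)/S is pivotal: its t(n-1) distribution does not depend
# on the unknown (mu, sigma^2), so its quantiles give an exact 95% interval.
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * x.std(ddof=1) / np.sqrt(n)
ci = (x.mean() - half_width, x.mean() + half_width)
```

Note that the t critical value exceeds the normal one, reflecting the extra uncertainty from estimating σ.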
Lecture 34: Randomized confidence sets
Applications of Theorems 7.4 and 7.5 require that C(X) be obtained by inverting acceptance regions of nonrandomized tests.
Thus, these results cannot be directly applied to discrete problems.
In fact, in …
Lecture 31: Inverting acceptance regions of tests
Confidence sets and hypothesis tests
Another popular method of constructing confidence sets is to use a close relationship between confidence sets and hypothesis tests.
For any test T, the set {x : T(x) = 1} …
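The test-inversion idea can be sketched numerically: collect every p0 for which a level-α test of H0: p = p0 accepts the observed binomial count. The equal-tailed acceptance rule below is a simple illustrative choice, not the optimal test from the lectures:

```python
import numpy as np
from scipy import stats

def binom_ci_by_inversion(x, n, alpha=0.05, grid_size=2000):
    """Confidence set {p0 : the level-alpha test of H0: p = p0 accepts x}.

    The (illustrative) test accepts when x lies between the alpha/2 and
    1 - alpha/2 quantiles of Binomial(n, p0).
    """
    grid = np.linspace(1e-6, 1 - 1e-6, grid_size)
    lo_q = stats.binom.ppf(alpha / 2, n, grid)
    hi_q = stats.binom.ppf(1 - alpha / 2, n, grid)
    accepted = grid[(lo_q <= x) & (x <= hi_q)]
    return accepted.min(), accepted.max()

lo, hi = binom_ci_by_inversion(x=12, n=40)  # invented data: 12 successes in 40
```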
Lecture 29: Kolmogorov-Smirnov tests and asymptotic …
Let X1, …, Xn be i.i.d. random variables from a continuous c.d.f. F.
We test H0: F = F0 versus H1: F ≠ F0 with a fixed F0.
Let Fn be the empirical c.d.f. and
$$D_n(F) = \sup_{x} |F_n(x) - F(x)|.$$
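A minimal sketch of computing the Kolmogorov-Smirnov statistic sup|Fn − F0| directly, assuming F0 is the U(0,1) c.d.f. and using simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(size=100))
n = len(x)

# The sup of |F_n - F0| is attained at a jump of F_n, so it suffices to
# check F_n just after and just before each order statistic.
cdf0 = x  # F0(x_i) = x_i for the U(0,1) c.d.f.
d_plus = np.max(np.arange(1, n + 1) / n - cdf0)
d_minus = np.max(cdf0 - np.arange(0, n) / n)
dn = max(d_plus, d_minus)
```

This should agree with `scipy.stats.kstest`, which uses the same two-sided statistic.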
Lecture 27: Asymptotic tests based on likelihoods
Asymptotic distribution of likelihood ratio
An LR test is often equivalent to a test based on a statistic Y(X) whose distribution under H0 can be used to determine the rejection region of the LR test with …
Lecture 23: UMPU tests in exponential families
Continuity of the power function
For a given test T, the power function β_T(P) is said to be continuous in θ if and only if for any {θ_j : j = 0, 1, 2, …}, θ_j → θ_0 implies β_T(P_j) → β_T(P_0), where P_j ∈ P satisfying …
Lecture 21: Monotone likelihood ratio and UMP tests
Monotone likelihood ratio
A simple hypothesis involves only one population.
If a hypothesis is not simple, it is called composite.
UMP tests for a composite H1 exist in Example 6.2.
We now extend this result …
Chapter 6. Hypothesis Tests
Lecture 20: UMP tests and Neyman-Pearson lemma
Theory of testing hypotheses
X: a sample from a population P in P, a family of populations.
Based on the observed X, we test a given hypothesis
H0: P ∈ P0 versus H1: P ∈ P1,
where P0 and …
Lecture 19: Bootstrap
To evaluate and compare different estimators, we need consistent estimators of variances or asymptotic variances of estimators.
This is also important for hypothesis testing and confidence sets.
Let Var( ) be the variance or …
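A minimal bootstrap sketch: estimate the variance of the sample median by resampling with replacement (simulated data, B = 1000 replicates):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=60)  # simulated sample

# Bootstrap variance estimate for the sample median: draw B resamples with
# replacement, recompute the median each time, and take the empirical
# variance of the replicates.
B = 1000
replicates = np.array([
    np.median(rng.choice(x, size=len(x), replace=True)) for _ in range(B)
])
boot_var = replicates.var(ddof=1)
```

For N(0,1) data the asymptotic variance of the median is π/(2n), so the bootstrap estimate should be of that order.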
Lecture 18: Profile likelihoods, GEE, and GMM
Let ℓ(θ, ξ) be a likelihood (or empirical likelihood), where θ and ξ are not necessarily vector-valued.
It may be difficult to maximize the likelihood ℓ(θ, ξ) simultaneously over θ and ξ.
For each fixed θ, let …
Lecture 16: Robustness and efficiency
Mean vs median
Let F be a c.d.f. on ℝ symmetric about θ ∈ ℝ with F′(θ) > 0.
Then F(θ) = 0.5 and θ is called the median of F.
If F has a finite mean, then θ is also equal to the mean.
We consider the estimation of θ based on i.i.d. Xi's …
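The robustness contrast between the mean and the median can be seen with a toy sample symmetric about 0 plus one gross outlier:

```python
import numpy as np

# A sample symmetric about 0, then the same sample with one contaminating
# value: the median barely moves while the mean is dragged far off.
clean = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
contaminated = np.append(clean, 100.0)

mean_shift = abs(contaminated.mean() - clean.mean())      # 12.5
median_shift = abs(np.median(contaminated) - np.median(clean))  # 0.25
```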
Lecture 17: L-estimators and trimmed sample mean
L-functional and L-estimator
For a function J(t) on [0, 1], define the L-functional as
$$T(G) = \int_0^1 G^{-1}(t) J(t)\, dt.$$
If X1, …, Xn are i.i.d. from F and T(F) is the parameter of interest, T(Fn) is called an L-estimator.
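A trimmed mean is the classical example of an L-estimator, with J constant on (α, 1 − α) and zero outside. A sketch using scipy's `trim_mean` on an invented sample with one outlier:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 50.0])

# The alpha-trimmed mean discards the smallest and largest alpha fraction
# of the sorted sample and averages the rest.
alpha = 0.1
trimmed = stats.trim_mean(x, proportiontocut=alpha)

# Manual computation: drop floor(n * alpha) = 1 point from each tail.
manual = np.sort(x)[1:-1].mean()
```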
Lecture 15: Sample quantiles and their asymptotic …
Estimation of quantiles (percentiles)
Suppose that X1, …, Xn are i.i.d. random variables from an unknown …
For p ∈ (0, 1),
G⁻¹(p) = inf{x : G(x) ≥ p}
is the pth quantile for any c.d.f. G.
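The inf-based definition can be implemented directly: for a sample of size n, Fn⁻¹(p) is the ⌈np⌉-th order statistic. A minimal sketch:

```python
import numpy as np

def quantile_inf(x, p):
    """The p-th sample quantile F_n^{-1}(p) = inf{t : F_n(t) >= p}.

    For a sample of size n this is the ceil(n * p)-th order statistic,
    since F_n(x_(k)) = k/n at the k-th order statistic.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil(len(xs) * p))  # smallest k with k/n >= p
    return xs[k - 1]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
med = quantile_inf(x, 0.5)
```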
Lecture 14: Density estimation
Why do we estimate a density?
Suppose that X1, …, Xn are i.i.d. random variables from F and that F is unknown but has a Lebesgue p.d.f. f.
Estimation of F can be done by estimating f.
Note that estimators of F derived in …
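A histogram is the simplest density estimator: piecewise constant, normalized so it integrates to one. A sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=500)  # simulated sample

# Histogram density estimate: on each bin, f_hat equals
# (count in bin) / (n * bin width), so f_hat integrates to one.
counts, edges = np.histogram(x, bins=20)
widths = np.diff(edges)
f_hat = counts / (len(x) * widths)
total_mass = (f_hat * widths).sum()
```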
Chapter 5: Estimation in Non-Parametric Models
Lecture 12: Empirical c.d.f. and nonparametric MLE
Estimation in Nonparametric Models
Data X = (X1, …, Xn), where the Xi's are random d-vectors i.i.d. from an unknown c.d.f. F in a nonparametric family.
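The empirical c.d.f. Fn(t) = n⁻¹ #{i : Xi ≤ t} can be sketched in a few lines:

```python
import numpy as np

def ecdf(x):
    """Return the empirical c.d.f. F_n(t) = (1/n) * #{i : X_i <= t}."""
    xs = np.sort(np.asarray(x, dtype=float))
    def F_n(t):
        # searchsorted with side='right' counts observations <= t.
        return np.searchsorted(xs, t, side='right') / len(xs)
    return F_n

F_n = ecdf([2.0, 1.0, 3.0, 3.0])  # toy sample
```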
Lecture 10: Asymptotically efficient estimation I
Let {θ̂n} be a sequence of estimators of θ based on a sequence of samples {X = (X1, …, Xn) : n = 1, 2, …}.
Suppose that, as n → ∞, θ̂n is asymptotically normal (AN) in the sense …
Lecture 11: Asymptotically efficient estimation II
Scoring and RLE
The method of estimating θ by solving sn(θ) = 0 over θ ∈ Θ is called scoring, and the function sn(θ) is called the score function.
RLEs are not necessarily MLEs.
We may use the techniques discussed …
Lecture 13: Empirical Likelihoods
From Theorem 5.3, Fn maximizes the likelihood
$$\ell(G) = \prod_{i=1}^n p_i$$
over $p_i > 0$, $i = 1, \ldots, n$, and $\sum_{i=1}^n p_i = 1$, where $p_i = P_G(\{x_i\})$.
This method of deriving an estimator of F can be extended to various …
Lecture 8: MLE in generalized linear models (GLM)
MLE in exponential families
Suppose that X has a distribution from a natural exponential family, so that the likelihood function is
$$\ell(\eta) = \exp\{\eta^{\top} T(x) - \zeta(\eta)\}\, h(x),$$
where η is a vector of unknown parameters …
Lecture 9: Likelihood approach for incomplete data
Likelihood function when there are missing data
y: a variable or a vector of variables of interest.
x: a vector of covariates.
Y = (y1, …, yn): the complete data when there is no missing value.
X = (x1, …, xn): …
Lecture 7: Methods of computing MLE
We need to use various methods to derive MLEs.
Let X be an observation from the hypergeometric distribution HG(r, n, θ) (Table 1.1, page 18) with known r, n, and an unknown θ = n + 1, n + 2, …
In this case …
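A numerical sketch of the hypergeometric MLE: maximize the likelihood in the unknown population size over a grid of candidates (the numbers r, m, x below are invented for illustration). The classical answer is ⌊rm/x⌋:

```python
import numpy as np
from scipy import stats

# Capture-recapture style example: r = 20 marked individuals in a
# population of unknown size theta; a sample of size m = 13 contains
# x = 7 marked ones. Maximize the hypergeometric likelihood over theta.
r, m, x = 20, 13, 7

# theta must be at least r + m - x for the sample to be possible.
candidates = np.arange(r + m - x, 301)

# scipy's parameterization: hypergeom.pmf(k, M, n, N) with population M,
# n marked, sample size N.
likelihood = stats.hypergeom.pmf(x, candidates, r, m)
theta_hat = candidates[np.argmax(likelihood)]

# Closed form: the likelihood increases while theta <= r*m/x, so the
# maximizer is floor(r*m/x) = 37 here.
```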