Lecture 28: Chi-square tests and goodness of fit tests
Theorem 6.8
Let p = (p1, ..., pk) and Λ be a k × k projection matrix.
(i) If Λp^{1/2} = a p^{1/2}, then
[Zn(p)]ᵀ D(p) Λ D(p) Zn(p) →d χ²_r,
where χ²_r has the chi-square distribution with r = tr(Λ) − a degrees of freedom.
(ii) The same result holds
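A concrete (hypothetical) instance of such quadratic-form statistics is Pearson's chi-square statistic Σj (Oj − npj)²/(npj) for multinomial counts, which is asymptotically chi-square with k − 1 degrees of freedom under H0. A minimal sketch with made-up counts:

```python
def pearson_chisq(counts, p0):
    """Pearson's chi-square statistic: sum over cells of (O - E)^2 / E,
    where E = n * p under the hypothesized cell probabilities p0."""
    n = sum(counts)
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(counts, p0))

# Hypothetical data: n = 100 trials over k = 3 cells, H0: p = (0.25, 0.5, 0.25).
stat = pearson_chisq([30, 50, 20], [0.25, 0.5, 0.25])
# Under H0, stat is asymptotically chi-square with k - 1 = 2 degrees of freedom.
```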
Lecture 26: Likelihood ratio tests
Likelihood ratio
When both H0 and H1 are simple (i.e., Θ0 = {θ0} and Θ1 = {θ1}),
Theorem 6.1 applies and a UMP test rejects H0 when
f1(X)/f0(X) > c0
for some c0 > 0.
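A numerical sketch of such a simple-vs-simple LR test, taking f0 = N(0, 1) and f1 = N(1, 1) as hypothetical hypotheses (then f1/f0 = exp(x − 1/2) is increasing in x, so the test rejects for large x):

```python
import math

def normal_pdf(x, mu):
    # density of N(mu, 1)
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def lr(x):
    # likelihood ratio f1(x) / f0(x) for the assumed pair f0 = N(0,1), f1 = N(1,1);
    # it simplifies to exp(x - 1/2)
    return normal_pdf(x, 1.0) / normal_pdf(x, 0.0)

def ump_reject(x, c0):
    # reject H0 when the likelihood ratio exceeds c0
    return lr(x) > c0
```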
The following definition is a natural extension of
Lecture 27: Asymptotic tests based on likelihoods
Asymptotic distribution of likelihood ratio
An LR test is often equivalent to a test based on a statistic Y (X )
whose distribution under H0 can be used to determine the rejection
region of the LR test wit
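A standard special case (not necessarily the one these notes treat next): with i.i.d. N(θ, 1) data and H0: θ = 0, the LR statistic −2 log λ reduces to n x̄² and is asymptotically chi-square with 1 degree of freedom. A minimal sketch with hypothetical data:

```python
def neg2_log_lambda(xs):
    """-2 log LR for H0: theta = 0 with i.i.d. N(theta, 1) data.
    The log-likelihood is -(1/2) sum (x_i - theta)^2 + const; the MLE is the mean,
    so the statistic equals n * xbar**2."""
    n = len(xs)
    xbar = sum(xs) / n
    loglik = lambda t: -0.5 * sum((x - t) ** 2 for x in xs)
    return -2 * (loglik(0.0) - loglik(xbar))

xs = [0.4, -0.2, 0.1, 0.5]      # hypothetical sample, n = 4
stat = neg2_log_lambda(xs)      # asymptotically chi-square(1) under H0
```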
Lecture 24: UMPU tests in binomial, Poisson, and one
sample normal problems
Example 6.11
A problem arising in many different contexts is the comparison of two
treatments.
If the observations are integer-valued, the problem often reduces to
testing the equ
Lecture 21: Monotone likelihood ratio and UMP tests
Monotone likelihood ratio
A simple hypothesis involves only one population.
If a hypothesis is not simple, it is called composite.
UMP tests for a composite H1 exist in Example 6.2.
We now extend this re
Lecture 25: UMPU tests in two sample normal
problems and linear models
Two-sample problems
The problem of comparing the parameters of two normal distributions
arises in the comparison of two treatments, products, and so on.
Suppose that we have two indepe
Lecture 22: UMP tests for two-sided hypotheses and
unbiased tests
Proposition 6.1 (Generalized Neyman-Pearson lemma)
Let f1, ..., fm+1 be Borel functions on R^p integrable w.r.t. a σ-finite measure ν.
For given constants t1, ..., tm, let T be the class of Borel functi
Lecture 23: UMPU tests in exponential families
Continuity of the power function
For a given test T, the power function βT(P) is said to be continuous in θ
if and only if for any {θj : j = 0, 1, 2, ...}, θj → θ0 implies
βT(Pθj) → βT(Pθ0), where Pθj ∈ 𝒫 satisfying
Lecture 31: Inverting acceptance regions of tests
Confidence sets and hypothesis tests
Another popular method of constructing confidence sets is to use a
close relationship between confidence sets and hypothesis tests.
For any test T, the set {x : T(x) =
Lecture 29: Kolmogorov-Smirnov tests and asymptotic
tests
Kolmogorov-Smirnov tests
Let X1, ..., Xn be i.i.d. random variables from a continuous c.d.f. F.
Consider
H0: F = F0 versus H1: F ≠ F0
with a fixed F0.
Let Fn be the empirical c.d.f. and
Dn(F) = sup_x |Fn(x) − F(x)|
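The Kolmogorov–Smirnov statistic Dn(F0) = sup_x |Fn(x) − F0(x)| is attained just before or at an order statistic, so it can be computed exactly from the sorted sample. A minimal sketch (data and F0 hypothetical):

```python
def ks_statistic(xs, F0):
    """D_n = sup_x |F_n(x) - F0(x)| for the empirical c.d.f. of xs.
    At the i-th order statistic, F_n jumps from (i-1)/n to i/n, so the sup
    is the max of i/n - F0(x_(i)) and F0(x_(i)) - (i-1)/n over i."""
    xs = sorted(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        d = max(d, i / n - F0(x), F0(x) - (i - 1) / n)
    return d

# Hypothetical sample tested against F0 = Uniform(0, 1):
d = ks_statistic([0.1, 0.2, 0.5, 0.9], lambda x: x)
```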
Lecture 2: Generalized, empirical, and hierarchical
Bayes methods
Generalized Bayes action
The minimization in Definition 4.1 is the same as the minimization
∫ L(θ, δ(x)) fθ(x) dΠ(θ) = min_{a∈A} ∫ L(θ, a) fθ(x) dΠ(θ)
δ(x) is called a generalized Bayes action.
This is still defined
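Under squared error loss, the minimizing action is the fθ(x)-weighted mean of θ, which can be well defined even for an improper Π. A numerical sketch (my own illustration, not from the notes) for the N(θ, 1) location model with the improper Lebesgue "prior", where the action reduces to δ(x) = x:

```python
import math

def generalized_bayes_mean(x, half_width=10.0, m=20001):
    """Approximate the action minimizing the integrated squared-error loss
    when the 'prior' is Lebesgue measure on R and f_theta is the N(theta, 1)
    density: the minimizer is the f_theta(x)-weighted mean of theta,
    approximated by a Riemann sum on a grid symmetric about x."""
    h = 2 * half_width / (m - 1)
    num = den = 0.0
    for j in range(m):
        t = x - half_width + j * h
        w = math.exp(-(x - t) ** 2 / 2)  # likelihood of theta = t at the data x
        num += t * w
        den += w
    return num / den

# For this model the generalized Bayes action is delta(x) = x itself:
a = generalized_bayes_mean(1.7)
```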
Lecture 37: Simultaneous confidence intervals
So far we have studied confidence sets for a real-valued θ or a
vector-valued θ with a finite dimension k.
In some applications, we need a confidence set for real-valued θt with
t ∈ T, where T is an index set that may cont
Lecture 36: Asymptotic confidence sets and quantiles
We consider another example of asymptotic confidence sets based on
likelihood discussed in the last lecture.
Example 7.24
Let X1, ..., Xn be i.i.d. from N(μ, σ²) with unknown θ = (μ, σ²).
Consider the problem of
Lecture 35: Asymptotic confidence sets and
likelihoods
Asymptotic criterion
In some problems, especially in nonparametric problems, it is difficult
to find a reasonable confidence set with a given confidence coefficient
or confidence level 1 − α.
A common approach is to n
Lecture 33: UMA and UMAU confidence sets
Confidence sets related to optimal tests
For a confidence set obtained by inverting the acceptance regions of
some UMP or UMPU tests, it is expected that the confidence set
inherits some optimality property.
Definition 7.2
L
Lecture 32: Lengths of confidence intervals
Length criterion
For confidence intervals of a real-valued θ with the same confidence
coefficient, an apparent measure of their performance is the interval
length.
Shorter confidence intervals are preferred, since they are
Chapter 7. Confidence Sets
Lecture 30: Pivotal quantities and confidence sets
Confidence sets
X: a sample from a population P ∈ 𝒫.
θ = θ(P): a functional from 𝒫 to R^k for a fixed integer k.
C(X): a confidence set for θ, a set in B_Θ (the class of Borel sets on Θ)
de
Lecture 34: Randomized confidence sets
Randomization
Applications of Theorems 7.4 and 7.5 require that C (X ) be obtained by
inverting acceptance regions of nonrandomized tests.
Thus, these results cannot be directly applied to discrete problems.
In fact, i
Chapter 6. Hypothesis Tests
Lecture 20: UMP tests and Neyman-Pearson lemma
Theory of testing hypotheses
X: a sample from a population P in 𝒫, a family of populations.
Based on the observed X, we test a given hypothesis
H0: P ∈ 𝒫0
vs
H1: P ∈ 𝒫1
where 𝒫0 a
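To make the setup concrete, here is a sketch (all numbers hypothetical, not from the notes) of the power function of a simple binomial test: for X ~ Bin(n, p), reject H0 when X ≥ c, so the power at p is P_p(X ≥ c).

```python
from math import comb

def power_fn(n, c, p):
    """Power of the test 'reject H0 when X >= c' for X ~ Binomial(n, p):
    beta(p) = P_p(X >= c), computed exactly from the binomial pmf."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c, n + 1))

# Hypothetical test: n = 10 trials, reject when X >= 8.
size = power_fn(10, 8, 0.5)   # type I error probability at the boundary p = 0.5
```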
Lecture 19: Bootstrap
Motivation
To evaluate and compare different estimators, we need consistent
estimators of variances or asymptotic variances of estimators.
This is also important for hypothesis testing and confidence sets.
Let Var(θ̂) be the variance or
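The basic bootstrap recipe for a variance estimate can be sketched as follows (data and statistic are my own hypothetical choices): recompute the statistic on B resamples drawn with replacement and take the sample variance of the replicates.

```python
import random, statistics

def bootstrap_var(data, stat, B=2000, seed=0):
    """Bootstrap estimate of Var(stat(X_1, ..., X_n)): draw B resamples of
    size n with replacement from the data, recompute the statistic on each,
    and return the variance of the B replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(B)]
    return statistics.pvariance(reps)

data = [2.1, 3.4, 1.9, 5.0, 4.2, 2.8, 3.7, 4.4]   # hypothetical sample
v = bootstrap_var(data, lambda xs: sum(xs) / len(xs))
# v approximates Var(sample mean) = sigma^2 / n, roughly s^2 / n here.
```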
Lecture 9: MLE in generalized linear models (GLM)
and quasi-MLE
MLE in exponential families
Suppose that X has a distribution from a natural exponential family so
that the likelihood function is
ℓ(η) = exp{ηᵀT(x) − ζ(η)} h(x),
where η is a vector of unknown
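In such a family the likelihood equation sets the derivative of ζ equal to the observed sufficient statistic. A hedged sketch for the Poisson case (an assumption of mine: ζ(η) = e^η, data hypothetical), where the solution is η̂ = log x̄, solved by Newton's method to illustrate the general recipe:

```python
import math

def nef_mle_poisson(xs, eta0=0.0, iters=50):
    """Solve the likelihood equation zeta'(eta) = xbar for the Poisson
    natural family, where zeta(eta) = e^eta, so the root is eta = log(xbar).
    Newton's method on g(eta) = e^eta - xbar, with g'(eta) = e^eta."""
    xbar = sum(xs) / len(xs)
    eta = eta0
    for _ in range(iters):
        eta -= (math.exp(eta) - xbar) / math.exp(eta)
    return eta

xs = [3, 1, 4, 1, 5]              # hypothetical counts, mean 2.8
eta_hat = nef_mle_poisson(xs)     # should agree with log(2.8)
```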
Lecture 4: Minimax estimators
Consider estimators of a real-valued ϑ = g(θ) based on a sample X
from Pθ, θ ∈ Θ, under loss L and risk RT(θ) = E[L(T(X), θ)].
Minimax estimator
A minimax estimator minimizes supθ RT(θ) over all estimators T
Discussion
A minimax
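A classical illustration (my own, not claimed to be the example these notes discuss): for X ~ Bin(n, p) under squared error loss, the estimator (X + √n/2)/(n + √n) has constant risk n/(4(n + √n)²), so its supremum risk beats that of X/n, whose risk p(1 − p)/n peaks at 1/(4n).

```python
import math

def risk_mle(n, p):
    """Risk of X/n under squared error: p(1-p)/n (pure variance, no bias)."""
    return p * (1 - p) / n

def risk_shrunk(n, p):
    """Risk of (X + sqrt(n)/2) / (n + sqrt(n)): bias^2 + variance,
    which works out to the constant n / (4 (n + sqrt(n))^2)."""
    d = n + math.sqrt(n)
    bias = (n * p + math.sqrt(n) / 2) / d - p
    var = n * p * (1 - p) / d ** 2
    return bias ** 2 + var

n = 25
grid = [i / 100 for i in range(101)]
sup_mle = max(risk_mle(n, p) for p in grid)      # 1/(4n), attained at p = 1/2
sup_shrunk = max(risk_shrunk(n, p) for p in grid)  # constant in p
```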
Lecture 8: Methods of computing MLE
We need to use various methods to derive MLEs.
Example 4.32
Let X be an observation from the hypergeometric distribution
HG(r, n, θ) (Table 1.1, page 18) with known r, n, and an unknown
θ = n + 1, n + 2, ...
In this case
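Since the parameter is a discrete integer, the MLE can be found by direct search over the likelihood. A sketch under an assumed parameterization (θ = population size containing r marked items, X = marked items in a sample of n; the known closed form is then floor(rn/x)):

```python
from math import comb

def hg_pmf(theta, r, n, x):
    """Assumed HG(r, n, theta) pmf: x marked items in a sample of n drawn
    without replacement from a population of theta containing r marked."""
    if theta - r < n - x:
        return 0.0
    return comb(r, x) * comb(theta - r, n - x) / comb(theta, n)

def hg_mle(r, n, x, theta_max=500):
    """Maximize the likelihood over the discrete parameter by brute force;
    the likelihood ratio L(theta)/L(theta-1) >= 1 iff theta <= r*n/x,
    so the maximizer is floor(r*n/x)."""
    support = range(max(n, r + n - x), theta_max + 1)
    return max(support, key=lambda t: hg_pmf(t, r, n, x))

theta_hat = hg_mle(r=10, n=10, x=3)   # closed form: floor(100/3) = 33
```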
Lecture 3: Bayes rules and estimators
Bayes estimators
In the frequentist approach, if a Bayes action δ(x) is a measurable
function of x, then δ(X) is a nonrandomized decision rule.
It can be shown that δ(X) defined in Definition 4.1 (if it exists for
X = x
Lecture 7: Likelihood and maximum likelihood
estimator (MLE)
The maximum likelihood method is the most popular method for
deriving estimators in statistical inference that does not use any loss
function.
Example 4.28
Let X be a single observation taking v
Lecture 6: Shrinkage estimators
We re-state the main theorem and provide a proof
Theorem 4.15
Suppose that X is from Np(θ, Ip) with p ≥ 3. Then, under the squared
error loss, the risks of the following estimators of θ,
δc,r = X − [r(p − 2)/‖X − c‖²] (X − c),
where
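The estimator in the theorem shrinks X toward the point c by an amount inversely proportional to ‖X − c‖². A minimal sketch of the formula (numbers hypothetical):

```python
def shrinkage(x, c, r=1.0):
    """delta_{c,r} = x - [r (p - 2) / ||x - c||^2] (x - c):
    shrink the observation x toward the point c."""
    p = len(x)
    diff = [xi - ci for xi, ci in zip(x, c)]
    norm2 = sum(d * d for d in diff)
    factor = r * (p - 2) / norm2
    return [xi - factor * d for xi, d in zip(x, diff)]

# p = 4 observation shrunk toward the origin (c = 0, r = 1);
# here ||x||^2 = 25, so the shrinkage factor is 1 - 2/25 = 0.92:
est = shrinkage([3.0, 4.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0])
```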
Chapter 4: Estimation in Parametric Models
Lecture 1: Bayesian approach
X is from a population in a parametric family 𝒫 = {Pθ : θ ∈ Θ}, where
Θ ⊂ R^k for a fixed integer k ≥ 1
Bayes approach
Optimal rules in the Bayesian approach, which is fundamentally
different fr
Lecture 5: Admissibility, minimaxity, and
simultaneous estimation
Theorem 4.14 (Admissibility in one-parameter exponential
families)
Suppose that X has the p.d.f. c(θ)e^{θT(x)} w.r.t. a σ-finite measure ν,
where T(x) is real-valued and θ ∈ (θ−, θ+) ⊆ R.
Consider