FACULTY OF SCIENCE
SCHOOL OF MATHEMATICS AND
STATISTICS
MATH3811/MATH3911
STATISTICAL
INFERENCE/HIGHER
STATISTICAL INFERENCE
Session 1, 2007
MATH3811/MATH3911 Course Outline
Information about the course
Course Authority: Dr. S. Penev
e-mail S.Penev@unsw.e
Lecture 17
6.1. Measures of association for continuous observations. Tests of independence
for near-normal random variables.
It is well known that the correlation coefficient for two random variables X and Y
is defined as

ρ = ρ(X, Y) = Cov(X, Y) / (σ_X σ_Y) = E[(X - E(X))(Y - E(Y))] / (σ_X σ_Y).
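As a quick illustration, the usual plug-in (sample) estimate of ρ can be computed directly from the definition above; this is a minimal sketch, and the function name is illustrative rather than taken from the notes:

```python
import math

def sample_correlation(x, y):
    """Plug-in estimate of rho: replace each expectation in the
    definition by the corresponding sample average."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data gives r = 1.
print(sample_correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # -> 1.0
```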
Lecture 13
3. Some methods based on ranks.
3.1. Justification for using ranks. We have already had a chance to outline the reasons
for using rank data in Nonparametric Statistics. We also outlined the main advantages of rank statistics. Briefly, they were:
Rank-statist
Lecture 5
8. Asymptotic properties of estimators
8.1. Why asymptotics?
We have realized that finding the UMVUE for a fixed sample size n could be difficult
in some cases, especially when the CR bound is not attainable. Finding them sometimes
requires more o
Lecture 9
An Introduction to the Bootstrap
1. Basic idea of bootstrap
In Statistical Inference we are learning from experience: we observe a random
sample x = (x1 , x2 , . . . , xn ) and wish to infer properties of the complete population X
that yielded t
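The resampling idea behind the bootstrap can be sketched in a few lines: draw B samples of size n with replacement from the observed data, recompute the statistic on each, and use the spread of the replicates to estimate its standard error. This is a minimal sketch under those standard assumptions, not the notation of the lecture:

```python
import random
import statistics

def bootstrap_se(sample, stat, B=2000, seed=0):
    """Bootstrap estimate of the standard error of stat(sample):
    resample n points with replacement B times and take the
    standard deviation of the recomputed statistic."""
    rng = random.Random(seed)
    n = len(sample)
    replicates = [stat([rng.choice(sample) for _ in range(n)]) for _ in range(B)]
    return statistics.stdev(replicates)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.3, 2.5, 3.9, 2.2, 3.1]
se_mean = bootstrap_se(data, statistics.mean)
print(round(se_mean, 3))  # roughly the sample sd divided by sqrt(n)
```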
Lecture 2
Some Principles in Statistical Inference
5. Data Reduction in Statistical Inference
Suppose a vector X = (X1 , X2 , . . . , Xn ) of n i.i.d. random variables, each with a density
f (x; θ), is to be observed, and inference on θ is based on the observations x1
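A concrete instance of data reduction: for i.i.d. Bernoulli(θ) observations the likelihood L(θ; x) = θ^s (1 - θ)^(n-s) depends on the data only through s = Σ x_i, so s is a sufficient statistic. The check below is a sketch assuming that standard Bernoulli model, not a worked example from the notes:

```python
def bernoulli_likelihood(theta, x):
    # Likelihood of an i.i.d. Bernoulli(theta) sample: depends on x
    # only through the number of successes s and the sample size n.
    s = sum(x)
    n = len(x)
    return theta ** s * (1 - theta) ** (n - s)

x1 = [1, 0, 1, 1, 0]   # s = 3
x2 = [0, 1, 1, 0, 1]   # s = 3, same sum in a different order
# Same sufficient statistic => same likelihood at every theta.
print(bernoulli_likelihood(0.4, x1) == bernoulli_likelihood(0.4, x2))  # -> True
```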
Lecture 14
4. Goodness of Fit
4.1. General remarks and motivation
Goodness of fit tests are a core part of the methods in Nonparametric Statistics. This is
explained by their essential applications in practice.
Suppose that we have a small sample from an
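One classical goodness-of-fit tool that fits this motivation is Pearson's chi-square statistic, Σ (O - E)^2 / E over the cells; the die-rolling data below are invented purely for illustration:

```python
def chi_square_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic:
    sum over cells of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Did 60 rolls come from a fair die? Expected count per face is 10.
observed = [8, 12, 9, 11, 10, 10]
expected = [10] * 6
t = chi_square_gof(observed, expected)
print(round(t, 2))  # -> 1.0, compared against chi-square with 5 df
```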
Lecture 6
9. Hypothesis testing
9.1. General formulation of the problem
9.1.1. Why are hypothesis testing and confidence set construction necessary?
Assume X = (X1 , X2 , . . . , Xn ) are i.i.d. from f (x, θ), θ ∈ Θ. Point estimation of θ will give
an estimated value o
Lecture 10
An Introduction to Robustness
1. Basic idea of robustness
Throughout this course, we have studied theories about how to construct optimal procedures (be they Likelihood-based or Bayesian) when a certain parametric model F (X, θ)
is given. These the
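The basic point can be seen in one small numerical experiment: a single gross outlier drags the sample mean far from the bulk of the data, while the median barely moves. The numbers below are invented for illustration:

```python
import statistics

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [100.0]   # one gross outlier

# Mean is not robust: one bad point shifts it from 10 to 25.
# Median is robust: it moves only from 10.0 to 10.05.
print(statistics.mean(clean), statistics.median(clean))
print(statistics.mean(contaminated), statistics.median(contaminated))
```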
Lecture 1
The Subject of Statistical Inference
1. Sampling from a population
The purpose of Statistical Inference is to draw relevant conclusions from given data.
These conclusions may be about predicting further outcomes, evaluating risks of events,
test
Lecture 3
6.3. Maximum Likelihood Estimation: introduction
We have discussed some important principles, sufficiency and the likelihood principle, on which
inference should be based. Each of these looks reasonable as a principle, but it does not
give us a constructi
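Maximum likelihood does give such a construction: choose the parameter value maximizing the (log-)likelihood of the observed data. A minimal sketch for the Bernoulli model, where the MLE has the closed form s/n, so a crude grid search can be checked against it (the grid approach is illustrative, not the course's method):

```python
import math

def loglik(theta, x):
    # Log-likelihood of an i.i.d. Bernoulli(theta) sample.
    s, n = sum(x), len(x)
    return s * math.log(theta) + (n - s) * math.log(1 - theta)

x = [1, 0, 1, 1, 0, 1, 0, 1]                     # s = 5, n = 8
grid = [i / 1000 for i in range(1, 1000)]        # theta in (0, 1)
theta_hat = max(grid, key=lambda t: loglik(t, x))
print(theta_hat)  # -> 0.625, i.e. the closed-form MLE s/n
```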
Lecture 12
ORDER STATISTICS
2.3. Distributions related to order statistics
Let X be a random variable with a density fX (x) and a cumulative distribution
function FX (x) and let there be n independent copies X1 , X2 , . . . , Xn of X.
Theorem 2.3.1 The jo
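The excerpt cuts off before the theorem, but a well-known special case of such distributional results is easy to check by simulation: for n i.i.d. Uniform(0,1) variables, the j-th order statistic X_(j) has a Beta(j, n - j + 1) distribution, hence mean j/(n + 1). The simulation below is a sketch of that standard fact, not the theorem from the notes:

```python
import random

# For n i.i.d. Uniform(0,1) variables, X_(j) ~ Beta(j, n - j + 1),
# so E[X_(j)] = j / (n + 1). Check the mean for n = 5, j = 2.
rng = random.Random(42)
n, j, reps = 5, 2, 20000
vals = [sorted(rng.random() for _ in range(n))[j - 1] for _ in range(reps)]
m = sum(vals) / reps
print(round(m, 3))  # should be close to 2 / 6 = 0.333
```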
Lecture 15
4.3.5. Contingency tables. Two major situations.
Suppose we have data presented in the form of a contingency table with r
rows and c columns. One may think of at least two illustrative and most commonly met
situations where such data
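In either situation the classical analysis uses Pearson's chi-square statistic with expected counts E_ij = (row_i total)(col_j total)/(grand total) and (r - 1)(c - 1) degrees of freedom. A self-contained sketch on an invented 2 x 2 table:

```python
def chi_square_independence(table):
    """Pearson chi-square statistic for an r x c contingency table:
    E_ij = row_i total * col_j total / grand total,
    statistic = sum (O_ij - E_ij)^2 / E_ij, df = (r-1)(c-1)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / grand
            stat += (o - e) ** 2 / e
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Invented 2 x 2 table: rows = treatment, columns = outcome.
stat, df = chi_square_independence([[20, 30], [30, 20]])
print(round(stat, 2), df)  # -> 4.0 1
```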
Lecture 8
10. Bayesian Inference
10.1. Reasons and goals in Bayesian Inference. In our considerations until now, we
have assumed that the parameter θ in the density f (x; θ) was fixed (although unknown
to us). In some real-world situations in which the density
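The standard first example of treating θ as random is the conjugate Beta-Bernoulli pair: a Beta(a, b) prior on θ combined with s successes in n Bernoulli trials gives a Beta(a + s, b + n - s) posterior. A sketch of that well-known update (the data are invented):

```python
def beta_bernoulli_posterior(a, b, x):
    # Prior Beta(a, b), data x with s successes in n trials
    # => posterior Beta(a + s, b + n - s), by conjugacy.
    s, n = sum(x), len(x)
    return a + s, b + n - s

# Uniform prior Beta(1, 1), then observe 5 successes in 7 trials.
a_post, b_post = beta_bernoulli_posterior(1, 1, [1, 1, 0, 1, 0, 1, 1])
print(a_post, b_post)              # -> 6 3
print(a_post / (a_post + b_post))  # posterior mean 6/9 ~ 0.667
```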
Lecture 7
9.8. Generalized Likelihood Ratio Tests
In 9.1-9.7 we have been concerned with defining optimality of tests and defending optimality properties of specifically constructed tests. We have also seen that uniformly most
powerful tests in the set of
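In the simplest textbook case the GLR statistic has a closed form: testing H0: μ = μ0 for X_i i.i.d. N(μ, 1), the unrestricted MLE is the sample mean, and -2 log Λ = n(x̄ - μ0)^2, approximately chi-square(1) under H0. The numbers below are invented for illustration:

```python
def glrt_normal_mean(x, mu0):
    # For X_i ~ N(mu, 1): -2 log Lambda
    #   = sum (x_i - mu0)^2 - sum (x_i - xbar)^2 = n * (xbar - mu0)^2,
    # since the unrestricted MLE of mu is xbar.
    n = len(x)
    xbar = sum(x) / n
    return n * (xbar - mu0) ** 2

x = [0.5, 1.2, 0.8, 1.5, 1.0]          # xbar = 1.0
print(round(glrt_normal_mean(x, 0.0), 2))  # n * (xbar - mu0)^2 = 5.0
```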
Lecture 11
NONPARAMETRIC STATISTICAL INFERENCE
1. Motivation for using Nonparametric Inference.
1.1. General purpose of Nonparametric Procedures
There are several reasons that can be pointed out as motivation for using Nonparametric
Procedures. Let us ment
Lecture 16
5.1. K-sample problems (K > 2).
The procedures in this lecture are generalizations of the corresponding procedures for comparing K = 2 groups of samples. A distinction should be made between the
cases of related and unrelated (independe
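For the unrelated (independent) case, one standard rank-based K-sample procedure is the Kruskal-Wallis test; whether this lecture presents exactly this form is an assumption, and the sketch below implements the no-ties version of the statistic:

```python
def kruskal_wallis(groups):
    """Kruskal-Wallis statistic for K independent samples (no ties):
    H = 12 / (N(N+1)) * sum_k n_k * (Rbar_k - (N+1)/2)^2,
    where Rbar_k is the mean rank of group k in the pooled sample."""
    pooled = sorted((v, k) for k, g in enumerate(groups) for v in g)
    ranks = {}
    for rank, (v, k) in enumerate(pooled, start=1):
        ranks.setdefault(k, []).append(rank)
    N = len(pooled)
    h = 0.0
    for k, g in enumerate(groups):
        rbar = sum(ranks[k]) / len(g)
        h += len(g) * (rbar - (N + 1) / 2) ** 2
    return 12 / (N * (N + 1)) * h

# K = 3 fully separated groups (invented data): mean ranks 2, 5, 8.
groups = [[1.1, 2.3, 1.9], [3.4, 4.1, 3.8], [5.5, 6.0, 5.2]]
print(round(kruskal_wallis(groups), 2))  # -> 7.2
```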
Lecture 4
7. Classical Estimation Theory
7.1. Cramér-Rao Inequality
Obtaining a point estimator of the parameter of interest is usually the first step in inference. Suppose X = (X1 , X2 , . . . , Xn ) are i.i.d. from f (x, θ), θ ∈ R, and we use a statistic
Tn (X) to
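The bound can be checked numerically in the classic case: for X_i i.i.d. N(θ, σ²) with σ known, the Fisher information in n observations is n/σ², so the Cramér-Rao lower bound for an unbiased estimator of θ is σ²/n, and the sample mean attains it. A simulation sketch (parameter values are invented):

```python
import random

# Empirical variance of the sample mean vs. the CR bound sigma^2 / n.
rng = random.Random(1)
theta, sigma, n, reps = 2.0, 1.0, 25, 4000
means = [sum(rng.gauss(theta, sigma) for _ in range(n)) / n for _ in range(reps)]
emp_var = sum((m - theta) ** 2 for m in means) / reps
print(round(emp_var, 4), "CR bound:", sigma ** 2 / n)  # both near 0.04
```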