18.443. Practice test 1.
Consider the family of distributions with p.d.f.
f (x|θ) = θx^(θ−1) , for 0 < x < 1, and θ > 0.    (1)
Consider an i.i.d. sample X1 , . . . , Xn from this distribution. As always, the
un
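Although the preview cuts off here, the maximum likelihood estimator for this family has a closed form that is easy to check numerically: the log-likelihood is n log θ + (θ−1) Σ log Xi, so θ̂ = −n / Σ log Xi. A minimal Python sketch (a stand-in for the course's Matlab; the true θ = 3, sample size, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 3.0
n = 10_000
# Inverse-CDF sampling: the CDF is F(x) = x**theta on (0,1),
# so U**(1/theta) with U ~ Uniform(0,1) has density theta * x**(theta-1)
X = rng.uniform(size=n) ** (1.0 / theta_true)

# Setting the derivative of the log-likelihood to zero:
# n/theta + sum(log X) = 0  =>  theta_hat = -n / sum(log X)
theta_hat = -n / np.log(X).sum()
print(theta_hat)
```

With n = 10,000 draws the estimate lands close to the true θ = 3.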
Section 7
Testing hypotheses about parameters
of normal distribution.
T-tests and F-tests.
We will postpone a more systematic approach to hypotheses testing until the following
lectures and in this le
Section 14
Simple linear regression.
Let us look at the cigarette dataset from [1] (available to download from journals website)
and [2]. The cigarette dataset contains measurements of tar, nicotine,
Section 10
Chi-squared goodness-of-fit test.
Example. Let us start with a Matlab example. Let us generate a vector X of 100 i.i.d.
uniform random variables on [0, 1] :
X=rand(100,1).
The parameters (100, 1) mean that we generate a 100 × 1 column vector.
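The same experiment can be mirrored in Python (used here as a stand-in for Matlab; the choice of 10 equal bins and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=100)  # analogue of Matlab's rand(100,1)

# Split [0,1] into 10 equal cells; under H0: Uniform(0,1) each cell
# has expected count 100/10 = 10
observed, _ = np.histogram(X, bins=10, range=(0.0, 1.0))
expected = np.full(10, 10.0)

# Chi-squared statistic: sum of (O - E)^2 / E over the cells.
# Under H0 it is approximately chi-squared with 10 - 1 = 9 degrees
# of freedom; the 5% critical value is about 16.92
chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(chi2_stat)
```

Comparing the statistic to the χ²₉ critical value decides whether to reject uniformity at the 5% level.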
Section 12
Tests of independence and
homogeneity.
In this lecture we will consider a situation when our observations are classified by two different
features and we would like to test if these features ar
Section 15
Multiple linear regression.
Let us consider a model
Yi = β1 Xi1 + . . . + βp Xip + εi
where random noise variables ε1 , . . . , εn are i.i.d. N(0, σ²). We can write this in a matrix
form
Y = Xβ + ε
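A numeric sketch of this model in Python (dimensions, coefficients, noise level, and seed are all made up for illustration): the least-squares estimate solves min ‖Y − Xβ‖², i.e. β̂ = (XᵀX)⁻¹XᵀY.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = rng.normal(size=(n, p))           # design matrix, n x p
beta = np.array([1.0, -2.0, 0.5])     # true coefficients (hypothetical)
sigma = 0.1
Y = X @ beta + rng.normal(scale=sigma, size=n)  # Y = X beta + eps

# Least-squares estimate beta_hat = (X^T X)^{-1} X^T Y,
# computed stably via numpy's lstsq rather than an explicit inverse
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat)
```

With small noise the estimate recovers the chosen coefficients closely.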
Section 13
Kolmogorov-Smirnov test.
Suppose that we have an i.i.d. sample X1 , . . . , Xn with some unknown distribution P and we
would like to test the hypothesis that P is equal to a particular dist
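For a continuous hypothesized CDF F, the Kolmogorov-Smirnov statistic Dn = sup_x |Fn(x) − F(x)| is attained at the order statistics, which makes it easy to compute directly. A Python sketch for testing H0: P = Uniform(0, 1), so F(x) = x (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = np.sort(rng.uniform(size=n))  # order statistics X(1) <= ... <= X(n)

# At the order statistics the empirical CDF jumps from (i-1)/n to i/n,
# so D_n = max_i max( i/n - F(X(i)), F(X(i)) - (i-1)/n ) with F(x) = x
i = np.arange(1, n + 1)
D_n = np.maximum(i / n - X, X - (i - 1) / n).max()

# Under H0, sqrt(n)*D_n approximately follows the Kolmogorov distribution;
# the 5% critical value of sqrt(n)*D_n is about 1.36
print(D_n, np.sqrt(n) * D_n)
```

Since the data really are uniform here, √n·Dn typically falls below the 1.36 critical value.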
Lecture 6
Gamma distribution, χ²-distribution,
Student t-distribution,
Fisher F -distribution.
Gamma distribution. Let us take two parameters α > 0 and β > 0. The Gamma function
Γ(α) is defined by
Γ(α) = ∫₀^∞ x^(α−1) e^(−x) dx.
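The defining integral can be sanity-checked against two standard identities, Γ(n) = (n−1)! for integer n (which follows from Γ(α+1) = αΓ(α)) and Γ(1/2) = √π, e.g. in Python:

```python
import math

# Gamma(alpha) = integral_0^inf x^(alpha-1) e^(-x) dx
# For integer n: Gamma(n) = (n-1)!, by repeated use of Gamma(a+1) = a*Gamma(a)
g5 = math.gamma(5)            # should equal 4! = 24
# Gamma(1/2) = sqrt(pi), so its square is pi
g_half_sq = math.gamma(0.5) ** 2
print(g5, g_half_sq)
```

These identities are the ones used later to relate the Gamma family to the χ²-distribution.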
Lecture 2
Maximum Likelihood Estimators.
Matlab example. As a motivation, let us look at one Matlab example. Let us generate
a random sample of size 100 from beta distribution Beta(5, 2). We will lear
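The analogous sample can be drawn in Python (a stand-in for Matlab's betarnd; the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
# Analogue of Matlab's betarnd(5,2,100,1): 100 draws from Beta(5, 2)
X = rng.beta(5, 2, size=100)

# Beta(a, b) has mean a/(a+b); here 5/7, roughly 0.714,
# so the sample mean should land nearby
print(X.mean())
```

All draws lie in (0, 1), and the sample mean is close to the theoretical mean 5/7.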
Lecture 4
Multivariate normal distribution and
multivariate CLT.
We start with several simple observations. If X = (x1 , . . . , xk )T is a k × 1 random vector
then its expectation is
EX = (Ex1 , . . . , Exk )T .
Lecture 3
Properties of MLE: consistency,
asymptotic normality.
Fisher information.
In this section we will try to understand why MLEs are good.
Let us recall two facts from probability that we will use
Section 8
Testing simple hypotheses. Bayes
decision rules.
Let us consider an i.i.d. sample X1 , . . . , Xn ∈ X with unknown distribution P on X . Suppose
that the distribution P belongs to a set of k s
Lecture 5
Confidence intervals for parameters of
normal distribution.
Let us consider a Matlab example based on the dataset of body temperature measurements
of 130 individuals from the article [1]. The
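Since the article's dataset is not reproduced in this preview, the construction can still be illustrated on simulated stand-in data: a 95% confidence interval for the mean of a normal sample is x̄ ± t·s/√n. A Python sketch (the mean 98.25, standard deviation 0.73, and seed are invented for illustration, not the article's values):

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated stand-in for a body-temperature sample of 130 individuals
X = rng.normal(loc=98.25, scale=0.73, size=130)

n = X.size
x_bar = X.mean()
s = X.std(ddof=1)                 # sample standard deviation
t_crit = 1.9785                   # approx. 97.5th percentile of t with 129 d.f.
half_width = t_crit * s / np.sqrt(n)
print(x_bar - half_width, x_bar + half_width)
```

The interval is centered at the sample mean with half-width about t·s/√n, shrinking at rate 1/√n.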
18.443. Practice test 2.
(1) Given a sample 5, 1, 4, 1, 2, 3 from Poisson distribution (), construct
the most powerful test for
H0 : = 1 vs. H1 : = 2,
with level of significance α = 0.05. Test H0 .
(2) p
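Problem (1) above can be checked numerically. By the Neyman-Pearson lemma the likelihood ratio for Poisson samples is monotone in T = ΣXi, so the most powerful test of λ = 1 vs. λ = 2 rejects for large T; under H0, T ~ Poisson(n·1) = Poisson(6). A Python sketch:

```python
import math

X = [5, 1, 4, 1, 2, 3]
n, T = len(X), sum(X)  # T = 16

def poisson_tail(c, mu):
    # P(T >= c) = 1 - P(T <= c-1) for T ~ Poisson(mu)
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(c))

# Smallest c with P(T >= c | H0) <= 0.05 gives a level-0.05 rejection region
c = 0
while poisson_tail(c, 6.0) > 0.05:
    c += 1

print(c, T, T >= c)  # reject H0 when T >= c
```

The critical value comes out to c = 11, and since T = 16 ≥ 11 the test rejects H0 at level 0.05.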
Section 11
Goodness-of-fit for composite
hypotheses.
Example. Let us consider a Matlab example. Let us generate 50 observations from N(1, 2):
X=normrnd(1,2,50,1);
Then, running a chi-squared goodness-of-fit
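The generation step has a direct Python analogue (note that, like Matlab's normrnd, the second argument is the standard deviation, not the variance; the seed is arbitrary). For a composite hypothesis the unknown parameters are first estimated from the sample:

```python
import numpy as np

rng = np.random.default_rng(5)
# Analogue of Matlab's normrnd(1,2,50,1): 50 draws from N(mean=1, sd=2)
X = rng.normal(loc=1.0, scale=2.0, size=50)

# For the composite hypothesis "P is some normal distribution",
# estimate the parameters before forming the chi-squared statistic
mu_hat = X.mean()
sigma_hat = X.std(ddof=1)
print(mu_hat, sigma_hat)
```

The estimated μ̂ and σ̂ then define the cell probabilities used in the goodness-of-fit statistic, with the degrees of freedom reduced by the number of estimated parameters.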