Sampling Distributions
Utku Suleymanoglu
UMich

Utku Suleymanoglu (UMich) Sampling Distributions 1 / 21

Introduction

[Diagram: a Population of interest (e.g. 20-23 year old Americans who are not in school) is sampled via a RANDOM DRAW, giving a Sample, i.e. the data (e.g. n = 100 randomly picked 20-23 year old Americans who are not in school). From the sample we compute a Sample Statistic (e.g. the average yearly income in the sample): the SAMPLE MEAN (x̄). The sample mean serves as an ESTIMATOR of the Population Parameter (e.g. the average yearly income): the POPULATION MEAN (µ).]

Population, Sample, Parameter and Estimator

Population: The group of objects we want to learn something about, but cannot observe in full.
Parameter: A particular characteristic of the population we are interested in (µ, σ²). Because we never observe the whole population, we never know the true value of population parameters.
Sample: The subset of the population we can observe.
Estimator: A rule or formula that uses sample values to produce a value approximating the population parameter.
The estimator we will focus on for now is the sample mean. Under some conditions it successfully approximates the population mean:

x̄ = Σ_{i=1}^n xi / n = (x1 + x2 + · · · + xn)/n

Notice how x̄ is a function of the particular sample values we have.

Sampling

We want x̄ to be a good guess of the unknown µ. So it is important how the x's are selected.
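As a quick illustration, the sample mean is nothing more than the formula above applied to whatever sample we drew. A minimal sketch in Python, with made-up income numbers:

```python
# A hypothetical sample of n = 5 yearly incomes (the x_i's); any sample would do.
sample = [41_500.0, 38_200.0, 52_750.0, 29_900.0, 47_100.0]

n = len(sample)
x_bar = sum(sample) / n  # x̄ = (x1 + x2 + ... + xn) / n

print(f"n = {n}, sample mean = {x_bar:.2f}")  # a point estimate of µ
```

A different sample would give a different x̄, which is the recurring theme of this chapter.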
Drawing a sample from a population is called sampling. So what is the best way to sample?
That is a course on its own. We will focus on the basic but most important way: random sampling.
Simple Random Sample: For finite populations. Each population member has the same probability of being chosen.
Random Sample: For infinite populations. Each draw is independent of the others.
For most of our purposes we can think of random sampling as randomly picking independent numbers from a population with some distribution (not known) with mean µ and variance σ² (neither is known).

Point Estimation
We talked about population parameters and estimators. We call this process of getting a guess for the population parameter via the use of an estimator point estimation, as opposed to interval estimation (we will get to this). It is called point estimation because you get one number as a guess.
Point Estimation
A point estimator is a rule or formula which produces a point estimate for the unknown population parameter. A (point) estimate is the numerical value produced by the estimator with a specific sample: the estimate is the guess produced by applying the estimator to the sample values at hand.
An estimator is a formula or rule; an estimate is a number.
The same estimator produces different estimates with different samples drawn from the same population.
Example: x̄ for the heights in this classroom (not a random sample, though) is an estimate of the average height of American youth. You go to the next classroom and you get a different x̄. Different sample, different number . . .
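The "different sample, different number" point is easy to see by simulation. A sketch, assuming a made-up population of heights and an arbitrary seed (neither comes from the slides):

```python
import random

random.seed(2)  # any seed; just makes the illustration reproducible

# Simulated population of 10,000 "heights" (cm); in reality we never observe this.
population = [random.gauss(170, 8) for _ in range(10_000)]

# The same estimator (the sample mean) applied to two different random samples.
sample_a = random.sample(population, 50)
sample_b = random.sample(population, 50)
x_bar_a = sum(sample_a) / len(sample_a)
x_bar_b = sum(sample_b) / len(sample_b)

print(f"estimate from sample A: {x_bar_a:.2f}")
print(f"estimate from sample B: {x_bar_b:.2f}")
# The two estimates differ, yet both approximate the same population mean.
```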
Examples for Estimators

1. Another estimator of µ: x̃ = (xmin + xmax)/2. There could be different estimators for the same population parameter. Choosing between them is a part of statistics. For now, trust me: x̄ is "better" than x̃.

2. Another population parameter one might be interested in, and its estimator: σ² and s² = Σ(xi − x̄)²/(n − 1).

3. Yet another population parameter: say you are wondering about the proportion of left-handed people in the population. Let's call it p = (no. of left-handed in population)/N. An estimator for p could be p̂ = (no. of left-handed in the sample)/n.

Estimators as Random Variables

Estimators are formulas or rules that use sample values.
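The estimators listed above are all short formulas on the sample. A sketch with hypothetical data (the numbers and the 0/1 left-handedness indicators are invented for illustration):

```python
# Hypothetical sample values for the numeric estimators.
xs = [4.0, 7.0, 5.0, 9.0, 5.0]
n = len(xs)
x_bar = sum(xs) / n                               # sample mean x̄
x_tilde = (min(xs) + max(xs)) / 2                 # midrange estimator x̃
s2 = sum((x - x_bar) ** 2 for x in xs) / (n - 1)  # sample variance s²

# Hypothetical 0/1 indicators of left-handedness for another sample.
is_left_handed = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
p_hat = sum(is_left_handed) / len(is_left_handed)  # sample proportion p̂

print(f"x̄ = {x_bar}, x̃ = {x_tilde}, s² = {s2}, p̂ = {p_hat}")
```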
They take different values for different samples of the same variable from the same population. Different estimates . . .
If we randomly pick samples from a population, then the outcome of the estimator formula is random.
Therefore, estimators are random variables.
What do random variables have? Distributions, means and variances . . .
For a given estimator, such as x̄, we can figure out what the mean and the variance are, and even what the distribution is.
We will focus on x̄. Let's change the notation so that we know we're dealing with a random variable: X̄ is a random variable, and x̄ is a realization of it for a given random sample.

Random Variables and Sampling Distributions
The way we will think about random sampling is as follows:
The aspect of the population that we are interested in (e.g. incomes or heights of people) is generated by a random variable, X.
This random variable has an unknown distribution (maybe normal, maybe not).
This random variable has a mean µ and variance σ².
We are interested in producing a guess for µ. The expected value of the random variable, µ, is the population parameter we are interested in.
We randomly choose a random sample from the population. Each of the n values we draw is a realization of X.
So each number in your sample comes from this random variable with mean µ and variance σ².
Another way to think about them: as a collection of n random variables which are identical to X.
The X̄ we calculate is a weighted sum of n random variables, so it is a random variable itself.

Mean of X̄

Remember X̄ = Σ_{i=1}^n Xi / n = (X1 + X2 + · · · + Xn)/n. We can take the expectation of X̄:

µ_X̄ = E(X̄) = E((X1 + X2 + · · · + Xn)/n)
            = E((1/n)X1 + (1/n)X2 + · · · + (1/n)Xn)
            = (1/n) E(X1 + X2 + · · · + Xn)
            = (1/n) (E(X1) + E(X2) + · · · + E(Xn))
            = (1/n) (µ + µ + · · · + µ)
            = (1/n) (n × µ)
            = µ

So the expected value of the sample mean is the population mean if we do random sampling.

A key thing here: we don't know what µ is. So how can the expected value of X̄ equal a usually unknown quantity, and how can we claim to know this?
Notice that it does not say some x̄ = µ; it says the expected value of x̄ equals µ.

This result, E(X̄) = µ, tells us: if we keep randomly drawing samples from this population of interest and building x̄ estimates, the average of these different x̄'s will tend to µ as we get more and more samples and produce estimates.

→ Some x̄ estimate we have, say 21.54, is not exactly µ, but it is the best we can do with one single sample. What we know is: the way we form our estimate is good.
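This claim can be checked by simulation: draw many samples, compute an x̄ from each, and average the estimates. A sketch, assuming a simulated normal population whose µ we happen to know (only because we built it ourselves):

```python
import random

random.seed(1)
mu, sigma = 50.0, 10.0  # known only because this population is simulated

def draw_x_bar(n=25):
    """Draw one random sample of size n and return its sample mean."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

# Many repeated samples -> many different estimates x̄.
estimates = [draw_x_bar() for _ in range(20_000)]
average_of_estimates = sum(estimates) / len(estimates)

print(f"average of 20,000 x̄'s: {average_of_estimates:.3f} (µ = {mu})")
# Any single x̄ misses µ, but the average of the x̄'s sits very close to it: E(X̄) = µ.
```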
In general, a point estimator whose mean equals the population parameter is called an unbiased estimator. X̄ is an unbiased estimator of µ. We will come back to this.

Variance of X̄
Again, starting from X̄ = Σ_{i=1}^n Xi / n = (X1 + X2 + · · · + Xn)/n, we can take the variance of X̄. Note that there is no covariance term, because random sampling ensures the covariances are zero.

σ²_X̄ = V(X̄) = V((X1 + X2 + · · · + Xn)/n)
             = V((1/n)X1 + (1/n)X2 + · · · + (1/n)Xn)
             = (1/n²) V(X1 + X2 + · · · + Xn)
             = (1/n²) (V(X1) + V(X2) + · · · + V(Xn))
             = (1/n²) (σ² + σ² + · · · + σ²)
             = (1/n²) (n × σ²)
             = σ²/n

σ_X̄ = σ/√n is called the standard error of the mean.

If the population size N is finite and not too large compared to n, a correction is required:

σ_X̄ = √((N − n)/(N − 1)) · σ/√n.

(You are not responsible for this last sentence in the exam.)

σ_X̄ = σ/√n

Think about the relationship between the variance of x̄ and the sample size!
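A simulation sketch of that relationship, with invented population parameters: the empirical standard deviation of x̄ across repeated samples tracks σ/√n and shrinks as n grows.

```python
import random, statistics

random.seed(3)
mu, sigma = 0.0, 12.0  # invented population parameters for the demo

def sd_of_sample_means(n, reps=4000):
    """Empirical standard deviation of x̄ across many samples of size n."""
    means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(reps)]
    return statistics.pstdev(means)

for n in (4, 16, 64):
    print(f"n = {n:3d}: empirical SE = {sd_of_sample_means(n):.3f}, "
          f"theory σ/√n = {sigma / n ** 0.5:.3f}")
```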
We are trying to estimate µ with X̄.
We know our estimator is unbiased, so it is aiming at the correct target.
What does σ_X̄ measure?
How dispersed the x̄ values are around µ!
Do you want a small SE or a big one?
The SE of the mean determines the precision of X̄. It depends on two things:
  σ²: the variance of the observations. We usually don't observe this either, and even if we do, we can't change it to make our estimates more precise.
  n: the sample size. More observations, more info, more precision.

Distribution of the Random Variable X̄
We now know that X̄ has a mean of µ and a variance of σ²/n. Can we figure out its distribution?

If X is normal: if the data are produced by a normal distribution, that is, the population has a normal distribution, then X̄ has a normal distribution too. Why? A linear combination of independent normal random variables is itself normal.

If X is not normal: if the population distribution is not normal, we don't have a normal distribution for x̄. But as n gets larger, things change . . .
Central Limit Theorem
If a sample of size n is drawn from any distribution with mean µ and variance σ², then X̄ is approximately distributed N(µ, σ²/n) for large n. In other words:

Z = √n(x̄ − µ)/σ = (x̄ − µ)/√(σ²/n)

approaches N(0, 1) as n → ∞.

Practical meaning: regardless of the distribution of values in the population, if n is large enough (> 30) we can approximate the distribution of the sample mean with a normal distribution.
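A simulation sketch of the theorem's practical meaning, using an exponential population (my choice of a clearly non-normal distribution; not from the slides): the standardized sample means behave roughly like N(0, 1).

```python
import random, statistics

random.seed(7)
# Exponential population: skewed, definitely not normal. Mean µ = 2, sd σ = 2.
mu = sigma = 2.0
n, reps = 40, 10_000

z_values = []
for _ in range(reps):
    sample = [random.expovariate(1 / mu) for _ in range(n)]
    x_bar = sum(sample) / n
    z_values.append((x_bar - mu) / (sigma / n ** 0.5))  # Z = √n(x̄ − µ)/σ

# If the CLT approximation is good, Z ≈ N(0,1): mean ~0, sd ~1, ~95% within ±1.96.
share_within = sum(abs(z) < 1.96 for z in z_values) / reps
print(f"mean(Z) = {statistics.mean(z_values):.3f}")
print(f"sd(Z)   = {statistics.pstdev(z_values):.3f}")
print(f"P(|Z| < 1.96) ≈ {share_within:.3f}")
```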
Finally, an Example

In this chapter, we will assume knowledge of µ and σ² of the population and figure out the distribution of x̄.
Statistics is about estimating population parameters, so usually we don't know µ or σ². We are mostly building up our arsenal for the next chapters. Here is an example where it is plausible to know the population parameters:

Suppose we have a random sample of 100 Michigan students and we observe their GPAs. It is known that in the population (the university) the average GPA is 3 and the standard deviation of GPA is 0.4.

What is the probability of a randomly picked student having a GPA above 3.1? Can you figure this out? NO, not without further information about the population distribution of GPAs: I did not tell you what distribution the population has.

But you can figure out the probability of the sample mean of 100 students' GPAs exceeding 3.1! What are the sampling mean and the variance of x̄?
E(X̄) = µ = 3, and the standard error is σ_X̄ = σ/√n = 0.4/10 = 0.04. So the average of the GPAs for a sample of 100 students is distributed with mean 3 and standard error 0.04.

With n = 100 being large enough, the CLT tells us the sample mean will be approximately normally distributed, even if the GPAs are not normally distributed. So we can calculate the probability:

P(X̄ > 3.1) = 1 − P(X̄ < 3.1)
            = 1 − P(Z < (3.1 − 3)/0.04) = 1 − P(Z < 2.5)
            = 1 − 0.9938 = 0.0062

Exercise: Now assume that the population distribution is also normal. Calculate P(X > 3.1). Explain the source of the difference.

The Sample Proportion P̂ as an Estimator
Say there is a binary characteristic: being left-handed, being a woman, being from Sri Lanka, being an American-made car . . .

And you have a sample of subjects with many characteristics. Let's focus on the male vs. female issue. You want to estimate the proportion of female students among all college students in the US, and you have a sample of 100 students, 58 of which are female. What is your best guess for the proportion?
Sample Proportion Estimator
The proportion of individuals with a characteristic in a population, p, is estimated by the sample proportion estimator, which simply calculates the proportion of that characteristic in the sample:

p̂ = C/n

where C is the number of subjects in the sample observed to have the characteristic in question.

Sampling Distribution of P̂

Like X̄, P̂ changes from sample to sample, so it is a random variable.
P̂ has a sampling mean of E(P̂) = p and a sampling variance of V(P̂) = p(1 − p)/n, where p is the population proportion.

Furthermore, the CLT is helpful for sample proportions: as long as np(1 − p) ≥ 5, you can approximate the distribution of P̂ with a normal distribution with mean p and variance p(1 − p)/n.
In class, we talked about how the population values cannot be considered normally distributed in this case; they are in fact distributed with a Bernoulli distribution. This makes C a Binomial(n, p) random variable. Using this, we showed how the E(P̂) and V(P̂) formulas above were derived.

(Why does the CLT work here — is P̂ a sample mean? See the problem set question.)
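These facts can be checked by simulation. A sketch with an arbitrary p = 0.2 and n = 100: C behaves as Binomial(n, p), and P̂ = C/n averages at p with variance close to p(1 − p)/n.

```python
import random, statistics

random.seed(5)
p, n, reps = 0.2, 100, 20_000  # arbitrary illustration values

# Each P̂ is C/n, where C counts "successes" among n Bernoulli(p) draws.
p_hats = []
for _ in range(reps):
    c = sum(random.random() < p for _ in range(n))  # C ~ Binomial(n, p)
    p_hats.append(c / n)

print(f"mean of P̂'s: {statistics.mean(p_hats):.4f}    (theory: p = {p})")
print(f"var  of P̂'s: {statistics.pvariance(p_hats):.6f} (theory: p(1-p)/n = {p * (1 - p) / n:.6f})")
print(f"np(1-p) = {n * p * (1 - p):.1f}, so the normal approximation applies")
```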
Example

If you know that 20% of all university students come from families with no university education, what is the probability that a randomly chosen sample of 100 students has a sample proportion between 16% and 24%?

(There is another way of solving this question. See the homework.)
We know that we can approximate the distribution of P̂ with N(p, p(1 − p)/100) = N(0.2, 0.0016) as long as np(1 − p) ≥ 5. Then we calculate P(0.16 < P̂ < 0.24) approximately:

P(0.16 < P̂ < 0.24) ≈ P((0.16 − 0.2)/0.04 < Z < (0.24 − 0.2)/0.04)
                    = P(−1 < Z < 1) = F(1) − F(−1)
                    = 0.8413 − 0.1587 = 0.6826

Unbiased Estimators

As mentioned before, an estimator is unbiased if it averages at the true population parameter with repeated sampling:
Unbiasedness
Suppose θ̂ is an estimator for the true population parameter θ. It is called unbiased if E(θ̂) = θ. If this is not the case, it is called a biased estimator and it produces biased estimates. E(θ̂) − θ is called the bias.

Unbiasedness of an estimator is a desired property, but not the only one.

Efficiency of an Estimator

If we have two unbiased estimators, we can compare their precision by comparing their
sampling variances. The one with the smaller SE measures the population parameter
more precisely. It is said to be a more efficient estimator.
Efficiency of an Estimator
If two estimators θ̂ and θ̃ are both unbiased estimators of θ, the one with the smaller variance (standard error) is called a (relatively) more efficient estimator.

Think about x̄ and x̃ = (xmin + xmax)/2. They are both unbiased estimators of µ, but we usually use x̄ because it has a smaller standard error.
V(x̃) = (1/4) V(xmin + xmax) = σ²/2 > σ²/n = V(x̄)

Intuition: x̄ uses more info, more precise.
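The efficiency comparison can also be run as a simulation. A sketch, assuming a normal population (my own setup; for symmetric populations both estimators aim at µ): across repeated samples, x̄ is less spread out than the midrange x̃.

```python
import random, statistics

random.seed(11)
mu, sigma, n, reps = 5.0, 2.0, 15, 10_000  # invented population and sample size

means, midranges = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(xs) / n)                  # x̄ for this sample
    midranges.append((min(xs) + max(xs)) / 2)  # x̃ for the same sample

print(f"sd of x̄ across samples: {statistics.pstdev(means):.3f}")
print(f"sd of x̃ across samples: {statistics.pstdev(midranges):.3f}")
# x̄ varies less sample-to-sample: it is the (relatively) more efficient estimator.
```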