# Lecture 8 - Sample Statistics


A random sample of size $n$ from a distribution $f(x)$ is a set of $n$ random variables $x_1, x_2, \ldots, x_n$ which are independently and identically distributed with $x_i \sim f(x)$ for all $i$. Thus, the joint p.d.f. of the random sample is

$$f(x_1, x_2, \ldots, x_n) = f(x_1) f(x_2) \cdots f(x_n) = \prod_{i=1}^{n} f(x_i).$$

A statistic is a function of the random variables of the sample, also known as the sample points. Examples are the sample mean $\bar{x} = \sum x_i / n$ and the sample variance $s^2 = \sum (x_i - \bar{x})^2 / n$.

A random sample may be regarded as a microcosm of the population from which it is drawn. Therefore, we might attempt to estimate the moments of the population's p.d.f. $f(x)$ by the corresponding sample statistics. To determine the worth of such estimates, we may determine their expected values and their variances. Beyond finding these simple measures, we might endeavour to find the distributions of the statistics, which are described as their sampling distributions.

We can show, for example, that the mean $\bar{x}$ of a random sample is an unbiased estimate of the population moment $\mu = E(x)$, since

$$E(\bar{x}) = E\left( \sum \frac{x_i}{n} \right) = \frac{1}{n} \sum E(x_i) = \frac{n}{n}\,\mu = \mu.$$

Its variance is

$$V(\bar{x}) = V\left( \sum \frac{x_i}{n} \right) = \frac{1}{n^2} \sum V(x_i) = \frac{n}{n^2}\,\sigma^2 = \frac{\sigma^2}{n}.$$

Here, we have used the fact that the variance of a sum of independent random variables is the sum of their variances, since the covariances are all zero.

Observe that $V(\bar{x}) \to 0$ as $n \to \infty$. Since $E(\bar{x}) = \mu$, this implies that, as the sample size increases, the estimate becomes increasingly concentrated around the true population parameter. Such an estimate is said to be consistent.

The sample variance, however, does not provide an unbiased estimate of $\sigma^2 = V(x)$, since

$$\begin{aligned}
E(s^2) &= E\left\{ \frac{1}{n} \sum (x_i - \bar{x})^2 \right\} = E\left[ \frac{1}{n} \sum \bigl\{ (x_i - \mu) + (\mu - \bar{x}) \bigr\}^2 \right] \\
&= E\left[ \frac{1}{n} \sum \bigl\{ (x_i - \mu)^2 + 2(x_i - \mu)(\mu - \bar{x}) + (\mu - \bar{x})^2 \bigr\} \right] \\
&= V(x) - 2E\{ (\bar{x} - \mu)^2 \} + E\{ (\bar{x} - \mu)^2 \} = V(x) - V(\bar{x}).
\end{aligned}$$

Here, we have used the result that

$$E\left\{ \frac{1}{n} \sum (x_i - \mu)(\mu - \bar{x}) \right\} = -E\{ (\mu - \bar{x})^2 \} = -V(\bar{x}).$$
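As a quick numerical check (not part of the original notes), the results $E(\bar{x}) = \mu$ and $V(\bar{x}) = \sigma^2/n$ can be illustrated by simulation. The population parameters `mu`, `sigma`, the sample size `n`, and the number of trials below are illustrative choices, and the normal population is an assumption made for convenience:

```python
# Monte Carlo sketch: the sample mean is unbiased for mu,
# and its variance is close to sigma**2 / n.
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 5.0, 2.0, 10, 20000

means = []
for _ in range(trials):
    # Draw one random sample of size n from N(mu, sigma**2).
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

print(statistics.mean(means))      # close to mu
print(statistics.variance(means))  # close to sigma**2 / n
```

With 20,000 trials the average of the sample means settles near $\mu = 5$, and the spread of the sample means near $\sigma^2/n = 0.4$, in line with the derivation above.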

It follows that

$$E(s^2) = V(x) - V(\bar{x}) = \sigma^2 - \frac{\sigma^2}{n} = \frac{\sigma^2 (n-1)}{n}.$$

Therefore, $s^2$ is a biased estimator of the population variance and, for an unbiased estimate, we should use

$$\hat{\sigma}^2 = \frac{n}{n-1}\, s^2 = \frac{\sum (x_i - \bar{x})^2}{n-1}.$$

However, $s^2$ is still a consistent estimator, since $E(s^2) \to \sigma^2$ as $n \to \infty$ and also $V(s^2) \to 0$.

The value of $V(s^2)$ depends on the form of the underlying population distribution. It would help us to know exactly how the estimates are distributed. For this, we need some assumption about the functional form of the probability distribution of the population. The assumption that the population has a normal distribution is a conventional one, in which case the following theorem is of assistance:

**Theorem.** Let $x_1, x_2, \ldots, x_n$ be a random sample from the normal population $N(\mu, \sigma^2)$. Then $y = \sum a_i x_i$ is normally distributed with

$$E(y) = \sum a_i E(x_i) = \mu \sum a_i \quad \text{and} \quad V(y) = \sum a_i^2 V(x_i) = \sigma^2 \sum a_i^2.$$
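The bias of $s^2$ and the effect of the $n/(n-1)$ correction can also be seen numerically. This is a sketch added for illustration, not part of the original notes; the parameters `mu`, `sigma`, `n`, and the trial count are arbitrary choices, with a small $n$ used so the bias factor $(n-1)/n$ is far from 1:

```python
# Monte Carlo sketch: averaging s^2 = sum((x_i - xbar)^2) / n over many
# samples lands near sigma**2 * (n-1)/n, while dividing by n-1 instead
# removes the bias.
import random

random.seed(1)
mu, sigma, n, trials = 0.0, 3.0, 5, 50000

biased_total, unbiased_total = 0.0, 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    biased_total += ss / n          # s^2, divisor n
    unbiased_total += ss / (n - 1)  # corrected estimator, divisor n-1

print(biased_total / trials)    # close to sigma**2 * (n-1)/n = 7.2
print(unbiased_total / trials)  # close to sigma**2 = 9.0
```

With $\sigma^2 = 9$ and $n = 5$, the uncorrected average comes out near $9 \times 4/5 = 7.2$, matching $E(s^2) = \sigma^2(n-1)/n$ from the derivation above, while the $n-1$ divisor recovers $\sigma^2$.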

## This note was uploaded on 03/02/2012 for the course EC 2019 taught by Professor D. S. G. Pollock during the Spring '12 term at Queen Mary, University of London.
