# stat200ch6_winter10 - STAT 200 Chapter 6: Introduction to Inferences

## Basic concepts: estimation, estimators, and estimates

1. **Estimators vs. estimates:** An *estimator* is a function of the random variables X₁, X₂, …, Xₙ in a random sample of size n, while an *estimate* is the realized value of an estimator (i.e., a number) obtained when a sample is actually taken.

2. **Point estimation:** The process of estimating a single number as an estimate of the population parameter is called *point estimation*.

   | Parameter | Point estimator | Point estimate |
   |---|---|---|
   | Population mean μ | Sample mean X̄ | x̄ |
   | Population SD σ | Sample SD S | s |
   | Population proportion p | Sample proportion p̂ | p̂ (a value) |

3. **Interval estimation:** We might want to specify a range of values within which a population parameter is likely to fall. The process of estimating this range of values is called *interval estimation*. An *interval estimator* of a population parameter is a function of the observations from a sample that gives an interval estimating the population parameter. An *interval estimate* is the realized range of values (an interval) of its corresponding interval estimator.

4. **A desirable property of a point estimator, unbiasedness:** If a point estimator has a mean equal to the population parameter it is intended to estimate, it is said to be an *unbiased* estimator of that parameter; otherwise, it is a *biased* estimator. For example, X̄ and p̂ are unbiased estimators of μ and p, respectively: E(X̄) = μ_X̄ = μ and E(p̂) = μ_p̂ = p.

## Interval estimation of population parameters – confidence intervals (Section 6.1)

Let X₁, X₂, …, Xₙ be a random sample from a population with mean μ and standard deviation σ. Suppose μ is unknown and σ is known. We want to obtain an interval estimate of μ.

If X ∼ N(μ, σ), then X̄ ∼ N(μ_X̄ = μ, σ_X̄ = σ/√n). If X does not follow the normal distribution, then as long as the conditions of the Central Limit Theorem are satisfied, X̄ is approximately N(μ_X̄ = μ, σ_X̄ = σ/√n).
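The two facts above, that X̄ is unbiased for μ and that its standard deviation is σ/√n, can be checked by simulation. The sketch below (not part of the original notes; the exponential population and all numbers are illustrative choices) draws many samples from a clearly non-normal population and compares the mean and SD of the sample means against μ and σ/√n.

```python
# Simulation sketch (illustrative, not from the notes): draw many samples
# from a non-normal population and check that the sample mean Xbar has
# E(Xbar) = mu (unbiasedness) and SD(Xbar) = sigma / sqrt(n).
import math
import random
import statistics

random.seed(42)

# Exponential population with rate 1: mu = 1, sigma = 1 (clearly non-normal).
mu, sigma = 1.0, 1.0
n = 50            # sample size
reps = 20_000     # number of simulated samples

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

print(f"mean of sample means: {statistics.fmean(sample_means):.3f}  (mu = {mu})")
print(f"SD of sample means:   {statistics.stdev(sample_means):.3f}  "
      f"(sigma/sqrt(n) = {sigma / math.sqrt(n):.3f})")
```

With n = 50 the CLT conditions are comfortably met, so a histogram of `sample_means` would also look approximately normal, which is what the derivation that follows relies on.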
By the 68-95-99.7 rule,

P(μ − 2σ/√n < X̄ < μ + 2σ/√n) ≈ 95%

P(−X̄ − 2σ/√n < −μ < −X̄ + 2σ/√n) ≈ 95%

P(X̄ + 2σ/√n > μ > X̄ − 2σ/√n) ≈ 95%

P(X̄ − 2σ/√n < μ < X̄ + 2σ/√n) ≈ 95%

There is an approximately 95% chance that the interval (X̄ − 2σ/√n, X̄ + 2σ/√n) captures the true value of μ. We call this interval the 95% confidence interval for μ.

**Confidence intervals for μ:** The level C confidence interval for μ can be constructed using

x̄ ± z* · σ/√n

where x̄ is the sample mean, σ is the known population standard deviation, and z* is the z-score such that the area (in %) to its right under the z-curve equals (100% − C)/2.
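The interval x̄ ± z* σ/√n can be computed directly, finding z* as the point whose upper-tail area under the standard normal curve is (1 − C)/2 (the proportion form of (100% − C)/2). The sketch below uses only the standard library; the data values (x̄ = 12.5, σ = 3, n = 36) are hypothetical.

```python
# Sketch of the level-C z-interval for mu with sigma known
# (the numeric inputs below are hypothetical, for illustration only).
import math
from statistics import NormalDist

def z_interval(xbar: float, sigma: float, n: int, level: float = 0.95):
    """Level-`level` confidence interval for mu: xbar +/- z* sigma/sqrt(n)."""
    # z* has upper-tail area (1 - level)/2, e.g. z* ~ 1.96 for level = 0.95.
    z_star = NormalDist().inv_cdf(1 - (1 - level) / 2)
    margin = z_star * sigma / math.sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: xbar = 12.5, known sigma = 3, n = 36.
lo, hi = z_interval(12.5, 3.0, 36, level=0.95)
print(f"95% CI for mu: ({lo:.3f}, {hi:.3f})")
```

Note that using the exact z* ≈ 1.96 rather than the rounded 2 from the 68-95-99.7 rule gives a slightly narrower interval at the same 95% level.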