
# STA 301 - Ch 09a - pp 97-114 - Chapter 9a: One-Sample Estimation Problems


CHAPTER 9a — ONE-SAMPLE ESTIMATION PROBLEMS

In our development of probability theory leading to the Central Limit Theorem, we assumed complete knowledge of the population of interest. Based on this knowledge, we considered the sampling distribution of the sample mean. Now we turn to a more realistic situation: the population (or at least some aspect of it) is unknown to us. We explore the ways in which knowledge of the behavior of common statistics (i.e., their sampling distributions) provides insight when we wish to use sample information to draw conclusions about a population. In short, we now begin the study of statistical inference.

Statistical inference refers to the process of making generalizations (drawing conclusions) about an unknown population. There are two pervasive types of statistical inference:

- Classical: population parameters are estimated using only sample data; no outside information is considered.
- Bayesian: population parameters are estimated using a combination of sample data and prior subjective knowledge.

Our focus shall be on classical inference.

ESTIMATION

Suppose we are seeking the value of an unknown population parameter, $\theta$. To estimate the value of this parameter, we propose gathering a sample from the population, $X_1, X_2, \ldots, X_n$, and calculating a statistic (or estimator) $\hat{\Theta} = g(X_1, X_2, \ldots, X_n)$.

Notation: Recall that random variables are denoted using uppercase letters, and particular observations are denoted using lowercase letters. Therefore, one particular sample that we observe would be represented by $x_1, x_2, \ldots, x_n$, and its related estimate is $\hat{\theta} = g(x_1, x_2, \ldots, x_n)$.

What are some desirable properties of $\hat{\Theta}$?
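The estimator/estimate distinction above can be sketched in a few lines of Python (our own illustration; the notes themselves contain no code — the function name, seed, and population values are all hypothetical choices):

```python
import random

# Illustration of estimator vs. estimate: the sample mean as an
# estimator g(X1, ..., Xn) of an unknown population mean theta.

def sample_mean(xs):
    """g(x1, ..., xn) = (1/n) * sum(xi): the estimate from one observed sample."""
    return sum(xs) / len(xs)

random.seed(0)
# One observed sample x1, ..., xn from a (here, normal) population
# whose true mean is 5 -- unknown to the analyst in practice.
observed = [random.gauss(5, 2) for _ in range(30)]
theta_hat = sample_mean(observed)   # a single realized estimate
print(round(theta_hat, 3))          # some value near 5
```

A different sample would yield a different estimate; the estimator $\hat{\Theta}$ is the random variable describing how those estimates vary from sample to sample.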

UNBIASEDNESS

The statistic $\hat{\Theta}$ is said to be unbiased for $\theta$ if $E(\hat{\Theta}) = \theta$. That is, the "balancing point" of the sampling distribution of $\hat{\Theta}$ is located at the value we're seeking, $\theta$.

Example: Suppose we take a sample of size $n$ from a population with unknown mean, $\mu$. Then $\bar{X}$ is unbiased for $\mu$.

Example: Suppose we take a sample of size $n$ from a population with unknown mean, $\mu$, and unknown variance, $\sigma^2$. Following are two estimators for $\sigma^2$. One is unbiased, the other is not:

$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2 \qquad\qquad S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$$

(It can be shown that $E(S^2) = \sigma^2$ while $E(\hat{\sigma}^2) = \frac{n-1}{n}\sigma^2$, so $S^2$ is the unbiased estimator.)
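The bias of the $1/n$ estimator can be seen directly by simulation. A minimal sketch (our own, not from the notes; the sample size, repetition count, and population values are arbitrary choices): averaging each estimator over many repeated samples approximates its expected value, and the $1/n$ version should fall short of $\sigma^2$ by the factor $(n-1)/n$.

```python
import random

def var_biased(xs):
    """sigma-hat^2: divides the sum of squared deviations by n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    """S^2: divides the sum of squared deviations by n - 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(1)
n, reps = 5, 20000
# Normal population with sigma^2 = 4; expect E(S^2) = 4 and
# E(sigma-hat^2) = (n - 1)/n * 4 = 3.2.
samples = [[random.gauss(0, 2) for _ in range(n)] for _ in range(reps)]
avg_biased = sum(var_biased(s) for s in samples) / reps
avg_unbiased = sum(var_unbiased(s) for s in samples) / reps
print(round(avg_biased, 2), round(avg_unbiased, 2))  # roughly 3.2 and 4.0
```

The small sample size ($n = 5$) makes the gap between the two estimators easy to see; as $n$ grows, $(n-1)/n \to 1$ and the bias vanishes.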
EFFICIENCY

Given two or more unbiased estimators for $\theta$, how do we choose one? Consider the variances of the estimators. The estimator with the smaller variance will, on average, be closer to $\theta$ than the estimator with the larger variance.

Definition: If we consider all possible unbiased estimators of some parameter $\theta$, the one with the smallest variance is called the most efficient estimator of $\theta$.

Example:
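As a stand-in illustration of efficiency (our own simulation, not the notes' example; the distribution and constants are arbitrary choices): for a normal population, both the sample mean and the sample median are unbiased for $\mu$, but the mean is more efficient — its sampling distribution has the smaller variance.

```python
import random
import statistics

random.seed(2)
n, reps, mu = 25, 10000, 10.0

# Build the empirical sampling distributions of two unbiased
# estimators of mu by drawing many samples of size n.
means, medians = [], []
for _ in range(reps):
    s = [random.gauss(mu, 3) for _ in range(n)]
    means.append(sum(s) / n)
    medians.append(statistics.median(s))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
# Both center on mu, but the mean's variance is smaller
# (theory: sigma^2/n vs. roughly pi*sigma^2/(2n) for the median).
print(var_mean < var_median)  # True
```

Both estimators "balance" at $\mu$, so unbiasedness alone cannot choose between them; efficiency breaks the tie in favor of the sample mean here.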


