Chapter 8: Large-Sample Estimation (Introduction to Probability and Statistics, Thirteenth Edition)

Introduction
• Populations are described by their probability distributions and parameters.
  – For quantitative populations, the location and shape are described by μ and σ.
  – For binomial populations, the location and shape are determined by p.
• If the values of the parameters are unknown, we make inferences about them using sample information.

Types of Inference
Estimation:
  – Estimating or predicting the value of a parameter.
  – "What is the most likely value of μ or p?"
Hypothesis testing:
  – Deciding about the value of a parameter based on some preconceived idea.
  – "Did the sample come from a population with μ = 5 or p = .2?"

Types of Inference: Examples
  – A consumer wants to estimate the average price of similar homes in her city before putting her home on the market.
    Estimation: estimate μ, the average home price.
  – A manufacturer wants to know whether a new type of steel is more resistant to high temperatures than an old type.
    Hypothesis test: is the new average resistance μ_N equal to the old average resistance μ_O?
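To make the estimation example concrete, the sample mean serves as the point estimate of μ. The home prices below are invented for illustration and do not come from the slides; this is a minimal sketch of the computation.

```python
import math

# Hypothetical sample of home prices (in $1000s); the data are invented for illustration.
prices = [252, 310, 198, 275, 330, 289, 241, 265, 304, 226]

n = len(prices)
x_bar = sum(prices) / n  # sample mean: the point estimate of the population mean mu
# Sample standard deviation (divide by n - 1), used later to gauge reliability.
s = math.sqrt(sum((x - x_bar) ** 2 for x in prices) / (n - 1))

print(f"n = {n}, x-bar = {x_bar:.1f}, s = {s:.1f}")
```

The single number `x_bar` is a point estimate; the interval methods developed later in the chapter turn `x_bar` and `s` into an interval estimate.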
Types of Inference
• Whether you are estimating parameters or testing hypotheses, statistical methods are important because they provide:
  – Methods for making the inference
  – A numerical measure of the goodness or reliability of the inference

Definitions
• An estimator is a rule, usually a formula, that tells you how to calculate an estimate from the sample.
  – Point estimation: a single number is calculated to estimate the parameter.
  – Interval estimation: two numbers are calculated to create an interval within which the parameter is expected to lie.

Properties of Point Estimators
• Since an estimator is calculated from sample values, it varies from sample to sample according to its sampling distribution.
• An estimator is unbiased if the mean of its sampling distribution equals the parameter of interest.
  – It does not systematically overestimate or underestimate the target parameter.
• Of all unbiased estimators, we prefer the one whose sampling distribution has the smallest spread or variability.
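The two properties above can be seen in a small simulation (not from the slides; the population parameters and estimator comparison are chosen for illustration). For a symmetric normal population, both the sample mean and the sample median are unbiased estimators of μ, but the sample mean has the smaller spread, which is why it is the preferred estimator.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical population: normal with mu = 10, sigma = 2 (values chosen for illustration).
mu, sigma, n, reps = 10.0, 2.0, 25, 2000

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(sample))      # one draw from the mean's sampling distribution
    medians.append(statistics.median(sample))  # one draw from the median's sampling distribution

# Unbiasedness: the average of each estimator across samples is close to mu.
print(statistics.mean(means), statistics.mean(medians))
# Variability: the sample mean's sampling distribution has the smaller spread.
print(statistics.stdev(means), statistics.stdev(medians))
```

Running this shows both averages near 10, while the standard deviation of the means (about σ/√n = 0.4) is smaller than that of the medians, matching the preference stated above.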
Measuring the Goodness of an Estimator
• The distance between an estimate and the true value of the parameter is the error of estimation.
  – The distance between the bullet and the bull's-eye.
• In this chapter the sample sizes are large, so by the Central Limit Theorem our unbiased estimators have approximately normal sampling distributions.

The Margin of Error
• For unbiased estimators with normal sampling distributions, 95% of all point estimates will lie within 1.96 standard deviations of the parameter of interest.
• The margin of error is 1.96 times the standard error of the estimator: the maximum error of estimation, with 95% confidence.
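The margin-of-error rule can be sketched directly, using the standard large-sample standard errors s/√n for a mean and √(p̂(1−p̂)/n) for a proportion. The helper names and the input numbers below are illustrative, not from the slides.

```python
import math

def margin_of_error_mean(s, n):
    """95% margin of error for estimating mu with the sample mean (large n)."""
    return 1.96 * s / math.sqrt(n)

def margin_of_error_proportion(p_hat, n):
    """95% margin of error for estimating p with the sample proportion (large n)."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical examples (numbers invented for illustration):
print(margin_of_error_mean(s=12.5, n=100))          # 1.96 * 12.5 / 10 = 2.45
print(margin_of_error_proportion(p_hat=0.2, n=400)) # 1.96 * 0.02 = 0.0392
```

With 95% confidence, the point estimate differs from the true parameter by no more than this margin of error.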

This note was uploaded on 09/19/2011 for the course MTH 1250C taught by Professor Any during the Fall '08 term at St. John's.
