# Lecture 7: Bayesian Statistics IV (ECON 123A, Fall 2011, Dale J. Poirier)


ECON 123A, Fall 2011, Lecture 7, Dale J. Poirier

## 7 Prediction

Prediction provides discipline and pragmatic importance to empirical research.

- Suppose a deity told you the values of all unknown parameters in your model, so that estimation and hypothesis testing became moot.
  - What would you "do" with your model?
  - The obvious answer is to use it for what it is intended to do: make ex ante probability statements about future observables.

- Suppose the information set consists of the union of past data $Y = y$, yielding the parametric likelihood function $L(\theta; y)$, and other information in the form of a prior distribution $p(\theta)$.
- The sampling distribution $p(y_* \mid \theta, y)$ of an out-of-sample $y_*$ (possibly a vector), given $Y = y$ and $\theta$, would be an acceptable predictive distribution if $\theta$ were known, but without knowledge of $\theta$ it cannot be used.
- The Bayesian predictive probability function is

$$p(y_* \mid y) = \int_{\Theta} p(y_* \mid \theta, y)\, p(\theta \mid y)\, d\theta. \qquad (7.1)$$
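As an illustration (not part of the lecture), the integral in (7.1) can be approximated by Monte Carlo: draw $\theta$ from the posterior and average the sampling density over the draws. The sketch below assumes a hypothetical Normal model with known variance and a conjugate Normal prior, so the exact predictive density is available as a check; all names and numbers (`sigma`, `mu0`, `tau0`, the data) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: y_i ~ N(theta, sigma^2) with sigma known,
# and a conjugate prior theta ~ N(mu0, tau0^2).
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(1.5, sigma, size=20)          # observed data Y = y

# Conjugate Normal posterior p(theta | y)
prec_post = 1 / tau0**2 + len(y) / sigma**2
mu_post = (mu0 / tau0**2 + y.sum() / sigma**2) / prec_post
sd_post = prec_post ** -0.5

# Monte Carlo version of (7.1): average p(y* | theta) over posterior draws
theta_draws = rng.normal(mu_post, sd_post, size=100_000)

def predictive_pdf(y_star):
    """Approximate p(y* | y) = E_{theta|y}[ p(y* | theta) ]."""
    dens = np.exp(-0.5 * ((y_star - theta_draws) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    return dens.mean()

def exact_predictive_pdf(y_star):
    """Closed form for this conjugate model: N(mu_post, sigma^2 + sd_post^2)."""
    var = sigma**2 + sd_post**2
    return np.exp(-0.5 * (y_star - mu_post) ** 2 / var) / np.sqrt(2 * np.pi * var)
```

With 100,000 posterior draws the Monte Carlo estimate agrees with the exact Normal predictive to roughly two decimal places at any point.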

- In other words, the Bayesian predictive probability function is the posterior expectation of $p(y_* \mid \theta, y)$, which involves the unknown $\theta$.
- If the past and future are independent conditional on $\theta$ (as in random sampling), then

$$p(y_* \mid y) = \int_{\Theta} p(y_* \mid \theta)\, p(\theta \mid y)\, d\theta. \qquad (7.2)$$

Given the predictive distribution, point prediction proceeds analogously to point estimation as discussed in Chapter 3. Letting $C(\hat{y}, y_*)$ denote a predictive cost (loss) function measuring the performance of a predictor $\hat{y}$ of $y_*$, the optimal point predictor is defined to be the solution

$$\hat{y}_* = \operatorname*{argmin}_{\hat{y}} \int C(\hat{y}, y_*)\, p(y_* \mid y)\, dy_*. \qquad (7.3)$$

For example, if $y_*$ is a scalar and predictive loss is quadratic in the prediction (forecast) error $\hat{y} - y_*$, i.e. $C(\hat{y}, y_*) = (\hat{y} - y_*)^2$, then the optimal point predictor according to (7.2) is the predictive mean $E(y_* \mid y)$.
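To see concretely that quadratic predictive loss leads to the predictive mean, one can grid-search a Monte Carlo estimate of the expected loss over candidate predictors. The predictive draws below are hypothetical (any sampler for $p(y_* \mid y)$ could produce them); the names and numbers are illustrative, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical draws y*_s from the predictive distribution p(y* | y)
pred_draws = rng.normal(2.0, 1.3, size=50_000)

def expected_loss(y_hat):
    """Monte Carlo estimate of E[(y_hat - y*)^2 | y], the criterion minimized in (7.3)."""
    return np.mean((y_hat - pred_draws) ** 2)

# Grid search over candidate predictors: the minimizer sits at the predictive mean,
# since the sample criterion is exactly (y_hat - mean)^2 + sample variance.
grid = np.linspace(-2, 6, 801)
best = grid[np.argmin([expected_loss(g) for g in grid])]
```

The grid minimizer `best` matches `pred_draws.mean()` up to the grid spacing, illustrating that the quadratic-loss point predictor is $E(y_* \mid y)$.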

- Results for generating point predictions analogous to those for point estimates in Chapter 3 can be derived for other familiar loss functions.
- Similarly, prediction (forecast) intervals for $y_*$ can be constructed from $p(y_* \mid y)$, analogous to the HPD intervals for $\theta$ constructed from the posterior density $p(\theta \mid y)$ in Chapter 5.
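One common way to build such an interval from simulation (a sketch, not the lecture's own construction) is the shortest-interval rule: sort draws from $p(y_* \mid y)$ and take the narrowest window containing the desired coverage, which coincides with the HPD interval when the predictive density is unimodal. The standard-Normal predictive draws below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical draws from p(y* | y); a standard Normal for illustration
pred_draws = np.sort(rng.normal(0.0, 1.0, size=100_000))

def hpd_interval(sorted_draws, coverage=0.95):
    """Shortest interval containing `coverage` of the sorted draws
    (equals the HPD interval for unimodal predictive densities)."""
    n = len(sorted_draws)
    k = int(np.floor(coverage * n))
    widths = sorted_draws[k:] - sorted_draws[:n - k]   # widths of all candidate windows
    i = np.argmin(widths)                              # index of the shortest one
    return sorted_draws[i], sorted_draws[i + k]

lo, hi = hpd_interval(pred_draws)
```

For this symmetric, unimodal predictive the 95% HPD interval is close to the central interval $(-1.96,\ 1.96)$.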
In effect, predictive density (7.1) treats all parameters as nuisance parameters and integrates them out of the predictive problem. A similar strategy is used when adding parametric hypotheses to the analysis.

- Suppose that the parameter space $\Theta$ is partitioned into $J$ mutually exclusive subsets $\Theta_j$ ($j = 1, 2, \dots, J$) such that $\Theta = \Theta_1 \cup \Theta_2 \cup \dots \cup \Theta_J$.
  - This partitioning gives rise to $J$ hypotheses $H_j\colon \theta \in \Theta_j$ ($j = 1, 2, \dots, J$).
  - Also suppose that the likelihood $L(\theta; y)$, prior probabilities $\pi_j = \Pr(H_j)$ ($j = 1, 2, \dots, J$), and conditional prior pdfs $p(\theta \mid H_j)$ ($j = 1, 2, \dots, J$) are given.

- Conditional on hypothesis $H_j$, and given data $y$ leading to the posterior pdf $p(\theta \mid y, H_j)$, the $j$th conditional predictive density of $y_*$ is

$$p(y_* \mid y, H_j) = \int_{\Theta_j} p(y_* \mid \theta, y)\, p(\theta \mid y, H_j)\, d\theta. \qquad (7.4)$$

- Using the posterior probabilities

$$\bar{\pi}_j = \Pr(H_j \mid y) \quad (j = 1, 2, \dots, J), \qquad (7.5)$$

  the unconditional predictive density is the mixture $p(y_* \mid y) = \sum_{j=1}^{J} \bar{\pi}_j\, p(y_* \mid y, H_j)$.
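A mixture of conditional predictives weighted by the posterior probabilities $\Pr(H_j \mid y)$ can be simulated by first drawing which hypothesis holds and then drawing $y_*$ from the matching conditional predictive. The two Normal components and the posterior probabilities in the sketch below are invented for illustration, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-hypothesis example: posterior probabilities Pr(H_j | y) as in (7.5)
post_prob = np.array([0.7, 0.3])

# Samplers for the conditional predictives p(y* | y, H_j) as in (7.4)
# (illustrative Normal components standing in for the real integrals)
samplers = [lambda n: rng.normal(0.0, 1.0, n),
            lambda n: rng.normal(3.0, 0.5, n)]

def sample_mixture_predictive(n):
    """Draw from p(y*|y) = sum_j Pr(H_j|y) p(y*|y,H_j) by first picking H_j."""
    which = rng.choice(len(post_prob), size=n, p=post_prob)
    out = np.empty(n)
    for j, sampler in enumerate(samplers):
        idx = which == j
        out[idx] = sampler(idx.sum())
    return out

draws = sample_mixture_predictive(200_000)
```

The draws have the mixture's moments; here the predictive mean is $0.7 \cdot 0 + 0.3 \cdot 3 = 0.9$, up to simulation noise.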