Bayes factor when this prior is used. For $a = b = 1$, that is, the Exponential(1) distribution, it can be checked that

$$B_{10} = \sum_{j=0}^{n} \frac{1}{j+1},$$

which is thus our recommended default Bayes factor when observing only zero counts. Note that $B_{10}(\mathbf{0}) \approx \log(n+1)$ for large $n$; indeed

$$\sum_{j=0}^{n} \frac{1}{j+1} \;\le\; 1 + \sum_{j=1}^{n} \int_{j}^{j+1} \frac{dx}{x} \;=\; 1 + \log(n+1).$$

Also,

$$\sum_{j=0}^{n} \frac{1}{j+1} \;\ge\; \sum_{j=0}^{n-1} \int_{j}^{j+1} \frac{dx}{x+1} \;=\; \log(n+1).$$

Thus $B_{10}$ is bounded between $\log(n+1)$ and $\log(n+1) + 1$. So a large string of all zero counts in a sample will lead to a Bayes factor approaching infinity at the slow rate of $\log(n)$. The large-sample behavior of the Bayes factor for this type of sample seems intuitively reasonable.

5.1.2 Training the improper prior

Another approach to the problem of obtaining a default proper prior is the intrinsic Bayes factor (IBF) approach of Berger and Pericchi (1996). This approach is based on the utilization of training samples or, more precisely, minimal training samples. A training sample is a portion of the data used to convert an improper prior into a proper posterior, which is then combined with the remaining data to calculate the marginal density (of the remaining data). A minimal training sample (MTS) is the smallest sample for which the posterior (based on the MTS) is proper. So that the marginal density, and hence the Bayes factor, does not depend on the particular MTS selected, Berger and Pericchi (1996) recommended averaging the Bayes factors over all possible MTS's. A related alternative is the fractional Bayes factor of O'Hagan (1995). Developing training samples for mixture models (as in the ZIP model) is not as clear as in many other situations, as was discussed in Pérez and Berger (2001). Since the first component of the mixture does not involve any parameters and the inflation parameter $p$ has a proper distribution, following their recommendation here would result in the minimal training sample being a single observation, considered to be from the Poisson component of the mixture. This was independently suggested by Professor J.K. Ghosh (2006). Thus, we update the improper prior $\pi^I_1(p, \lambda) = 1 \cdot \lambda^{-1/2}$ to a proper posterior by treating one of the zeros as coming from the Poisson($\lambda$) distribution under model $M_1$.
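The bounds on the Section 5.1.1 default Bayes factor for all-zero counts can be checked numerically. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def default_bayes_factor(n):
    """B_10 = sum_{j=0}^{n} 1/(j+1) for a sample of n zero counts,
    under the a = b = 1 (Exponential(1)) prior."""
    return sum(1.0 / (j + 1) for j in range(n + 1))

# Verify log(n+1) <= B_10 <= log(n+1) + 1 across a range of n.
for n in (1, 10, 100, 10_000):
    b10 = default_bayes_factor(n)
    assert math.log(n + 1) <= b10 <= math.log(n + 1) + 1, (n, b10)
```

The slow $\log n$ growth is visible directly: multiplying the sample size by 100 adds only about $\log 100 \approx 4.6$ to the Bayes factor.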
The resulting posterior, that is, the 'trained' prior, is then

$$\pi_1(\lambda, p) = 2(1-p)\, e^{-\lambda} \lambda^{-1/2} / \Gamma(1/2).$$

(Note that now the data is $x_1 = 0$ and "$x_1$ comes from the Poisson component.") This corresponds to assuming that, independently, $\lambda \sim \mathrm{Ga}(1/2, 1)$ and $p \sim \mathrm{Beta}(1, 2)$. The prior mean for $\lambda$ is now $0.5$ (and not 1 as before) and the prior mean for $p$ is $1/3$ (and not $1/2$ as before). We utilize the same $\mathrm{Ga}(1/2, 1)$ prior for $\lambda$ under model $M_0$ (noting that this prior also results from a training sample consisting of a 0 under model $M_0$). Utilizing these prior specifications for the $n-1$ zeros left in the sample, we compute the Bayes factor $B_{10}(\mathbf{0})$ to be

$$B_{10} = \frac{2}{n+1} \sum_{j=0}^{n-1} \left(1 - \frac{j}{n}\right)^{1/2}.$$
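The stated prior means for the trained prior can be sanity-checked by Monte Carlo; a purely illustrative sketch using the standard library's samplers:

```python
import random

random.seed(0)
N = 200_000

# lambda ~ Ga(1/2, 1): shape 1/2, scale 1, so mean = 1/2.
lam = [random.gammavariate(0.5, 1.0) for _ in range(N)]
# p ~ Beta(1, 2): mean = 1 / (1 + 2) = 1/3.
p = [random.betavariate(1.0, 2.0) for _ in range(N)]

mean_lam = sum(lam) / N
mean_p = sum(p) / N
assert abs(mean_lam - 0.5) < 0.01
assert abs(mean_p - 1 / 3) < 0.01
```

Note that `random.gammavariate` takes shape and scale, so $\mathrm{Ga}(1/2, 1)$ with rate 1 corresponds to `gammavariate(0.5, 1.0)`.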
Similarly to the results in Section 5.1.1, it is easy to see that $B_{10}(\mathbf{0}) \ge 1$. However, for large $n$ the Bayes factor is approximately

$$2 \int_0^1 (1-u)^{1/2}\, du = 4/3,$$

which only slightly favors the ZIP model. This result is much different, and intuitively less convincing, than the $\log n$ behavior seen in the previous subsection. The discrepancy perhaps arises from
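The $4/3$ limit of this intrinsic Bayes factor can likewise be verified numerically; a sketch assuming the formula above (the function name is ours):

```python
def intrinsic_bayes_factor(n):
    """B_10(0) = (2/(n+1)) * sum_{j=0}^{n-1} (1 - j/n)^(1/2)
    for a sample of n zero counts, using the trained priors."""
    return (2.0 / (n + 1)) * sum((1 - j / n) ** 0.5 for j in range(n))

# The smallest sample gives exactly 1, and the sequence increases
# toward the Riemann-integral limit 2 * int_0^1 (1-u)^(1/2) du = 4/3.
assert intrinsic_bayes_factor(1) == 1.0
assert abs(intrinsic_bayes_factor(100_000) - 4 / 3) < 1e-3
```

In contrast to the unbounded $\log n$ growth of Section 5.1.1, here no amount of zero-count data can push the Bayes factor past $4/3$.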