
# Bootstrap Methods and the EM Algorithm


We estimate the standard error of the prediction $\hat{\mu}(x) = h(x)^T \hat{\beta}$ as:

$$\widehat{\mathrm{se}}[\hat{\mu}(x)] = \left[ h(x)^T (H^T H)^{-1} h(x) \right]^{1/2} \hat{\sigma}$$

© 2019 The Trustees of the Stevens Institute of Technology
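The standard-error formula above is straightforward to evaluate numerically. Below is a minimal NumPy sketch; the helper name `pointwise_se` and the cubic polynomial basis are illustrative stand-ins (not from the notes) for the seven-function cubic-spline basis used on the slides.

```python
import numpy as np

def pointwise_se(H, h_x, sigma_hat):
    """se[mu_hat(x)] = (h(x)^T (H^T H)^{-1} h(x))^{1/2} * sigma_hat
    for a least-squares fit mu_hat(x) = h(x)^T beta_hat with basis matrix H."""
    HtH_inv = np.linalg.inv(H.T @ H)
    return np.sqrt(h_x @ HtH_inv @ h_x) * sigma_hat

# Toy illustration: a cubic polynomial basis standing in for the spline basis.
rng = np.random.default_rng(0)
x = rng.uniform(0, 3, size=50)
H = np.vander(x, 4)                         # columns: x^3, x^2, x, 1
h_x0 = np.array([1.5**3, 1.5**2, 1.5, 1.0])  # basis evaluated at x = 1.5
se = pointwise_se(H, h_x0, sigma_hat=0.5)
```

Multiplying `se` by 1.96 and adding/subtracting it from the fit gives the pointwise bands shown in the figure that follows.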


Bootstrap Methods

To bootstrap, draw $B$ datasets, each of size $N$, with replacement from the training data. To each bootstrapped dataset $\mathbf{Z}^*$, fit a cubic spline $\hat{\mu}^*(x)$. Using $B = 200$, we can determine a 95% confidence interval. This is an example of a non-parametric bootstrap.
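A minimal sketch of this non-parametric bootstrap in NumPy. The data-generating function, seed, and the cubic polynomial fit (standing in for the cubic-spline smoother) are illustrative assumptions, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(42)
N, B = 50, 200
x = np.sort(rng.uniform(0, 3, size=N))
y = np.sin(2 * x) + rng.normal(0, 0.3, size=N)   # made-up training data

x_grid = np.linspace(0, 3, 25)
boot_fits = np.empty((B, x_grid.size))
for b in range(B):
    idx = rng.integers(0, N, size=N)             # draw N cases with replacement
    coef = np.polyfit(x[idx], y[idx], deg=3)     # refit the smoother on Z*_b
    boot_fits[b] = np.polyval(coef, x_grid)

# 95% pointwise confidence band from the bootstrap percentiles
lower, upper = np.percentile(boot_fits, [2.5, 97.5], axis=0)
```

The percentile band computed here corresponds to the bottom-right panel of the figure below.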
[Figure] Elements of Statistical Learning (2nd Ed.), Hastie, Tibshirani & Friedman 2009, Chap. 8, Figure 8.2. (Top left:) B-spline smooth of data. (Top right:) B-spline smooth plus and minus $1.96 \times$ standard error bands. (Bottom left:) Ten bootstrap replicates of the B-spline smooth. (Bottom right:) B-spline smooth with 95% standard error bands computed from the bootstrap distribution.


Parametric Bootstrap

Let's assume that the model errors are Gaussian. This leads us to:

$$Y = \mu(X) + \varepsilon; \quad \varepsilon \sim N(0, \sigma^2), \quad \mu(x) = \sum_{j=1}^{7} \beta_j h_j(x)$$
We simulate new responses by adding Gaussian noise to the predicted values:

$$y_i^* = \hat{\mu}(x_i) + \varepsilon_i^*; \quad \varepsilon_i^* \sim N(0, \hat{\sigma}^2); \quad i = 1, \ldots, N$$

The process is repeated $B$ times, and the smoothing spline is refit on each of the resulting bootstrap datasets $(x_1, y_1^*), \ldots, (x_N, y_N^*)$.
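The parametric bootstrap just described can be sketched as follows. As before, the data and the cubic polynomial smoother are made-up stand-ins for the spline fit in the notes.

```python
import numpy as np

rng = np.random.default_rng(7)
N, B = 50, 200
x = np.sort(rng.uniform(0, 3, size=N))
y = np.sin(2 * x) + rng.normal(0, 0.3, size=N)   # made-up training data

# Original fit (cubic polynomial standing in for the spline smoother)
coef = np.polyfit(x, y, deg=3)
mu_hat = np.polyval(coef, x)
p = coef.size                                     # number of basis functions
sigma_hat = np.sqrt(np.sum((y - mu_hat) ** 2) / (N - p))

boot_coefs = np.empty((B, p))
for b in range(B):
    # y*_i = mu_hat(x_i) + eps*_i, with eps*_i ~ N(0, sigma_hat^2)
    y_star = mu_hat + rng.normal(0, sigma_hat, size=N)
    boot_coefs[b] = np.polyfit(x, y_star, deg=3)  # refit on (x_i, y*_i)
```

Unlike the non-parametric version, the $x_i$ stay fixed here; only the responses are resampled, from the fitted Gaussian model.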


EM Algorithm: Simple Example

Assume we have a mixture of two normal random variables, $X_1$ and $X_2$. Looking at their histogram, we can see that a single normal would be a very bad fit.
We attempt to model this using a mixture of two normal distributions, given by:

$$Y = (1 - \Delta) Y_1 + \Delta Y_2, \quad \Delta \in \{0, 1\}, \quad P(\Delta = 1) = \pi$$

If we let $\phi_\theta(x)$ be the normal density with parameters $\theta$, then the density of $Y$ is:

$$g_Y(y) = (1 - \pi)\, \phi_{\theta_1}(y) + \pi\, \phi_{\theta_2}(y)$$
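Sampling from this two-component mixture is a direct translation of the definition of $Y$. A sketch assuming NumPy; the parameter values are arbitrary illustrations.

```python
import numpy as np

def sample_mixture(n, pi, mu1, sigma1, mu2, sigma2, rng):
    """Draw n samples of Y = (1 - Delta) Y1 + Delta Y2, P(Delta = 1) = pi."""
    delta = rng.random(n) < pi                 # Delta_i ~ Bernoulli(pi)
    y1 = rng.normal(mu1, sigma1, size=n)
    y2 = rng.normal(mu2, sigma2, size=n)
    return np.where(delta, y2, y1)             # pick Y2 when Delta = 1

rng = np.random.default_rng(1)
y = sample_mixture(10_000, pi=0.4, mu1=0.0, sigma1=1.0,
                   mu2=5.0, sigma2=0.8, rng=rng)
```

A histogram of `y` shows the two bumps that make a single normal a bad fit.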


If we want to fit this using maximum likelihood, we need to estimate the parameters:

$$\theta = (\pi, \theta_1, \theta_2) = (\pi, \mu_1, \sigma_1^2, \mu_2, \sigma_2^2)$$

The log-likelihood based on the $N$ cases is:

$$\ell(\theta; \mathbf{Z}) = \sum_{i=1}^{N} \log\left[ (1 - \pi)\, \phi_{\theta_1}(y_i) + \pi\, \phi_{\theta_2}(y_i) \right]$$
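This log-likelihood is a one-liner once the normal density is written out. A NumPy sketch; the function names and the tiny toy dataset are illustrative assumptions.

```python
import numpy as np

def normal_pdf(y, mu, var):
    """phi_theta(y) for theta = (mu, var)."""
    return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mixture_loglik(theta, y):
    """l(theta; Z) = sum_i log[(1 - pi) phi_theta1(y_i) + pi phi_theta2(y_i)]."""
    pi_, mu1, var1, mu2, var2 = theta
    dens = (1 - pi_) * normal_pdf(y, mu1, var1) + pi_ * normal_pdf(y, mu2, var2)
    return np.sum(np.log(dens))

# Toy data with two visible clusters, evaluated at a plausible theta
y = np.array([-0.2, 0.1, 4.8, 5.3, 0.4])
ll = mixture_loglik((0.4, 0.0, 1.0, 5.0, 1.0), y)
```

Maximizing this sum directly is awkward because of the log of a sum, which is the motivation for the EM algorithm's latent-variable approach.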
