# Introduction to Time Series Analysis. Lecture 8.


1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.

## Linear prediction

Given $X_1, X_2, \ldots, X_n$, the best linear predictor
$$X^n_{n+m} = \alpha_0 + \sum_{i=1}^{n} \alpha_i X_i$$
of $X_{n+m}$ satisfies the prediction equations
$$E\left(X_{n+m} - X^n_{n+m}\right) = 0,$$
$$E\left[\left(X_{n+m} - X^n_{n+m}\right) X_i\right] = 0 \quad \text{for } i = 1, \ldots, n.$$
This is a special case of the *projection theorem*.
## Projection theorem

If $\mathcal{H}$ is a Hilbert space, $\mathcal{M}$ is a closed subspace of $\mathcal{H}$, and $y \in \mathcal{H}$, then there is a point $Py \in \mathcal{M}$ (the projection of $y$ on $\mathcal{M}$) satisfying

1. $\|Py - y\| \le \|w - y\|$ for all $w \in \mathcal{M}$,
2. $(y - Py, w) = 0$ for all $w \in \mathcal{M}$.

(Figure: the vector $y$, its projection $Py$ in the subspace $\mathcal{M}$, and the orthogonal error $y - Py$.)
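The two conditions of the projection theorem can be checked numerically in a finite-dimensional Hilbert space. The following sketch (a toy example, not part of the lecture) takes $\mathcal{H} = \mathbb{R}^4$ with the usual inner product, lets $\mathcal{M}$ be the column space of a matrix $A$, and computes $Py$ by least squares; the names `A`, `y`, and `w` are illustrative choices.

```python
import numpy as np

# Toy check of the projection theorem in R^4 (hypothetical data).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2))   # columns span the closed subspace M
y = rng.normal(size=4)

# Py = A c, where c solves the least-squares problem min ||A c - y||.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
Py = A @ coef

# Condition 2 (orthogonality): (y - Py, w) = 0 for every w in M,
# equivalently A' (y - Py) = 0.
print(np.round(A.T @ (y - Py), 10))

# Condition 1 (minimality): any other point w in M is at least as far.
w = A @ (coef + np.array([0.1, -0.2]))
print(np.linalg.norm(Py - y) <= np.linalg.norm(w - y))
```

The same two conditions, applied with the inner product $(X, Y) = E(XY)$ on square-integrable random variables, give the prediction equations on the previous slide.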

## Projection theorem for linear forecasting

Given $1, X_1, X_2, \ldots, X_n \in \left\{\text{r.v.s } X : EX^2 < \infty\right\}$, choose $\alpha_0, \alpha_1, \ldots, \alpha_n \in \mathbb{R}$ so that
$$Z = \alpha_0 + \sum_{i=1}^{n} \alpha_i X_i$$
minimizes $E\left(X_{n+m} - Z\right)^2$.

Here, $(X, Y) = E(XY)$,
$$\mathcal{M} = \left\{ Z = \alpha_0 + \sum_{i=1}^{n} \alpha_i X_i : \alpha_i \in \mathbb{R} \right\} = \overline{\mathrm{sp}}\{1, X_1, \ldots, X_n\},$$
and $y = X_{n+m}$.
## Projection theorem: Linear prediction

Let $X^n_{n+m}$ denote the best linear predictor:
$$\left\| X^n_{n+m} - X_{n+m} \right\|^2 \le \left\| Z - X_{n+m} \right\|^2 \quad \text{for all } Z \in \mathcal{M}.$$
The projection theorem implies the orthogonality
$$\left( X^n_{n+m} - X_{n+m}, Z \right) = 0 \quad \text{for all } Z \in \mathcal{M}$$
$$\Leftrightarrow \quad \left( X^n_{n+m} - X_{n+m}, Z \right) = 0 \quad \text{for all } Z \in \{1, X_1, \ldots, X_n\}$$
$$\Leftrightarrow \quad E\left( X^n_{n+m} - X_{n+m} \right) = 0 \quad \text{and} \quad E\left[ \left( X^n_{n+m} - X_{n+m} \right) X_i \right] = 0.$$

## Linear prediction

That is, the prediction error $\left( X^n_{n+m} - X_{n+m} \right)$ is orthogonal to the prediction variables $(1, X_1, \ldots, X_n)$.

Orthogonality of the prediction error and $1$ implies we can subtract $\mu$ from all variables ($X^n_{n+m}$ and the $X_i$). Thus, for forecasting, we can assume $\mu = 0$.
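The "assume $\mu = 0$" step can be seen empirically: fitting the best linear one-step predictor with an intercept gives the same slope as fitting after subtracting the sample means. The sketch below (simulated data, with `mu`, `phi`, and the seed chosen for illustration) compares the two fits on an AR(1) series with nonzero mean.

```python
import numpy as np

# Simulate an AR(1) process around a nonzero mean mu (toy data).
rng = np.random.default_rng(1)
mu, phi = 5.0, 0.6
x = np.empty(10_000)
x[0] = mu
for t in range(1, len(x)):
    x[t] = mu + phi * (x[t - 1] - mu) + rng.normal()

X, y = x[:-1], x[1:]

# Fit 1: predict X_{t+1} from X_t with an intercept (regress y on [1, X]).
b0, b1 = np.linalg.lstsq(np.column_stack([np.ones_like(X), X]), y,
                         rcond=None)[0]

# Fit 2: center both variables first, then fit with no intercept.
Xc, yc = X - X.mean(), y - y.mean()
slope = (Xc @ yc) / (Xc @ Xc)

print(np.round([b1, slope], 6))   # the two slopes agree
```

The agreement is exact up to floating point: including the constant $1$ among the predictors is equivalent to working with mean-zero variables.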
## One-step-ahead linear prediction

Write
$$X^n_{n+1} = \phi_{n1} X_n + \phi_{n2} X_{n-1} + \cdots + \phi_{nn} X_1.$$
Prediction equations:
$$E\left[ \left( X^n_{n+1} - X_{n+1} \right) X_i \right] = 0, \quad \text{for } i = 1, \ldots, n$$
$$\sum_{j=1}^{n} \phi_{nj} E\left( X_{n+1-j} X_i \right) = E\left( X_{n+1} X_i \right)$$
$$\sum_{j=1}^{n} \phi_{nj} \, \gamma(i - j) = \gamma(i)$$
$$\Gamma_n \phi_n = \gamma_n,$$
where $\Gamma_n = \left[\gamma(i-j)\right]_{i,j=1}^{n}$, $\phi_n = (\phi_{n1}, \ldots, \phi_{nn})'$, and $\gamma_n = (\gamma(1), \ldots, \gamma(n))'$.
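The system $\Gamma_n \phi_n = \gamma_n$ can be solved directly once the autocovariances are known. A minimal sketch, assuming an AR(1) process with parameter $\phi$, for which $\gamma(h) = \sigma^2 \phi^{|h|} / (1 - \phi^2)$; the values of `phi`, `sigma2`, and `n` are illustrative:

```python
import numpy as np

phi, sigma2, n = 0.6, 1.0, 5

def gamma(h):
    """Autocovariance of an AR(1) process: gamma(h) = s^2 phi^|h| / (1 - phi^2)."""
    return sigma2 * phi ** abs(h) / (1 - phi ** 2)

# Gamma_n[i, j] = gamma(i - j);  gamma_n[i] = gamma(i + 1).
Gamma_n = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])
gamma_n = np.array([gamma(i + 1) for i in range(n)])

# Solve the prediction equations Gamma_n phi_n = gamma_n.
phi_n = np.linalg.solve(Gamma_n, gamma_n)
print(np.round(phi_n, 6))
```

For an AR(1) process the best one-step predictor is $\phi X_n$, so the solution should come out as $\phi_n = (\phi, 0, \ldots, 0)'$: the remaining coefficients vanish because, given $X_n$, the earlier observations carry no additional linear information.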

