answerkey5_5

2. (Ans) Given our model $Y_i = \beta_0 + \beta_1 X_i + u_i$,

$$E(u_i^2 \mid X_i) = \operatorname{Var}(u_i \mid X_i) = \operatorname{Var}(Y_i \mid X_i).$$

The first equality comes from $\operatorname{Var}(u_i \mid X_i) = E(u_i^2 \mid X_i) - [E(u_i \mid X_i)]^2$ and the assumption $E(u_i \mid X_i) = 0$, while the second equality comes from the fact that given $X_i$ (i.e., if we know $X_i$), the variation of $Y_i$ comes only from the variation of $u_i$. Therefore, we only need to find $P(Y_i = 1 \mid X_i)$ and $P(Y_i = 0 \mid X_i)$ to calculate $E(u_i^2 \mid X_i)$.

Let us take conditional expectations on both sides of the model. From $Y_i = \beta_0 + \beta_1 X_i + u_i$, we get

$$E(Y_i \mid X_i) = E(\beta_0 + \beta_1 X_i + u_i \mid X_i) = E(\beta_0 \mid X_i) + E(\beta_1 X_i \mid X_i) + E(u_i \mid X_i) = \beta_0 + \beta_1 X_i.$$

Here we exploited the properties of conditional expectations (which should be familiar) and the assumption $E(u_i \mid X_i) = 0$.

On the other hand, since $Y_i$ is a Bernoulli random variable given $X_i$,

$$E(Y_i \mid X_i) = 1 \cdot P(Y_i = 1 \mid X_i) + 0 \cdot P(Y_i = 0 \mid X_i) = P(Y_i = 1 \mid X_i).$$

Combined with the result above, this yields

$$P(Y_i = 1 \mid X_i) = \beta_0 + \beta_1 X_i, \qquad P(Y_i = 0 \mid X_i) = 1 - P(Y_i = 1 \mid X_i) = 1 - \beta_0 - \beta_1 X_i.$$

...
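The text is truncated above. A plausible completion of the derivation, assuming only the standard Bernoulli variance formula $\operatorname{Var}(Y) = p(1 - p)$ applied to the two probabilities just obtained, would be:

$$E(u_i^2 \mid X_i) = \operatorname{Var}(Y_i \mid X_i) = P(Y_i = 1 \mid X_i)\,\bigl[1 - P(Y_i = 1 \mid X_i)\bigr] = (\beta_0 + \beta_1 X_i)(1 - \beta_0 - \beta_1 X_i).$$

For instance, with hypothetical values $\beta_0 = 0.2$, $\beta_1 = 0.05$, and $X_i = 4$, this gives $P(Y_i = 1 \mid X_i) = 0.4$ and $E(u_i^2 \mid X_i) = 0.4 \times 0.6 = 0.24$. Because this conditional variance depends on $X_i$, the error term in such a model is heteroskedastic.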