# Problem Set 2


1. Consider coin tossing with a single, possibly biased coin. The density function for the random variable $y = \mathbf{1}(\text{heads})$ is
$$
f_Y(y; p_0) = \begin{cases} p_0^{\,y}\,(1-p_0)^{1-y}, & y \in \{0,1\} \\ 0, & y \notin \{0,1\} \end{cases}
$$
Suppose that we have a sample of size $n$. We know from above that the ML estimator is $\hat{p}_0 = \bar{y}$. We also know from the theory above that
$$
\sqrt{n}\,(\bar{y} - p_0) \overset{a}{\sim} N\!\left[0,\; H(p_0)^{-1}\, I(p_0)\, H(p_0)^{-1}\right]
$$
   a) Find the analytic expression for $g_t(\theta)$ and show that $E_\theta[g_t(\theta)] = 0$.
   b) Find the analytic expressions for $H(p_0)$ and $I(p_0)$ for this problem.
   c) Write an Octave program that does a Monte Carlo study showing that $\sqrt{n}\,(\bar{y} - p_0)$ is approximately normally distributed when $n$ is large. Please give me histograms that show the sampling frequency of $\sqrt{n}\,(\bar{y} - p_0)$ for several values of $n$.

2. Consider the model
$$
y_t = x_t' \beta + \alpha e_t
$$
where the errors follow the Cauchy (Student-t with 1 degree of freedom) density, so
$$
f(e_t) = \frac{1}{\pi\,(1 + e_t^2)}, \qquad -\infty < e_t < \infty.
$$
The Cauchy density has a shape similar to a normal density, but with much thicker tails. Thus, extremely small and large errors occur much more frequently with this density than would happen if the errors were normally distributed. Find the score function $g_n(\theta)$ where $\theta = (\beta',\ \alpha)'$.

3. Consider the classical linear regression model
$$
y_t = x_t' \beta + e_t
$$
where $e_t \sim IIN(0, \sigma^2)$. Find the score function $g_n(\theta)$ where $\theta = (\beta',\ \sigma)'$.

4. Compare the first-order conditions that define the ML estimators of problems 2 and 3 and interpret the differences. Why are the first-order conditions that define an efficient estimator different in the two cases?
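As a reminder of the setup behind problem 1(a) (a sketch of the standard Bernoulli computation, not a full solution), the per-observation log-likelihood and its derivative with respect to $p$ are

```latex
\ln f_Y(y_t; p) = y_t \ln p + (1 - y_t)\ln(1 - p),
\qquad
g_t(p) = \frac{\partial \ln f_Y(y_t; p)}{\partial p}
       = \frac{y_t}{p} - \frac{1 - y_t}{1 - p}.
```

Verifying $E_{p_0}[g_t(p_0)] = 0$ then comes down to using $E_{p_0}[y_t] = p_0$.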
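The Monte Carlo study in problem 1(c) can be sketched as follows. This is a Python/NumPy version rather than the requested Octave program (the structure carries over directly; Octave's `rand` and `hist` would replace `rng.binomial` and `np.histogram`), and the values of `p0`, `n`, and `reps` below are illustrative choices, not part of the assignment.

```python
# Monte Carlo sketch for problem 1(c): show that sqrt(n)*(ybar - p0) is
# approximately N(0, p0*(1-p0)) for Bernoulli(p0) data when n is large.
import numpy as np

def monte_carlo(p0=0.3, n=100, reps=5000, seed=0):
    """Return `reps` simulated draws of sqrt(n) * (ybar - p0)."""
    rng = np.random.default_rng(seed)
    # reps independent samples, each of size n, from Bernoulli(p0)
    y = rng.binomial(1, p0, size=(reps, n))
    return np.sqrt(n) * (y.mean(axis=1) - p0)

if __name__ == "__main__":
    p0 = 0.3
    for n in (10, 100, 1000):
        draws = monte_carlo(p0=p0, n=n, reps=10000, seed=n)
        # np.histogram tabulates the sampling frequencies that the
        # assignment asks to plot (Octave: hist(draws, 25))
        counts, edges = np.histogram(draws, bins=25)
        print(f"n={n:5d}  mean={draws.mean():+.4f}  var={draws.var():.4f}  "
              f"(theory: 0, {p0*(1-p0):.4f})")
```

Note that $\mathrm{Var}[\sqrt{n}(\bar{y}-p_0)] = p_0(1-p_0)$ exactly for every $n$; what the histograms should reveal as $n$ grows is the *shape* converging to the normal density.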

