The joint solution is $\hat\mu = 3.2301$ and $\hat\sigma = 2.9354$. It might not seem obvious, but we can also derive asymptotic standard errors for these estimates by constructing them as method of moments estimators. Observe, first, that the two estimates are based on moment estimators of the probabilities. Let $x_i$ denote one of the 500 observations drawn from the normal distribution, and let $z_i(2.1) = \mathbf{1}[x_i < 2.1]$ and $z_i(3.6) = \mathbf{1}[x_i < 3.6]$ be indicator functions. Then the proportion 0.35 has been obtained as $\bar z(2.1) = \frac{1}{n}\sum_{i=1}^n z_i(2.1)$, and 0.55 is $\bar z(3.6)$. So the two proportions are simply means of functions of the sample observations. Each $z_i(\cdot)$ is a draw from a Bernoulli distribution with success probability $\pi(2.1) = \Phi((2.1-\mu)/\sigma)$ for $z_i(2.1)$ and $\pi(3.6) = \Phi((3.6-\mu)/\sigma)$ for $z_i(3.6)$. Therefore, $E[\bar z(2.1)] = \pi(2.1)$ and $E[\bar z(3.6)] = \pi(3.6)$, and in each case $\mathrm{Var}[\bar z(\cdot)] = \frac{1}{n}\pi(\cdot)(1-\pi(\cdot))$. The covariance of the two sample means is a bit trickier, but we can deduce it from the results of random sampling: $\mathrm{Cov}[\bar z(2.1), \bar z(3.6)] = \frac{1}{n}\mathrm{Cov}[z_i(2.1), z_i(3.6)]$, and, since in random sampling sample moments converge to their population counterparts,
$$\mathrm{Cov}[z_i(2.1), z_i(3.6)] = \mathrm{plim}\left[\frac{1}{n}\sum_{i=1}^n z_i(2.1)\,z_i(3.6)\right] - \pi(2.1)\pi(3.6).$$
But $x_i < 2.1$ implies $x_i < 3.6$, so $z_i(2.1)\,z_i(3.6) = [z_i(2.1)]^2 = z_i(2.1)$. It follows, then, that
$$\mathrm{Cov}[z_i(2.1), z_i(3.6)] = \pi(2.1) - \pi(2.1)\pi(3.6) = \pi(2.1)[1 - \pi(3.6)].$$
Therefore, the asymptotic covariance matrix for the two sample proportions is
$$\mathrm{Asy.Var}[p(2.1), p(3.6)] = \Sigma = \frac{1}{n}\begin{bmatrix} \pi(2.1)(1-\pi(2.1)) & \pi(2.1)(1-\pi(3.6)) \\ \pi(2.1)(1-\pi(3.6)) & \pi(3.6)(1-\pi(3.6)) \end{bmatrix}.$$
Inserting our sample estimates $p(2.1) = 0.35$, $p(3.6) = 0.55$, and $n = 500$ gives the estimated matrix $S$.
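As a check, the joint solution and the estimated covariance matrix of the proportions can be sketched with Python's standard library alone. This is my own illustration, not part of the original solution; the variable names (`p21`, `q21`, `S`, etc.) are mine.

```python
from statistics import NormalDist

n = 500
p21, p36 = 0.35, 0.55  # observed proportions below 2.1 and 3.6

# Invert the two conditions Phi((2.1-mu)/sigma) = 0.35 and
# Phi((3.6-mu)/sigma) = 0.55 via the standard normal quantile function.
q21 = NormalDist().inv_cdf(p21)  # = (2.1 - mu)/sigma
q36 = NormalDist().inv_cdf(p36)  # = (3.6 - mu)/sigma

sigma = (3.6 - 2.1) / (q36 - q21)
mu = 2.1 - sigma * q21
# mu ~ 3.2311 and sigma ~ 2.9355; the text's 3.2301 and 2.9354 differ
# only by rounding in the tabulated quantiles.

# Estimated asymptotic covariance matrix S of the two sample proportions,
# using the Bernoulli variances and covariance pi(2.1)[1 - pi(3.6)].
S = [[p21 * (1 - p21) / n, p21 * (1 - p36) / n],
     [p21 * (1 - p36) / n, p36 * (1 - p36) / n]]
```

The off-diagonal entry is the same $\pi(2.1)[1-\pi(3.6)]/n$ term derived above, which is why `S` is built directly from the two proportions.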
$$\mathrm{Est.Asy.Var}[p(2.1), p(3.6)] = S = \begin{bmatrix} 0.000455 & 0.000315 \\ 0.000315 & 0.000495 \end{bmatrix}.$$
Now, ultimately, our estimates of $\mu$ and $\sigma$ are found as functions of $p(2.1)$ and $p(3.6)$, using the method of moments. The moment equations are
$$m_{2.1} = \frac{1}{n}\sum_{i=1}^n z_i(2.1) - \Phi\left(\frac{2.1-\mu}{\sigma}\right) = 0, \qquad m_{3.6} = \frac{1}{n}\sum_{i=1}^n z_i(3.6) - \Phi\left(\frac{3.6-\mu}{\sigma}\right) = 0.$$
Now, let
$$\Gamma = \begin{bmatrix} \partial m_{2.1}/\partial\mu & \partial m_{2.1}/\partial\sigma \\ \partial m_{3.6}/\partial\mu & \partial m_{3.6}/\partial\sigma \end{bmatrix}$$
and let $G$ be the sample estimate of $\Gamma$. Then, the estimator of the asymptotic covariance matrix of $(\hat\mu, \hat\sigma)$ is $[G'S^{-1}G]^{-1}$. The remaining detail is the derivatives, which are
$$\frac{\partial m_{2.1}}{\partial\mu} = \frac{1}{\sigma}\,\phi\left(\frac{2.1-\mu}{\sigma}\right) \quad\text{and}\quad \frac{\partial m_{2.1}}{\partial\sigma} = \frac{2.1-\mu}{\sigma}\cdot\frac{\partial m_{2.1}}{\partial\mu},$$
and likewise for $m_{3.6}$. Inserting our sample estimates produces the numerical matrix $G$. Finally, multiplying the matrices and computing the necessary inverse produces the estimate of $[G'S^{-1}G]^{-1}$. The asymptotic distribution would be normal, as usual. Based on these results, a 95% confidence interval for...
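The final delta-method step can also be completed numerically. The numeric values of $G$ and of $[G'S^{-1}G]^{-1}$ are not legible in the source, so the sketch below recomputes them from the estimates rather than copying them; the 2×2 matrix helpers and all names are my own.

```python
from math import exp, pi, sqrt

n = 500
p21, p36 = 0.35, 0.55
mu, sigma = 3.2311, 2.9355  # method of moments estimates from above

def phi(z):
    """Standard normal density."""
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Sample estimate G of Gamma: rows are the two moment equations,
# columns are the derivatives with respect to (mu, sigma).
q21 = (2.1 - mu) / sigma
q36 = (3.6 - mu) / sigma
G = [[phi(q21) / sigma, q21 * phi(q21) / sigma],
     [phi(q36) / sigma, q36 * phi(q36) / sigma]]

# Estimated covariance matrix of the two sample proportions.
S = [[p21 * (1 - p21) / n, p21 * (1 - p36) / n],
     [p21 * (1 - p36) / n, p36 * (1 - p36) / n]]

GT = [[G[0][0], G[1][0]], [G[0][1], G[1][1]]]
V = inv2(mul2(mul2(GT, inv2(S)), G))  # [G' S^{-1} G]^{-1}
# V[0][0] and V[1][1] estimate the asymptotic variances of mu-hat and
# sigma-hat; their square roots are the standard errors used for the
# normal-based 95% confidence intervals mentioned in the text.
```

The resulting `V` is symmetric and positive definite, as an estimated covariance matrix must be.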
Spring '10, Dr. Fang
Topics: Normal Distribution, Variance, Probability theory, Maximum likelihood, Wald
