
# Joint Distribution

Theorem 7.3 gives the joint asymptotic distribution of the coefficient estimators. We can use the result to study the covariance between the coefficient estimators. For simplicity, suppose $k = 2$ with no intercept, both regressors are mean zero, and the error is homoskedastic. Let $\sigma_1^2$ and $\sigma_2^2$ be the variances of $x_{1i}$ and $x_{2i}$, and let $\rho$ be their correlation. Then, using the formula for the inversion of a $2 \times 2$ matrix,

$$
\boldsymbol{V}_{\beta}^{0} = \sigma^2 \boldsymbol{Q}_{xx}^{-1}
= \frac{\sigma^2}{\sigma_1^2 \sigma_2^2 \left(1 - \rho^2\right)}
\begin{bmatrix}
\sigma_2^2 & -\rho \sigma_1 \sigma_2 \\
-\rho \sigma_1 \sigma_2 & \sigma_1^2
\end{bmatrix}.
$$

Thus if $x_{1i}$ and $x_{2i}$ are positively correlated ($\rho > 0$) then $\hat{\beta}_1$ and $\hat{\beta}_2$ are negatively correlated (and vice versa).

For illustration, Figure 7.4 displays the probability contours of the joint asymptotic distribution of $\hat{\beta}_1 - \beta_1$ and $\hat{\beta}_2 - \beta_2$ when $\beta_1 = \beta_2 = 0$, $\sigma_1^2 = \sigma_2^2 = \sigma^2 = 1$, and $\rho = 0.5$. The coefficient estimators are negatively correlated since the regressors are positively correlated. This means that if $\hat{\beta}_1$ is unusually negative, it is likely that $\hat{\beta}_2$ is unusually positive, and conversely. It is also unlikely that we will observe both $\hat{\beta}_1$ and $\hat{\beta}_2$ unusually large and of the same sign.

This finding, that the correlation of the regressors is of the opposite sign of the correlation of the coefficient estimators, is sensitive to the assumption of homoskedasticity. If the errors are heteroskedastic then this relationship is not guaranteed.

*Figure 7.3: Density of Normalized OLS Estimator with Error Process (7.12)*

This can be seen through a simple constructed example. Suppose that $x_{1i}$ and $x_{2i}$ only take the values $\{-1, +1\}$, symmetrically, with $P(x_{1i} = x_{2i} = 1) = P(x_{1i} = x_{2i} = -1) = 3/8$ and $P(x_{1i} = 1, x_{2i} = -1) = P(x_{1i} = -1, x_{2i} = 1) = 1/8$. You can check that the regressors are mean zero, have unit variance, and have correlation $0.5$, which is identical to the setting displayed in Figure 7.4. Now suppose that the error is heteroskedastic.
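Before turning to the heteroskedastic case, the "you can check" claims about this constructed regressor distribution can be verified numerically. The following Python sketch (the names are illustrative, not from the text) uses exact rational arithmetic to confirm the regressor moments, and also evaluates the homoskedastic asymptotic variance $\sigma^2 \boldsymbol{Q}_{xx}^{-1}$ at $\sigma^2 = 1$, $\rho = 0.5$ to exhibit the negative off-diagonal:

```python
from fractions import Fraction as F

# Joint distribution of (x1, x2) from the constructed example:
# P(x1 = x2 = 1) = P(x1 = x2 = -1) = 3/8,
# P(x1 = 1, x2 = -1) = P(x1 = -1, x2 = 1) = 1/8.
dist = {(1, 1): F(3, 8), (-1, -1): F(3, 8),
        (1, -1): F(1, 8), (-1, 1): F(1, 8)}

def E(f):
    """Expectation of f(x1, x2) under the discrete distribution above."""
    return sum(p * f(a, b) for (a, b), p in dist.items())

print(E(lambda a, b: a))      # mean of x1: 0
print(E(lambda a, b: a * a))  # variance of x1 (mean is zero): 1
print(E(lambda a, b: a * b))  # correlation (unit variances): 1/2

# Homoskedastic case with sigma^2 = 1: V0 = Q_xx^{-1},
# where Q_xx = [[1, 1/2], [1/2, 1]] and det(Q_xx) = 3/4.
det = F(1) - F(1, 2) ** 2
V0 = [[F(1) / det, -F(1, 2) / det],
      [-F(1, 2) / det, F(1) / det]]
print(V0[0][0], V0[0][1])  # 4/3 -2/3 -- negative off-diagonal, as claimed
```

The negative off-diagonal entry $-2/3$ is exactly $-\rho \sigma_1 \sigma_2 / (\sigma_1^2 \sigma_2^2 (1 - \rho^2))$ evaluated at these parameters, matching the general formula above.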
Specifically, suppose that $E\left[e_i^2 \mid x_{1i} = x_{2i}\right] = \frac{5}{4}$ and $E\left[e_i^2 \mid x_{1i} \neq x_{2i}\right] = \frac{1}{4}$. You can check that $E\left[e_i^2\right] = 1$, $E\left[x_{1i}^2 e_i^2\right] = E\left[x_{2i}^2 e_i^2\right] = 1$, and $E\left[x_{1i} x_{2i} e_i^2\right] = \frac{7}{8}$. Therefore, with $\boldsymbol{\Omega} = E\left[\boldsymbol{x}_i \boldsymbol{x}_i' e_i^2\right]$,

$$
\boldsymbol{V}_{\beta} = \boldsymbol{Q}_{xx}^{-1} \boldsymbol{\Omega} \boldsymbol{Q}_{xx}^{-1}
= \frac{16}{9}
\begin{bmatrix} 1 & -\frac{1}{2} \\ -\frac{1}{2} & 1 \end{bmatrix}
\begin{bmatrix} 1 & \frac{7}{8} \\ \frac{7}{8} & 1 \end{bmatrix}
\begin{bmatrix} 1 & -\frac{1}{2} \\ -\frac{1}{2} & 1 \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix} 1 & \frac{1}{4} \\ \frac{1}{4} & 1 \end{bmatrix}.
$$

Thus the coefficient estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are positively correlated (their correlation is $1/4$). The joint probability contours of their asymptotic distribution are displayed in Figure 7.5, where we can see that the two estimators are positively correlated.
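The heteroskedastic sandwich calculation can be checked the same way. This sketch (again with illustrative names, using exact rationals) recomputes the error moments stated above and then forms $\boldsymbol{Q}_{xx}^{-1} \boldsymbol{\Omega} \boldsymbol{Q}_{xx}^{-1}$:

```python
from fractions import Fraction as F

# Regressor distribution from the example, plus the heteroskedastic
# conditional variance: E[e^2 | x1 = x2] = 5/4, E[e^2 | x1 != x2] = 1/4.
dist = {(1, 1): F(3, 8), (-1, -1): F(3, 8),
        (1, -1): F(1, 8), (-1, 1): F(1, 8)}
s2 = lambda a, b: F(5, 4) if a == b else F(1, 4)

def E(f):
    """Expectation of f(x1, x2) under the discrete distribution above."""
    return sum(p * f(a, b) for (a, b), p in dist.items())

print(E(lambda a, b: s2(a, b)))          # E[e^2] = 1
print(E(lambda a, b: a * a * s2(a, b)))  # E[x1^2 e^2] = 1
print(E(lambda a, b: a * b * s2(a, b)))  # E[x1 x2 e^2] = 7/8

def matmul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Qinv = [[F(4, 3), F(-2, 3)], [F(-2, 3), F(4, 3)]]  # Q_xx^{-1}
Omega = [[F(1), F(7, 8)], [F(7, 8), F(1)]]         # E[x x' e^2]
V = matmul(matmul(Qinv, Omega), Qinv)
print(V[0][0], V[0][1])   # 2/3 1/6
print(V[0][1] / V[0][0])  # correlation: 1/4 (the two variances are equal)
```

Since the two diagonal entries of $\boldsymbol{V}_\beta$ are equal, the ratio of the off-diagonal to the diagonal entry is exactly the correlation $1/4$ quoted in the text.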