# Generated Regressors

In the special case that $\widehat{A} = A(X, Z)$ and $v_i \mid x_i, z_i \sim \mathrm{N}(0, \sigma^2)$, there is a finite sample version of the previous result. Let $W^0$ be the Wald statistic constructed with a homoskedastic variance matrix estimator, and let

$$
F = W^0 / q \tag{12.55}
$$

be the F statistic, where $q = \dim(\beta_2)$.

**Theorem 12.10** Take model (12.50) with $\widehat{A} = A(X, Z)$, $v_i \mid x_i, z_i \sim \mathrm{N}(0, \sigma^2)$, and $\widehat{w}_i = (w_{1i}, \widehat{w}_{2i})$. Under $H_0 : \beta_2 = 0$, t-statistics have exact $\mathrm{N}(0, 1)$ distributions, and the F statistic (12.55) has an exact $F_{q,\, n-k}$ distribution, where $q = \dim(\beta_2)$ and $k = \dim(\beta)$.

To summarize, in the model $y_i = w_{1i}' \beta_1 + w_{2i}' \beta_2 + v_i$ where $w_{2i}$ is not observed but replaced with an estimate $\widehat{w}_{2i}$, conventional significance tests for $H_0 : \beta_2 = 0$ are asymptotically valid without adjustment.

While this theory allows tests of $H_0 : \beta_2 = 0$, it unfortunately does not justify conventional standard errors or confidence intervals. For this, we need to work out the distribution without imposing the simplification $\beta_2 = 0$. This often needs to be worked out case-by-case, or by using methods based on the generalized method of moments to be introduced in Chapter 13. However, in some important examples it is straightforward to work out the asymptotic distribution.

For the remainder of this section we examine the setting where the estimator $\widehat{A}$ takes a least-squares form, so for some $X$ it can be written as $\widehat{A} = (Z'Z)^{-1}(Z'X)$. Such estimators correspond to the multivariate projection model

$$
x_i = A' z_i + u_i \tag{12.56}
$$
$$
\mathrm{E}\left[ z_i u_i' \right] = 0.
$$

This class of estimators directly includes 2SLS and the expectation model described above. We can write the matrix of generated regressors as $\widehat{W} = Z \widehat{A}$ and then (12.52) as

$$
\begin{aligned}
\widehat{\beta} - \beta
&= \left( \widehat{W}' \widehat{W} \right)^{-1} \widehat{W}' \left( \left( W - \widehat{W} \right) \beta + v \right) \\
&= \left( \widehat{A}' Z'Z \widehat{A} \right)^{-1} \widehat{A}' Z' \left( - Z \left( Z'Z \right)^{-1} Z' U \beta + v \right) \\
&= \left( \widehat{A}' Z'Z \widehat{A} \right)^{-1} \widehat{A}' Z' \left( v - U \beta \right) \\
&= \left( \widehat{A}' Z'Z \widehat{A} \right)^{-1} \widehat{A}' Z' e
\end{aligned}
$$

where

$$
e_i = v_i - u_i' \beta = y_i - x_i' \beta. \tag{12.57}
$$

This estimator has the asymptotic distribution
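The algebra above implies that the generated-regressors estimator $\widehat{\beta} = (\widehat{W}'\widehat{W})^{-1}\widehat{W}'y$ with $\widehat{W} = Z\widehat{A}$ is numerically identical to 2SLS. A minimal simulation sketch checks this; the design (dimensions, coefficient values, normal errors) and all variable names are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, l = 500, 2, 3  # observations, regressors, instruments (l >= k); illustrative

# Simulated projection model x_i = A'z_i + u_i as in (12.56); A and beta are arbitrary
A = rng.normal(size=(l, k))
beta = np.array([1.0, -0.5])
Z = rng.normal(size=(n, l))
U = rng.normal(size=(n, k))
X = Z @ A + U
y = (X - U) @ beta + rng.normal(size=n)  # y_i = w_i'beta + v_i with w_i = A'z_i

# Generated regressors: A_hat = (Z'Z)^{-1} Z'X, W_hat = Z A_hat
A_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
W_hat = Z @ A_hat
beta_hat = np.linalg.solve(W_hat.T @ W_hat, W_hat.T @ y)

# 2SLS written directly: (X' P_Z X)^{-1} X' P_Z y
PZX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_2sls = np.linalg.solve(PZX.T @ X, PZX.T @ y)

print(np.allclose(beta_hat, beta_2sls))  # the two estimators coincide
```

The equivalence is exact because $\widehat{W}'\widehat{W} = X' P_Z X$ and $\widehat{W}'y = X' P_Z y$ when $P_Z = Z(Z'Z)^{-1}Z'$ is symmetric and idempotent.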
$$
\sqrt{n} \left( \widehat{\beta} - \beta \right) \xrightarrow{d} \mathrm{N}(0, V)
$$

where

$$
V = \left( A' \mathrm{E}\left[ z_i z_i' \right] A \right)^{-1} \left( A' \mathrm{E}\left[ z_i z_i' e_i^2 \right] A \right) \left( A' \mathrm{E}\left[ z_i z_i' \right] A \right)^{-1}. \tag{12.58}
$$

Under conditional homoskedasticity the covariance matrix simplifies to

$$
V = \left( A' \mathrm{E}\left[ z_i z_i' \right] A \right)^{-1} \mathrm{E}\left[ e_i^2 \right].
$$

An appropriate estimator of $V$ is

$$
\widehat{V} = \left( \frac{1}{n} \widehat{W}' \widehat{W} \right)^{-1} \left( \frac{1}{n} \sum_{i=1}^{n} \widehat{w}_i \widehat{w}_i' \widehat{e}_i^2 \right) \left( \frac{1}{n} \widehat{W}' \widehat{W} \right)^{-1} \tag{12.59}
$$
$$
\widehat{e}_i = y_i - x_i' \widehat{\beta}.
$$

Under the assumption of conditional homoskedasticity this can be simplified as usual. This appears to be the usual covariance matrix estimator, but it is not, because the least-squares residuals $\widehat{v}_i = y_i - \widehat{w}_i' \widehat{\beta}$ have been replaced with $\widehat{e}_i = y_i - x_i' \widehat{\beta}$. This is exactly the substitution made by the 2SLS covariance matrix formula. Indeed, the covariance matrix estimator $\widehat{V}$ precisely equals the estimator (12.42).

**Theorem 12.11** Take model (12.50) and (12.56) with $\mathrm{E}\left[ y_i^4 \right] < \infty$, $\mathrm{E} \left\| z_i \right\|^4 < \infty$, $A' \mathrm{E}\left[ z_i z_i' \right] A > 0$, and $\widehat{A} = (Z'Z)^{-1}(Z'X)$. As $n \to \infty$,

$$
\sqrt{n} \left( \widehat{\beta} - \beta \right) \xrightarrow{d} \mathrm{N}(0, V)
$$

where $V$ is given in (12.58) with $e_i$ defined in (12.57). For $\widehat{V}$ given in (12.59), $\widehat{V} \xrightarrow{p} V$.

Since the parameter estimates are asymptotically normal and the covariance matrix is consistently estimated, standard errors and test statistics constructed from $\widehat{V}$ are asymptotically valid.
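The residual substitution $\widehat{e}_i = y_i - x_i'\widehat{\beta}$ in place of $\widehat{v}_i = y_i - \widehat{w}_i'\widehat{\beta}$ is easy to see numerically. The sketch below computes the sandwich estimator (12.59) both ways and shows the two versions differ; the simulated design and all names are illustrative assumptions, and only the version built from $\widehat{e}_i$ is the valid estimator described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, l = 500, 2, 3  # illustrative dimensions
A = rng.normal(size=(l, k))
beta = np.array([1.0, -0.5])
Z = rng.normal(size=(n, l))
U = rng.normal(size=(n, k))
X = Z @ A + U
y = (X - U) @ beta + rng.normal(size=n)

# Generated-regressors (equivalently 2SLS) estimate
A_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
W_hat = Z @ A_hat
beta_hat = np.linalg.solve(W_hat.T @ W_hat, W_hat.T @ y)

e_hat = y - X @ beta_hat      # correct residuals for (12.59)
v_hat = y - W_hat @ beta_hat  # least-squares residuals: NOT valid here

def sandwich(resid):
    # (W'W/n)^{-1} (sum w_i w_i' r_i^2 / n) (W'W/n)^{-1}, the form of (12.59)
    Q_inv = np.linalg.inv(W_hat.T @ W_hat / n)
    omega = (W_hat * (resid ** 2)[:, None]).T @ W_hat / n
    return Q_inv @ omega @ Q_inv

V_hat = sandwich(e_hat)    # valid estimator of V in (12.58)
V_naive = sandwich(v_hat)  # ignores the generated-regressor error

se = np.sqrt(np.diag(V_hat) / n)  # standard errors for beta_hat
print(V_hat)
print(V_naive)
```

Because $\widehat e_i$ and $\widehat v_i$ differ by $(\widehat w_i - x_i)'\widehat\beta = -u_i'\widehat\beta$ plus a projection term, the two "meat" matrices do not agree, and only the first delivers asymptotically valid standard errors.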