

7.6 Homoskedastic Covariance Matrix Estimation

Theorem 7.3 shows that $\sqrt{n}\left(\widehat{\beta}-\beta\right)$ is asymptotically normal with asymptotic covariance matrix $V_{\beta}$. For asymptotic inference (confidence intervals and tests) we need a consistent estimator of $V_{\beta}$. Under homoskedasticity, $V_{\beta}$ simplifies to $V_{\beta}^{0}=Q_{xx}^{-1}\sigma^{2}$, and in this section we consider the simplified problem of estimating $V_{\beta}^{0}$.

The standard moment estimator of $Q_{xx}$ is $\widehat{Q}_{xx}$ defined in (7.1), and thus an estimator for $Q_{xx}^{-1}$ is $\widehat{Q}_{xx}^{-1}$. Also, the standard estimator of $\sigma^{2}$ is the unbiased estimator $s^{2}$ defined in (4.26). Thus a natural plug-in estimator for $V_{\beta}^{0}=Q_{xx}^{-1}\sigma^{2}$ is $\widehat{V}_{\beta}^{0}=\widehat{Q}_{xx}^{-1}s^{2}$.

Consistency of $\widehat{V}_{\beta}^{0}$ for $V_{\beta}^{0}$ follows from consistency of the moment estimators $\widehat{Q}_{xx}$ and $s^{2}$ and an application of the continuous mapping theorem. Specifically, Theorem 7.1 established that $\widehat{Q}_{xx}\xrightarrow{p}Q_{xx}$, and Theorem 7.4 established $s^{2}\xrightarrow{p}\sigma^{2}$. The function $V_{\beta}^{0}=Q_{xx}^{-1}\sigma^{2}$ is a continuous function of $Q_{xx}$ and $\sigma^{2}$ so long as $Q_{xx}>0$, which holds true under Assumption 7.1.4. It follows by the CMT that

$$\widehat{V}_{\beta}^{0}=\widehat{Q}_{xx}^{-1}s^{2}\xrightarrow{p}Q_{xx}^{-1}\sigma^{2}=V_{\beta}^{0}$$

so that $\widehat{V}_{\beta}^{0}$ is consistent for $V_{\beta}^{0}$, as desired.

Theorem 7.5 Under Assumption 7.1, $\widehat{V}_{\beta}^{0}\xrightarrow{p}V_{\beta}^{0}$ as $n\to\infty$.

It is instructive to notice that Theorem 7.5 does not require the assumption of homoskedasticity. That is, $\widehat{V}_{\beta}^{0}$ is consistent for $V_{\beta}^{0}$ regardless of whether the regression is homoskedastic or heteroskedastic. However, $V_{\beta}^{0}=V_{\beta}=\operatorname{avar}(\widehat{\beta})$ only under homoskedasticity. Thus in the general case, $\widehat{V}_{\beta}^{0}$ is consistent for a well-defined but non-useful object.

7.7 Heteroskedastic Covariance Matrix Estimation

Theorem 7.3 established that the asymptotic covariance matrix of $\sqrt{n}\left(\widehat{\beta}-\beta\right)$ is $V_{\beta}=Q_{xx}^{-1}\Omega Q_{xx}^{-1}$. We now consider estimation of this covariance matrix without imposing homoskedasticity. The standard approach is to use a plug-in estimator which replaces the unknowns with sample moments.

As described in the previous section, a natural estimator for $Q_{xx}^{-1}$ is $\widehat{Q}_{xx}^{-1}$, where $\widehat{Q}_{xx}$ is defined in (7.1). The moment estimator for $\Omega$ is

$$\widehat{\Omega}=\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\widehat{e}_{i}^{2},$$

leading to the plug-in covariance matrix estimator

$$\widehat{V}_{\beta}^{\mathrm{HC0}}=\widehat{Q}_{xx}^{-1}\widehat{\Omega}\widehat{Q}_{xx}^{-1}. \tag{7.19}$$

You can check that $\widehat{V}_{\beta}^{\mathrm{HC0}}=n\widehat{V}_{\widehat{\beta}}^{\mathrm{HC0}}$, where $\widehat{V}_{\widehat{\beta}}^{\mathrm{HC0}}$ is the HC0 covariance matrix estimator introduced in (4.31).
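To make the two estimators concrete, here is a minimal numerical sketch (ours, not from the text) of $\widehat{V}_{\beta}^{0}$ and $\widehat{V}_{\beta}^{\mathrm{HC0}}$ on simulated data. The data-generating process, sample size, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3

# Illustrative heteroskedastic design: error variance depends on the regressor.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])
e = rng.normal(size=n) * np.sqrt(1 + X[:, 1] ** 2)
y = X @ beta + e

# OLS coefficients and residuals.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
ehat = y - X @ beta_hat

# Q_xx hat = (1/n) sum x_i x_i'  (equation (7.1)).
Qxx = X.T @ X / n
Qxx_inv = np.linalg.inv(Qxx)

# Homoskedastic estimator: V0 = Qxx^{-1} s^2, with the unbiased s^2 of (4.26).
s2 = ehat @ ehat / (n - k)
V0 = Qxx_inv * s2

# Robust estimator: Omega hat = (1/n) sum x_i x_i' ehat_i^2,
# V_HC0 = Qxx^{-1} Omega_hat Qxx^{-1}  (equation (7.19)).
Omega_hat = (X * ehat[:, None] ** 2).T @ X / n
V_HC0 = Qxx_inv @ Omega_hat @ Qxx_inv

print(np.sqrt(np.diag(V0) / n))     # standard errors assuming homoskedasticity
print(np.sqrt(np.diag(V_HC0) / n))  # HC0 standard errors
```

Dividing by $n-k$ in `s2` matches the unbiased estimator (4.26), and dividing the diagonal by $n$ converts the asymptotic variance of $\sqrt{n}(\widehat{\beta}-\beta)$ into standard errors for $\widehat{\beta}$. On a heteroskedastic design such as this one, the two sets of standard errors typically disagree, which is the motivation for the robust estimator.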
As shown in Theorem 7.1, $\widehat{Q}_{xx}^{-1}\xrightarrow{p}Q_{xx}^{-1}$, so we just need to verify the consistency of $\widehat{\Omega}$. The key is to replace the squared residual $\widehat{e}_{i}^{2}$ with the squared error $e_{i}^{2}$, and then show that the difference is asymptotically negligible. Specifically, observe that

$$\widehat{\Omega}=\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\widehat{e}_{i}^{2}=\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'e_{i}^{2}+\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\left(\widehat{e}_{i}^{2}-e_{i}^{2}\right).$$

The first term is an average of the i.i.d. random variables $x_{i}x_{i}'e_{i}^{2}$, and therefore by the WLLN converges in probability to its expectation, namely,

$$\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'e_{i}^{2}\xrightarrow{p}E\left[x_{i}x_{i}'e_{i}^{2}\right]=\Omega.$$

Technically, this requires that $\Omega$ has finite elements, which was shown in (7.6). So to establish that $\widehat{\Omega}$ is consistent for $\Omega$ it remains to show that

$$\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\left(\widehat{e}_{i}^{2}-e_{i}^{2}\right)\xrightarrow{p}0. \tag{7.20}$$

There are multiple ways to do this. A reasonably straightforward yet slightly tedious derivation is to start by applying the triangle inequality (B.16) using a matrix norm:

$$\left\|\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\left(\widehat{e}_{i}^{2}-e_{i}^{2}\right)\right\|\le\frac{1}{n}\sum_{i=1}^{n}\left\|x_{i}x_{i}'\right\|\left|\widehat{e}_{i}^{2}-e_{i}^{2}\right|=\frac{1}{n}\sum_{i=1}^{n}\left\|x_{i}\right\|^{2}\left|\widehat{e}_{i}^{2}-e_{i}^{2}\right|. \tag{7.21}$$

Then recalling the expression for the squared residual (7.17), apply the triangle inequality (B.1) and then the Schwarz inequality (B.12) twice:

$$\left|\widehat{e}_{i}^{2}-e_{i}^{2}\right|\le2\left|e_{i}x_{i}'\left(\widehat{\beta}-\beta\right)\right|+\left(\left(\widehat{\beta}-\beta\right)'x_{i}\right)^{2}=2\left|e_{i}\right|\left|x_{i}'\left(\widehat{\beta}-\beta\right)\right|+\left(\left(\widehat{\beta}-\beta\right)'x_{i}\right)^{2}\le2\left|e_{i}\right|\left\|x_{i}\right\|\left\|\widehat{\beta}-\beta\right\|+\left\|x_{i}\right\|^{2}\left\|\widehat{\beta}-\beta\right\|^{2}. \tag{7.22}$$

Combining (7.21) and (7.22), we find

$$\left\|\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\left(\widehat{e}_{i}^{2}-e_{i}^{2}\right)\right\|\le2\left(\frac{1}{n}\sum_{i=1}^{n}\left\|x_{i}\right\|^{3}\left|e_{i}\right|\right)\left\|\widehat{\beta}-\beta\right\|+\left(\frac{1}{n}\sum_{i=1}^{n}\left\|x_{i}\right\|^{4}\right)\left\|\widehat{\beta}-\beta\right\|^{2}=o_{p}(1). \tag{7.23}$$

The expression is $o_{p}(1)$ because $\widehat{\beta}-\beta\xrightarrow{p}0$ and both averages in parentheses are averages of random variables with finite mean under Assumption 7.2 (and are thus $O_{p}(1)$). Indeed, by Hölder's inequality, $E\left[\left\|x_{i}\right\|^{3}\left|e_{i}\right|\right]\le\left(E\left\|x_{i}\right\|^{4}\right)^{3/4}\left(Ee_{i}^{4}\right)^{1/4}<\infty$.
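The negligibility claim (7.20) is easy to see in simulation. The following sketch (again an illustrative design of our own, not from the text) computes the norm of $\frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}'\left(\widehat{e}_{i}^{2}-e_{i}^{2}\right)$ at increasing sample sizes; it should shrink toward zero, consistent with the $o_{p}(1)$ conclusion in (7.23).

```python
import numpy as np

rng = np.random.default_rng(1)

def omega_gap(n):
    """Norm of (1/n) sum x_i x_i' (ehat_i^2 - e_i^2), the term in (7.20)."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    e = rng.normal(size=n) * np.sqrt(1 + X[:, 1] ** 2)  # heteroskedastic errors
    y = X @ np.array([1.0, 2.0]) + e
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    ehat = y - X @ beta_hat
    diff = ehat ** 2 - e ** 2
    M = (X * diff[:, None]).T @ X / n
    return np.linalg.norm(M)

for n in (100, 1_000, 10_000, 100_000):
    print(n, omega_gap(n))  # gap shrinks toward 0 as n grows
```

Because $\|\widehat{\beta}-\beta\|=O_{p}(n^{-1/2})$ while the bracketed averages in (7.23) are $O_{p}(1)$, the printed gap should decay at roughly the $n^{-1/2}$ rate.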