# Bootstrap Variance and Standard Errors

## Asymptotic Normality

We started this chapter by discussing the need for an approximation to the distribution of the OLS estimator $\hat{\beta}$. In Section 7.2 we showed that $\hat{\beta}$ converges in probability to $\beta$. Consistency is a good first step, but by itself does not describe the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theorem can be applied. The steps are as follows. Take equation (7.3) and multiply it by $\sqrt{n}$. This yields the expression

$$
\sqrt{n}\left(\hat{\beta} - \beta\right)
= \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1}
  \left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i e_i \right). \tag{7.5}
$$

This shows that the normalized and centered estimator $\sqrt{n}\left(\hat{\beta} - \beta\right)$ is a function of the sample average $\frac{1}{n} \sum_{i=1}^{n} x_i x_i'$ and the normalized sample average $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i e_i$. Furthermore, the latter has mean zero, so the central limit theorem (CLT, Theorem 6.11) applies. The product $x_i e_i$ is i.i.d. (since the observations $(y_i, x_i)$ are i.i.d.) and mean zero (since $E(x_i e_i) = 0$). Define the $k \times k$ covariance matrix

$$
\Omega = E\left( x_i x_i' e_i^2 \right).
$$

The CLT requires the elements of $\Omega$ to be finite, written $\Omega < \infty$. This requires a strengthening of Assumption 7.1. We state the required conditions here.

**Assumption 7.2**

1. The observations $(y_i, x_i)$, $i = 1, \ldots, n$, are independent and identically distributed.
2. $E\left(y^4\right) < \infty$.
3. $E \left\| x \right\|^4 < \infty$.
4. $Q_{xx} = E\left( x x' \right)$ is positive definite.

Assumption 7.2 implies that $\Omega < \infty$. To see this, take the $j\ell$-th element of $\Omega$, $E\left( x_{ji} x_{\ell i} e_i^2 \right)$. By the expectation inequality (B.29), the $j\ell$-th element of $\Omega$ is bounded by

$$
\left| E\left( x_{ji} x_{\ell i} e_i^2 \right) \right|
\le E\left| x_{ji} x_{\ell i} e_i^2 \right|
= E\left( \left| x_{ji} \right| \left| x_{\ell i} \right| e_i^2 \right).
$$

By two applications of the Cauchy-Schwarz inequality (B.31), this is smaller than

$$
\left( E\left( x_{ji}^2 x_{\ell i}^2 \right) \right)^{1/2} \left( E\left( e_i^4 \right) \right)^{1/2}
\le \left( E\left( x_{ji}^4 \right) \right)^{1/4} \left( E\left( x_{\ell i}^4 \right) \right)^{1/4} \left( E\left( e_i^4 \right) \right)^{1/2}
< \infty,
$$

where the finiteness holds under Assumptions 7.2.2 and 7.2.3. Thus $\Omega < \infty$.
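The decomposition in (7.5) is an exact algebraic identity, not just an approximation, so it can be verified numerically. A minimal sketch in Python/NumPy with simulated data (all variable names and the data-generating process are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 2
beta = np.array([1.0, -0.5])          # true coefficient vector

X = rng.normal(size=(n, k))           # i.i.d. regressors x_i
e = rng.normal(size=n)                # mean-zero errors, so E(x_i e_i) = 0
y = X @ beta + e

# OLS estimate: b = (X'X)^{-1} X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# Left side of (7.5): the normalized and centered estimator
lhs = np.sqrt(n) * (b - beta)

# Right side of (7.5): (1/n sum x_i x_i')^{-1} (1/sqrt(n) sum x_i e_i)
Qhat = (X.T @ X) / n
rhs = np.linalg.solve(Qhat, (X.T @ e) / np.sqrt(n))

# The two sides agree up to floating-point rounding
assert np.allclose(lhs, rhs)
```

Repeating this over many simulated samples and histogramming `lhs` would illustrate the CLT approximation the section goes on to develop: the second factor is an average of i.i.d. mean-zero vectors scaled by $\sqrt{n}$.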
An alternative way to show that the elements of $\Omega$ are finite is by using a matrix norm $\left\| \cdot \right\|$ (see Appendix A.23). Then by the expectation inequality, the Cauchy-Schwarz inequality, and Assumption 7.2,

$$
\left\| \Omega \right\|
\le E\left\| x_i x_i' e_i^2 \right\|
= E\left( \left\| x_i \right\|^2 e_i^2 \right)
\le \left( E \left\| x_i \right\|^4 \right)^{1/2} \left( E\left( e_i^4 \right) \right)^{1/2}
< \infty.
$$
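The same chain of inequalities holds exactly for the sample analogue $\hat{\Omega} = \frac{1}{n}\sum_{i=1}^n x_i x_i' e_i^2$, replacing population moments with sample moments. A small sketch, assuming simulated heteroskedastic data (the error design and names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100_000, 3
X = rng.normal(size=(n, k))
# Heteroskedastic errors: variance depends on the first regressor
e = rng.normal(size=n) * (1.0 + 0.5 * np.abs(X[:, 0]))

# Sample analogue of Omega = E(x_i x_i' e_i^2)
Omega = (X * (e**2)[:, None]).T @ X / n

# Norm bound:  ||Omega|| <= (E ||x||^4)^{1/2} (E e^4)^{1/2},
# evaluated with sample moments (spectral norm on the left)
norm_Omega = np.linalg.norm(Omega, 2)
bound = np.sqrt(np.mean(np.sum(X**2, axis=1) ** 2)) * np.sqrt(np.mean(e**4))

assert norm_Omega <= bound
```

The assertion holds deterministically, not just in large samples: the triangle inequality gives $\|\hat{\Omega}\| \le \frac{1}{n}\sum_i \|x_i\|^2 e_i^2$, and the Cauchy-Schwarz inequality applied to the sample average yields the stated bound.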