
Econometric Terms and Notation

Linear Regression Model

We now consider the linear regression model. Throughout this chapter we maintain the following.

Assumption 4.2 (Linear Regression Model)
The observations $(y_i, x_i)$ satisfy the linear regression equation
$$y_i = x_i'\beta + e_i \tag{4.1}$$
$$E(e_i \mid x_i) = 0. \tag{4.2}$$
The variables have finite second moments $E(y_i^2) < \infty$ and $E\|x_i\|^2 < \infty$, and an invertible design matrix $Q_{xx} = E(x_i x_i') > 0$.

We will consider both the general case of heteroskedastic regression, where the conditional variance $E(e_i^2 \mid x_i) = \sigma^2(x_i) = \sigma_i^2$ is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance is constant. In the latter case we add the following assumption.

Assumption 4.3 (Homoskedastic Linear Regression Model)
In addition to Assumption 4.2,
$$E(e_i^2 \mid x_i) = \sigma^2(x_i) = \sigma^2 \tag{4.3}$$
is independent of $x_i$.

4.5 Mean of Least-Squares Estimator

In this section we show that the OLS estimator is unbiased in the linear regression model. This calculation can be done using either summation notation or matrix notation. We will use both.

First take summation notation. Observe that under (4.1)-(4.2),
$$E(y_i \mid X) = E(y_i \mid x_i) = x_i'\beta. \tag{4.4}$$
The first equality states that the conditional expectation of $y_i$ given $\{x_1, \ldots, x_n\}$ depends only on $x_i$, since the observations are independent across $i$. The second equality is the assumption of a linear conditional mean.

Using definition (3.12), the conditioning theorem (Theorem 2.3), the linearity of expectations, (4.4), and properties of the matrix inverse,
$$
\begin{aligned}
E\left(\widehat{\beta} \mid X\right)
&= E\left( \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \left( \sum_{i=1}^n x_i y_i \right) \,\Big|\, X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} E\left( \sum_{i=1}^n x_i y_i \,\Big|\, X \right) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n E(x_i y_i \mid X) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n x_i E(y_i \mid X) \\
&= \left( \sum_{i=1}^n x_i x_i' \right)^{-1} \sum_{i=1}^n x_i x_i' \beta \\
&= \beta.
\end{aligned}
$$

Now let's show the same result using matrix notation. (4.4) implies
$$
E(y \mid X) = \begin{pmatrix} \vdots \\ E(y_i \mid X) \\ \vdots \end{pmatrix}
= \begin{pmatrix} \vdots \\ x_i'\beta \\ \vdots \end{pmatrix} = X\beta. \tag{4.5}
$$
Similarly,
$$
E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix}
= \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0.
$$
Using $\widehat{\beta} = (X'X)^{-1}(X'y)$, the conditioning theorem, the linearity of expectations, (4.5), and the properties of the matrix inverse,
$$
E\left(\widehat{\beta} \mid X\right) = E\left( (X'X)^{-1} X'y \mid X \right) = (X'X)^{-1} X' E(y \mid X) = (X'X)^{-1} X'X\beta = \beta.
$$

At the risk of belaboring the derivation, another way to calculate the same result is as follows. Insert $y = X\beta + e$ into the formula for $\widehat{\beta}$ to obtain
$$
\widehat{\beta} = (X'X)^{-1} X'(X\beta + e) = (X'X)^{-1} X'X\beta + (X'X)^{-1} X'e = \beta + (X'X)^{-1} X'e. \tag{4.6}
$$
This is a useful linear decomposition of the estimator $\widehat{\beta}$ into the true parameter $\beta$ and the stochastic component $(X'X)^{-1} X'e$. Once again, we can calculate that
$$
E\left(\widehat{\beta} - \beta \mid X\right) = E\left( (X'X)^{-1} X'e \mid X \right) = (X'X)^{-1} X' E(e \mid X) = 0.
$$

Regardless of the method, we have shown that $E(\widehat{\beta} \mid X) = \beta$. We have shown the following theorem.

Theorem 4.1 (Mean of Least-Squares Estimator)
In the linear regression model (Assumption 4.2) and under i.i.d. sampling (Assumption 4.1),
$$E\left(\widehat{\beta} \mid X\right) = \beta. \tag{4.7}$$

Equation (4.7) says that the estimator $\widehat{\beta}$ is unbiased for $\beta$, conditional on $X$. This means that the conditional distribution of $\widehat{\beta}$ is centered at $\beta$. By "conditional on $X$" we mean that the distribution is centered at $\beta$ for any realization of the regressor matrix $X$. In conditional models, we simply refer to this as saying "$\widehat{\beta}$ is unbiased for $\beta$".
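As a numerical companion to Theorem 4.1, the following is a minimal simulation sketch (in Python with NumPy, which is not part of the text): it holds a single realization of the regressor matrix $X$ fixed, redraws the errors $e$ many times with $E(e_i \mid x_i) = 0$, and averages the resulting OLS estimates. The names and values used here (n, k, beta, n_reps, the particular variance function) are illustrative choices, not notation from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 100, 3                             # sample size and number of regressors (illustrative)
beta = np.array([1.0, -2.0, 0.5])         # true coefficient vector (illustrative)

# Fix one realization of the regressor matrix X: everything below conditions on this X.
X = rng.normal(size=(n, k))

n_reps = 20_000
estimates = np.empty((n_reps, k))
for r in range(n_reps):
    # Heteroskedastic errors are allowed; only E(e_i | x_i) = 0 is needed.
    sigma_i = np.sqrt(0.5 + X[:, 0] ** 2)  # an arbitrary sigma^2(x_i) for illustration
    e = sigma_i * rng.normal(size=n)
    y = X @ beta + e
    # OLS estimate: solve (X'X) b = X'y, i.e. b = (X'X)^{-1} X'y
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print("true beta:           ", beta)
print("average OLS estimate:", estimates.mean(axis=0))  # close to beta
```

Because $X$ is held fixed across replications, the Monte Carlo average approximates the conditional mean $E(\widehat{\beta} \mid X)$ rather than the unconditional mean, which matches the conditioning in (4.7).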