Gauss-Markov Theorem

Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as
$$\widetilde{\beta} = A'y$$
where $A$ is an $n \times k$ function of $X$. As noted before, the least-squares estimator is the special case obtained by setting $A = X(X'X)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem,[1] which we now present, says that the least-squares estimator is the best choice among linear unbiased estimators when the errors are homoskedastic, in the sense that the least-squares estimator has the smallest variance among all unbiased linear estimators.

To see this, since $E(y \mid X) = X\beta$, for any linear estimator $\widetilde{\beta} = A'y$ we have
$$E(\widetilde{\beta} \mid X) = A' E(y \mid X) = A'X\beta,$$
so $\widetilde{\beta}$ is unbiased if (and only if) $A'X = I_k$. Furthermore, we saw in (4.9) that
$$\mathrm{var}(\widetilde{\beta} \mid X) = \mathrm{var}(A'y \mid X) = A'DA = A'A\,\sigma^2,$$
the last equality using the homoskedasticity assumption $D = I_n \sigma^2$. The "best" unbiased linear estimator is obtained by finding the matrix $A_0$ satisfying $A_0'X = I_k$ such that $A_0'A_0$ is minimized in the positive definite sense, in that for any other matrix $A$ satisfying $A'X = I_k$, the difference $A'A - A_0'A_0$ is positive semi-definite.

Theorem 4.4 (Gauss-Markov). In the homoskedastic linear regression model (Assumption 4.3) with i.i.d. sampling (Assumption 4.1), if $\widetilde{\beta}$ is a linear unbiased estimator of $\beta$, then
$$\mathrm{var}(\widetilde{\beta} \mid X) \geq \sigma^2 \left(X'X\right)^{-1}.$$

The Gauss-Markov theorem provides a lower bound on the variance matrix of unbiased linear estimators under the assumption of homoskedasticity. It says that no unbiased linear estimator can have a variance matrix smaller (in the positive definite sense) than $\sigma^2 (X'X)^{-1}$. Since the variance of the OLS estimator is exactly equal to this bound, the OLS estimator is efficient in the class of linear unbiased estimators. This gives rise to the description of OLS as BLUE, standing for "best linear unbiased estimator."

[1] Named after the mathematicians Carl Friedrich Gauss and Andrey Markov.
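The bound in Theorem 4.4 can be checked numerically. The sketch below (in Python with NumPy; the weighted estimator and all variable names are illustrative choices, not from the text) constructs the OLS weight matrix $A_0 = X(X'X)^{-1}$ and an alternative linear unbiased estimator, verifies that both satisfy $A'X = I_k$, and confirms that the difference of their variance matrices is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
sigma2 = 2.0  # hypothetical error variance for the comparison

# OLS: A0 = X (X'X)^{-1}, with variance sigma^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
A0 = X @ XtX_inv

# An alternative linear unbiased estimator: a weighted least-squares
# form A = W X (X'WX)^{-1} with arbitrary positive weights w
# (an illustrative choice; any A with A'X = I_k would do).
w = rng.uniform(0.5, 2.0, size=n)
A = (w[:, None] * X) @ np.linalg.inv(X.T @ (w[:, None] * X))

# Both satisfy the unbiasedness condition A'X = I_k
assert np.allclose(A0.T @ X, np.eye(k))
assert np.allclose(A.T @ X, np.eye(k))

# Variance matrices under homoskedasticity: sigma^2 A'A
V0 = sigma2 * (A0.T @ A0)  # equals sigma^2 (X'X)^{-1}
V1 = sigma2 * (A.T @ A)

# Gauss-Markov: V1 - V0 should be positive semi-definite,
# i.e. all eigenvalues nonnegative (up to floating-point error)
min_eig = np.linalg.eigvalsh(V1 - V0).min()
assert min_eig >= -1e-8
```

Any other unbiased choice of weights would give the same qualitative result: the eigenvalues of $V_1 - V_0$ are nonnegative, so OLS attains the lower bound.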
This is an efficiency justification for the least-squares estimator. The justification is limited because the class of models is restricted to homoskedastic linear regression and the class of potential estimators is restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory, as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.

We complete this section with a proof of the Gauss-Markov theorem. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The estimator $A'y$ is unbiased for $\beta$ and has variance $A'A\,\sigma^2$. Since the least-squares estimator is unbiased and has variance $(X'X)^{-1}\sigma^2$, it is sufficient to show that the difference in the two variance matrices is positive semi-definite, or
$$A'A - \left(X'X\right)^{-1} \geq 0. \qquad (4.11)$$
Set $C = A - X(X'X)^{-1}$. Note that $X'C = 0$. Then we calculate that
$$
\begin{aligned}
A'A - \left(X'X\right)^{-1}
&= \left(C + X(X'X)^{-1}\right)'\left(C + X(X'X)^{-1}\right) - \left(X'X\right)^{-1} \\
&= C'C + C'X(X'X)^{-1} + (X'X)^{-1}X'C + (X'X)^{-1}X'X(X'X)^{-1} - \left(X'X\right)^{-1} \\
&= C'C \geq 0.
\end{aligned}
$$
The final inequality states that the matrix $C'C$ is positive semi-definite, which holds since for any vector $a$, $a'C'Ca = \|Ca\|^2 \geq 0$.
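The algebra in this proof can also be verified numerically. A minimal sketch (Python with NumPy; the weight-based construction of $A$ is an arbitrary illustration of an estimator satisfying $A'X = I_k$, not a choice made in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 2
X = rng.normal(size=(n, k))
XtX_inv = np.linalg.inv(X.T @ X)

# An arbitrary A with A'X = I_k: here a weighted form W X (X'WX)^{-1}
# (illustrative; the proof applies to any such A)
w = rng.uniform(0.2, 3.0, size=n)
A = (w[:, None] * X) @ np.linalg.inv(X.T @ (w[:, None] * X))
assert np.allclose(A.T @ X, np.eye(k))

# Step 1 of the proof: C = A - X(X'X)^{-1} is orthogonal to X
C = A - X @ XtX_inv
assert np.allclose(X.T @ C, np.zeros((k, k)))

# Step 2: the variance gap A'A - (X'X)^{-1} collapses to C'C ...
gap = A.T @ A - XtX_inv
assert np.allclose(gap, C.T @ C)

# ... which is positive semi-definite (eigenvalues nonnegative
# up to floating-point error)
assert np.linalg.eigvalsh(gap).min() >= -1e-10
```

The cross terms $C'X(X'X)^{-1}$ and $(X'X)^{-1}X'C$ vanish precisely because $X'C = 0$, which is what the second assertion checks.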