
shown in (4.11), as required.

## 4.9 Generalized Least Squares

Take the linear regression model in matrix format (4.12). Consider a generalized situation where the observation errors are possibly correlated and/or heteroskedastic. Specifically, suppose that

$$\mathbb{E}(e \mid X) = 0 \tag{4.13}$$

$$\operatorname{var}(e \mid X) = \Omega \tag{4.14}$$

for some $n \times n$ covariance matrix $\Omega$, possibly a function of $X$. This includes the i.i.d. sampling framework where $\Omega = D$ as defined in (4.8), but allows for non-diagonal covariance matrices as well. As a covariance matrix, $\Omega$ is necessarily symmetric and positive semi-definite.

Under these assumptions, by similar arguments we can calculate the mean and variance of the OLS estimator:

$$\mathbb{E}\left(\widehat{\beta} \mid X\right) = \beta \tag{4.15}$$

$$\operatorname{var}\left(\widehat{\beta} \mid X\right) = \left(X'X\right)^{-1}\left(X'\Omega X\right)\left(X'X\right)^{-1} \tag{4.16}$$

(see Exercise 4.5).

We have an analog of the Gauss-Markov Theorem.

**Theorem 4.5** If (4.13)-(4.14) hold and $\widetilde{\beta}$ is a linear unbiased estimator of $\beta$, then

$$\operatorname{var}\left(\widetilde{\beta} \mid X\right) \geq \left(X'\Omega^{-1}X\right)^{-1}.$$

We leave the proof for Exercise 4.6. The theorem provides a lower bound on the variance matrix of linear unbiased estimators. This bound differs from the variance matrix of the OLS estimator stated in (4.16) except when $\Omega = I_n \sigma^2$. This suggests that we may be able to improve on the OLS estimator.

This is indeed the case when $\Omega$ is known up to scale. That is, suppose that $\Omega = c^2 \Sigma$ where $c^2 > 0$ is real and $\Sigma$ is $n \times n$ and known. Take the linear model (4.12) and pre-multiply by $\Sigma^{-1/2}$. This produces the equation

$$\widetilde{y} = \widetilde{X}\beta + \widetilde{e}$$

where $\widetilde{y} = \Sigma^{-1/2}y$, $\widetilde{X} = \Sigma^{-1/2}X$, and $\widetilde{e} = \Sigma^{-1/2}e$. Consider OLS estimation of $\beta$ in this equation:

$$
\widetilde{\beta}_{\mathrm{gls}}
= \left(\widetilde{X}'\widetilde{X}\right)^{-1}\widetilde{X}'\widetilde{y}
= \left(\left(\Sigma^{-1/2}X\right)'\left(\Sigma^{-1/2}X\right)\right)^{-1}\left(\Sigma^{-1/2}X\right)'\left(\Sigma^{-1/2}y\right)
= \left(X'\Sigma^{-1}X\right)^{-1}X'\Sigma^{-1}y. \tag{4.17}
$$

This is called the **Generalized Least Squares (GLS)** estimator of $\beta$ and was introduced by Aitken (1935). Since $\Omega = c^2\Sigma$, the scale $c^2$ cancels in (4.17), so the estimator can equivalently be written with $\Omega^{-1}$ in place of $\Sigma^{-1}$. You can calculate that

$$\mathbb{E}\left(\widetilde{\beta}_{\mathrm{gls}} \mid X\right) = \beta \tag{4.18}$$

$$\operatorname{var}\left(\widetilde{\beta}_{\mathrm{gls}} \mid X\right) = \left(X'\Omega^{-1}X\right)^{-1}. \tag{4.19}$$

This shows that the GLS estimator is unbiased and has a covariance matrix which equals the lower bound from Theorem 4.5. This shows that the lower bound is sharp when $\Omega$ is known. GLS is thus efficient in the class of linear unbiased estimators.
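The algebra behind the GLS estimator can be checked numerically: running OLS on the transformed equation $\widetilde{y} = \widetilde{X}\beta + \widetilde{e}$ reproduces the closed-form expression $(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$. The sketch below uses a synthetic heteroskedastic design (all data and variable names are illustrative, not from the text); $\Sigma$ is taken diagonal so $\Sigma^{-1/2}$ is a simple reweighting.

```python
# Numerical check of (4.17): OLS on the transformed data
# (Sigma^{-1/2} y, Sigma^{-1/2} X) equals the direct GLS formula
# (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y.  Synthetic illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])

# Heteroskedastic errors: Sigma diagonal with known variances.
sigma2 = 0.5 + rng.uniform(size=n)
e = rng.normal(size=n) * np.sqrt(sigma2)
y = X @ beta + e

# Direct GLS: (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y.
Sigma_inv = np.diag(1.0 / sigma2)
beta_gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)

# Transformed regression: pre-multiply by Sigma^{-1/2}, then run OLS.
w = 1.0 / np.sqrt(sigma2)          # Sigma^{-1/2} for diagonal Sigma
Xt, yt = X * w[:, None], y * w
beta_ols_transformed, *_ = np.linalg.lstsq(Xt, yt, rcond=None)

assert np.allclose(beta_gls, beta_ols_transformed)
```

For diagonal $\Sigma$ this is exactly weighted least squares, with weights equal to the inverse standard deviations of the errors.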
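Theorem 4.5's lower bound can also be checked numerically: the difference between the OLS sandwich variance (4.16) and the GLS variance (4.19) should be positive semi-definite. The design below is synthetic and chosen for illustration under the assumption of a diagonal (heteroskedastic) $\Omega$.

```python
# Sketch check of Theorem 4.5: the OLS variance (4.16) minus the
# GLS variance (X' Omega^{-1} X)^{-1} from (4.19) is positive
# semi-definite.  Synthetic design; illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

# A heteroskedastic (diagonal) Omega; any valid covariance works.
omega = 0.2 + rng.uniform(size=n) ** 2
Omega = np.diag(omega)

XtX_inv = np.linalg.inv(X.T @ X)
V_ols = XtX_inv @ X.T @ Omega @ X @ XtX_inv          # (4.16)
V_gls = np.linalg.inv(X.T @ np.diag(1.0 / omega) @ X)  # (4.19)

# V_ols - V_gls should have no negative eigenvalues (up to rounding).
eigvals = np.linalg.eigvalsh(V_ols - V_gls)
assert eigvals.min() > -1e-10
```

When `omega` is constant the two matrices coincide and the difference is exactly zero, matching the remark that the bound equals (4.16) only when $\Omega = I_n\sigma^2$.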