
Homoskedasticity and Heteroskedasticity

4 Covariance Matrix Estimation Under Heteroskedasticity
In the previous section we showed that the classic covariance matrix estimator can be
highly biased if homoskedasticity fails. In this section we show how to construct covariance matrix
estimators which do not require homoskedasticity.
Recall that the general form for the covariance matrix is
$$ V_{\widehat{\beta}} = \left(X'X\right)^{-1} \left(X'DX\right) \left(X'X\right)^{-1} $$
with $D$ defined in (4.8). This depends on the unknown matrix $D$, which we can write as
$$ D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \mathbb{E}\left(ee' \mid X\right) = \mathbb{E}\left(\widetilde{D} \mid X\right) $$
where $\widetilde{D} = \mathrm{diag}\left(e_1^2, \ldots, e_n^2\right)$. Thus $\widetilde{D}$ is a conditionally unbiased estimator for $D$. If the squared
errors $e_i^2$ were observable, we could construct an unbiased estimator for $V_{\widehat{\beta}}$ as
$$ \widehat{V}^{\mathrm{ideal}}_{\widehat{\beta}} = \left(X'X\right)^{-1} \left(X'\widetilde{D}X\right) \left(X'X\right)^{-1} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' e_i^2\right) \left(X'X\right)^{-1}. $$
Indeed,
$$ \begin{aligned}
\mathbb{E}\left(\widehat{V}^{\mathrm{ideal}}_{\widehat{\beta}} \mid X\right)
&= \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \, \mathbb{E}\left(e_i^2 \mid X\right)\right) \left(X'X\right)^{-1} \\
&= \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \sigma_i^2\right) \left(X'X\right)^{-1} \\
&= \left(X'X\right)^{-1} \left(X'DX\right) \left(X'X\right)^{-1} \\
&= V_{\widehat{\beta}}
\end{aligned} $$
verifying that $\widehat{V}^{\mathrm{ideal}}_{\widehat{\beta}}$ is unbiased for $V_{\widehat{\beta}}$.
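The unbiasedness argument above is purely algebraic: substituting the conditional means $\sigma_i^2$ for the squared errors reproduces the sandwich form exactly. A minimal numpy sketch of this identity, using an arbitrary made-up design matrix and hypothetical conditional variances:

```python
import numpy as np

# Illustrative check: replacing e_i^2 by sigma_i^2 in the "ideal" estimator
# reproduces (X'X)^{-1} (X'DX) (X'X)^{-1} exactly.  X and sigma2 are made up.
rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
sigma2 = rng.uniform(0.5, 2.0, size=n)   # hypothetical conditional variances
D = np.diag(sigma2)

XtX_inv = np.linalg.inv(X.T @ X)
# Summation form: (X'X)^{-1} (sum_i x_i x_i' sigma_i^2) (X'X)^{-1}
V_sum = XtX_inv @ ((X * sigma2[:, None]).T @ X) @ XtX_inv
# Matrix form: (X'X)^{-1} (X'DX) (X'X)^{-1}
V_mat = XtX_inv @ X.T @ D @ X @ XtX_inv
assert np.allclose(V_sum, V_mat)
```

The two expressions coincide because $\sum_i x_i x_i' \sigma_i^2 = X'DX$ when $D$ is diagonal.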
Since the errors $e_i^2$ are unobserved, $\widehat{V}^{\mathrm{ideal}}_{\widehat{\beta}}$ is not a feasible estimator. However, we can replace
the errors $e_i$ with the least-squares residuals $\widehat{e}_i$. Making this substitution we obtain the estimator
$$ \widehat{V}^{\mathrm{HC0}}_{\widehat{\beta}} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \widehat{e}_i^2\right) \left(X'X\right)^{-1}. \tag{4.31} $$
The label "HC" refers to "heteroskedasticity-consistent". The label "HC0" refers to this being the
baseline heteroskedasticity-consistent covariance matrix estimator.
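As a concrete illustration, the HC0 sandwich in (4.31) can be computed directly with numpy. This is a sketch, not a reference implementation; the function and variable names are illustrative:

```python
import numpy as np

def hc0_cov(X, y):
    """Sketch of the HC0 estimator (4.31): the sandwich
    (X'X)^{-1} (sum_i x_i x_i' ehat_i^2) (X'X)^{-1}
    built from the least-squares residuals."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                 # OLS coefficient estimates
    resid = y - X @ beta                     # least-squares residuals ehat_i
    meat = (X * resid[:, None] ** 2).T @ X   # sum_i x_i x_i' ehat_i^2
    return XtX_inv @ meat @ XtX_inv
```

Robust standard errors are then the square roots of the diagonal of the returned matrix.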
We know, however, that $\widehat{e}_i^2$ is biased towards zero (recall equation (4.22)). To estimate the
variance $\sigma^2$ the unbiased estimator $s^2$ scales the moment estimator by $n/(n-k)$. Making the
same adjustment we obtain the estimator
$$ \widehat{V}^{\mathrm{HC1}}_{\widehat{\beta}} = \left(\frac{n}{n-k}\right) \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \widehat{e}_i^2\right) \left(X'X\right)^{-1}. \tag{4.32} $$
While the scaling by $n/(n-k)$ is ad hoc, HC1 is often recommended over the unscaled HC0
estimator.
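Since (4.32) differs from (4.31) only by the scalar $n/(n-k)$, the code change is a one-line rescaling. A hedged sketch (names are illustrative):

```python
import numpy as np

def hc1_cov(X, y):
    """Sketch of HC1 (4.32): the HC0 sandwich rescaled by n/(n-k),
    mirroring the degrees-of-freedom correction used by s^2."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    resid = y - X @ (XtX_inv @ X.T @ y)       # least-squares residuals
    meat = (X * resid[:, None] ** 2).T @ X    # sum_i x_i x_i' ehat_i^2
    return (n / (n - k)) * (XtX_inv @ meat @ XtX_inv)
```

Because $n/(n-k) > 1$, HC1 standard errors are always slightly larger than HC0 standard errors in finite samples.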
Alternatively, we could use the standardized residuals $\bar{e}_i$ or the prediction errors $\widetilde{e}_i$, yielding the
estimators
$$ \widehat{V}^{\mathrm{HC2}}_{\widehat{\beta}} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \bar{e}_i^2\right) \left(X'X\right)^{-1} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} (1-h_{ii})^{-1} x_i x_i' \widehat{e}_i^2\right) \left(X'X\right)^{-1} \tag{4.33} $$
and
$$ \widehat{V}^{\mathrm{HC3}}_{\widehat{\beta}} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} x_i x_i' \widetilde{e}_i^2\right) \left(X'X\right)^{-1} = \left(X'X\right)^{-1} \left(\sum_{i=1}^{n} (1-h_{ii})^{-2} x_i x_i' \widehat{e}_i^2\right) \left(X'X\right)^{-1}. \tag{4.34} $$
These are often called the "HC2" and "HC3" estimators, as labeled.
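Both (4.33) and (4.34) reweight each squared residual by a power of $(1-h_{ii})^{-1}$, where $h_{ii}$ is the $i$-th leverage value (the $i$-th diagonal element of $X(X'X)^{-1}X'$), so they can share one sketch. Function and variable names here are illustrative:

```python
import numpy as np

def hc_cov(X, y, power=1):
    """Sketch of HC2 (power=1, eq. 4.33) and HC3 (power=2, eq. 4.34):
    weight each squared residual ehat_i^2 by (1 - h_ii)^{-power},
    where h_ii is the leverage value, the i-th diagonal of X (X'X)^{-1} X'."""
    XtX_inv = np.linalg.inv(X.T @ X)
    resid = y - X @ (XtX_inv @ X.T @ y)            # least-squares residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)    # leverage values h_ii
    w = resid ** 2 / (1.0 - h) ** power            # reweighted squared residuals
    meat = (X * w[:, None]).T @ X                  # sum_i (1-h_ii)^{-p} x_i x_i' ehat_i^2
    return XtX_inv @ meat @ XtX_inv
```

Since $0 < h_{ii} < 1$, we have $(1-h_{ii})^{-2} \ge (1-h_{ii})^{-1} \ge 1$, so the HC3 variance estimates are at least as large as HC2, which in turn are at least as large as HC0.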
The four estimators HC0, HC1, HC2 and HC3 are collectively called robust, heteroskedasticity-consistent, or heteroskedasticity-robust covariance matrix estimators. The HC0 estimator was
first developed by Eicker (1963) and introduced to econometrics by White (1980), and is sometimes
called the Eicker-White or White covariance matrix estimator. The degree-of-freedom adjustment in HC1 was recommended by Hinkley (1977), and is the default robust covariance matrix
estimator implemented in Stata. It is implemented by the ", r" option, for example by a regression
executed with the command "reg y x, r". In applied econometric practice, this is currently the
most popular covariance matrix estimator. The HC2 estimator was introduced by Horn, Horn and
Duncan (1975) (and is implemented using the vce(hc2) option in Stata). The HC3 estimator was
derived by MacKinnon and White (1985) from the jackknife principle (see Section 10.3), and by
Andrews (1991a) based on the principle of leave-one-out cross-validation (and is implemented using
the vce(hc3) option in Stata).