
# Econometric Methods

Intuitively, $m(x)$ is the mean of $y$ for the idealized subpopulation where the conditioning variables
are fixed at $x$. This is idealized since $x$ is continuously distributed, so this subpopulation is infinitely
small.
⁹Here, experience is defined as potential labor market experience, equal to age − education − 6.
CHAPTER 2. CONDITIONAL EXPECTATION AND PROJECTION 20
This definition (2.4) is appropriate when the conditional density (2.3) is well defined. However,
the conditional mean $m(x)$ exists quite generally. In Theorem 2.13 in Section 2.34 we show that
$m(x)$ exists so long as $\mathbb{E}|y| < \infty$.
In Figure 2.6 the CEF of log(wage) given experience is plotted as the solid line. We can see
that the CEF is a smooth but nonlinear function. The CEF is initially increasing in experience,
flattens out around experience = 30, and then decreases for high levels of experience.
## 2.7 Law of Iterated Expectations
An extremely useful tool from probability theory is the law of iterated expectations. An
important special case is known as the Simple Law.
**Theorem 2.1 (Simple Law of Iterated Expectations).** If $\mathbb{E}|y| < \infty$ then for any random vector $x$,
$$\mathbb{E}\left(\mathbb{E}(y \mid x)\right) = \mathbb{E}(y).$$
The simple law states that the expectation of the conditional expectation is the unconditional
expectation. In other words, the average of the conditional averages is the unconditional average.
When $x$ is discrete
$$\mathbb{E}\left(\mathbb{E}(y \mid x)\right) = \sum_{j=1}^{\infty} \mathbb{E}(y \mid x_j)\,\mathbb{P}(x = x_j)$$
and when $x$ is continuous
$$\mathbb{E}\left(\mathbb{E}(y \mid x)\right) = \int_{\mathbb{R}^k} \mathbb{E}(y \mid x)\, f_x(x)\,dx.$$
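The discrete form of the Simple Law can be checked numerically. The sketch below uses hypothetical support points and probabilities (the values of `p` and `m` are illustrative, not from the text): the weighted sum of conditional means is compared against a Monte Carlo estimate of the unconditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete example: x takes three values x_1, x_2, x_3.
p = np.array([0.5, 0.3, 0.2])   # P(x = x_j), assumed probabilities
m = np.array([1.0, 2.0, 4.0])   # E(y | x = x_j), assumed conditional means

# Simple Law, discrete case: E(E(y|x)) = sum_j E(y | x_j) P(x = x_j)
lhs = np.dot(m, p)

# Monte Carlo estimate of the unconditional mean E(y):
# draw x from its distribution, then y = m(x) + mean-zero noise.
n = 1_000_000
x = rng.choice(3, size=n, p=p)
y = m[x] + rng.normal(size=n)

print(lhs, y.mean())  # the two should agree up to simulation error
```

Here `np.dot(m, p)` is exactly the sum $\sum_j \mathbb{E}(y \mid x_j)\,\mathbb{P}(x = x_j)$, and the sample mean of the simulated $y$ approximates $\mathbb{E}(y)$.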
Going back to our investigation of average log wages for men and women, the simple law states
that
$$\mathbb{E}(\log(wage) \mid sex = man)\,\mathbb{P}(sex = man) + \mathbb{E}(\log(wage) \mid sex = woman)\,\mathbb{P}(sex = woman) = \mathbb{E}(\log(wage)).$$
Or numerically,
$$3.05 \times 0.57 + 2.79 \times 0.43 = 2.92.$$
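As a sanity check, the two-group decomposition can be replicated with simulated data. The sketch below generates log wages whose group means and group shares loosely match the text's figures (3.05, 2.79, 0.57, 0.43); the noise scale is an arbitrary assumption. The decomposed mean matches the overall mean exactly, because it is the same sum regrouped.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Simulated sample loosely matching the text's figures:
# P(man) = 0.57, E(log wage | man) = 3.05, E(log wage | woman) = 2.79.
man = rng.random(n) < 0.57
logwage = np.where(man, 3.05, 2.79) + rng.normal(scale=0.5, size=n)

# Weighted sum of conditional (group) averages ...
decomposed = (logwage[man].mean() * man.mean()
              + logwage[~man].mean() * (1 - man.mean()))
# ... versus the unconditional average.
overall = logwage.mean()

print(decomposed, overall)  # identical up to floating-point error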
The general law of iterated expectations allows two sets of conditioning variables.
**Theorem 2.2 (Law of Iterated Expectations).** If $\mathbb{E}|y| < \infty$ then for any random vectors $x_1$ and $x_2$,
$$\mathbb{E}\left(\mathbb{E}(y \mid x_1, x_2) \mid x_1\right) = \mathbb{E}(y \mid x_1).$$
Notice the way the law is applied. The inner expectation conditions on $x_1$ and $x_2$, while
the outer expectation conditions only on $x_1$. The iterated expectation yields the simple answer
$\mathbb{E}(y \mid x_1)$, the expectation conditional on $x_1$ alone. Sometimes we phrase this as: "The smaller
information set wins."
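The general law can also be checked numerically with two discrete conditioning variables. The sketch below uses an assumed data-generating process (binary $x_1$, $x_2$ and a linear conditional mean, all illustrative): within each $x_1$ stratum, averaging the $(x_1, x_2)$-cell means with weights $\mathbb{P}(x_2 = b \mid x_1 = a)$ reproduces the direct average of $y$ given $x_1$ alone.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Assumed DGP: x1 and x2 each binary and independent, y depends on both.
x1 = rng.integers(0, 2, size=n)
x2 = rng.integers(0, 2, size=n)
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

for a in (0, 1):
    n_a = (x1 == a).sum()
    # Outer expectation: average the inner cell means E(y | x1=a, x2=b)
    # with weights P(x2 = b | x1 = a).
    inner = 0.0
    for b in (0, 1):
        cell = (x1 == a) & (x2 == b)
        inner += y[cell].mean() * cell.sum() / n_a
    # Direct conditional mean E(y | x1 = a), conditioning on x1 alone.
    direct = y[x1 == a].mean()
    print(a, inner, direct)  # the pair agrees: the smaller set wins
```

In sample terms the iterated average again collapses algebraically to the direct average, mirroring $\mathbb{E}(\mathbb{E}(y \mid x_1, x_2) \mid x_1) = \mathbb{E}(y \mid x_1)$.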