
Solving for Least Squares with Multiple Regressors

.7 Joint Normality and Linear Regression

Suppose the variables (y, x) are jointly normally distributed. Consider the best linear predictor of y given x,

y = x'β + α + e.

By the properties of the best linear predictor, E(xe) = 0 and E(e) = 0, so x and e are uncorrelated. Since (e, x) is an affine transformation of the normal vector (y, x), it follows that (e, x) is jointly normal (Theorem 5.4). Since (e, x) is jointly normal and uncorrelated, the two are independent (Theorem 5.5). Independence implies that E(e | x) = E(e) = 0 and E(e² | x) = E(e²) = σ², which are the properties of a homoskedastic linear CEF. We have shown that when (y, x) are jointly normally distributed, they satisfy a normal linear CEF

y = x'β + α + e,

where e ~ N(0, σ²) is independent of x. This is a classical motivation for the linear regression model.
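
As a quick numerical check of this argument, the sketch below (a minimal illustration assuming NumPy; the intercept 2.0, slope 1.5, and error scale 0.5 are hypothetical parameters, not from the text) simulates a jointly normal pair (y, x), computes the best linear predictor by least squares, and verifies that the residual's conditional mean stays near 0 and its conditional variance stays roughly constant across the range of x, as the homoskedastic normal CEF implies.

import numpy as np

rng = np.random.default_rng(0)

# Construct a bivariate normal pair (y, x): x ~ N(0, 1) and
# y = 2 + 1.5*x + u with u ~ N(0, 0.5^2), so (y, x) is jointly normal.
n = 100_000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=n)

# Best linear predictor of y given x: least-squares fit of y on (1, x).
X = np.column_stack([np.ones(n), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - (alpha_hat + beta_hat * x)

# Under joint normality, e is independent of x, so its conditional mean
# and conditional variance should not vary with x. Check by binning on x.
edges = np.quantile(x, np.linspace(0, 1, 6))
idx = np.digitize(x, edges[1:-1])
for b in range(5):
    eb = e[idx == b]
    print(f"x-bin {b}: E[e|x] ≈ {eb.mean():+.4f}, E[e^2|x] ≈ {eb.var():.4f}")

In this simulated example each bin should report a conditional mean close to 0 and a conditional variance close to 0.25 (= 0.5²), matching E(e | x) = 0 and E(e² | x) = σ² from the derivation above.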