

Normal Regression and Maximum Likelihood

5.1 Introduction

This chapter introduces the normal regression model and the method of maximum likelihood. The normal regression model is a special case of the linear regression model. It is important as normality allows precise distributional characterizations and sharp inferences. It also provides a baseline for comparison with alternative inference methods, such as asymptotic approximations and the bootstrap.

The method of maximum likelihood is a powerful statistical method for parametric models (such as the normal regression model) and is widely used in econometric practice.

5.2 The Normal Distribution

We say that a random variable $X$ has the standard normal distribution, or Gaussian, written $X \sim N(0,1)$, if it has the density
$$\phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right), \qquad -\infty < x < \infty.$$
The standard normal density is typically written with the symbol $\phi(x)$ and the corresponding distribution function with $\Phi(x)$. It is a valid density function by the following result.

Theorem 5.1
$$\int_0^\infty \exp\left(-x^2/2\right)\, dx = \sqrt{\frac{\pi}{2}}. \qquad (5.1)$$

The proof is presented in Section 5.20.

Plots of the standard normal density function $\phi(x)$ and distribution function $\Phi(x)$ are displayed in Figure 5.1.

[Figure 5.1: Standard Normal Density and Distribution. Panel (a): the normal density $\phi(x)$; panel (b): the normal distribution function $\Phi(x)$, each plotted over $-4 \le x \le 4$.]

All moments of the normal distribution are finite. Since the density is symmetric about zero, all odd moments are zero. By integration by parts, you can show (see Exercises 5.2 and 5.3) that $E[X^2] = 1$ and $E[X^4] = 3$. In fact, for any positive integer $m$,
$$E[X^{2m}] = (2m-1)!! = (2m-1)\cdot(2m-3)\cdots 1.$$
The notation $k!! = k\cdot(k-2)\cdots 1$ is known as the double factorial. For example, $E[X^6] = 15$, $E[X^8] = 105$, and $E[X^{10}] = 945$.

The absolute moments are also straightforward to calculate.

Theorem 5.2 If $X \sim N(0,1)$ then for any $r > 0$
$$E|X|^r = \frac{2^{r/2}}{\sqrt{\pi}}\, \Gamma\left(\frac{r+1}{2}\right)$$
where $\Gamma(t) = \int_0^\infty u^{t-1} e^{-u}\, du$ is the gamma function (Section 5.19).

The proof is presented in Section 5.20.

We say that $X$ has a univariate normal distribution, written $X \sim N(\mu, \sigma^2)$, for $\mu \in \mathbb{R}$ and $\sigma^2 > 0$, if it has the density
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty.$$
The mean and variance of $X$ are $\mu$ and $\sigma^2$, respectively.

We say that the $k$-vector $X$ has a multivariate standard normal distribution, written $X \sim N(0, I_k)$, if it has the joint density
$$f(x) = \frac{1}{(2\pi)^{k/2}} \exp\left(-\frac{x'x}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of $X$ are $0$ and $I_k$, respectively. Since this joint density factors, you can check that the elements of $X$ are independent standard normal random variables.

We say that the $k$-vector $X$ has a multivariate normal distribution, written $X \sim N(\mu, \Sigma)$, for $\mu \in \mathbb{R}^k$ and $\Sigma > 0$, if it has the joint density
$$f(x) = \frac{1}{(2\pi)^{k/2} \det(\Sigma)^{1/2}} \exp\left(-\frac{(x-\mu)'\Sigma^{-1}(x-\mu)}{2}\right), \qquad x \in \mathbb{R}^k.$$
The mean and covariance matrix of $X$ are $\mu$ and $\Sigma$, respectively. By setting $k = 1$ you can check that the multivariate normal simplifies to the univariate normal.
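The formulas above are easy to verify numerically. The following minimal sketch (not part of the text; it assumes NumPy and SciPy are available) checks the integral in Theorem 5.1 and the even-moment formula $E[X^{2m}] = (2m-1)!!$ by direct quadrature.

```python
# Illustrative numerical check of Theorem 5.1 and the even moments of N(0,1).
import numpy as np
from scipy.integrate import quad

# Theorem 5.1: the integral of exp(-x^2/2) over [0, infinity) equals sqrt(pi/2).
lhs, _ = quad(lambda x: np.exp(-x**2 / 2), 0, np.inf)
print(lhs, np.sqrt(np.pi / 2))  # both approximately 1.2533

def double_factorial(k):
    # k!! = k * (k-2) * ... * 1
    out = 1
    while k > 1:
        out *= k
        k -= 2
    return out

# Standard normal density phi(x).
phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# E[X^{2m}] computed by quadrature versus (2m-1)!!, e.g. E[X^6]=15, E[X^8]=105, E[X^10]=945.
for m in range(1, 6):
    moment, _ = quad(lambda x: x**(2 * m) * phi(x), -np.inf, np.inf)
    print(2 * m, round(moment, 4), double_factorial(2 * m - 1))
```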
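A similar check can be run for Theorem 5.2. This sketch (again an illustration under the assumption that SciPy is available, with arbitrary values of $r$) compares $2^{r/2}\Gamma((r+1)/2)/\sqrt{\pi}$ against numerical integration of $|x|^r \phi(x)$, including non-integer $r$.

```python
# Illustrative check of the absolute-moment formula in Theorem 5.2.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def abs_moment_formula(r):
    # E|X|^r = 2^{r/2} Gamma((r+1)/2) / sqrt(pi) for X ~ N(0,1)
    return 2**(r / 2) * gamma((r + 1) / 2) / np.sqrt(np.pi)

phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

for r in (0.5, 1.0, 2.0, 3.5):
    numeric, _ = quad(lambda x: np.abs(x)**r * phi(x), -np.inf, np.inf)
    print(r, round(numeric, 6), round(abs_moment_formula(r), 6))  # the two columns agree
```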
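Finally, the two remarks about the multivariate normal density can also be illustrated numerically. This brief sketch (the particular point $x$, mean, and variance below are arbitrary illustrative values, and it assumes scipy.stats is available) shows that the $N(0, I_k)$ density factors into a product of univariate standard normal densities, and that with $k = 1$ the $N(\mu, \Sigma)$ density reduces to the univariate $N(\mu, \sigma^2)$ density.

```python
# Illustrative check of the factorization of the N(0, I_k) density and the k = 1 case.
import numpy as np
from scipy.stats import norm, multivariate_normal

# Joint density of N(0, I_3) at a point equals the product of univariate densities.
x = np.array([0.3, -1.2, 0.7])
joint = multivariate_normal(mean=np.zeros(3), cov=np.eye(3)).pdf(x)
product = np.prod(norm.pdf(x))
print(joint, product)  # equal

# With k = 1, the multivariate density is the univariate N(mu, sigma^2) density.
mu, sigma2 = 1.5, 0.8
print(multivariate_normal(mean=[mu], cov=[[sigma2]]).pdf([2.0]),
      norm.pdf(2.0, loc=mu, scale=np.sqrt(sigma2)))  # equal
```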