

7.10 Functions of Parameters

In most serious applications the researcher is actually interested in a specific transformation of the coefficient vector $\beta = (\beta_1, \ldots, \beta_k)$. For example, he or she may be interested in a single coefficient $\beta_j$ or a ratio $\beta_j / \beta_l$. More generally, interest may focus on a quantity such as consumer surplus, which could be a complicated function of the coefficients. In any of these cases we can write the parameter of interest $\theta$ as a function of the coefficients, e.g. $\theta = r(\beta)$ for some function $r : \mathbb{R}^k \to \mathbb{R}^q$. The estimate of $\theta$ is
\[
\hat{\theta} = r(\hat{\beta}).
\]
By the continuous mapping theorem (Theorem 6.19) and the fact that $\hat{\beta} \xrightarrow{p} \beta$, we can deduce that $\hat{\theta}$ is consistent for $\theta$ (if the function $r(\cdot)$ is continuous).

Theorem 7.8 Under Assumption 7.1, if $r(\beta)$ is continuous at the true value of $\beta$, then $\hat{\theta} = r(\hat{\beta}) \xrightarrow{p} \theta = r(\beta)$ as $n \to \infty$.

Furthermore, if the transformation is sufficiently smooth, by the Delta Method (Theorem 6.23) we can show that $\hat{\theta}$ is asymptotically normal.

Assumption 7.3 $r(\beta) : \mathbb{R}^k \to \mathbb{R}^q$ is continuously differentiable at the true value of $\beta$, and $R = \frac{\partial}{\partial \beta} r(\beta)'$ has rank $q$.

Theorem 7.9 (Asymptotic Distribution of Functions of Parameters) Under Assumptions 7.2 and 7.3, as $n \to \infty$,
\[
\sqrt{n} \left( \hat{\theta} - \theta \right) \xrightarrow{d} N(0, V_\theta) \tag{7.25}
\]
where $V_\theta = R' V_\beta R$.

In many cases the function $r(\beta)$ is linear:
\[
r(\beta) = R' \beta
\]
for some $k \times q$ matrix $R$. In particular, if $R$ is a "selector matrix"
\[
R = \begin{pmatrix} I \\ 0 \end{pmatrix}
\]
then we can partition $\beta = (\beta_1', \beta_2')'$ so that $R' \beta = \beta_1$. Then
\[
V_\theta = \begin{pmatrix} I & 0 \end{pmatrix} V_\beta \begin{pmatrix} I \\ 0 \end{pmatrix} = V_{11},
\]
the upper-left sub-matrix of $V_\beta$ given in (7.14). In this case (7.25) states that
\[
\sqrt{n} \left( \hat{\beta}_1 - \beta_1 \right) \xrightarrow{d} N(0, V_{11}).
\]
That is, subsets of $\hat{\beta}$ are approximately normal, with variances given by the conformable subcomponents of $V_\beta$.

To illustrate the case of a nonlinear transformation, take the example $\theta = \beta_j / \beta_l$ for $j \neq l$. Then
\[
R = \frac{\partial}{\partial \beta} r(\beta) =
\begin{pmatrix}
\frac{\partial}{\partial \beta_1} \left( \beta_j / \beta_l \right) \\
\vdots \\
\frac{\partial}{\partial \beta_j} \left( \beta_j / \beta_l \right) \\
\vdots \\
\frac{\partial}{\partial \beta_l} \left( \beta_j / \beta_l \right) \\
\vdots \\
\frac{\partial}{\partial \beta_k} \left( \beta_j / \beta_l \right)
\end{pmatrix}
=
\begin{pmatrix}
0 \\
\vdots \\
1 / \beta_l \\
\vdots \\
-\beta_j / \beta_l^2 \\
\vdots \\
0
\end{pmatrix}
\tag{7.26}
\]
so
\[
V_\theta = \frac{V_{jj}}{\beta_l^2} + \frac{V_{ll} \, \beta_j^2}{\beta_l^4} - \frac{2 V_{jl} \, \beta_j}{\beta_l^3}
\]
where $V_{ab}$ denotes the $ab$-th element of $V_\beta$.

For inference we need an estimator of the asymptotic variance matrix $V_\theta = R' V_\beta R$, and for this it is typical to use a plug-in estimator. The natural estimator of $R$ is the derivative evaluated at the point estimate,
\[
\hat{R} = \frac{\partial}{\partial \beta} r(\hat{\beta})'. \tag{7.27}
\]
The derivative in (7.27) may be calculated analytically or numerically. By analytically, we mean working out the formula for the derivative and replacing the unknowns by point estimates. For example, if $\theta = \beta_j / \beta_l$, then $\frac{\partial}{\partial \beta} r(\beta)$ is given in (7.26). However, in some cases the function $r(\beta)$ may be extremely complicated and a formula for the analytic derivative may not be easily available. In this case calculation by numerical differentiation may be preferable. Let $\delta_l = (0 \cdots 1 \cdots 0)'$ be the unit vector with the "1" in the $l$-th place. Then the $jl$-th element of a numerical derivative $\hat{R}$ is
\[
\hat{R}_{jl} = \frac{r_j(\hat{\beta} + \delta_l \varepsilon) - r_j(\hat{\beta})}{\varepsilon}
\]
for some small $\varepsilon$.
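To make the plug-in calculation concrete, the following is a minimal Python sketch (not from the text) of the delta-method variance for the ratio $\theta = \beta_j / \beta_l$, using the analytic gradient in (7.26). The function name and arguments are invented for illustration; V_hat stands for any consistent estimate of $V_\beta$.

```python
import numpy as np

def delta_method_ratio_var(beta_hat, V_hat, j, l):
    """Plug-in delta-method variance for theta = beta_j / beta_l.

    beta_hat : (k,) vector of point estimates
    V_hat    : (k, k) estimate of the asymptotic variance V_beta
    j, l     : zero-based indices of the numerator and denominator coefficients
    """
    k = beta_hat.shape[0]
    # Gradient from (7.26): zeros except 1/beta_l in position j
    # and -beta_j/beta_l**2 in position l.
    R = np.zeros(k)
    R[j] = 1.0 / beta_hat[l]
    R[l] = -beta_hat[j] / beta_hat[l] ** 2
    # V_theta = R' V_beta R, which expands to the scalar formula
    # V_jj/beta_l^2 + V_ll*beta_j^2/beta_l^4 - 2*V_jl*beta_j/beta_l^3.
    return R @ V_hat @ R
```

Expanding `R @ V_hat @ R` by hand reproduces the scalar expression for $V_\theta$ displayed above, which is a quick check that the gradient has been coded correctly.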
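When no analytic formula is available, the forward-difference derivative at the end of the section can be coded directly. This is a sketch under the stated difference scheme; numerical_jacobian and plug_in_variance are hypothetical names, and the step size eps plays the role of the small $\varepsilon$ in the text.

```python
import numpy as np

def numerical_jacobian(r, beta_hat, eps=1e-6):
    """Forward-difference estimate of R = (d/d beta) r(beta)' at beta_hat.

    r : function mapping a (k,) vector to a (q,) vector.
    Returns the (k, q) matrix whose (l, j) element approximates the
    derivative of r_j with respect to beta_l via the difference quotient
    (r_j(beta_hat + delta_l * eps) - r_j(beta_hat)) / eps from the text.
    """
    beta_hat = np.asarray(beta_hat, dtype=float)
    r0 = np.atleast_1d(r(beta_hat))
    k, q = beta_hat.size, r0.size
    R_hat = np.zeros((k, q))
    for l in range(k):
        delta = np.zeros(k)
        delta[l] = eps  # perturb only the l-th coefficient
        R_hat[l, :] = (np.atleast_1d(r(beta_hat + delta)) - r0) / eps
    return R_hat

def plug_in_variance(r, beta_hat, V_hat, eps=1e-6):
    """Plug-in estimate of V_theta = R' V_beta R using the numerical R_hat."""
    R_hat = numerical_jacobian(r, beta_hat, eps)
    return R_hat.T @ V_hat @ R_hat
```

For the ratio example, passing `r = lambda b: np.array([b[j] / b[l]])` (with the same indices `j`, `l`) should give essentially the same estimate as the analytic version above, a useful sanity check on the choice of step size.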