
Power and Test Consistency

In (7.11) we define $V^{0} = Q_{xx}^{-1}\sigma^{2}$ whether (7.10) is true or false. When (7.10) is true then $V = V^{0}$; otherwise $V \ne V^{0}$. We call $V^{0}$ the homoskedastic asymptotic covariance matrix.

Theorem 7.3 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 7.2, and is therefore broadly applicable. Consequently, asymptotic normality is routinely used to approximate the finite sample distribution of $\sqrt{n}\,(\hat{\beta} - \beta)$.

A difficulty is that for any fixed $n$ the sampling distribution of $\hat{\beta}$ can be arbitrarily far from the normal distribution. In Figure 6.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes. The normal approximation improves as $n$ increases, but how large should $n$ be for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions.

We illustrate this problem using a simulation. Let $y_i = \beta_1 x_i + \beta_2 + e_i$ where $x_i$ is $N(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \ge 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha - 2)$ (a short derivation appears at the end of this section). As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\left(\frac{\alpha-2}{\alpha}\right)}\left(\hat{\beta}_1 - \beta_1\right)$ has the $N(0,1)$ asymptotic distribution for any $\alpha > 2$. In Figure 7.2 we display the finite sample densities of this normalized estimator, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $N(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero. (A simulation sketch reproducing this experiment follows the derivation below.)

Another example is shown in Figure 7.3. Here the model is $y_i = \beta + e_i$ where
\[
e_i = \frac{u_i^r - E(u_i^r)}{\left(E(u_i^{2r}) - \left(E(u_i^r)\right)^2\right)^{1/2}},
\]
the standardized $r$-th power of an underlying variable $u_i$, so that $e_i$ has mean zero and unit variance by construction.
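The variance formula quoted above follows from a one-line moment calculation using only the stated density:
\[
E\left(e_i^2\right) = \int_{|e| \ge 1} e^2 \,\frac{\alpha}{2}\,|e|^{-\alpha-1}\,de
= \alpha \int_1^{\infty} e^{1-\alpha}\,de
= \frac{\alpha}{\alpha - 2}, \qquad \alpha > 2.
\]
Since $f(e)$ is symmetric about zero, $E(e_i) = 0$, so $\mathrm{var}(e_i) = \alpha/(\alpha - 2)$, which diverges as $\alpha$ approaches 2, exactly as described above.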
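As a rough illustration of the experiment behind Figure 7.2, here is a minimal simulation sketch in Python. It is not from the source: the replication count, seed, and the names normalized_slope and n_reps are illustrative assumptions. Double Pareto draws are generated as randomly signed classical Pareto($\alpha$) variables with minimum 1, which matches the stated density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \ge 1$.

```python
# Sketch: finite-sample behavior of the normalized least-squares slope
# estimator under Double Pareto errors, for n = 100 and varying alpha.
import numpy as np

def normalized_slope(alpha, n_obs, rng):
    """One draw of sqrt(n*(alpha-2)/alpha) * (beta1_hat - beta1)."""
    beta1, beta2 = 1.0, 0.0  # true coefficients (arbitrary choices)
    x = rng.standard_normal(n_obs)
    # Double Pareto error: |e| is classical Pareto(alpha) with minimum 1
    # (numpy's pareto() is Lomax, so add 1), with an independent random sign.
    e = (rng.pareto(alpha, n_obs) + 1.0) * rng.choice([-1.0, 1.0], n_obs)
    y = beta1 * x + beta2 + e
    X = np.column_stack([x, np.ones(n_obs)])   # regressors with intercept
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares fit
    return np.sqrt(n_obs * (alpha - 2.0) / alpha) * (b[0] - beta1)

rng = np.random.default_rng(0)
n_reps = 5000
for alpha in (3.0, 2.5, 2.1):
    draws = np.array([normalized_slope(alpha, 100, rng) for _ in range(n_reps)])
    # Under the asymptotic N(0,1) approximation these should be near 0 and 1.
    print(f"alpha={alpha}: mean={draws.mean():+.3f}, sd={draws.std():.3f}")
```

For $\alpha = 3.0$ the sample moments of the draws should sit close to the $N(0,1)$ benchmark; as $\alpha$ approaches 2 the draws become heavy-tailed and the sample moments unstable, mirroring the departure from normality displayed in Figure 7.2.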