

# Least Squares Regression

## 4.1 Introduction

In this chapter we investigate some finite-sample properties of the least-squares estimator in the linear regression model. In particular, we calculate the finite-sample mean and covariance matrix and propose standard errors for the coefficient estimates.

## 4.2 Random Sampling

Assumption 3.1 specified that the observations have identical distributions. To derive the finite-sample properties of the estimators we will need to additionally specify the dependence structure across the observations.

The simplest context is when the observations are mutually independent, in which case we say that they are independent and identically distributed, or i.i.d. It is also common to describe i.i.d. observations as a random sample. Traditionally, random sampling has been the default assumption in cross-section (e.g. survey) contexts. It is quite convenient, as i.i.d. sampling leads to straightforward expressions for estimation variance. The assumption seems appropriate (meaning that it should be approximately valid) when samples are small and relatively dispersed. That is, if you randomly sample 1000 people from a large country such as the United States, it seems reasonable to model their responses as mutually independent.

**Assumption 4.1** The observations $\{(y_1, x_1), \ldots, (y_i, x_i), \ldots, (y_n, x_n)\}$ are independent and identically distributed.

For most of this chapter we will use Assumption 4.1 to derive properties of the OLS estimator. Assumption 4.1 means that if you take any two individuals $i \neq j$ in a sample, the values $(y_i, x_i)$ are independent of the values $(y_j, x_j)$ yet have the same distribution. Independence means that the decisions and choices of individual $i$ do not affect the decisions of individual $j$, and conversely. This assumption may be violated if individuals in the sample are connected in some way, for example if they are neighbors, members of the same village, classmates at a school, or even firms within a specific industry.
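The i.i.d. sampling setup above can be illustrated with a short simulation: draw an i.i.d. sample from a linear model and compute the least-squares coefficient estimate. This is a minimal sketch in NumPy; the sample size, true coefficient values, and error distribution are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw an i.i.d. sample {(y_i, x_i)} from a linear model y_i = x_i'beta + e_i.
# (n, beta, and the normal errors are illustrative choices.)
n = 1000
beta = np.array([1.0, 2.0])                      # true (intercept, slope)
x = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.normal(size=n)                           # errors independent across i
y = x @ beta + e

# OLS estimator: beta_hat solves (X'X) beta_hat = X'y
beta_hat = np.linalg.solve(x.T @ x, x.T @ y)
```

Because the observations are mutually independent, `beta_hat` should be close to the true `beta`, with sampling variation shrinking as `n` grows.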
In this case it seems plausible that decisions may be inter-connected and thus mutually dependent rather than independent. Allowing for such interactions complicates inference and requires specialized treatment. A currently popular approach which allows for mutual dependence is known as clustered dependence, which assumes that observations are grouped into "clusters" (for example, schools).
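The idea of clustered dependence can be sketched with a simple data-generating process in which observations within a cluster share a common shock, so they are dependent within clusters but independent across clusters. The number of clusters, cluster size, and shock variances below are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 clusters (e.g. "schools") of 20 observations each -- illustrative sizes.
G, m = 50, 20
cluster = np.repeat(np.arange(G), m)             # cluster label for each obs

# Each observation's error is a shared cluster-level shock plus an
# idiosyncratic term, so errors are correlated within a cluster:
# Corr(e_ig, e_jg) = Var(u) / (Var(u) + Var(eps)) = 0.5 here.
u = rng.normal(size=G)                           # cluster-level shock
eps = rng.normal(size=G * m)                     # idiosyncratic shock
e = u[cluster] + eps
```

Under this structure the variance of cluster averages stays large even as the cluster size grows, which is why i.i.d.-based standard errors understate the true estimation variance and clustered data require specialized inference.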