
Evaluating the impacts of development projects

Existing program evaluation methods such as difference-in-difference estimators or propensity score matching are designed to examine the average impact of a program. By design, they can only examine changes in a particular summary statistic of an outcome indicator, most commonly the mean, the median, or a particular quantile. However, we are often interested not only in the mean impact of an intervention, or the average treatment effect, but also in the differential impact on different subpopulations such as the rich and the poor, the well-nourished and the malnourished, or some finer disaggregation of the welfare domain. In principle, one could examine the program impact on various subpopulations by applying existing program evaluation techniques to smaller and smaller subsamples of the data. In practice, this approach faces three main problems. First, it is cumbersome both for carrying out the analysis and for interpreting the results. Second, one faces arbitrary choices of how to split the sample. And third, increasing the number of subgroups leads to smaller sample sizes and wider confidence intervals in the regression estimates. To circumvent these problems, this article suggests a novel approach to program evaluation which combines stochastic dominance with difference-in-difference methods.

The program evaluation literature has evolved separately from the stochastic dominance literature. Reviews of the state of the art in program evaluation (Todd 2008) and best practice guides (Baker 2000) do not contain any reference to stochastic dominance. To date, Verme (2010) is the only study that has started to show how stochastic dominance techniques can be used for program evaluation. He uses simulated income data to show that a program can have no average treatment effect while impacting the rich and the poor quite differently.
Drawing on the analogies between poverty and stochastic dominance orderings (Foster and Shorrocks 1988), he proposes a simple method for program evaluation for the case of randomized assignment of treatment. This article extends the method to difference-in-difference evaluation to make it applicable to cases where treatment and control populations do not share the same initial distribution. It also provides the first empirical application of this technique, highlighting the importance of looking beyond average treatment effects.
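One plausible way to read such an extension is as a pointwise difference-in-difference applied to empirical distribution functions rather than to means: difference each group's CDF between baseline and follow-up, then difference the two changes. The sketch below is a hedged illustration of that idea; the function names, the simulated data, and the exact statistic are our own assumptions, not the article's implementation.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def did_cdf(t0, t1, c0, c1, grid):
    """Pointwise difference-in-difference of empirical CDFs:
    (F_T1 - F_T0) - (F_C1 - F_C0).
    A negative value at x means the treated distribution lost mass below x
    relative to the control trend, i.e. a dominance-style gain at that
    outcome level, even when baseline distributions differ across groups."""
    return (ecdf(t1, grid) - ecdf(t0, grid)) - (ecdf(c1, grid) - ecdf(c0, grid))

# Illustrative data: groups start from different baseline distributions
rng = np.random.default_rng(1)
c0 = rng.lognormal(3.0, 0.5, 5000)   # control, baseline
c1 = c0 * 1.05                       # common 5% growth trend
t0 = rng.lognormal(2.8, 0.6, 5000)   # treated, poorer at baseline
t1 = t0 * 1.05 + 3.0                 # same trend plus a flat transfer

grid = np.linspace(1, 60, 200)
dd = did_cdf(t0, t1, c0, c1, grid)
print(f"min DiD of CDFs = {dd.min():.3f}")  # negative: treated gained
```

Because the comparison nets out each group's own baseline distribution, it remains informative even though the treated group is poorer than the control group to begin with, which is precisely the situation the difference-in-difference extension is meant to handle.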