This paper reviews six popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, discontinuity design, matching, instrumental variables, and control functions. It discusses the identification of average and distributional parameters, the assumptions and data requirements for each method, and their application in evaluating welfare, training, wage subsidy, and tax-credit programs. The paper also presents an education evaluation model to illustrate each method and provides datasets and STATA code for replication.
The paper begins by introducing the concept of treatment effects and the challenges of identifying them in non-experimental settings. It then discusses the six methods, explaining their assumptions, data requirements, and how they can be used to estimate average treatment effects (ATE), average treatment effects on the treated (ATT), and other parameters. The paper emphasizes the importance of understanding the nature of the question, the data available, and the assignment rule in choosing an appropriate evaluation method.
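In the potential-outcomes notation standard for this literature (a minimal sketch of the definitions, not the paper's own equations), ATE = E[Y_1 - Y_0] and ATT = E[Y_1 - Y_0 | D = 1], where Y_1 and Y_0 denote an individual's outcomes with and without treatment and D = 1 indicates treatment receipt. The central identification problem is that Y_1 and Y_0 are never observed together for the same individual, so the missing counterfactual must be recovered from other individuals under each method's assumptions.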
Social experiments are presented as the most convincing method, relying on random assignment to construct a valid counterfactual. Natural experiments compare before-and-after outcomes for treated and comparison groups, typically through difference-in-differences, while discontinuity design exploits discontinuities in the assignment rule around an eligibility threshold. Matching constructs a non-experimental comparison group that resembles the treated group on observed characteristics, instrumental variables use exclusion restrictions to identify treatment effects, and control functions directly model the assignment rule to correct for selection bias.
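To make the natural-experiment case concrete, the short Stata sketch below simulates a two-group, two-period setting and recovers the treatment effect from the interaction term of a difference-in-differences regression. It is illustrative only: the data are simulated and the variable names are hypothetical, not taken from the paper's replication files.

    * Difference-in-differences on simulated data (illustrative sketch only)
    clear
    set seed 12345
    set obs 2000
    gen treat = runiform() < 0.5                 // treatment-group indicator
    gen post  = runiform() < 0.5                 // post-reform period indicator
    gen y = 1 + 0.5*treat + 0.3*post + 0.4*treat*post + rnormal()
    regress y i.treat##i.post                    // coefficient on treat#post is the DiD estimate

The interaction coefficient carries a causal interpretation only under a common-trends assumption, that is, that the treated and comparison groups would have evolved in parallel in the absence of the policy.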
The paper also highlights the importance of understanding heterogeneous treatment effects and the limitations of each method in identifying them. It provides an education evaluation model to illustrate how each method can be applied and how well it performs in recovering the treatment effects. The paper concludes by emphasizing the need for careful consideration of the assumptions and data requirements when choosing an evaluation method and the importance of using multiple methods to assess the validity of results.