Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/180646 
Year of Publication: 2018
Citation: [Journal:] IZA World of Labor [ISSN:] 2054-9571 [Article No.:] 436 [Publisher:] Institute of Labor Economics (IZA) [Place:] Bonn [Year:] 2018
Publisher: Institute of Labor Economics (IZA), Bonn
Abstract: 
Non-experimental evaluations of programs compare individuals who choose to participate in a program to individuals who do not. Such comparisons run the risk of conflating non-random selection into the program with its causal effects. By randomly assigning individuals to participate in the program or not, experimental evaluations remove the potential for non-random selection to bias comparisons of participants and non-participants. In so doing, they provide compelling causal evidence of program effects. At the same time, experiments are not a panacea and require careful design and interpretation.
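
Illustrative sketch (not part of the EconStor record or the article): a small Python simulation of the selection problem the abstract describes. The variable names (ability, earnings), the effect size, and the selection rule are assumptions chosen only to make the bias visible; what reflects the abstract is the contrast between a self-selected comparison and a randomly assigned one.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0  # assumed causal effect of the program (illustrative)

# Latent "ability" raises earnings and also makes take-up more likely,
# which is the non-random selection the abstract warns about.
ability = rng.normal(0.0, 1.0, n)

# Non-experimental world: individuals choose whether to participate.
participates = (ability + rng.normal(0.0, 1.0, n)) > 0
earnings = 10.0 + 3.0 * ability + true_effect * participates + rng.normal(0.0, 1.0, n)
naive = earnings[participates].mean() - earnings[~participates].mean()

# Experimental world: participation is randomly assigned (a coin flip),
# so participants and non-participants have the same ability on average.
assigned = rng.random(n) < 0.5
earnings_rct = 10.0 + 3.0 * ability + true_effect * assigned + rng.normal(0.0, 1.0, n)
experimental = earnings_rct[assigned].mean() - earnings_rct[~assigned].mean()

print(f"true effect:               {true_effect:.2f}")
print(f"naive (self-selected) gap: {naive:.2f}")         # conflates selection with the effect
print(f"randomized comparison:     {experimental:.2f}")  # close to the true effect

Under these assumptions the naive gap is far larger than the true effect, because high-ability individuals both earn more and select into the program, while the randomized comparison recovers the effect up to sampling noise.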
Subjects: experiment, random assignment, causality, evaluation
JEL: C52, C90
Document Type: Article
