Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/185256 
Year of Publication: 
2018
Series/Report no.: 
IZA Discussion Papers No. 11796
Publisher: 
Institute of Labor Economics (IZA), Bonn
Abstract: 
The economics 'credibility revolution' has promoted the identification of causal relationships using difference-in-differences (DID), instrumental variables (IV), randomized controlled trials (RCT) and regression discontinuity design (RDD) methods. The extent to which a reader should trust claims about the statistical significance of results proves highly sensitive to the method used. Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking are substantial problems in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.
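The paper applies multiple diagnostic methods to the distribution of reported test statistics; the sketch below is not the authors' code, but a minimal, hypothetical illustration of the general idea behind such diagnostics: checking for an excess mass of "just significant" results around the conventional |z| = 1.96 threshold. The function name `bunching_ratio`, the window width, and the simulated data are all assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's code): a minimal "bunching" check of the
# kind used in p-hacking diagnostics, comparing the density of reported
# z-statistics just below vs. just above the 1.96 significance threshold.
import random

random.seed(0)

# Simulated stand-in for a collection of reported z-statistics; in the paper
# these come from 13,440 hypothesis tests published in 25 top journals in 2015.
z_stats = [abs(random.gauss(1.5, 1.0)) for _ in range(13440)]

def bunching_ratio(z_values, threshold=1.96, width=0.25):
    """Ratio of tests just above vs. just below the significance threshold.

    Values well above 1 suggest an excess mass of marginally significant
    results, consistent with selective publication or p-hacking.
    """
    just_below = sum(threshold - width <= z < threshold for z in z_values)
    just_above = sum(threshold <= z < threshold + width for z in z_values)
    return just_above / just_below if just_below else float("inf")

print(f"Bunching ratio around |z| = 1.96: {bunching_ratio(z_stats):.2f}")
```

Because the simulated data here are drawn from a smooth distribution with no manipulation, the ratio comes out near one; applied to real reported test statistics, a markedly larger ratio would point toward the kind of distortion the abstract describes for DID and IV studies.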
Subjects: 
research methods
causal inference
p-curves
p-hacking
publication bias
JEL: 
A11
B41
C13
C44
Document Type: 
Working Paper

Files in This Item:
1 file, 543.16 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.