Please use this link to cite or reference this publication: https://hdl.handle.net/10419/313185
Year of Publication:
2025
Series/Report No.:
I4R Discussion Paper Series No. 212
Publisher:
Institute for Replication (I4R), s.l.
Abstract:
Leib et al. (2024) use a laboratory experiment to examine how advice generated by artificial intelligence (AI) affects dishonesty compared to equivalent human advice. In their preferred empirical specification, the authors report that dishonesty-promoting advice increases dishonest behavior by approximately 15% relative to a baseline without advice, while honesty-promoting advice has no significant effect. Additionally, they find that algorithmic transparency, i.e., disclosing whether advice comes from AI or from humans, does not affect behavior. We computationally reproduce the main results of the paper using the same procedures and the original data. Our results confirm the sign, magnitude, and statistical significance of the authors' reported estimates for each of their main findings. Additional robustness checks show that the significance of the results remains stable under alternative specifications and methodological choices.
Keywords:
artificial intelligence
dishonesty
laboratory experiment
computational reproducibility
JEL: 
D01
D91
C91
Document Type:
Working Paper