Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/268427 
Year of Publication: 
2022
Series/Report no.: 
ZEW Discussion Papers No. 22-071
Publisher: 
ZEW - Leibniz-Zentrum für Europäische Wirtschaftsforschung, Mannheim
Abstract: 
Actors in a wide range of settings increasingly rely on algorithmic tools to support their decision-making. Much of the public debate concerning algorithms, especially the associated regulation of new technologies, rests on the assumption that humans can assess the quality of algorithms. We test this assumption in an online experiment with 1,263 participants. Subjects perform an estimation task while receiving algorithmic advice. Our first finding is that, in our setting, humans cannot verify the algorithm's quality. We therefore argue that algorithms exhibit traits of a credence good: decision-makers cannot verify the quality of such goods even after "consuming" them. Building on this finding, we test two interventions intended to improve individuals' ability to make good decisions in algorithmically supported situations. In the first intervention, we explain how the algorithm works. We find that while this explanation helps participants recognize bias in the algorithm, it markedly decreases their decision-making performance. In the second intervention, we reveal the task's correct answer after every round and find that this feedback improves human decision-making performance. Our findings have implications for policy initiatives and managerial practice.
Subjects: 
Human-algorithm decision making
algorithmic advice
credence goods
JEL: 
C91
D79
D80
M21
O30
Document Type: 
Working Paper
Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.