Please use this link to cite this publication or to reference it as an internet source: https://hdl.handle.net/10419/259795
Year of publication:
2022
Series/Report No.:
ESMT Working Paper No. 22-02
Publisher:
European School of Management and Technology (ESMT), Berlin
Abstract:
Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine to make high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a setup in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM's supervision. Because the stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine's prescriptions across tasks, she updates her belief about the machine. However, the DM observes the machine's correctness only if she ultimately decides to act on the task. Further, the DM sometimes overrules the machine depending on her belief, which affects learning. In this setup, we characterize the evolution of the DM's belief and overruling decisions over time. We identify situations under which the DM hesitates forever over whether the machine is better, i.e., she never fully ignores the machine but regularly overrules it. Moreover, the DM sometimes wrongly believes, with positive probability, that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. Our results highlight fundamental limitations in determining whether machines make better decisions than experts and provide a novel explanation for human-machine complementarity.
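To make the censored-feedback structure described in the abstract concrete, the following is a minimal Python simulation sketch of such a setup: a machine of unknown quality issues prescriptions, the DM follows or overrules it, and her belief is updated by Bayes' rule only on tasks where she ultimately acts. All numerical parameters, the two-type machine, and the simple deference rule are illustrative assumptions, not the paper's model.

```python
import random

# Minimal sketch of the supervised-machine setup from the abstract.
# All parameter values and the deference rule below are my own
# illustrative assumptions, not the paper's model.

ACC_GOOD, ACC_BAD = 0.90, 0.60  # machine accuracy if it is the "good"/"bad" type (assumed)
ACC_DM = 0.75                   # the DM's own diagnostic accuracy (assumed)
PRIOR = 0.50                    # DM's prior belief that the machine is the good type
PERIODS = 2000                  # number of repeated decision tasks

def bayes_update(belief: float, machine_correct: bool) -> float:
    """Posterior P(machine is good) after observing one prescription's outcome."""
    l_good = ACC_GOOD if machine_correct else 1.0 - ACC_GOOD
    l_bad = ACC_BAD if machine_correct else 1.0 - ACC_BAD
    return belief * l_good / (belief * l_good + (1.0 - belief) * l_bad)

def simulate(machine_is_good: bool, seed: int = 0) -> float:
    """Run the repeated tasks and return the DM's terminal belief."""
    rng = random.Random(seed)
    acc_machine = ACC_GOOD if machine_is_good else ACC_BAD
    belief = PRIOR
    for _ in range(PERIODS):
        state = rng.random() < 0.5  # True: acting (e.g., the biopsy) is the right call
        machine_says_act = state if rng.random() < acc_machine else not state
        dm_says_act = state if rng.random() < ACC_DM else not state
        # Stylized rule: defer to the machine when belief is high,
        # otherwise overrule it in favor of the DM's own judgment.
        act = machine_says_act if belief >= 0.5 else dm_says_act
        # Censored feedback: correctness is revealed only if the DM acts,
        # so overruling an "act" prescription also blocks learning about it.
        if act:
            belief = bayes_update(belief, machine_says_act == state)
    return belief

if __name__ == "__main__":
    print("terminal belief, good machine:", round(simulate(True), 3))
    print("terminal belief, bad machine: ", round(simulate(False), 3))
```

Running the sketch with different seeds and priors illustrates the mechanism the abstract points to: because feedback arrives only on tasks where the DM acts, her belief can fail to converge to the truth, which is the kind of learning failure the paper characterizes formally.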
Keywords:
machine accuracy
decision making
human-in-the-loop
algorithm aversion
dynamic learning
Document type:
Working Paper

File(s): 757.06 kB

Publications in EconStor are protected by copyright.