Please use this link to cite this publication or to refer to it as an internet resource: https://hdl.handle.net/10419/267687
Year of publication:
2022
Series/Report no.:
ESMT Working Paper No. 22-02 (R1)
Version:
Dec 8, 2022
Publisher:
European School of Management and Technology (ESMT), Berlin
Abstract:
Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet, recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine to make high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a set-up in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM's supervision. Because stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine's prescriptions across tasks, she updates her belief about the machine. However, the DM is subject to a so-called verification bias: she verifies the machine's correctness, and updates her belief accordingly, only if she ultimately decides to act on the task. In this set-up, we characterize the evolution of the DM's belief and overruling decisions over time. We identify situations under which the DM hesitates forever over whether the machine is better, i.e., she never fully ignores the machine but regularly overrules it. Moreover, with positive probability, the DM ends up wrongly believing that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human-machine complementarity and suggest guidelines for the decision to fully adopt or reject a machine.
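The following is a minimal simulation sketch of the belief dynamics described in the abstract. It is not taken from the paper: the two machine types, the accuracy values P_GOOD and P_BAD, the 50/50 prescription rate, and the follow/overrule threshold are all illustrative assumptions, chosen only to show how verification bias couples the DM's learning to her own overruling decisions.

import random

# Illustrative parameters (assumptions, not values from the paper).
P_GOOD = 0.9      # machine accuracy if it is the "better" type
P_BAD = 0.6       # machine accuracy otherwise
PRIOR = 0.5       # DM's prior belief that the machine is the better type
THRESHOLD = 0.5   # DM follows the machine while her belief exceeds this

def bayes_update(belief: float, machine_correct: bool) -> float:
    """Posterior belief that the machine is the better type."""
    l_good = P_GOOD if machine_correct else 1.0 - P_GOOD
    l_bad = P_BAD if machine_correct else 1.0 - P_BAD
    return belief * l_good / (belief * l_good + (1.0 - belief) * l_bad)

def simulate(machine_is_good: bool, periods: int = 10_000, seed: int = 0) -> float:
    """Run repeated decision tasks and return the DM's final belief."""
    rng = random.Random(seed)
    accuracy = P_GOOD if machine_is_good else P_BAD
    belief = PRIOR
    for _ in range(periods):
        recommends_act = rng.random() < 0.5        # machine prescribes "act"?
        machine_correct = rng.random() < accuracy  # is that prescription correct?
        follows = belief > THRESHOLD               # DM overrules when belief is low
        acts = recommends_act if follows else not recommends_act
        # Verification bias: the machine's correctness is revealed, and the
        # DM's belief updated, only when she ultimately acts on the task.
        if acts:
            belief = bayes_update(belief, machine_correct)
    return belief

if __name__ == "__main__":
    print("final belief when the machine is in fact better:", simulate(True))
    print("final belief when the machine is in fact worse:", simulate(False))

Because updates occur only on tasks where the DM acts, her sampling of evidence is endogenous to her own overruling decisions; this selective feedback is the channel through which the learning failures described in the abstract can arise.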
Keywords:
machine accuracy
decision making
human-in-the-loop
algorithm aversion
dynamic learning
Persistent identifier of the first edition:
Document type:
Working Paper

File(s): 764.87 kB

Publications in EconStor are protected by copyright.