Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/266691 
Year of Publication: 
2022
Series/Report no.: 
SAFE Working Paper No. 370
Publisher: 
Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract: 
Advances in Machine Learning (ML) have led organizations to increasingly implement predictive decision aids intended to improve employees' decision-making performance. While such systems improve organizational efficiency in many contexts, they may be a double-edged sword when there is a danger of system discontinuance. According to cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills, a deficit that comes to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills, which only becomes apparent once the system is discontinued. We also find that the degree to which individuals "blindly" trust the observed predictions determines the size of the performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can be beneficial, especially when there is a reasonable danger of (temporary) system discontinuance.
Document Type: 
Working Paper
