Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/271832 
Year of Publication: 2022
Series/Report no.: CESifo Working Paper No. 10188
Publisher: Center for Economic Studies and ifo Institute (CESifo), Munich
Abstract: 
Algorithm-based decision support systems play an increasingly important role in decisions involving exploration tasks, such as product searches, portfolio choices, and human resource procurement. These tasks often involve a trade-off between exploration and exploitation, which can be highly dependent on individual preferences. In an online experiment, we study whether the willingness of participants to follow the advice of a reinforcement learning algorithm depends on the fit between their own exploration preferences and the algorithm's advice. We vary the weight that the algorithm places on exploration rather than exploitation, and model the participants' decision-making processes using a learning model comparable to the algorithm's. This allows us to measure the degree to which one's willingness to accept the algorithm's advice depends on the weight it places on exploration and on the similarity between the exploration tendencies of the algorithm and the participant. We find that the algorithm's advice affects and improves participants' choices in all treatments. However, the degree to which participants are willing to follow the advice depends heavily on the algorithm's exploration tendency. Participants are more likely to follow an algorithm that is more exploitative than they are, possibly interpreting the algorithm's relative consistency over time as a signal of expertise. Similarity between human choices and the algorithm's recommendations does not increase humans' willingness to follow the recommendations. Hence, our results suggest that the consistency of an algorithm's recommendations over time is key to inducing people to follow algorithmic advice in exploration tasks.
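Illustrative note: the abstract does not specify the underlying learning model, so the following is only a minimal sketch of what an exploration-weighted bandit learner of the kind described could look like, assuming a standard delta-rule value update with a softmax choice rule. All names and parameter values (SoftmaxBanditAgent, exploration_weight, the reward means, etc.) are hypothetical and are not taken from the paper.

# Sketch (not the authors' model): a value-learning agent for a multi-armed
# bandit whose softmax temperature plays the role of the "weight on
# exploration" mentioned in the abstract. Parameter values are illustrative.
import random
import math

class SoftmaxBanditAgent:
    def __init__(self, n_arms, learning_rate=0.1, exploration_weight=0.5):
        self.values = [0.0] * n_arms                   # estimated reward per arm
        self.learning_rate = learning_rate             # step size for value updates
        self.exploration_weight = exploration_weight   # higher -> more exploration

    def choose(self):
        # Softmax choice rule: a larger exploration_weight flattens the choice
        # distribution, so lower-valued arms are sampled more often.
        prefs = [v / max(self.exploration_weight, 1e-6) for v in self.values]
        m = max(prefs)
        weights = [math.exp(p - m) for p in prefs]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(self.values)), weights=probs)[0]

    def update(self, arm, reward):
        # Incremental (delta-rule) update toward the observed reward.
        self.values[arm] += self.learning_rate * (reward - self.values[arm])

# Example: a relatively "exploitative" advisor (low exploration weight) versus
# a more "exploratory" one, each learning from noisy rewards of three arms.
true_means = [0.3, 0.5, 0.7]
for label, w in [("exploitative", 0.05), ("exploratory", 1.0)]:
    agent = SoftmaxBanditAgent(n_arms=3, exploration_weight=w)
    for _ in range(200):
        arm = agent.choose()
        reward = random.gauss(true_means[arm], 0.1)
        agent.update(arm, reward)
    print(label, [round(v, 2) for v in agent.values])

Under this sketch, varying exploration_weight corresponds to the treatment variation described in the abstract: a low value yields consistent, exploitative recommendations, while a high value yields more varied, exploratory ones.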
Subjects: algorithms; decision support systems; recommender systems; advice-taking; multi-armed bandit; search; exploration-exploitation; cognitive modeling
JEL: C91; D83
Document Type: Working Paper