Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/278622 
Year of Publication: 
2023
Series/Report no.: 
ECONtribute Discussion Paper No. 251
Publisher: 
University of Bonn and University of Cologne, Reinhard Selten Institute (RSI), Bonn and Cologne
Abstract: 
Artificial Intelligence (AI) is increasingly becoming an indispensable advisor. New ethical concerns arise if AI advice persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a natural-language-processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about the advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This holds for both AI and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
Subjects: 
Artificial Intelligence
Machine Behaviour
Behavioural Ethics
Advice
Document Type: 
Working Paper
