Please use this link to cite this publication or to refer to it as an Internet source: https://hdl.handle.net/10419/278622
Year of publication:
2023
Series/Report No.:
ECONtribute Discussion Paper No. 251
Publisher:
University of Bonn and University of Cologne, Reinhard Selten Institute (RSI), Bonn and Cologne
Abstract:
Artificial Intelligence (AI) is increasingly becoming an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a Natural-Language-Processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about the advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
Keywords:
Artificial Intelligence
Machine Behaviour
Behavioural Ethics
Advice
Document Type:
Working Paper
