Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/249179
Year of publication:
2022
Series/No.:
SAFE Working Paper No. 336
Publisher:
Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract:
With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual's decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through reduced responsibility for one's own and others' fate.
Keywords:
Artificial Intelligence
Big Data
Pivotality
Ethics
Experiment
JEL: 
C49
C91
D10
D63
D64
O33
Document type:
Working Paper

File(s): 1.92 MB

Publications in EconStor are protected by copyright.