Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/311960
Year of publication:
2024
Series/Report No.:
IZA Discussion Papers No. 17521
Publisher:
Institute of Labor Economics (IZA), Bonn
Abstract:
Do large language models (LLMs)—such as ChatGPT 3.5, ChatGPT 4.0, and Google's Gemini 1.0 Pro—simulate human behavior in the context of the Prisoner's Dilemma (PD) game with varying stake sizes? This paper investigates this question, examining how LLMs navigate scenarios in which self-interested behavior by all players leads to outcomes everyone prefers less, offering insights into how LLMs might "perceive" human decision-making. Replicating "Study 2" of Yamagishi et al. (2016), we analyze LLM responses to different payoff stakes and the influence of stake order on cooperation rates. LLMs demonstrate sensitivity to these factors, and some LLMs mirror human behavior only under very specific circumstances, implying the need for cautious application of LLMs in behavioral research.
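To make the experimental setup concrete, the sketch below shows one way such a stake-variation study could be coded. It is a hypothetical illustration only, not the authors' code: the stake values, the prompt wording, the session count, and the placeholder query_llm function (which would be replaced by calls to an actual LLM API such as ChatGPT or Gemini) are all assumptions for demonstration.

```python
import random

# Hypothetical illustration only -- not the authors' code or prompts.
STAKES = [10, 1_000, 100_000]              # assumed payoff magnitudes
ORDERS = ["low-to-high", "high-to-low"]    # order in which stakes are presented

def pd_prompt(stake: int) -> str:
    """Build a one-shot Prisoner's Dilemma prompt for a given stake size."""
    t, r, p, s = int(1.5 * stake), stake, stake // 5, 0   # T > R > P > S
    return (
        f"You and another player simultaneously choose to cooperate or defect. "
        f"Both cooperate: {r} each. Both defect: {p} each. "
        f"If only you defect: you get {t}, the other gets {s}. "
        f"Answer with one word: cooperate or defect."
    )

def query_llm(prompt: str) -> str:
    # Placeholder: a real run would send the prompt to an LLM via its API;
    # here a coin flip stands in for the model's answer.
    return random.choice(["cooperate", "defect"])

def run_session(order: str) -> dict:
    """One simulated session: present all stakes in the given order."""
    stakes = sorted(STAKES, reverse=(order == "high-to-low"))
    # In the real design each session would be a single multi-turn chat,
    # so stakes seen earlier can influence answers to later ones.
    return {stake: query_llm(pd_prompt(stake)) for stake in stakes}

def cooperation_rates(n_sessions: int = 50) -> dict:
    """Cooperation rate per (stake order, stake size) cell."""
    rates = {}
    for order in ORDERS:
        sessions = [run_session(order) for _ in range(n_sessions)]
        for stake in STAKES:
            coop = sum(s[stake] == "cooperate" for s in sessions)
            rates[(order, stake)] = coop / n_sessions
    return rates

if __name__ == "__main__":
    for (order, stake), rate in sorted(cooperation_rates().items()):
        print(f"{order:11s}  stake={stake:>7,d}  cooperation rate = {rate:.2f}")
```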
Keywords:
Prisoner's Dilemma
cooperation
payoff stakes
artificial intelligence
JEL: 
D01
C72
C90
Document Type:
Working Paper
