Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/311960 
Year of Publication: 
2024
Series/Report no.: 
IZA Discussion Papers No. 17521
Publisher: 
Institute of Labor Economics (IZA), Bonn
Abstract: 
Do large language models (LLMs) such as ChatGPT 3.5, ChatGPT 4.0, and Google's Gemini 1.0 Pro simulate human behavior in the Prisoner's Dilemma (PD) game when stake sizes vary? This paper investigates that question, examining how LLMs navigate scenarios in which self-interested play by all players leads to a less preferred outcome, and thereby offers insight into how LLMs might "perceive" human decision-making. Replicating "Study 2" of Yamagishi et al. (2016), we analyze LLM responses to different payoff stakes and to the order in which stakes are presented, measuring cooperation rates. The LLMs are sensitive to both factors, and some mirror human behavior only under very specific circumstances, implying that LLMs should be applied in behavioral research with caution.
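The abstract describes a prompt-based replication: one-shot Prisoner's Dilemma scenarios posed to an LLM at several stake levels, in different presentation orders, with cooperation rates tallied over repeated runs. The sketch below illustrates how such a design could be wired up; it is a minimal illustration, not the authors' code. The payoff amounts, prompt wording, model name (gpt-4o-mini), and use of the OpenAI Python client are assumptions made for this example.

```python
"""Minimal sketch of a stake-varying Prisoner's Dilemma study with an LLM.

Illustrative only: payoff values and prompt text are placeholders, not the
materials from the paper or from Yamagishi et al. (2016). Assumes the OpenAI
Python client and an API key in the OPENAI_API_KEY environment variable.
"""

from openai import OpenAI

client = OpenAI()

# Hypothetical stake levels (payoff units); the paper's actual amounts are not reproduced here.
STAKES = [100, 1_000, 10_000]

PROMPT = (
    "You are playing a one-shot Prisoner's Dilemma with an anonymous partner. "
    "Each of you chooses COOPERATE or DEFECT without communicating. Payoffs to you: "
    "both cooperate: {r}; you defect, partner cooperates: {t}; "
    "you cooperate, partner defects: {s}; both defect: {p}. "
    "Reply with exactly one word: COOPERATE or DEFECT."
)


def run_participant(order, model="gpt-4o-mini"):
    """Present the stakes sequentially within one conversation, so that
    stake order can influence later choices. Returns {stake: cooperated?}."""
    messages, choices = [], {}
    for stake in order:
        # Illustrative payoffs scaled by the stake, keeping T > R > P > S.
        messages.append({
            "role": "user",
            "content": PROMPT.format(r=stake, t=int(1.5 * stake), s=0, p=int(0.2 * stake)),
        })
        resp = client.chat.completions.create(
            model=model, messages=messages, temperature=1.0
        )
        answer = resp.choices[0].message.content.strip()
        messages.append({"role": "assistant", "content": answer})
        # Crude parse: treat any reply starting with "COOPERATE" as cooperation.
        choices[stake] = answer.upper().startswith("COOPERATE")
    return choices


def cooperation_rates(order, participants=20):
    """Average cooperation per stake over repeated simulated participants."""
    counts = {stake: 0 for stake in order}
    for _ in range(participants):
        for stake, cooperated in run_participant(order).items():
            counts[stake] += cooperated
    return {stake: counts[stake] / participants for stake in order}


if __name__ == "__main__":
    # Compare ascending vs. descending stake order to probe order effects.
    print("low-to-high stakes :", cooperation_rates(STAKES))
    print("high-to-low stakes :", cooperation_rates(list(reversed(STAKES))))
```

Keeping all stake levels inside a single conversation is what allows the order manipulation to matter; fully independent API calls per stake would wash out any order effect.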
Subjects: 
Prisoner's Dilemma
cooperation
payoff stakes
artificial intelligence
JEL: 
D01
C72
C90
Document Type: 
Working Paper
