Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/318541 
Year of Publication: 
2025
Series/Report no.: 
Discussion Papers of the Max Planck Institute for Research on Collective Goods No. 2025/3
Publisher: 
Max Planck Institute for Research on Collective Goods, Bonn
Abstract: 
Recent advances in AI create possibilities for delegating legal decision-making to machines or enhancing human adjudication through AI assistance. Using classic normative conflicts - the trolley problem and similar moral dilemmas - as a proof of concept, we examine the alignment between AI legal reasoning and human judgment. In our baseline experiment, we find a pronounced mismatch between decisions made by GPT and those of human subjects. This misalignment raises substantive concerns for AI-powered legal decision-aids. We investigate whether explicit normative guidance can address this misalignment, with mixed results. GPT-3.5 is susceptible to such intervention, but frequently refuses to decide when faced with a moral dilemma. GPT-4 is outright utilitarian, and essentially ignores the instruction to decide on deontological grounds. GPT-o3-mini faithfully implements this instruction, but is unwilling to balance deontological and utilitarian concerns if instructed to do so. At least for the time being, explicit normative instructions are not fully able to realign AI advice with the normative convictions of the legislator.
Subjects: 
large language models
human-AI alignment
rule of law
moral dilemmas
trolley problems
JEL: 
C99
D63
D81
K10
K40
Z13
Document Type: 
Working Paper
