Please use this link to cite this publication or to reference it as an internet source: https://hdl.handle.net/10419/318541
Year of publication:
2025
Series/Report No.:
Discussion Papers of the Max Planck Institute for Research on Collective Goods No. 2025/3
Publisher:
Max Planck Institute for Research on Collective Goods, Bonn
Abstract:
Recent advances in AI create possibilities for delegating legal decision-making to machines or enhancing human adjudication through AI assistance. Using classic normative conflicts - the trolley problem and similar moral dilemmas - as a proof of concept, we examine the alignment between AI legal reasoning and human judgment. In our baseline experiment, we find a pronounced mismatch between decisions made by GPT and those of human subjects. This misalignment raises substantive concerns for AI-powered legal decision-aids. We investigate whether explicit normative guidance can address this misalignment, with mixed results. GPT-3.5 is susceptible to such intervention, but frequently refuses to decide when faced with a moral dilemma. GPT-4 is outright utilitarian, and essentially ignores the instruction to decide on deontological grounds. GPT-o3-mini faithfully implements this instruction, but is unwilling to balance deontological and utilitarian concerns if instructed to do so. At least for the time being, explicit normative instructions are not fully able to realign AI advice with the normative convictions of the legislator.
Keywords:
large language models
human-AI alignment
rule of law
moral dilemmas
trolley problems
JEL: 
C99
D63
D81
K10
K40
Z13
Document type:
Working Paper

Publications in EconStor are protected by copyright.