Abstract:
Recent advances in AI create possibilities for delegating legal decision-making to machines or enhancing human adjudication through AI assistance. Using classic normative conflicts - the trolley problem and similar moral dilemmas - as a proof of concept, we examine the alignment between AI legal reasoning and human judgment. In our baseline experiment, we find a pronounced mismatch between GPT's decisions and those of human subjects. This misalignment raises substantive concerns about AI-powered legal decision aids. We investigate whether explicit normative guidance can address the misalignment, with mixed results. GPT-3.5 is susceptible to such intervention but frequently refuses to decide when faced with a moral dilemma. GPT-4 is outright utilitarian and essentially ignores the instruction to decide on deontological grounds. GPT-o3-mini faithfully implements this instruction, but is unwilling to balance deontological and utilitarian concerns when instructed to do so. At least for the time being, explicit normative instructions cannot fully realign AI advice with the normative convictions of the legislator.