Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/266692 
Year of Publication: 
2022
Series/Report no.: 
SAFE Working Paper No. 369
Publisher: 
Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract: 
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years with the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics: broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and neither data quality nor model quality is always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of antidiscrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both disparate treatment and disparate impact doctrine. The paper concludes with a suggestion to reorient the discussion and an attempt to outline the contours of fair lending law in the age of AI.
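The abstract's reference to disparate impact doctrine can be made concrete with the "four-fifths rule", a common quantitative screen in U.S. disparate impact analysis that compares approval rates across groups. Below is a minimal Python sketch of that screen; it is not drawn from the paper, and the function name and all figures are hypothetical.

```python
# Minimal sketch (not from the paper): the "four-fifths rule" heuristic
# often used as a first screen in disparate impact analysis.
# All lending figures below are hypothetical.

def disparate_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of approval rates: protected group (a) vs. reference group (b)."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# Hypothetical outcomes from a credit scoring model:
ratio = disparate_impact_ratio(approved_a=120, total_a=400,   # 30% approval
                               approved_b=200, total_b=400)   # 50% approval
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60
if ratio < 0.8:
    print("Below the 0.8 threshold -> potential disparate impact")
else:
    print("Within the four-fifths guideline")
```

A ratio of 0.60, as in this toy example, would flag the model for closer review; the paper's point is that such statistical screens sit uneasily with a doctrine built around causation.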
Subjects: 
credit scoring methodology
AI enabled credit scoring
AI borrower classification
responsible lending
credit scoring regulation
financial privacy
statistical discrimination
JEL: 
C18
C32
K12
K23
K33
K40
J14
O31
O33
Document Type: 
Working Paper
