Please use this link to cite this publication or to refer to it as an Internet source: https://hdl.handle.net/10419/266692
Authors: 
Year of publication: 
2022
Series/Report no.: 
SAFE Working Paper No. 369
Publisher: 
Leibniz Institute for Financial Research SAFE, Frankfurt a. M.
Abstract: 
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and neither data nor model quality is always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of antidiscrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both disparate treatment and disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and with an attempt to outline the contours of fair lending law in the age of AI.
Keywords: 
credit scoring methodology
AI-enabled credit scoring
AI borrower classification
responsible lending
credit scoring regulation
financial privacy
statistical discrimination
JEL: 
C18
C32
K12
K23
K33
K40
J14
O31
O33
Persistent identifier of the first edition: 
Document type: 
Working Paper

Publications in EconStor are protected by copyright.