Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/274687 
Authors: 
Year of Publication: 
2022
Citation: 
[Journal:] Journal of Risk and Financial Management [ISSN:] 1911-8074 [Volume:] 15 [Issue:] 4 [Article No.:] 165 [Year:] 2022 [Pages:] 1-10
Publisher: 
MDPI, Basel
Abstract: 
Focus on fair lending has intensified recently as bank and non-bank lenders apply artificial intelligence (AI)-based credit determination approaches. The data analytics techniques behind AI and machine learning (ML) have proven powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. At the same time, ML may reduce the potential for discrimination by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML applications in loan underwriting and pricing, the fair lending assessments typically led by compliance and legal functions will likely continue to evolve. In this paper, the author discusses considerations unique to ML within existing fair lending risk assessment practice for underwriting and pricing models and proposes additional evaluations to supplement current practice.
Subjects: 
algorithm
bias
discrimination
disparate
fair lending
ML
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
