Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/227696
Authors:
Year of Publication:
2018
Series/Report No.:
JRC Digital Economy Working Paper No. 2018-10
Publisher:
European Commission, Joint Research Centre (JRC), Seville
Abstract:
Machine learning algorithms are now frequently used in sensitive contexts that substantially affect the course of human lives, such as credit lending or criminal justice. This is driven by the idea that 'objective' machines base their decisions solely on facts and remain unaffected by human cognitive biases, discriminatory tendencies or emotions. Yet, there is overwhelming evidence showing that algorithms can inherit or even perpetuate human biases in their decision making when they are trained on data that contains biased human decisions. This has led to a call for fairness-aware machine learning. However, fairness is a complex concept, which is also reflected in the attempts to formalize fairness for algorithmic decision making. Statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts. Moreover, inherent trade-offs between these criteria make it impossible to unify them in one general framework. Thus, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. In the future, research on algorithmic decision-making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits.
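The abstract's claim about inherent trade-offs between statistical fairness criteria can be made concrete with a small numerical sketch. The Python example below is not from the paper; the data, group labels, and function names are hypothetical. It computes two widely discussed criteria, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups), on toy predictions where the two groups have different base rates. The classifier satisfies the first criterion exactly while violating the second, illustrating why such criteria cannot in general hold simultaneously.

```python
# Hypothetical sketch: two statistical fairness criteria on toy data.
# Demonstrates that satisfying demographic parity does not imply
# equal opportunity when group base rates differ.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return rate(1) - rate(0)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between group 1 and group 0."""
    def tpr(g):
        # Predictions for members of group g whose true outcome is positive.
        hits = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(hits) / len(hits)
    return tpr(1) - tpr(0)

# Toy data (hypothetical): group 0 has base rate 0.5, group 1 has base rate 0.75.
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]

# Both groups receive positive predictions at rate 0.5 -> parity holds (gap 0.0),
# but group 1's true positives are detected at rate 2/3 vs. 1.0 for group 0.
print(demographic_parity_gap(y_pred, group))         # 0.0
print(equal_opportunity_gap(y_true, y_pred, group))  # -0.333...
```

Which gap matters depends on the domain, which is the paper's point: no single statistical criterion is appropriate in all contexts.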
Keywords:
fairness
machine learning
algorithmic bias
algorithmic transparency
Document Type:
Working Paper

File(s):
Size: 1.13 MB
Publications in EconStor are protected by copyright.