Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/288701 
Year of Publication: 
2020
Citation: 
[Journal:] Business & Information Systems Engineering [ISSN:] 1867-0202 [Volume:] 63 [Issue:] 1 [Publisher:] Springer Fachmedien Wiesbaden [Place:] Wiesbaden [Year:] 2020 [Pages:] 39-54
Publisher: 
Springer Fachmedien Wiesbaden, Wiesbaden
Abstract: 
The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is often considered fairer than human decisions, which are prone to social prejudices. Recent publications, however, suggest that the fairness of algorithmic decision making is not guaranteed. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of certain gender and ethnic groups in the training data set leads to an unpredictable over- or underestimation of the likelihood that members of these groups will be invited to a job interview. Furthermore, the algorithms replicate the existing inequalities in the data set. Firms therefore have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced.
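To make the mechanism described above concrete, the following minimal sketch (not taken from the study; the synthetic data, variable names, and the plain logistic-regression model are all hypothetical) illustrates how a classifier trained on an unbalanced, historically biased data set can replicate that bias as a gap in predicted invitation rates between a majority and a minority group, even when both groups are equally qualified:

# Illustrative sketch only: synthetic data and a plain logistic regression,
# not the study's video-analysis pipeline. It shows how a model trained on an
# unbalanced, historically biased data set replicates that bias at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n_majority, n_minority, historical_penalty=0.0):
    """Draw a synthetic applicant pool with one protected (minority) group."""
    group = np.concatenate([np.zeros(n_majority), np.ones(n_minority)]).astype(int)
    skill = rng.normal(size=group.size)  # both groups equally qualified by construction
    # Historical invitation labels: minority applicants were invited less often
    # than their skill alone would warrant (the "existing inequality").
    score = skill - historical_penalty * group + rng.normal(scale=0.5, size=group.size)
    invite = (score > 0).astype(int)
    X = np.column_stack([skill, group])
    return X, invite, group

# Training data: minority group under-represented and historically disadvantaged.
X_train, y_train, _ = sample(n_majority=9000, n_minority=500, historical_penalty=1.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluation pool: balanced groups, no historical penalty, equal qualification.
X_test, _, group_test = sample(n_majority=5000, n_minority=5000)
pred = model.predict(X_test)

rate_majority = pred[group_test == 0].mean()
rate_minority = pred[group_test == 1].mean()
print(f"Predicted invitation rate, majority group: {rate_majority:.3f}")
print(f"Predicted invitation rate, minority group: {rate_minority:.3f}")
print(f"Demographic parity difference: {rate_majority - rate_minority:.3f}")

Because the evaluation pool is balanced and free of the historical penalty, any remaining difference in predicted invitation rates is bias inherited from the unbalanced training labels, which is the effect the abstract describes.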
Subjects: 
Fairness
Bias
Algorithmic decision making
Recruitment
Asynchronous video interview
Ethics
HR analytics
Artificial intelligence
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.