Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/327601 
Year of Publication: 
2025
Citation: 
[Journal:] Journal of Innovation & Knowledge (JIK) [ISSN:] 2444-569X [Volume:] 10 [Issue:] 3 [Article No.:] 100700 [Year:] 2025 [Pages:] 1-12
Publisher: 
Elsevier, Amsterdam
Abstract: 
Recent studies focus on machine learning (ML) algorithms for predicting employee churn (ECn) to avert likely economic loss, technology leakage, and the transfer of customers and knowledge. However, can human resource professionals rely on algorithmic predictions? Can they make decisions when the prediction process itself is not understood? Owing to their lack of interpretability, the proprietary nature and growing complexity of ML models make it difficult for domain experts to comprehend these multifaceted black boxes. To address the interpretability, trust, and transparency of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) to identify the factors that escalate ECn and to analyse its negative impact on productivity, employee morale, and financial stability. We propose a predictive pipeline that compares candidate algorithms on performance metrics and retains the two top performers. We then apply an XAI method based on Shapley values, the SHapley Additive exPlanations (SHAP) approach, to identify and compare the feature importances of the two top-performing algorithms, logistic regression and random forest, on our dataset. Interpreting the predictive outcomes unboxes the predictions, enhancing trust and facilitating retention strategies.
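For illustration, the following is a minimal Python sketch of the workflow the abstract describes, not the authors' actual code: fit logistic regression and random forest churn classifiers, compare them on held-out metrics, then explain the stronger model with SHAP. The file name employee_churn.csv, the label column churn, and the assumption that features are already numerically encoded are all hypothetical; the paper's dataset and preprocessing may differ.

```python
# Minimal sketch (assumed data schema, not the paper's implementation):
# compare LR and RF on standard metrics, then explain the forest with SHAP.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("employee_churn.csv")            # hypothetical: one row per employee
X, y = df.drop(columns=["churn"]), df["churn"]    # churn = 1 if the employee left
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f} AUC={roc_auc_score(y_test, proba):.3f}")

# SHAP attributions for the random forest. Depending on the SHAP version,
# shap_values() returns a list with one array per class or a 3-D array
# (samples, features, classes); keep the positive (churn) class either way.
sv = shap.TreeExplainer(models["random forest"]).shap_values(X_test)
sv = sv[1] if isinstance(sv, list) else (sv[:, :, 1] if sv.ndim == 3 else sv)
shap.summary_plot(sv, X_test)                     # global feature-importance plot
# For logistic regression, shap.LinearExplainer can be applied analogously.
```

In the paper's framing, the SHAP summary plot is what "unboxes" the black box: each feature's mean absolute Shapley value yields a global importance ranking that can be compared across the two models.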
Subjects: 
Explainable AI
Logistic regression
Random forest
Machine learning
Employee churn
JEL: 
O33
O39
M51
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by-nc-nd
Document Type: 
Article
