Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/308118 
Authors: 
Year of Publication: 
2023
Citation: 
[Journal:] AStA Advances in Statistical Analysis [ISSN:] 1863-818X [Volume:] 108 [Issue:] 2 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2023 [Pages:] 427-440
Publisher: 
Springer, Berlin, Heidelberg
Abstract: 
Black box machine learning models are currently being used for high-stakes decision making in various parts of society such as healthcare and criminal justice. While tree-based ensemble methods such as random forests typically outperform deep learning models on tabular data sets, their built-in variable importance algorithms are known to be strongly biased toward high-entropy features. It was recently shown that the increasingly popular SHAP (SHapley Additive exPlanations) values suffer from a similar bias. We propose debiased or "shrunk" SHAP scores based on sample splitting, which additionally enable the detection of overfitting issues at the feature level.
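The sample-splitting idea described in the abstract can be illustrated with a minimal sketch; this is not the article's exact procedure, only an assumed setup using the Python shap and scikit-learn packages, a synthetic regression data set, an arbitrary 50/50 split, and an arbitrary flagging threshold (held-out importance below half the in-sample importance). The held-out mean absolute SHAP values play the role of "shrunk" scores, and features whose importance collapses out of sample point to possible overfitting at the feature level.

# Minimal sketch (assumptions as described above, not the authors' method):
# fit a random forest on one half of the data, then compare per-feature
# mean |SHAP| on the fitting half versus the held-out half.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic data: 10 features, only 5 of which are informative.
X, y = make_regression(n_samples=1000, n_features=10, n_informative=5, random_state=0)

# Sample splitting: one half for fitting, one half held out.
X_fit, X_holdout, y_fit, y_holdout = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_fit, y_fit)

explainer = shap.TreeExplainer(model)
shap_fit = explainer.shap_values(X_fit)          # in-sample SHAP values
shap_holdout = explainer.shap_values(X_holdout)  # held-out ("shrunk") SHAP values

imp_fit = np.abs(shap_fit).mean(axis=0)          # in-sample importance per feature
imp_holdout = np.abs(shap_holdout).mean(axis=0)  # held-out importance per feature

for j, (a, b) in enumerate(zip(imp_fit, imp_holdout)):
    # Illustrative threshold: flag features whose importance drops by more than half.
    flag = "  <- possible overfitting" if b < 0.5 * a else ""
    print(f"feature {j}: in-sample {a:.3f}, held-out {b:.3f}{flag}")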
Subjects: 
Interpretable machine learning
Feature importance
Random forests
SHAP values
Explainable artificial intelligence
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
