Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/308118
Authors:
Year of Publication:
2023
Citation:
[Journal:] AStA Advances in Statistical Analysis [ISSN:] 1863-818X [Volume:] 108 [Issue:] 2 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2023 [Pages:] 427-440
Publisher:
Springer, Berlin, Heidelberg
Abstract:
Black-box machine learning models are currently used for high-stakes decision-making in various parts of society, such as healthcare and criminal justice. While tree-based ensemble methods such as random forests typically outperform deep learning models on tabular data sets, their built-in variable importance algorithms are known to be strongly biased toward high-entropy features. It was recently shown that the increasingly popular SHAP (SHapley Additive exPlanations) values suffer from a similar bias. We propose debiased or "shrunk" SHAP scores based on sample splitting, which additionally enable the detection of overfitting issues at the feature level.
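A minimal sketch of the sample-splitting idea from the abstract, assuming scikit-learn and the shap package. It illustrates the general approach rather than the authors' published algorithm: the 50/50 split, the use of shap.TreeExplainer, and the element-wise minimum as the "shrunk" score are assumptions made here for demonstration.

```python
# Illustrative sketch only: compare mean |SHAP| importances computed on the
# fitting half against a held-out half. The shrinkage rule below (element-wise
# minimum) is an assumption for demonstration, not the paper's exact estimator.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=8, n_informative=4,
                       random_state=0)
X_fit, X_eval, y_fit, y_eval = train_test_split(X, y, test_size=0.5,
                                                random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_fit, y_fit)
explainer = shap.TreeExplainer(model)

# Mean absolute SHAP value per feature, in-sample vs. out-of-sample.
imp_in = np.abs(explainer.shap_values(X_fit)).mean(axis=0)
imp_out = np.abs(explainer.shap_values(X_eval)).mean(axis=0)

# Keep only the importance that replicates out of sample; a large in/out gap
# flags feature-level overfitting.
shrunk = np.minimum(imp_in, imp_out)
for j in np.argsort(-shrunk):
    print(f"feature {j}: in={imp_in[j]:.3f}  out={imp_out[j]:.3f}  "
          f"shrunk={shrunk[j]:.3f}")
```

Splitting before explaining mirrors the abstract's point: importance that appears only on the data the forest was fit to is a symptom of overfitting rather than signal.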
Keywords:
Interpretable machine learning
Feature importance
Random forests
SHAP values
Explainable artificial intelligence
Persistent Identifier of the first edition:
Creative Commons License:
cc-by
Document Type:
Article
Document Version:
Published Version
