Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/314760
Year of publication:
2025
Series/Report No.:
CESifo Working Paper No. 11721
Publisher:
CESifo GmbH, Munich
Abstract:
We examine predictive machine learning studies from 50 top business and economic journals published between 2010 and 2023. We investigate their transparency regarding the predictive performance of machine learning models compared to less complex traditional statistical models that require fewer resources in terms of time and energy. We find that the adoption of machine learning varies by discipline and is most frequent in information systems, marketing, and operations research journals. Our analysis also reveals that 28% of studies do not benchmark the predictive performance of machine learning models against traditional statistical models. These studies receive fewer citations, arguably due to a less rigorous analysis. Studies that include traditional statistical models as benchmarks typically report high outperformance for the best machine learning model. However, the performance improvement is substantially lower for the average reported machine learning model. We contend that, due to opaque reporting practices, it often remains unclear whether the predictive gains justify the increased costs of more complex models. We advocate for standardized, transparent model reporting that relates predictive gains to the efficiency of machine learning models compared to less costly traditional statistical models.
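A minimal sketch of the kind of benchmarking the abstract advocates: fit a machine learning model and a simpler traditional statistical baseline on the same data, then report the predictive gain alongside the extra fitting cost. The dataset, the choice of linear regression versus a random forest, and the metrics are illustrative assumptions, not the models or data used in the paper.

```python
# Hedged example (not from the paper): compare an ML model against a
# traditional statistical baseline and relate the predictive gain to cost.
import time

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data as a stand-in for a real prediction task.
X, y = make_regression(n_samples=5000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

results = {}
for name, model in [
    ("baseline: linear regression", LinearRegression()),
    ("ML model: random forest", RandomForestRegressor(random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X_train, y_train)
    fit_seconds = time.perf_counter() - start
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    results[name] = (rmse, fit_seconds)
    print(f"{name}: RMSE = {rmse:.2f}, fit time = {fit_seconds:.2f}s")

# Report the relative gain together with the relative cost, so readers can
# judge whether the added complexity is justified.
base_rmse, base_time = results["baseline: linear regression"]
ml_rmse, ml_time = results["ML model: random forest"]
print(f"RMSE improvement over baseline: {100 * (base_rmse - ml_rmse) / base_rmse:.1f}%")
print(f"Fit-time ratio (ML / baseline): {ml_time / max(base_time, 1e-9):.1f}x")
```

Reporting both numbers side by side, rather than only the best model's accuracy, is the transparent practice the abstract calls for.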
Keywords:
machine learning
predictive modelling
transparent model reporting
JEL: 
C18
C40
C52
Document Type:
Working Paper
Publications in EconStor are protected by copyright.