Please use this link to cite this publication or to refer to it as an Internet source: https://hdl.handle.net/10419/307763
Authors: 
Year of Publication: 
2023
Citation: 
[Journal:] AStA Advances in Statistical Analysis [ISSN:] 1863-818X [Volume:] 108 [Issue:] 2 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2023 [Pages:] 231-258
Publisher: 
Springer, Berlin, Heidelberg
Abstract: 
Neural networks are becoming increasingly popular in applications, but our mathematical understanding of their potential and limitations is still limited. In this paper, we further this understanding by developing statistical guarantees for sparse deep learning. In contrast to previous work, we consider different types of sparsity, such as few active connections, few active nodes, and other norm-based types of sparsity. Moreover, our theories cover important aspects that previous theories have neglected, such as multiple outputs, regularization, and ℓ2-loss. The guarantees have a mild dependence on network widths and depths, which means that they support the application of sparse but wide and deep networks from a statistical perspective. Some of the concepts and tools that we use in our derivations are uncommon in deep learning and, hence, might be of additional interest.
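To make the sparsity notions named in the abstract concrete, here is a minimal Python sketch. It is not taken from the paper: the layer sizes, the hard-thresholding and row-selection rules, and all variable names (W, k, m) are illustrative assumptions, not the authors' constructions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrix of a single fully connected layer (hypothetical sizes).
W = rng.normal(size=(50, 100))

# Connection sparsity: keep only the k largest-magnitude weights,
# setting all other connections exactly to zero.
k = 200
threshold = np.sort(np.abs(W), axis=None)[-k]
W_conn = np.where(np.abs(W) >= threshold, W, 0.0)

# Node sparsity: keep only the m output nodes (rows) with the largest
# norms, deactivating all connections of the remaining nodes at once.
m = 10
active = np.argsort(np.linalg.norm(W, axis=1))[-m:]
W_node = np.zeros_like(W)
W_node[active] = W[active]

# Norm-based sparsity: rather than hard zeros, one controls a norm of
# the weights, e.g. via a regularized objective  loss + lam * ||W||_1.
l1_norm = np.abs(W_conn).sum()

print("nonzero connections:", np.count_nonzero(W_conn))
print("active nodes:", np.count_nonzero(W_node.any(axis=1)))
print("l1-norm of the thresholded weights:", round(l1_norm, 2))
```

The three quantities printed at the end correspond, respectively, to the "few active connections", "few active nodes", and "norm-based" types of sparsity that the guarantees in the paper cover.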
Keywords: 
Sparsity
Regularization
Oracle inequalities
High-dimensionality
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version

Publications in EconStor are protected by copyright.