Please use this link to cite this publication or to reference it as an internet source: https://hdl.handle.net/10419/313173
Year of publication: 
2022
Citation: 
[Journal:] Computational Optimization and Applications [ISSN:] 1573-2894 [Volume:] 84 [Issue:] 1 [Publisher:] Springer US [Place:] New York, NY [Year:] 2022 [Pages:] 265-294
Publisher: 
Springer US, New York, NY
Abstract: 
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
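For readers unfamiliar with the implicit regularization phenomenon the abstract refers to, the following minimal sketch (not taken from the paper; the kernel, data, and names are illustrative assumptions) shows the standard setting it builds on: gradient descent on an unregularized kernel least-squares objective, where the number of iterations plays the role of the regularization parameter and early stopping controls the test error.

```python
# Illustrative sketch (not the paper's method): gradient descent on an
# unregularized kernel least-squares objective. The iteration count acts
# as an implicit regularization parameter; the test error often decreases
# and may later increase, so stopping early acts as the regularizer.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, bandwidth=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

# Synthetic 1-D regression problem with noisy training labels.
n_train, n_test = 50, 200
X_train = rng.uniform(-1, 1, size=(n_train, 1))
X_test = rng.uniform(-1, 1, size=(n_test, 1))
target = lambda x: np.sin(3 * np.pi * x[:, 0])
y_train = target(X_train) + 0.3 * rng.standard_normal(n_train)
y_test = target(X_test)

K = gaussian_kernel(X_train, X_train)
K_test = gaussian_kernel(X_test, X_train)

# Gradient descent on the empirical objective (1/2n) * ||K c - y||^2
# with no explicit penalty; c collects the kernel expansion coefficients.
c = np.zeros(n_train)
step = n_train / (np.linalg.norm(K, 2) ** 2)  # 1/L for this quadratic
for t in range(1, 2001):
    grad = K @ (K @ c - y_train) / n_train
    c -= step * grad
    if t in (10, 100, 500, 2000):
        test_mse = np.mean((K_test @ c - y_test) ** 2)
        print(f"iteration {t:5d}: test MSE = {test_mse:.3f}")
```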
Keywords: 
Implicit regularization
Kernel methods
Statistical learning
Persistent identifier of the first edition: 
Creative Commons License: 
cc-by
Document type: 
Article
Document version: 
Published Version

Publications in EconStor are protected by copyright.