Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/313173 
Year of Publication: 
2022
Citation: 
[Journal:] Computational Optimization and Applications [ISSN:] 1573-2894 [Volume:] 84 [Issue:] 1 [Publisher:] Springer US [Place:] New York, NY [Year:] 2022 [Pages:] 265-294
Publisher: 
Springer US, New York, NY
Abstract: 
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
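To make the implicit-regularization idea in the abstract concrete, the following Python sketch runs plain gradient descent on an unconstrained empirical least-squares objective with a Gaussian kernel. It is an illustration only, not the paper's analysis or experiments: the synthetic data, the kernel bandwidth, the step-size choice, and the helper gauss_kernel are all assumptions made here for demonstration. The point it shows is that, with no explicit penalty, the number of iterations acts as the regularization parameter.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative assumption): noisy observations of a smooth target.
n, n_test = 200, 500
X = rng.uniform(-1, 1, size=(n, 1))
X_test = rng.uniform(-1, 1, size=(n_test, 1))
f = lambda x: np.sin(3 * np.pi * x[:, 0])
y = f(X) + 0.3 * rng.standard_normal(n)
y_test = f(X_test)  # noiseless targets serve as a proxy for the test (population) error

# Gaussian kernel; the predictor is a kernel expansion over the training points.
def gauss_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gauss_kernel(X, X)
K_test = gauss_kernel(X_test, X)

# Gradient descent on the unconstrained empirical objective (1/2n) * ||K c - y||^2.
# No explicit penalty is added: early stopping provides the (implicit) regularization.
c = np.zeros(n)
step = 1.0 / np.linalg.norm(K, 2)  # conservative step size from the largest eigenvalue of K
for t in range(1, 2001):
    grad = K @ (K @ c - y) / n
    c -= step * grad
    if t in (10, 100, 500, 2000):
        train_err = np.mean((K @ c - y) ** 2)
        test_err = np.mean((K_test @ c - y_test) ** 2)
        print(f"iter {t:5d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

Running the loop longer drives the training error toward zero, while the test error typically bottoms out at an intermediate iteration; stopping around that point is the implicit regularization effect the abstract refers to.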
Subjects: 
Implicit regularization
Kernel methods
Statistical learning
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
