Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/315696
Year of publication:
2024
Citation:
[Journal:] TOP [ISSN:] 1863-8279 [Volume:] 32 [Issue:] 3 [Publisher:] Springer Berlin Heidelberg [Place:] Berlin/Heidelberg [Year:] 2024 [Pages:] 517-536
Publisher:
Springer Berlin Heidelberg, Berlin/Heidelberg
Abstract:
The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when these algorithms are fine-tuned for the application at hand. Although this tuning process can require large computational costs, recent work has shown that these costs can be reduced by line search methods that iteratively adjust the step length. We propose an alternative approach to stochastic line search by using a new algorithm based on forward step model building. This model building step incorporates second-order information that allows adjusting not only the step length but also the search direction. Noting that deep learning model parameters come in groups (layers of tensors), our method builds its model and calculates a new step for each parameter group. This novel diagonalization approach makes the selected step lengths adaptive. We provide convergence rate analysis, and experimentally show that the proposed algorithm achieves faster convergence and better generalization on well-known test problems. More precisely, SMB requires less tuning, and shows comparable performance to other adaptive methods.
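To make the forward step model building idea described in the abstract concrete, the following is a minimal, illustrative sketch, not the authors' reference implementation of SMB. It assumes a toy least-squares problem, and the helper names (`stochastic_grad`, `smb_step`) and the curvature-based step rule are assumptions chosen for illustration: a trial SGD step is taken, the gradient is re-evaluated at the trial point, and a simple quadratic model per parameter group is used to recompute the step.

```python
# Illustrative sketch only (assumed toy problem and helper names, not the SMB paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: linear least squares, f(w) = 0.5 * ||X w - y||^2 / n
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def stochastic_grad(w, batch):
    """Mini-batch gradient of the least-squares loss."""
    Xb, yb = X[batch], y[batch]
    return Xb.T @ (Xb @ w - yb) / len(batch)

def smb_step(g_g, g_trial_g, s_g, lr):
    """Build a simple quadratic model for one parameter group from the gradients
    at the current point (g_g) and the trial point (g_trial_g), and return a
    model-based step.  The curvature along the trial step s_g is estimated from
    the gradient difference; this is only a sketch of the model-building idea."""
    y_g = g_trial_g - g_g                      # gradient change along the trial step
    curvature = float(y_g @ s_g) / (float(s_g @ s_g) + 1e-12)
    if curvature <= 0.0:                       # no useful curvature: keep the SGD step
        return -lr * g_g
    # Scale the step with the group-wise curvature estimate; this adapts both the
    # step length and, through the gradient difference, the search direction.
    return -(1.0 / curvature) * g_g

w = np.zeros(d)
groups = [slice(0, d // 2), slice(d // 2, d)]  # two "parameter groups" (e.g. layers)
lr, batch_size = 0.5, 32

for it in range(200):
    batch = rng.choice(n, size=batch_size, replace=False)
    g = stochastic_grad(w, batch)
    s = -lr * g                                # forward (trial) SGD step
    g_trial = stochastic_grad(w + s, batch)    # gradient at the trial point
    for grp in groups:                         # build a model per parameter group
        w[grp] += smb_step(g[grp], g_trial[grp], s[grp], lr)

print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

Running the sketch on the toy problem shows the per-group, curvature-scaled steps driving the loss down without tuning the learning rate per group, which is the adaptivity the abstract attributes to the diagonalization approach.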
Keywords:
Model building
Second-order information
Stochastic gradient descent
Convergence analysis
Persistent identifier of the first edition:
Creative Commons License:
cc-by
Document type:
Article
Document version:
Published Version
Appears in collection:

Publications in EconStor are protected by copyright.