Please use this link to cite this publication or to refer to it as an internet resource: https://hdl.handle.net/10419/315247
Year of publication:
2024
Citation:
Computational Optimization and Applications, ISSN 1573-2894, Springer US, New York, Vol. 89, Iss. 3 (2024), pp. 585-624
Publisher:
Springer US, New York
Abstract:
In this paper, we introduce an inexact regularized proximal Newton method (IRPNM) that does not require any line search. The method is designed to minimize the sum of a twice continuously differentiable function f and a convex (possibly nonsmooth and extended-valued) function φ. Instead of controlling a step size by a line-search procedure, we update the regularization parameter in a suitable way, based on the success of the previous iteration. The global convergence of the sequence of iterates and its superlinear convergence rate under a local Hölderian error bound assumption are shown. Notably, these convergence results are obtained without requiring a global Lipschitz property of ∇f, which, to the best of the authors' knowledge, is a novel contribution for proximal Newton methods. To highlight the efficiency of our approach, we provide numerical comparisons with an IRPNM using a line-search globalization and a modern FISTA-type method.
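The regularization-parameter update that replaces the line search can be pictured with a small numerical sketch. The following Python code is an illustrative reconstruction, not the paper's algorithm: it assumes φ = λ‖·‖₁, solves the regularized Newton subproblem inexactly with a few proximal-gradient (ISTA) steps on the quadratic model, and uses a trust-region-style ratio test whose constants (eta, the update factors 0.5 and 4, the tolerances) are arbitrary stand-ins for the paper's precise acceptance and inexactness criteria.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irpnm_sketch(f, grad_f, hess_f, lam, x0,
                 mu=1.0, eta=0.1, max_iter=100, inner_iter=50, tol=1e-8):
    # Hypothetical sketch: regularized proximal Newton steps for
    # min f(x) + lam*||x||_1, with the regularization parameter mu
    # adapted by a ratio test instead of a line search.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_f(x)
        # Proximal-gradient residual as a stationarity measure.
        if np.linalg.norm(x - soft_threshold(x - g, lam)) < tol:
            break
        # Regularized Newton model:
        # q(z) = g^T (z-x) + 0.5 (z-x)^T M (z-x) + lam*||z||_1, M = H + mu*I.
        M = hess_f(x) + mu * np.eye(x.size)
        L = np.linalg.norm(M, 2)        # step size 1/L for the inner solver
        z = x.copy()
        for _ in range(inner_iter):     # inexact subproblem solve: a few ISTA steps
            z = soft_threshold(z - (g + M @ (z - x)) / L, lam / L)
        d = z - x
        # Predicted (model) and actual reduction of f + lam*||.||_1.
        pred = -(g @ d + 0.5 * d @ (M @ d)) \
               + lam * (np.linalg.norm(x, 1) - np.linalg.norm(z, 1))
        ared = f(x) + lam * np.linalg.norm(x, 1) \
               - f(z) - lam * np.linalg.norm(z, 1)
        if pred > 0 and ared >= eta * pred:
            x, mu = z, max(1e-8, 0.5 * mu)  # success: accept step, relax regularization
        else:
            mu *= 4.0                       # failure: keep x, increase regularization
    return x

# Usage: a small lasso problem, f(x) = 0.5*||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = irpnm_sketch(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                 lambda x: A.T @ (A @ x - b),
                 lambda x: A.T @ A,
                 lam=0.5, x0=np.zeros(10))

On a successful step the regularization parameter mu shrinks, so the iteration approaches a pure proximal Newton step, which is what permits the fast local rate; on an unsuccessful step mu grows, playing the role that a shrinking step size would play in a line search.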
Keywords:
Nonsmooth and nonconvex optimization
Global and local convergence
Regularized proximal Newton method
Hölderian local error bound
Persistent identifier of the first publication:
Creative Commons licence:
cc-by
Document type:
Article
Document version:
Published Version
Publications in EconStor are protected by copyright.