TY - JOUR
T1 - On the convergence of augmented Lagrangian strategies for nonlinear programming
AU - Andreani, Roberto
AU - Ramos, Alberto
AU - Ribeiro, Ademir A.
AU - Secchin, Leonardo D.
AU - Velazco, Ariel R.
N1 - Publisher Copyright:
© 2021 The Author(s). Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - Augmented Lagrangian (AL) algorithms are very popular and successful methods for solving constrained optimization problems. Recently, global convergence analysis of these methods has been dramatically improved by using the notion of sequential optimality conditions. Such conditions are necessary for optimality, regardless of the fulfillment of any constraint qualifications, and provide theoretical tools to justify stopping criteria of several numerical optimization methods. Here, we introduce a new sequential optimality condition stronger than those previously stated in the literature. We show that a well-established safeguarded Powell-Hestenes-Rockafellar (PHR) AL algorithm generates points that satisfy the new condition under a Lojasiewicz-type assumption, improving and unifying all the previous convergence results. Furthermore, we introduce a new primal-dual AL method capable of achieving such points without the Lojasiewicz hypothesis. We then propose a hybrid method in which the new strategy acts to help the safeguarded PHR method when it tends to fail. We show by preliminary numerical tests that all the problems already successfully solved by the safeguarded PHR method remain unchanged, while others where the PHR method failed are now solved with an acceptable additional computational cost.
AB - Augmented Lagrangian (AL) algorithms are very popular and successful methods for solving constrained optimization problems. Recently, global convergence analysis of these methods has been dramatically improved by using the notion of sequential optimality conditions. Such conditions are necessary for optimality, regardless of the fulfillment of any constraint qualifications, and provide theoretical tools to justify stopping criteria of several numerical optimization methods. Here, we introduce a new sequential optimality condition stronger than those previously stated in the literature. We show that a well-established safeguarded Powell-Hestenes-Rockafellar (PHR) AL algorithm generates points that satisfy the new condition under a Lojasiewicz-type assumption, improving and unifying all the previous convergence results. Furthermore, we introduce a new primal-dual AL method capable of achieving such points without the Lojasiewicz hypothesis. We then propose a hybrid method in which the new strategy acts to help the safeguarded PHR method when it tends to fail. We show by preliminary numerical tests that all the problems already successfully solved by the safeguarded PHR method remain unchanged, while others where the PHR method failed are now solved with an acceptable additional computational cost.
KW - approximate KKT conditions
KW - augmented Lagrangian methods
KW - nonlinear optimization
KW - optimality conditions
KW - stopping criteria
UR - https://www.scopus.com/pages/publications/85130069681
U2 - 10.1093/imanum/drab021
DO - 10.1093/imanum/drab021
M3 - Article
AN - SCOPUS:85130069681
SN - 0272-4979
VL - 42
SP - 1735
EP - 1765
JO - IMA Journal of Numerical Analysis
JF - IMA Journal of Numerical Analysis
IS - 2
ER -