Conditions for linear convergence of the gradient method for non-convex optimization

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PŁ) inequality. We establish that the PŁ inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate. Moreover, we investigate their relationship with the PŁ inequality.
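For context, a minimal sketch of the classical convergence argument under the PŁ inequality (the smoothness constant L, the PŁ constant \mu, and the step length 1/L are standard assumptions used here for illustration; this is the textbook bound, not the paper's refined rate): if f is L-smooth with minimum value f^* and satisfies

    \frac{1}{2}\,\|\nabla f(x)\|^2 \ge \mu\,\bigl(f(x) - f^*\bigr) \quad \text{for all } x,

then the gradient step x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k), combined with the descent inequality f(x_{k+1}) \le f(x_k) - \tfrac{1}{2L}\,\|\nabla f(x_k)\|^2, yields

    f(x_{k+1}) - f^* \le \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr),

i.e., linear convergence of the optimal-value gap at rate 1 - \mu/L, with no convexity assumption on f.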
Original language: English
Pages (from-to): 1105–1125
Journal: Optimization Letters
Volume: 17
DOIs
Publication status: Published - 2023

Keywords

  • weakly convex optimization
  • gradient method
  • performance estimation problem
  • Polyak-Łojasiewicz inequality
  • semidefinite programming

