The exact worst-case convergence rate of the gradient method with fixed step lengths for L-smooth functions

Hadi Abbaszadehpeivasti, Etienne de Klerk, Moslem Zamani*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

5 Citations (Scopus)
112 Downloads (Pure)

Abstract

In this paper, we study the convergence rate of the gradient (or steepest descent) method with fixed step lengths for finding a stationary point of an L-smooth function. We establish a new convergence rate, and show that the bound may be exact in some cases, in particular when all step lengths lie in the interval (0,1/L]. In addition, we derive an optimal step length with respect to the new bound.
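As a rough illustration of the setting the abstract describes (not the paper's analysis), the following sketch runs gradient descent with a fixed step length on a hypothetical L-smooth quadratic, where the smoothness constant L is the largest eigenvalue of the Hessian and the step length is chosen in the interval (0, 1/L]:

```python
import numpy as np

# Illustrative sketch, assuming a simple quadratic test problem
# f(x) = 0.5 * x^T A x, whose gradient is A x and whose smoothness
# constant L is the largest eigenvalue of A. This is not the paper's
# worst-case analysis, only a demonstration of the method it studies.

A = np.diag([1.0, 4.0, 10.0])          # hypothetical problem; here L = 10
L = np.max(np.linalg.eigvalsh(A))      # smoothness constant

def grad(x):
    return A @ x

x = np.array([1.0, 1.0, 1.0])          # arbitrary starting point
step = 1.0 / L                         # fixed step length in (0, 1/L]
for _ in range(200):
    x = x - step * grad(x)             # gradient (steepest descent) step

# The gradient norm shrinks toward 0, i.e. x approaches a stationary point.
print(np.linalg.norm(grad(x)))
```

The quantity printed at the end is the gradient norm after 200 iterations; the paper's contribution is an exact worst-case bound on how fast this quantity can decrease over all L-smooth functions.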
Original language: English
Pages (from-to): 1649–1661
Journal: Optimization Letters
Volume: 16
Issue number: 6
Publication status: Published - Jul 2022

Keywords

  • L-smooth optimization
  • gradient method
  • performance estimation problem
  • semidefinite programming
