Abstract
In this paper, we study the convergence rate of the gradient (or steepest descent) method with fixed step lengths for finding a stationary point of an L-smooth function. We establish a new convergence rate, and show that the bound may be exact in some cases, in particular when all step lengths lie in the interval (0,1/L]. In addition, we derive an optimal step length with respect to the new bound.
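To make the setting concrete, here is a minimal sketch (not taken from the paper) of the gradient method with a fixed step length t in (0, 1/L] applied to an L-smooth quadratic; the function, L, and iteration count are illustrative assumptions, chosen only to show the gradient norm shrinking toward a stationary point:

```python
# Illustrative example: gradient descent with a fixed step length
# t in (0, 1/L] on the L-smooth quadratic f(x) = 0.5*(a*x1^2 + b*x2^2).
def grad_descent(grad, x0, step, iters):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

a, b = 1.0, 10.0                      # f is L-smooth with L = max(a, b)
L = max(a, b)
grad = lambda x: [a * x[0], b * x[1]]  # gradient of the quadratic

x = grad_descent(grad, [1.0, 1.0], step=1.0 / L, iters=200)
gnorm = sum(gi * gi for gi in grad(x)) ** 0.5
print(gnorm)  # the gradient norm is driven close to 0
```

With step length 1/L the iterates contract coordinate-wise, so the gradient norm decays geometrically; the paper's contribution is the exact worst-case rate of this decay over all L-smooth functions, not just quadratics.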
| Original language | English |
|---|---|
| Pages (from-to) | 1649–1661 |
| Journal | Optimization Letters |
| Volume | 16 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Jul 2022 |
Keywords
- L-smooth optimization
- gradient method
- performance estimation problem
- semidefinite programming