On the rate of convergence of the difference-of-convex algorithm (DCA)

Research output: Contribution to journal › Article › Scientific › peer-review

5 Citations (Scopus)

Abstract

In this paper, we study the non-asymptotic convergence rate of the DCA (difference-of-convex algorithm), also known as the convex–concave procedure, with two different termination criteria that are suitable for smooth and non-smooth decompositions, respectively. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point of the objective under some assumptions. We derive a worst-case convergence rate of O(1/N) for the objective gradient norm after N iterations for certain classes of DC problems, without assuming strong convexity in the DC decomposition, and we give an example showing that this convergence rate is exact. We also provide a new convergence rate of O(1/N) for the DCA with the second termination criterion. Moreover, we derive a new linear convergence rate result for the DCA under the assumption of the Polyak–Łojasiewicz inequality. The novel aspect of our analysis is that it employs semidefinite programming performance estimation.
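
As background for the algorithm analyzed above, the following is a minimal sketch of a generic DCA iteration. It assumes a toy smooth DC decomposition f(x) = g(x) - h(x) with g(x) = 0.5·xᵀAx and h(x) = μ·Σᵢ log cosh(xᵢ); the matrix A, the parameter μ, and the closed-form subproblem solution are illustrative choices only, not the problem classes or the performance-estimation setup studied in the paper.

    import numpy as np

    # Illustrative DCA sketch (not the paper's implementation):
    #   g(x) = 0.5 * x^T A x            (A positive definite, convex)
    #   h(x) = mu * sum_i log cosh(x_i) (convex)
    # f = g - h can be nonconvex when mu exceeds the smallest eigenvalue of A.
    # One DCA step linearizes h at x_k and minimizes the convex model:
    #   x_{k+1} = argmin_x g(x) - <grad h(x_k), x> = A^{-1} (mu * tanh(x_k)).

    def dca(A, mu, x0, max_iter=1000, tol=1e-10):
        x = x0.copy()
        for k in range(1, max_iter + 1):
            x = np.linalg.solve(A, mu * np.tanh(x))   # convex subproblem, closed form here
            grad_f = A @ x - mu * np.tanh(x)          # grad f = grad g - grad h at new iterate
            if np.linalg.norm(grad_f) <= tol:         # smooth (gradient-norm) termination criterion
                return x, k
        return x, max_iter

    A = np.diag([1.0, 2.0, 4.0])                      # hypothetical data for the sketch
    x_star, iters = dca(A, mu=1.5, x0=np.array([0.3, -0.2, 0.1]))
    print(iters, x_star)                              # approaches a stationary point of f

Each iteration solves one convex subproblem, and the gradient-norm check mirrors the termination criterion for smooth decompositions mentioned in the abstract.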
Original language: English
Pages (from-to): 475-496
Journal: Journal of Optimization Theory and Applications
Volume: 202
DOIs
Publication status: Published - Jul 2024

Keywords

  • Convex–concave procedure
  • Difference-of-convex problems
  • Performance estimation
  • Semidefinite programming
  • Worst-case convergence
