Convergence rate analysis of the gradient descent-ascent method for convex-concave saddle-point problems

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

In this paper, we study the gradient descent-ascent method for convex-concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of the distance to the solution set by using the semidefinite programming performance estimation method. The derived convergence rate incorporates most parameters of the problem, and it is exact, for one iteration, on a large class of strongly convex-strongly concave saddle-point problems. We also investigate the algorithm in the absence of strong convexity and provide necessary and sufficient conditions under which gradient descent-ascent enjoys linear convergence.
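The method analyzed in the abstract can be illustrated with a minimal sketch of the gradient descent-ascent (GDA) iteration. The toy objective, parameter values, and step size below are illustrative assumptions for a strongly convex-strongly concave problem, not values taken from the paper:

```python
# Minimal sketch of gradient descent-ascent (GDA) on a toy
# strongly convex-strongly concave saddle-point problem:
#     min_x max_y  f(x, y) = (mu/2) * x**2 + b * x * y - (mu/2) * y**2
# The unique saddle point is (x*, y*) = (0, 0).
# All constants here are assumed for illustration only.

mu, b = 1.0, 0.5   # strong convexity/concavity modulus and coupling term
eta = 0.2          # step size, assumed small enough for convergence

x, y = 2.0, -3.0   # arbitrary starting point
for _ in range(200):
    gx = mu * x + b * y   # gradient of f with respect to x (descent step)
    gy = b * x - mu * y   # gradient of f with respect to y (ascent step)
    x, y = x - eta * gx, y + eta * gy

print(abs(x), abs(y))  # distance to the saddle point (0, 0) per coordinate
```

For this quadratic, the GDA update is linear with iteration-matrix eigenvalues of modulus sqrt((1 - eta*mu)**2 + (eta*b)**2) ≈ 0.806 < 1, so the iterates contract linearly toward the saddle point, consistent with the linear-convergence regime the paper characterizes.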
Original language: English
Pages (from-to): 967-989
Number of pages: 23
Journal: Optimization Methods & Software
Volume: 39
Issue number: 5
DOIs
Publication status: Published - 2 Sept 2024

Keywords

  • Saddle-point problems
  • Convergence rate
  • Gradient descent-ascent method
  • Minimax optimization problem
  • Performance estimation
  • Semidefinite programming
