Abstract
In this paper, we study the gradient descent-ascent method for convex-concave saddle-point problems. Using the semidefinite programming performance estimation method, we derive a new non-asymptotic global convergence rate in terms of the distance to the solution set. The derived rate depends on most problem parameters and is exact, for one iteration, for a large class of strongly convex-strongly concave saddle-point problems. We also study the algorithm in the absence of strong convexity and give necessary and sufficient conditions under which gradient descent-ascent enjoys linear convergence.
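The gradient descent-ascent iteration discussed in the abstract can be sketched as follows. This is a minimal illustration on a toy strongly convex-strongly concave quadratic; the objective, parameters `mu`, `b`, step size `eta`, and starting point are illustrative assumptions, not taken from the paper.

```python
# Gradient descent-ascent (GDA) on the illustrative saddle-point problem
#   f(x, y) = 0.5*mu*x^2 + b*x*y - 0.5*mu*y^2,
# which is strongly convex in x and strongly concave in y, with
# saddle point at (0, 0). All parameter values are assumptions.

def gda(x, y, mu=1.0, b=1.0, eta=0.1, iters=200):
    """Simultaneous gradient descent in x and gradient ascent in y."""
    for _ in range(iters):
        gx = mu * x + b * y   # partial derivative of f w.r.t. x
        gy = b * x - mu * y   # partial derivative of f w.r.t. y
        x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
    return x, y

x, y = gda(1.0, 1.0)
dist = (x * x + y * y) ** 0.5  # distance to the saddle point (0, 0)
print(dist)
```

For this strongly convex-strongly concave instance the iterates contract linearly toward the saddle point, consistent with the linear convergence regime the abstract refers to; for merely convex-concave problems (e.g. the bilinear case `mu = 0`), plain GDA with a fixed step size can diverge, which is why the paper's necessary and sufficient conditions matter.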
Original language | English |
---|---|
Pages (from-to) | 967-989 |
Number of pages | 23 |
Journal | Optimization Methods & Software |
Volume | 39 |
Issue number | 5 |
Publication status | Published - 2 Sept 2024 |
Keywords
- Saddle-point problems
- Convergence rate
- Gradient descent-ascent method
- Minimax optimization problem
- Performance estimation
- Semidefinite programming