On convergence of a q-random coordinate constrained algorithm for non-convex problems

Alireza Ghaffari Hadigheh, Lennart Sinjorgo*, Renata Sotirov

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

We propose a random coordinate descent algorithm for optimizing a non-convex objective function subject to one linear constraint and simple bounds on the variables. Although it is common practice to update only two random coordinates simultaneously in each iteration of a coordinate descent algorithm, our algorithm allows updating an arbitrary number of coordinates. We provide a proof of convergence of the algorithm. The convergence rate of the algorithm improves when we update more coordinates per iteration. Numerical experiments on large-scale instances of different optimization problems show the benefit of updating many coordinates simultaneously.
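To illustrate the idea described in the abstract, below is a minimal sketch of a q-coordinate random coordinate descent step for a problem with the single linear constraint sum(x) = b and box bounds. The update rule here (a projected-gradient step on the selected block, with the step length capped to respect the bounds) is an assumption for illustration only; the paper's actual update and step-size rules may differ.

```python
import numpy as np

def q_rcd(grad, x0, q=2, lo=0.0, hi=1.0, iters=1000, step=0.1, rng=None):
    """Sketch of q-coordinate random coordinate descent.

    Preserves sum(x) = sum(x0) and the box bounds lo <= x <= hi.
    `grad` returns the gradient of the objective at x.
    Illustrative only -- not the paper's exact update rule.
    """
    rng = np.random.default_rng(rng)
    x = x0.copy().astype(float)
    n = x.size
    for _ in range(iters):
        S = rng.choice(n, size=q, replace=False)  # random block of q coordinates
        g = grad(x)[S]
        d = -(g - g.mean())  # project -gradient onto {d : sum(d) = 0}
        if np.allclose(d, 0.0):
            continue
        # Largest step t <= step that keeps x[S] + t*d inside [lo, hi].
        t = step
        pos, neg = d > 0, d < 0
        if pos.any():
            t = min(t, np.min((hi - x[S][pos]) / d[pos]))
        if neg.any():
            t = min(t, np.min((lo - x[S][neg]) / d[neg]))
        x[S] += t * d
    return x
```

For example, minimizing the smooth function ||x - c||^2 over the simplex-like feasible set {x : sum(x) = sum(x0), 0 <= x <= 1} with a feasible target c drives the iterates toward c while every iterate stays feasible. Updating more coordinates per block (larger q) lets each step reduce more of the residual, matching the abstract's observation that the convergence rate improves with q.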
Original language: English
Pages (from-to): 843-868
Number of pages: 26
Journal: Journal of Global Optimization
Volume: 90
Issue number: 4
DOIs
Publication status: Published - Dec 2024

Keywords

  • random coordinate descent algorithm
  • convergence analysis
  • densest k-subgraph problem
  • eigenvalue complementarity problem
