# A sequential parametric method for solving constrained minimization problems

A. Beck, A. Ben-Tal, L. Tetruashvili

Research output: Contribution to journal › Article › Scientific › peer-review

### Abstract

In this paper, a method for solving constrained convex optimization problems is introduced. The problem is cast equivalently as a parametric unconstrained one, the (single) parameter being the optimal value of the original problem. At each stage of the algorithm the parameter is updated, and the resulting subproblem is only approximately solved. A linear rate of convergence of the parameter sequence is established. Using an optimal gradient method due to Nesterov [Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547] to solve the arising subproblems, it is proved that the resulting gradient-based algorithm requires a total of $O({\log(1/\varepsilon)}/ {\sqrt{\varepsilon}})$ inner iterations to obtain an $\varepsilon$-optimal and feasible solution. An image deblurring problem is solved, demonstrating the capability of the algorithm to solve large-scale problems within reasonable accuracy.
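The abstract describes a two-level scheme: an outer loop that updates a scalar parameter whose fixed point is the optimal value, and an inner solver that only approximately minimizes each parametric subproblem. The sketch below illustrates that general idea on a hypothetical toy instance; it is not the paper's exact algorithm. It uses the reformulation $\Phi_t(x) = \max\{f(x) - t,\, g(x)\}$ and the update $t \leftarrow t + \min_x \Phi_t(x)$, and it substitutes plain subgradient descent for Nesterov's accelerated method. All data and function names here are made up for illustration.

```python
import numpy as np

# Illustrative toy instance (not from the paper):
#   minimize f(x) = ||x - c||^2   subject to   g(x) = ||x||^2 - 1 <= 0.
# For c = (2, 0) the optimum is x* = (1, 0) with optimal value f* = 1.
c = np.array([2.0, 0.0])

def f(x):
    return float((x - c) @ (x - c))

def g(x):
    return float(x @ x - 1.0)

def solve_subproblem(t, x0, steps=2000):
    """Approximately minimize Phi_t(x) = max(f(x) - t, g(x)) by subgradient
    descent with a diminishing step size (a crude stand-in for the
    accelerated gradient method used in the paper)."""
    x = x0.copy()
    for k in range(steps):
        # Subgradient of the max: the gradient of whichever term is active.
        grad = 2.0 * (x - c) if f(x) - t >= g(x) else 2.0 * x
        x = x - (0.1 / np.sqrt(k + 1)) * grad
    return x

# Outer loop: phi(t) = min_x Phi_t(x) is decreasing with slope in [-1, 0]
# and vanishes at t = f*, so the fixed-point update t <- t + phi(t)
# converges linearly to the optimal value.
t, x = 0.0, np.zeros(2)
for _ in range(30):
    x = solve_subproblem(t, x)   # warm-started, approximate inner solve
    t = t + max(f(x) - t, g(x))  # parameter update

print(t, x)  # t approaches f* = 1, x approaches (1, 0)
```

Note the contraction in the outer loop: for this instance the map $t \mapsto t + \phi(t)$ has slope about $1/2$ near $t = f^*$, so the parameter error roughly halves each stage, consistent with the linear rate the abstract claims for the parameter sequence.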
Original language: English
Pages (from-to): 244-260
Journal: SIAM Journal on Optimization
Volume: 22
DOI: https://doi.org/10.1137/100800580
Publication status: Published - 2012

### Fingerprint

Constrained Minimization
Minimization Problem
Image Deblurring
Convex Optimization
Constrained Optimization
Large-scale Problems
Rate of Convergence
Optimization Problem
Iteration

### Cite this

Beck, A. ; Ben-Tal, A. ; Tetruashvili, L. / A sequential parametric method for solving constrained minimization problems. In: SIAM Journal on Optimization. 2012 ; Vol. 22. pp. 244-260.
@article{1b7c12072d0348debc64e232161fbfcc,
title = "A sequential parametric method for solving constrained minimization problems",
abstract = "In this paper, a method for solving constrained convex optimization problems is introduced. The problem is cast equivalently as a parametric unconstrained one, the (single) parameter being the optimal value of the original problem. At each stage of the algorithm the parameter is updated, and the resulting subproblem is only approximately solved. A linear rate of convergence of the parameter sequence is established. Using an optimal gradient method due to Nesterov [Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547] to solve the arising subproblems, it is proved that the resulting gradient-based algorithm requires an overall of $O({\log(1/\varepsilon)}/ {\sqrt{\varepsilon}})$ inner iterations to obtain an $\varepsilon$-optimal and feasible solution. An image deblurring problem is solved, demonstrating the capability of the algorithm to solve large-scale problems within reasonable accuracy.",
author = "A. Beck and A. Ben-Tal and L. Tetruashvili",
year = "2012",
doi = "10.1137/100800580",
language = "English",
volume = "22",
pages = "244--260",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics Publications",

}

A sequential parametric method for solving constrained minimization problems. / Beck, A.; Ben-Tal, A.; Tetruashvili, L.

In: SIAM Journal on Optimization, Vol. 22, 2012, pp. 244-260.

Research output: Contribution to journal › Article › Scientific › peer-review

TY - JOUR

T1 - A sequential parametric method for solving constrained minimization problems

AU - Beck, A.

AU - Ben-Tal, A.

AU - Tetruashvili, L.

PY - 2012

Y1 - 2012

N2 - In this paper, a method for solving constrained convex optimization problems is introduced. The problem is cast equivalently as a parametric unconstrained one, the (single) parameter being the optimal value of the original problem. At each stage of the algorithm the parameter is updated, and the resulting subproblem is only approximately solved. A linear rate of convergence of the parameter sequence is established. Using an optimal gradient method due to Nesterov [Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547] to solve the arising subproblems, it is proved that the resulting gradient-based algorithm requires an overall of $O({\log(1/\varepsilon)}/ {\sqrt{\varepsilon}})$ inner iterations to obtain an $\varepsilon$-optimal and feasible solution. An image deblurring problem is solved, demonstrating the capability of the algorithm to solve large-scale problems within reasonable accuracy.

AB - In this paper, a method for solving constrained convex optimization problems is introduced. The problem is cast equivalently as a parametric unconstrained one, the (single) parameter being the optimal value of the original problem. At each stage of the algorithm the parameter is updated, and the resulting subproblem is only approximately solved. A linear rate of convergence of the parameter sequence is established. Using an optimal gradient method due to Nesterov [Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547] to solve the arising subproblems, it is proved that the resulting gradient-based algorithm requires an overall of $O({\log(1/\varepsilon)}/ {\sqrt{\varepsilon}})$ inner iterations to obtain an $\varepsilon$-optimal and feasible solution. An image deblurring problem is solved, demonstrating the capability of the algorithm to solve large-scale problems within reasonable accuracy.

U2 - 10.1137/100800580

DO - 10.1137/100800580

M3 - Article

VL - 22

SP - 244

EP - 260

JO - SIAM Journal on Optimization

JF - SIAM Journal on Optimization

SN - 1052-6234

ER -