Reducing conservatism in robust optimization

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Although Robust Optimization is a powerful technique for dealing with uncertainty in optimization, its solutions can be too conservative: they may lead to an objective value much worse than that of the nominal solution, or even to infeasibility of the robust problem. In practice, this can lead to robust solutions being disregarded in favor of the nominal solution. This conservatism is caused by both the constraint-wise approach of Robust Optimization and its core assumption that all constraints are hard for all scenarios in the uncertainty set. This paper seeks to alleviate this conservatism by proposing an alternative robust formulation that condenses all uncertainty into a single constraint, bounding the worst-case expected violation of the original constraints from above. Using recent results in distributionally robust optimization, the proposed formulation is shown to be tractable for both right- and left-hand side uncertainty. A computational study is performed with problems from the NETLIB library. For some problems, the percentage of uncertainty is magnified fourfold in terms of the increase in objective value of the standard robust solution compared to the nominal solution, whereas we find solutions that safeguard against over half the violation while only a tenth of the uncertainty is reflected in the objective value. For problems with an infeasible standard robust counterpart, the suggested approach is still applicable and finds solutions that safeguard against most of the uncertainty at a low price in terms of objective value.
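To make the contrast concrete, the sketch below juxtaposes the standard constraint-wise robust counterpart with an aggregated formulation of the kind described above. The notation is ours and purely illustrative (uncertain coefficients $a_i(\zeta)$, uncertainty set $\mathcal{Z}$, ambiguity set $\mathcal{P}$, violation budget $\delta$); it is not the paper's exact formulation.

\[
\begin{aligned}
&\text{Nominal LP:} && \min_{x}\; c^\top x \quad \text{s.t.}\;\; a_i^\top x \le b_i, \quad i = 1,\dots,m,\\[2pt]
&\text{Constraint-wise robust counterpart:} && \min_{x}\; c^\top x \quad \text{s.t.}\;\; a_i(\zeta)^\top x \le b_i \;\; \forall\, \zeta \in \mathcal{Z}, \; i = 1,\dots,m,\\[2pt]
&\text{Aggregated alternative (illustrative):} && \min_{x}\; c^\top x \quad \text{s.t.}\;\; \sup_{\mathbb{P} \in \mathcal{P}} \, \mathbb{E}_{\mathbb{P}}\Big[\sum_{i=1}^{m} \big(a_i(\zeta)^\top x - b_i\big)^{+}\Big] \le \delta,
\end{aligned}
\]

where $(y)^{+} = \max\{y, 0\}$, $\mathcal{P}$ is a set of distributions supported on $\mathcal{Z}$, and $\delta \ge 0$ trades off protection against constraint violation and objective value. The point of the aggregated form is that no individual constraint has to hold for every scenario; only the worst-case expected total violation is bounded.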
Original language: English
Journal: INFORMS Journal on Computing
Publication status: Accepted/In press - Jun 2019

Keywords

  • robust optimization
  • non-constraint wise uncertainty
  • ambiguity

Cite this

@article{ad0238cdde7a4366b487be577584669e,
title = "Reducing conservatism in robust optimization",
abstract = "Although Robust Optimization is a powerful technique in dealing with uncertainty in optimization, its solutions can be too conservative when it leads to an objective value much worse than the nominal solution or even to infeasibility of the robust problem. In practice, this can lead to robust solutions being disregarded in favor of the nominal solution. This conservatism is caused by both the constraint wise approach of Robust Optimization and its core assumption that all constraints are hard for all scenarios in the uncertainty set. This paper seeks to alleviate this conservatism by proposing an alternative robust formulation that condenses all uncertainty into a single constraint, binding the worst-case expected violation in the original constraints from above. Using recent results in distributionally robust optimization, the proposed formulation is shown to be tractable for both right- and left-hand side uncertainty. A computational study is performed with problems from the NETLIB library. For some problems, the percentage of uncertainty is magnified fourfold in terms of increase in objective value of the standard robust solution compared to the nominal solution, whereas we find solutions that safeguard against over half the violation while only a tenth of the uncertainty is reflected in the objective value. For problems with an infeasible standard robust counterpart, the suggested approach is still applicable and finds both solutions that safeguard against most of the uncertainty at a low price in terms of objective value.",
keywords = "robust optimization, non-constraint wise uncertainty, ambiguity",
author = "Ernst Roos and {den Hertog}, Dick",
year = "2019",
month = "6",
language = "English",
journal = "INFORMS Journal on Computing",
issn = "1091-9856",
publisher = "INFORMS Institute for Operations Research and the Management Sciences",

}

Reducing conservatism in robust optimization. / Roos, Ernst; den Hertog, Dick.

In: INFORMS Journal on Computing, 06.2019.
