Reducing conservatism in robust optimization

Research output: Contribution to journal › Article › Scientific › Peer-reviewed


Abstract

Although Robust Optimization is a powerful technique for dealing with uncertainty in optimization, its solutions can be overly conservative, leading to an objective value much worse than that of the nominal solution or even to infeasibility of the robust problem. In practice, this can cause robust solutions to be disregarded in favor of the nominal solution. This conservatism is caused both by the constraint-wise approach of Robust Optimization and by its core assumption that all constraints are hard for all scenarios in the uncertainty set. This paper seeks to alleviate this conservatism by proposing an alternative robust formulation that condenses all uncertainty into a single constraint, bounding the worst-case expected violation of the original constraints from above. Using recent results in distributionally robust optimization, the proposed formulation is shown to be tractable for both right- and left-hand-side uncertainty. A computational study is performed with problems from the NETLIB library. For some problems, the percentage of uncertainty is magnified fourfold in terms of the increase in objective value of the standard robust solution compared to the nominal solution, whereas we find solutions that safeguard against over half the violation while only a tenth of the uncertainty is reflected in the objective value. For problems with an infeasible standard robust counterpart, the suggested approach is still applicable and finds solutions that safeguard against most of the uncertainty at a low price in terms of objective value.
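As a rough illustrative sketch (the symbols below are assumed notation, not taken from the paper): a standard robust counterpart enforces each uncertain constraint for every scenario in the uncertainty set, whereas a formulation in the spirit described above replaces these per-constraint hard requirements with a single constraint bounding the worst-case expected total violation over an ambiguity set of distributions.

```latex
% Illustrative only: a_i, b_i, U, \mathcal{P}, and \varepsilon are assumed
% notation, not taken from the paper.
%
% Standard (constraint-wise) robust counterpart:
\min_{x} \; c^\top x
\quad \text{s.t.} \quad
a_i(\zeta)^\top x \le b_i
\quad \forall \zeta \in U, \; i = 1, \dots, m.
%
% Alternative: all uncertainty condensed into one constraint that bounds
% the worst-case expected violation (over an ambiguity set \mathcal{P})
% from above by a budget \varepsilon:
\min_{x} \; c^\top x
\quad \text{s.t.} \quad
\sup_{\mathbb{P} \in \mathcal{P}} \,
\mathbb{E}_{\mathbb{P}}\!\left[
  \sum_{i=1}^{m} \bigl( a_i(\zeta)^\top x - b_i \bigr)_+
\right] \le \varepsilon.
```

Here $(\cdot)_+ = \max\{\cdot, 0\}$ denotes the positive part, so the aggregated constraint penalizes only violations; the budget $\varepsilon$ controls the trade-off between robustness and objective value.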
Original language: English
Journal: INFORMS Journal on Computing
Publication status: Accepted/In press - Jun 2019


Keywords

  • robust optimization
  • non-constraint-wise uncertainty
  • ambiguity
