TY - UNPB
T1 - Constrained Optimization in Random Simulation
T2 - Efficient Global Optimization and Karush-Kuhn-Tucker Conditions
AU - Angun, Ebru
AU - Kleijnen, Jack
N1 - CentER Discussion Paper Nr. 2022-022
PY - 2022/8/30
Y1 - 2022/8/30
N2 - We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation; i.e., our method minimizes the goal output subject to one or more output constraints and input constraints. Our method is novel in that it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO), which is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model, treating the simulation as a black box. Though there are many EGO variants (for these unconstrained deterministic problems and for variants of these problems), none of them uses the KKT conditions, even though these conditions are the well-known first-order necessary optimality conditions in white-box problems. Because the simulation is random, we apply stochastic Kriging. Furthermore, we allow for variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. Moreover, our algorithm can take advantage of parallel computing. We numerically compare the performance of our algorithm with that of the popular proprietary OptQuest algorithm in two familiar examples (namely, a mathematical toy example and a practical inventory system with a service-level constraint); we conclude that our algorithm is more efficient (it requires fewer expensive simulation runs) and more effective (it gives better estimates of the true global optimum).
AB - We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation; i.e., our method minimizes the goal output subject to one or more output constraints and input constraints. Our method is novel in that it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO), which is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model, treating the simulation as a black box. Though there are many EGO variants (for these unconstrained deterministic problems and for variants of these problems), none of them uses the KKT conditions, even though these conditions are the well-known first-order necessary optimality conditions in white-box problems. Because the simulation is random, we apply stochastic Kriging. Furthermore, we allow for variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. Moreover, our algorithm can take advantage of parallel computing. We numerically compare the performance of our algorithm with that of the popular proprietary OptQuest algorithm in two familiar examples (namely, a mathematical toy example and a practical inventory system with a service-level constraint); we conclude that our algorithm is more efficient (it requires fewer expensive simulation runs) and more effective (it gives better estimates of the true global optimum).
KW - Simulation
KW - design of experiments
KW - statistical analysis
KW - artificial intelligence
KW - computational experiments
KW - inventory-production
M3 - Discussion paper
VL - 2022-022
T3 - CentER Discussion Paper
BT - Constrained Optimization in Random Simulation
PB - CentER, Center for Economic Research
CY - Tilburg
ER -
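
The abstract describes an EGO-style loop on a stochastic-Kriging metamodel. Below is a minimal, illustrative Python sketch of one such loop for a single noisy input, written under clearly labeled assumptions: the toy simulator noisy_sim is hypothetical; scikit-learn's GaussianProcessRegressor with per-point noise variances (alpha) stands in for stochastic Kriging; a fixed replication count replaces the paper's sample allocation rule; and the paper's KKT conditions, output constraints, and parallelization are omitted. This is not the authors' algorithm or OptQuest.

```python
# Illustrative sketch only: EGO-style expected-improvement search with
# heterogeneous-noise GP regression standing in for stochastic Kriging.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def noisy_sim(x, n_rep=10):
    """Hypothetical stochastic simulator: returns the average output over
    n_rep replications and the estimated variance of that average."""
    reps = (x - 0.6) ** 2 + 0.1 * rng.standard_normal(n_rep)
    return reps.mean(), reps.var(ddof=1) / n_rep

# Initial design: a few input points, each simulated with replications.
X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
means, vars_ = zip(*(noisy_sim(x) for x in X.ravel()))
y, noise = np.array(means), np.array(vars_)

for _ in range(10):
    # Per-point variances on the kernel diagonal allow for the variance
    # heterogeneity the abstract mentions.
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=0.2),
        alpha=noise, normalize_y=True)
    gp.fit(X, y)

    # Expected improvement (for minimization) over a candidate grid.
    cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    f_min = y.min()
    z = (f_min - mu) / np.maximum(sd, 1e-12)
    ei = (f_min - mu) * norm.cdf(z) + sd * norm.pdf(z)

    # Simulate the most promising input and append the new observation.
    x_new = cand[np.argmax(ei), 0]
    m_new, v_new = noisy_sim(x_new)
    X = np.vstack([X, [[x_new]]])
    y = np.append(y, m_new)
    noise = np.append(noise, v_new)

print(f"best observed input: {X[np.argmin(y)][0]:.3f}, mean output: {y.min():.4f}")
```

The sequential structure (fit metamodel, maximize an acquisition criterion, simulate the selected input, refit) is what makes an EGO-type method parallelizable: several candidate inputs can be simulated, and their replications run, concurrently before the next refit.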