TY - UNPB
T1 - Constrained Optimization in Random Simulation
T2 - Efficient Global Optimization and Karush-Kuhn-Tucker Conditions (Revision of CentER DP 2022-022)
AU - Angun, Ebru
AU - Kleijnen, Jack
N1 - CentER Discussion Paper Nr. 2023-010
PY - 2023/4/20
Y1 - 2023/4/20
N2 - We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation. In these problems, the goal output is minimized, subject to one or more output constraints and input constraints. Our method is indeed novel, as it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO); EGO is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model; EGO treats the simulation as a black box. Though there are many variants of EGO, no variant uses the KKT conditions, even though these conditions are well-known (first-order necessary) optimality conditions in white-box mathematical optimization. Because we assume that the simulation is random, we apply stochastic Kriging. Furthermore, we admit variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. We numerically compare the performance of our algorithm and three alternative algorithms in four familiar examples. We conclude that our algorithm is often more efficient (fewer expensive simulation runs) and effective (better estimates of the true global optimum).
AB - We develop a novel method for solving constrained optimization problems in random (or stochastic) simulation. In these problems, the goal output is minimized, subject to one or more output constraints and input constraints. Our method is indeed novel, as it combines the Karush-Kuhn-Tucker (KKT) conditions with the popular algorithm called "efficient global optimization" (EGO); EGO is also known as "Bayesian optimization" and is related to "active learning". Originally, EGO solves unconstrained optimization problems in deterministic simulation; EGO is a sequential algorithm that uses Kriging (or Gaussian process) metamodeling of the underlying simulation model; EGO treats the simulation as a black box. Though there are many variants of EGO, no variant uses the KKT conditions, even though these conditions are well-known (first-order necessary) optimality conditions in white-box mathematical optimization. Because we assume that the simulation is random, we apply stochastic Kriging. Furthermore, we admit variance heterogeneity and apply a popular sample allocation rule to determine the number of replicated simulation outputs for selected combinations of simulation inputs. We numerically compare the performance of our algorithm and three alternative algorithms in four familiar examples. We conclude that our algorithm is often more efficient (fewer expensive simulation runs) and effective (better estimates of the true global optimum).
KW - Simulation
KW - design of experiments
KW - statistical analysis
KW - artificial intelligence
KW - computational experiments
KW - inventory-production
M3 - Discussion paper
VL - 2023-010
T3 - CentER Discussion Paper
BT - Constrained Optimization in Random Simulation
PB - CentER, Center for Economic Research
CY - Tilburg
ER -