TY - UNPB
T1 - Constrained Optimization in Simulation
T2 - Efficient Global Optimization and Karush-Kuhn-Tucker Conditions (revision of 2021-031)
AU - Kleijnen, J.P.C.
AU - van Nieuwenhuyse, I.
AU - van Beers, W.C.M.
N1 - CentER Discussion Paper Nr. 2022-015
PY - 2022/6
Y1 - 2022/6
AB - An important goal of simulation is optimization of the corresponding real system. We focus on simulation models with multiple responses (outputs), selecting one response as the variable to be maximized or minimized while the remaining responses satisfy prespecified thresholds; i.e., we focus on constrained optimization problems. To solve this type of problem, we treat the simulation model as a black box. We assume that the simulation is computationally expensive; therefore, we use an inexpensive metamodel (approximation, emulator, surrogate) of the simulation model. A popular metamodel type is a Kriging or Gaussian process (GP) model (which is also used in supervised learning). For optimization with a single response, this GP is used in "efficient global optimization" (EGO) (which is also used in Bayesian optimization and is related to active learning). For simulation with multiple responses, there are several EGO variants. We develop an innovative EGO variant that uses the "Karush-Kuhn-Tucker" (KKT) conditions for constrained optimization. We combine these conditions with the "expected improvement" (EI) criterion, which is popular in EGO. To evaluate the performance of our KKT-EGO variant, we apply this variant to several examples. These examples give promising numerical results.
KW - kriging
KW - efficient global optimization
KW - Karush-Kuhn-Tucker conditions
M3 - Discussion paper
VL - 2022-015
T3 - CentER Discussion Paper
BT - Constrained Optimization in Simulation
PB - CentER, Center for Economic Research
CY - Tilburg
ER -