Abstract
A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we propose “conditional simulation” (CS), which gives predictions at an old point that in all bootstrap samples equal the observed value. CS accounts for the randomness of the estimated GP parameters. We use the CS predictor variance in the “expected improvement” criterion of “efficient global optimization” (EGO). To quantify the resulting small-sample performance, we experiment with multi-modal test functions. Our main conclusion is that EGO with classic Kriging seems quite robust; EGO with CS only tends to perform better in expensive simulation with small samples.
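The abstract's "expected improvement" (EI) criterion can be illustrated with the standard closed-form EI for minimization, into which the paper's CS predictor variance would be plugged. The sketch below is a minimal illustration of that standard formula, not the paper's implementation; the function name and arguments are illustrative, with `sigma` standing for whichever predictor standard deviation (classic plug-in or CS) is used.

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_min):
    """Standard expected-improvement criterion for minimization.

    mu    : Kriging/GP predictor mean at the candidate point
    sigma : predictor standard deviation at the candidate point
    f_min : best (smallest) simulated output observed so far
    """
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, e.g. at an old point
    z = (f_min - mu) / sigma
    pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)   # standard normal density
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal CDF
    return (f_min - mu) * cdf + sigma * pdf
```

EGO would evaluate this criterion over candidate points and simulate next wherever it is largest; larger `sigma` inflates EI, which is why a biased plug-in predictor variance can misdirect the search.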
Original language | English |
---|---|
Title of host publication | Proceedings of the 2013 Winter Simulation Conference (WSC 2013) |
Editors | R. Pasupathy, S.-H. Kim, A. Tolk, R. Hill, M.E. Kuhl |
Place of Publication | Washington DC |
Publisher | IEEE |
Pages | 969-979 |
ISBN (Print) | 9781479920778 |
Publication status | Published - 2013 |
Cite this
Kleijnen, Jack P.C.; Mehdad, E. Conditional simulation for efficient global optimization. In: Proceedings of the 2013 Winter Simulation Conference (WSC 2013). Ed. R. Pasupathy; S.-H. Kim; A. Tolk; R. Hill; M.E. Kuhl. Washington DC: IEEE, 2013. p. 969-979.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
TY - GEN
T1 - Conditional simulation for efficient global optimization
AU - Kleijnen, Jack P.C.
AU - Mehdad, E.
PY - 2013
Y1 - 2013
N2 - A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we propose “conditional simulation” (CS), which gives predictions at an old point that in all bootstrap samples equal the observed value. CS accounts for the randomness of the estimated GP parameters. We use the CS predictor variance in the “expected improvement” criterion of “efficient global optimization” (EGO). To quantify the resulting small-sample performance, we experiment with multi-modal test functions. Our main conclusion is that EGO with classic Kriging seems quite robust; EGO with CS only tends to perform better in expensive simulation with small samples.
AB - A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we propose “conditional simulation” (CS), which gives predictions at an old point that in all bootstrap samples equal the observed value. CS accounts for the randomness of the estimated GP parameters. We use the CS predictor variance in the “expected improvement” criterion of “efficient global optimization” (EGO). To quantify the resulting small-sample performance, we experiment with multi-modal test functions. Our main conclusion is that EGO with classic Kriging seems quite robust; EGO with CS only tends to perform better in expensive simulation with small samples.
M3 - Conference contribution
SN - 9781479920778
SP - 969
EP - 979
BT - Proceedings of the 2013 Winter Simulation Conference (WSC 2013)
A2 - Pasupathy, R.
A2 - Kim, S.-H.
A2 - Tolk, A.
A2 - Hill, R.
A2 - Kuhl, M.E.
PB - IEEE
CY - Washington DC
ER -