Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. In this paper, we discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments (as opposed to real-world experiments) that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system; (2) finding robust decisions or policies; and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical for simulation, such as sequential data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs. Furthermore, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, more complicated metamodels require more simulation runs. For the validation of the metamodel estimated from a specific design, we present several procedures.
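As a minimal sketch of the design-plus-metamodel workflow the abstract describes, the following illustrates a 2^k full factorial design with coded factor levels and an ordinary-least-squares fit of a first-order polynomial metamodel. The toy response function and all names are illustrative assumptions, not taken from the paper; a real study would replace it with runs of the simulation model.

```python
from itertools import product

def full_factorial(k):
    """All 2^k combinations of coded factor levels -1/+1."""
    return [list(p) for p in product([-1, 1], repeat=k)]

def fit_first_order(design, y):
    """OLS estimates for the metamodel y = b0 + sum_j b_j x_j.

    On a coded 2^k design the columns are orthogonal, so each
    coefficient reduces to an average of cross-products.
    """
    n = len(design)
    k = len(design[0])
    b0 = sum(y) / n
    betas = [sum(design[i][j] * y[i] for i in range(n)) / n
             for j in range(k)]
    return b0, betas

# Hypothetical noise-free "simulation model" standing in for real runs:
design = full_factorial(2)
y = [10 + 3 * x1 - 2 * x2 for x1, x2 in design]
b0, (b1, b2) = fit_first_order(design, y)  # recovers 10, 3, -2
```

Because the toy response is itself first order, the fitted coefficients match it exactly; with a real simulation model, the lack-of-fit of such a metamodel is what the validation procedures mentioned in the abstract would assess.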
|Place of Publication|Tilburg|
|Number of pages|40|
|Publication status|Published - 2003|
|Name|CentER Discussion Paper|
- experimental design