Simulation Experiments in Practice: Statistical Design and Regression Analysis

Research output: Working paper › Discussion paper › Other research output


Abstract

In practice, simulation analysts often change only one factor at a time and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed with a constant variance; moreover, the regression (meta)model of the simulation model's I/O behaviour is assumed to have residuals with zero means. This article addresses the following practical questions: (i) How realistic are these assumptions in practice? (ii) How can these assumptions be tested? (iii) If the assumptions are violated, can the simulation's I/O data be transformed such that the assumptions do hold? (iv) If not, which alternative statistical methods can be applied?
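To illustrate the DOE-plus-regression approach the abstract advocates, the following is a minimal sketch (not taken from the paper): a hypothetical toy "simulation" with a known linear response is run at the four points of a 2^2 factorial design with replicates, and a first-order regression metamodel is fitted by ordinary least squares. The simulation function, coefficient values, and noise level are all illustrative assumptions; by construction the classic assumptions (normal, independent errors with constant variance) hold here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x1, x2, reps=5):
    """Hypothetical simulation model: true response 10 + 3*x1 - 2*x2 plus
    normal i.i.d. noise with constant variance (assumption, for illustration)."""
    return 10 + 3 * x1 - 2 * x2 + rng.normal(0.0, 0.5, size=reps)

# 2^2 factorial design in coded units (-1, +1): every factor combination,
# unlike the one-factor-at-a-time approach criticized in the abstract.
design = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

rows, ys = [], []
for x1, x2 in design:
    for y in simulate(x1, x2):
        rows.append([1.0, x1, x2])   # intercept and the two main effects
        ys.append(y)

X = np.array(rows)
y = np.array(ys)

# Ordinary least squares fit of the first-order metamodel
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

print("estimated coefficients:", beta)   # close to the true (10, 3, -2)
print("residual mean:", residuals.mean())
```

Because the design columns are orthogonal, each coefficient is estimated independently with maximal precision from the same number of runs; the residuals of such a fit are the natural starting point for the assumption checks (normality, constant variance, zero-mean residuals) that the article discusses.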
Original language: English
Place of publication: Tilburg
Publisher: Operations research
Number of pages: 24
Volume: 2007-30
Publication status: Published - 2007

Publication series

Name: CentER Discussion Paper
Volume: 2007-30

Keywords

  • metamodel
  • experimental design
  • jackknife
  • bootstrap
  • common random numbers
  • validation
