WSC 2004 Final Abstracts
Monday 10:30 AM - 12:00 PM
Bayesian Methods
Chair: Jennie La (University of Calgary)
Bayesian Methods for Discrete Event Simulation
Stephen E. Chick (INSEAD)
Abstract:
Bayesian methods are now used in a variety of ways in discrete-event simulation. Applications include input modeling, response surface modeling, uncertainty analysis, and experimental designs for field data collection, selection procedures, and response surface estimation. This paper reviews some fundamental concepts of subjective probability and Bayesian statistics that have led to results in simulation applications.
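As one concrete instance of Bayesian input modeling, a conjugate Gamma prior on the rate of an exponential input distribution updates in closed form. This is a generic sketch: the distribution, prior parameters, and data below are illustrative assumptions, not taken from the paper.

```python
def gamma_exponential_update(alpha, beta, data):
    """Conjugate update: Gamma(alpha, beta) prior (shape/rate form) on
    the rate of an exponential input distribution, given observations."""
    # Posterior is Gamma(alpha + n, beta + sum of the observations).
    return alpha + len(data), beta + sum(data)

def posterior_mean_rate(alpha, beta):
    # Mean of a Gamma(alpha, beta) random variable in shape/rate form.
    return alpha / beta

# Hypothetical observed service times (minutes):
times = [1.2, 0.8, 2.5, 1.1, 0.9]
a, b = gamma_exponential_update(2.0, 1.0, times)
rate_estimate = posterior_mean_rate(a, b)  # about 0.93 per minute
```

The posterior parameters, rather than a single point estimate, are what feed the uncertainty analysis the abstract mentions: sampling rates from the posterior propagates input uncertainty into simulation outputs.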
Monday 1:30 PM - 3:00 PM
Stochastic Petri Nets
Chair: David Munoz (ITAM)
Stochastic Petri Nets for Modelling and Simulation
Peter J. Haas (IBM)
Abstract:
Stochastic Petri nets (SPNs) have proven to be a powerful and enduring graphically oriented framework for modelling and performance analysis of complex systems. This tutorial focuses on the use of SPNs in discrete-event simulation. After describing the basic SPN building blocks and discussing the modelling power of the formalism, we present elements of a steady-state simulation theory for SPNs. Specifically, we provide conditions on the SPN building blocks that ensure long-run stability for the underlying marking process (or for a sequence of delays determined by the marking process) and the validity of estimation procedures such as the regenerative method, the method of batch means, and spectral methods.
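The method of batch means cited here can be sketched in a few lines. The batch count and the normal-approximation critical value below are illustrative choices, not prescriptions from the tutorial:

```python
def batch_means_ci(observations, num_batches, z=1.96):
    """Method of batch means: split one long steady-state output series
    into contiguous batches, average each batch, and form a confidence
    interval from the (approximately independent) batch means."""
    n = len(observations) // num_batches
    means = [sum(observations[i * n:(i + 1) * n]) / n
             for i in range(num_batches)]
    grand = sum(means) / num_batches
    var = sum((m - grand) ** 2 for m in means) / (num_batches - 1)
    half_width = z * (var / num_batches) ** 0.5
    return grand, grand - half_width, grand + half_width

# Toy output series alternating around a long-run mean of 2.0:
series = [1.0, 3.0] * 10
point, lo, hi = batch_means_ci(series, num_batches=4)
```

Batching is what makes a single long run usable despite autocorrelation: the stability conditions the tutorial establishes are exactly what justify treating the batch means as approximately independent and identically distributed.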
Monday 3:30 PM - 5:00 PM
Kriging Interpolation in Simulation
Chair: Natalie Steiger (University of Maine)
Kriging Interpolation in Simulation: A Survey
Wim C.M. Van Beers and Jack P.C. Kleijnen (Tilburg University)
Abstract:
Many simulation experiments require much computer time, so they necessitate interpolation for sensitivity analysis and optimization. The interpolating functions are 'metamodels' (or 'response surfaces') of the underlying simulation models. Classic methods combine low-order polynomial regression analysis with fractional factorial designs. Modern Kriging provides 'exact' interpolation: predicted output values at inputs already observed equal the simulated output values. Such interpolation is attractive in deterministic simulation and is often applied in Computer Aided Engineering. In discrete-event simulation, however, the use of Kriging has only just begun. Methodologically, a Kriging metamodel covers the whole experimental area; i.e., it is global (not local). Kriging often gives better global predictions than regression analysis. Technically, Kriging gives more weight to 'neighboring' observations. To estimate the Kriging metamodel, space-filling designs are used, for example Latin Hypercube Sampling (LHS). This paper also presents novel, customized (application-driven) sequential designs based on cross-validation and bootstrapping.
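The 'exact interpolation' property is easy to see in a minimal simple-Kriging sketch with a known zero mean, a Gaussian correlation function, and no nugget effect. The correlation parameter and data below are illustrative; real Kriging software additionally estimates a trend and the correlation parameters.

```python
import numpy as np

def kriging_predict(x_obs, y_obs, x_new, theta=1.0):
    """Simple Kriging with a Gaussian correlation function and known
    zero mean. With no nugget effect, predictions at observed inputs
    reproduce the observed outputs exactly."""
    def corr(a, b):
        # Correlation decays with squared distance, so nearby
        # observations receive more weight, as the survey describes.
        return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)
    R = corr(x_obs, x_obs)   # correlations among observed inputs
    r = corr(x_new, x_obs)   # correlations of new points with observed
    return r @ np.linalg.solve(R, y_obs)

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])
at_obs = kriging_predict(x, y, x)               # exactly equals y
between = kriging_predict(x, y, np.array([0.5]))  # global prediction
```

At the observed inputs `r` coincides with `R`, so the weights reduce to the identity and the interpolation is exact, which is the property that makes Kriging attractive for deterministic simulation.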
Tuesday 8:30 AM - 10:00 AM
Verification, Validation, and Accreditation
Chair: Young Lee (IBM)
Quality Assessment, Verification, and Validation of Modeling and Simulation Applications
Osman Balci (Virginia Tech)
Abstract:
Many different types of modeling and simulation (M&S) applications are used in dozens of disciplines under diverse objectives, including acquisition, analysis, education, entertainment, research, and training. M&S application verification and validation (V&V) are conducted mainly to assess accuracy, which is only one of many indicators affecting M&S application quality. Much higher confidence in accuracy can be achieved if a quality-centered approach is used. This paper presents a quality model, integrated with V&V, for assessing the quality of large-scale complex M&S applications. The guidelines provided herein should be useful for assessing the overall quality of an M&S application.
Tuesday 10:30 AM - 12:00 PM
Network Traffic Modeling
Chair: John Charnes (University of Kansas)
More "Normal" Than Normal: Scaling Distributions and Complex Systems
Walter Willinger (AT&T Labs-Research) and David Alderson, John C. Doyle, and Lun Li (California Institute of Technology)
Abstract:
One feature of many naturally occurring or engineered complex systems is tremendous variability in event sizes. To account for it, the behavior of these systems is often described using power-law relationships or scaling distributions, which tend to be viewed as "exotic" because of their unusual properties (e.g., infinite moments). An alternate view, based on mathematical, statistical, and data-analytic arguments, suggests that scaling distributions should be viewed as "more normal than Normal". In support of this latter view, which Mandelbrot has advocated for the last 40 years, we review in this paper some relevant results from probability theory and illustrate a powerful statistical approach for deciding whether the variability associated with observed event sizes is consistent with an underlying Gaussian-type (finite-variance) or scaling-type (infinite-variance) distribution. We contrast this approach with traditional model-fitting techniques and discuss its implications for the future modeling of complex systems.
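One simple diagnostic in this spirit (a sketch of the general idea, not the specific procedure of the paper): for data with a finite second moment, the largest squared observation becomes negligible relative to the sum, whereas for scaling-type data it stays appreciable.

```python
import random

def max_to_sum_ratio(data, p=2):
    """Ratio of the largest |x|^p to the sum of |x|^p. It tends to 0
    for finite p-th-moment (e.g., Gaussian) data as samples accumulate,
    but remains appreciable for infinite-variance, scaling-type data."""
    powers = [abs(x) ** p for x in data]
    return max(powers) / sum(powers)

rng = random.Random(0)
gaussian = [rng.gauss(0.0, 1.0) for _ in range(20000)]
# Pareto with tail index 1.5 has infinite variance (scaling-type).
heavy = [rng.paretovariate(1.5) for _ in range(20000)]
r_gauss = max_to_sum_ratio(gaussian)  # close to zero
r_heavy = max_to_sum_ratio(heavy)     # much larger
```

The contrast captures why "infinite moments" matter in practice: for scaling data a single extreme event can dominate an aggregate, so Gaussian-based summaries of variability are misleading.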
Tuesday 1:30 PM - 3:00 PM
Inside Simulation Software
Chair: K. Preston White (University of Virginia)
Inside Discrete-Event Simulation Software: How it Works and Why it Matters
Thomas J. Schriber (University of Michigan) and Daniel T. Brunner (Systemflow Simulations, Inc.)
Abstract:
This paper provides simulation practitioners and consumers with a grounding in how discrete-event simulation software works. Topics include discrete-event systems; entities, resources, control elements, and operations; simulation runs; entity states; entity lists; and entity-list management. The implementation of these generic ideas in AutoMod, SLX, and Extend is described. The paper concludes with several examples of “why it matters” for modelers to know how their simulation software works, including coverage of SIMAN (Arena), ProModel, and GPSS/H as well as the other three tools.
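The event-scheduling core common to such tools can be sketched generically. This is an illustrative toy, not the internals of AutoMod, SLX, Extend, or any other product named above:

```python
import heapq

class Simulator:
    """Minimal event-scheduling core of a discrete-event engine: a
    future-events list kept in time order, with the simulation clock
    jumping from one event time to the next."""
    def __init__(self):
        self.clock = 0.0
        self._fel = []   # future-events list: (time, seq, handler)
        self._seq = 0    # tie-breaker keeps FIFO order at equal times

    def schedule(self, delay, handler):
        heapq.heappush(self._fel, (self.clock + delay, self._seq, handler))
        self._seq += 1

    def run(self, horizon):
        while self._fel and self._fel[0][0] <= horizon:
            self.clock, _, handler = heapq.heappop(self._fel)
            handler(self)   # a handler may schedule further events

# A toy arrival process: each arrival schedules the next one.
times = []
def arrival(sim):
    times.append(sim.clock)
    sim.schedule(2.0, arrival)

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(horizon=7.0)   # times is now [0.0, 2.0, 4.0, 6.0]
```

Real packages layer the entity states and entity-list management the paper describes on top of exactly this kind of time-ordered future-events list.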
Tuesday 3:30 PM - 5:00 PM
Input Modeling
Chair: Nilay Argon (University of Wisconsin, Madison)
Dependence Modeling for Stochastic Simulation
Bahar Biller (Carnegie Mellon University) and Soumyadip Ghosh (IBM)
Abstract:
An important step in designing a stochastic simulation is modeling the uncertainty in the input environment of the system being studied. Obtaining a reasonable representation of this uncertainty can be challenging in the presence of dependencies in the input process. This tutorial provides a coherent account of the central principles underlying methods for modeling and sampling a wide variety of dependent input processes.
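One widely used family of such methods is NORTA-style (normal-to-anything) sampling: draw correlated standard normals, then push each through a marginal inverse CDF. The sketch below uses hypothetical Exponential(1) marginals; note that `rho` is the correlation of the underlying normals, and matching a target correlation between the final inputs is the harder calibration problem such methods must address.

```python
import math
import random

def phi(z):
    # Standard normal CDF: maps each normal draw to Uniform(0, 1).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norta_pairs(rho, inv_cdf1, inv_cdf2, n, seed=0):
    """NORTA-style sampling sketch: correlated standard normals pushed
    through marginal inverse CDFs yield dependent non-normal inputs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((inv_cdf1(phi(z1)), inv_cdf2(phi(z2))))
    return out

# Hypothetical dependent interarrival and service times, both Exp(1):
exp_inv = lambda u: -math.log(1.0 - u)  # Exponential(1) inverse CDF
samples = norta_pairs(0.7, exp_inv, exp_inv, 1000)
```

The transform preserves the marginals exactly while inducing dependence through the shared normal draws, which is why this construction recurs throughout the dependence-modeling literature.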
Wednesday 8:30 AM - 10:00 AM
Output Analysis
Chair: Andrew Seila (University of Georgia)
Simulation Output Analysis: A Tutorial Based on One Research Thread
Bruce W. Schmeiser (Purdue University)
Abstract:
In this tutorial we discuss simulation output analysis: the problem of evaluating and reporting the quality of a given stochastic simulation experiment. We advocate the use of micro/macro replications based on fixed sample sizes, batch sizes based on mean squared error, and the avoidance of confidence intervals and analysis of variance. In addition, we discuss the problem of evaluating and comparing confidence-interval procedures and the issue of how to report point estimates concisely.
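The micro/macro replication idea can be sketched as follows; the estimand (the mean of an Exponential(1) output) and the replication counts are illustrative assumptions, not the tutorial's recommendations.

```python
import random

def micro_macro(one_rep, num_macro, num_micro, seed=0):
    """Micro/macro replications with fixed sample sizes: each macro
    replication averages num_micro micro replications into one point
    estimate; the spread across the macro estimates measures the
    quality of the reported estimate."""
    rng = random.Random(seed)
    macro_estimates = []
    for _ in range(num_macro):
        micro = [one_rep(rng) for _ in range(num_micro)]
        macro_estimates.append(sum(micro) / num_micro)
    grand = sum(macro_estimates) / num_macro
    var = (sum((m - grand) ** 2 for m in macro_estimates)
           / (num_macro - 1))
    # Standard error of the point estimate, from macro-level spread.
    return grand, (var / num_macro) ** 0.5

# Toy experiment: estimate the mean of an Exponential(1) output.
point, se = micro_macro(lambda rng: rng.expovariate(1.0),
                        num_macro=20, num_micro=200)
```

Reporting the point estimate together with a standard error derived from macro-level variability is one way to "report point estimates concisely" without constructing a formal confidence interval.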
Wednesday 10:30 AM - 12:00 PM
Military Applications of Agent-based Models
Chair: Bill Biles (University of Louisville)
Military Applications of Agent-Based Simulations
Thomas M. Cioppa, Thomas W. Lucas, and Susan M. Sanchez (Naval Postgraduate School)
Abstract:
Navy personnel use the REMUS unmanned underwater vehicle to search for submerged objects. Navigation inaccuracies lead to errors in predicting the location of objects and thus increase post-mission search times for explosive ordnance disposal teams. This paper explores components of navigation inaccuracy using discrete-event simulation to model the vehicle’s navigation system and operational performance. The simulation generates data used, in turn, to build statistical models of the probability of detection, the mean location offset given that detection occurs, and the location error distribution. Together, these three models enable operators to explore the impact of various inputs prior to programming the vehicle, thus allowing them to choose combinations of vehicle parameters that reduce the offset error between the reported and actual locations.
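The pipeline the abstract describes, simulating navigation error and then summarizing detection probability and mean offset, can be caricatured in a few lines. The Gaussian error model, the sensor range, and all numbers below are hypothetical stand-ins, not the paper's models.

```python
import random

def detection_stats(nav_sigma, sensor_range, trials=10000, seed=1):
    """Monte Carlo sketch: navigation error displaces the vehicle from
    a target; 'detection' here simply means the resulting offset falls
    inside a hypothetical sensor range. Returns the estimated detection
    probability and the mean offset given that detection occurs."""
    rng = random.Random(seed)
    detected = 0
    offset_sum = 0.0
    for _ in range(trials):
        dx = rng.gauss(0.0, nav_sigma)   # assumed east-west nav error
        dy = rng.gauss(0.0, nav_sigma)   # assumed north-south nav error
        offset = (dx * dx + dy * dy) ** 0.5
        if offset <= sensor_range:
            detected += 1
            offset_sum += offset
    p_detect = detected / trials
    mean_offset = offset_sum / detected if detected else float("nan")
    return p_detect, mean_offset

# Larger navigation error should lower the detection probability:
p_small, _ = detection_stats(nav_sigma=1.0, sensor_range=5.0)
p_large, _ = detection_stats(nav_sigma=5.0, sensor_range=5.0)
```

Sweeping such inputs and fitting regression models to the simulated outputs is the kind of pre-mission trade-off exploration the paper enables for operators.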