WSC 2003 Final Abstracts
Analysis Methodology Track
Monday 10:30 AM - 12:00 PM
Simulation Input Modeling
Chair: Shane Henderson (Cornell University)
A Kernel Approach to Estimating the Density of a
Conditional Expectation
Samuel G. Steckley and Shane G. Henderson
(Cornell University)
Abstract:
Given uncertainty in the input model and parameters of
a simulation study, the goal of the study often becomes the
estimation of a conditional expectation. The conditional expectation is
expected performance conditional on the selected model and parameters. The
distribution of this conditional expectation describes precisely, and
concisely, the impact of input uncertainty on performance prediction. In this
paper we estimate the density of a conditional expectation using ideas from
the field of kernel density estimation. We present a result on asymptotically
optimal rates of convergence and examine a number of numerical examples.
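As an illustration of the general idea (not the authors' estimator), the following Python sketch uses a toy two-level simulation: outer draws of an uncertain input parameter, inner replications to estimate the conditional expectation for each draw, and a Gaussian kernel density estimate of the resulting conditional means. All sample sizes and distributions are hypothetical.

```python
# Minimal sketch (not the authors' estimator): nested simulation plus a
# Gaussian kernel density estimate of the conditional means.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

n_outer, n_inner = 200, 50          # illustrative sample sizes

# Outer level: draw an uncertain input parameter (here, an exponential
# service rate whose value is itself uncertain).
rates = rng.gamma(shape=20.0, scale=0.05, size=n_outer)

# Inner level: for each sampled rate, estimate the conditional expectation
# E[performance | rate] by averaging i.i.d. replications.
cond_means = np.array([
    rng.exponential(scale=1.0 / rate, size=n_inner).mean()
    for rate in rates
])

# Kernel density estimate of the distribution of the conditional expectation.
kde = gaussian_kde(cond_means)
grid = np.linspace(cond_means.min(), cond_means.max(), 100)
density = kde(grid)
print(grid[np.argmax(density)])      # mode of the estimated density
```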
Prior and Candidate Models in the Bayesian Analysis
of Finite Mixtures
Russell C. H. Cheng and Christine S.M. Currie
(University of Southampton)
Abstract:
This paper discusses the problem of fitting mixture
models to input data. When an input stream is an amalgam of data from
different sources, then such mixture models must be used if the true nature of
the data is to be properly represented. A key problem is then to identify the
different components of such a mixture, and in particular to determine how
many components there are. This is known to be a non-regular/non-standard
problem in the statistical sense and is notoriously difficult to
handle properly using classical inferential methods. We discuss a Bayesian
approach and show that there is a theoretical basis for why this approach might
overcome the problem. We describe the Bayesian approach explicitly and give
examples showing its application.
A Flexible Automated Procedure for Modeling Complex
Arrival Processes
Michael E. Kuhl and Sachin G. Sumant (Rochester
Institute of Technology) and James R. Wilson (North Carolina State University)
Abstract:
To automate the multiresolution procedure of Kuhl and
Wilson for modeling and simulating arrival processes that exhibit long-term
trends and nested periodic effects (such as daily, weekly, and monthly
cycles), we present a statistical-estimation method that involves the
following steps at each resolution level corresponding to a basic cycle: (a)
transforming the cumulative relative frequency of arrivals within the cycle
(for example, the percentage of all arrivals as a function of the day of the
week within the weekly cycle) to obtain a statistical model with normal,
constant-variance responses; (b) fitting a specially formulated polynomial to
the transformed responses; (c) performing a likelihood ratio test to determine
the degree of the fitted polynomial; and (d) fitting a polynomial of the
degree determined in (c) to the original (untransformed) responses. An example
demonstrates web-based software that implements this flexible approach to
handling complex arrival processes.
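A rough sketch of one resolution level follows, with ordinary polynomials, an arcsine-square-root transform, and a nested-model F-test standing in for the paper's specially formulated polynomial and likelihood-ratio test; the weekly-cycle data are hypothetical.

```python
# Illustrative sketch of one resolution level: fit polynomials of increasing
# degree to transformed cumulative relative frequencies and pick the degree
# with a nested-model F-test.  The arcsine-square-root transform and plain
# polynomials are assumptions for illustration only.
import numpy as np
from scipy import stats

# Cumulative relative frequency of arrivals by day within a weekly cycle
# (hypothetical data).
day = np.arange(1, 8)
cum_freq = np.array([0.10, 0.24, 0.41, 0.60, 0.78, 0.92, 1.00])

# (a) variance-stabilizing transform of the proportions
z = np.arcsin(np.sqrt(cum_freq))

def sse(degree):
    """Residual sum of squares of a degree-`degree` polynomial fit."""
    coeffs = np.polyfit(day, z, degree)
    resid = z - np.polyval(coeffs, day)
    return float(resid @ resid)

# (b)-(c) increase the degree until the extra term is not significant
n, degree = len(day), 1
while degree < 4:
    f = (sse(degree) - sse(degree + 1)) / (sse(degree + 1) / (n - (degree + 2)))
    p_value = stats.f.sf(f, 1, n - (degree + 2))
    if p_value > 0.05:
        break
    degree += 1

# (d) refit a polynomial of the chosen degree to the untransformed responses
final_fit = np.polyfit(day, cum_freq, degree)
print(degree, final_fit)
```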
Monday 1:30 PM - 3:00 PM
Simulation Output Analysis
Chair: Russell Cheng (University of Southampton)
Non-Stationary Queue Simulation Analysis Using Time
Series
Rita Marques Brandão (Universidade dos Açores) and Acácio
M.O. Porta Nova (Instituto Superior Técnico)
Abstract:
In this work, we extend the use of time series models
to the output analysis of non-stationary discrete event simulations. In
particular, we investigate and experimentally evaluate the applicability of
ARIMA(p,d,q) models as potential meta-models for simulating queueing systems
under critical traffic conditions. We exploit stationarity-inducing transformations in order to efficiently estimate performance measures of
selected responses in the system under study.
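A minimal sketch of the metamodeling idea, assuming the statsmodels library is available: waiting times from an M/M/1 queue in heavy traffic are generated by the Lindley recursion and fitted with an ARIMA(1,1,1) model, where differencing acts as the stationarity-inducing transformation. The model orders and traffic intensity are illustrative, not those studied in the paper.

```python
# Minimal sketch: generate waiting times from an M/M/1 queue in heavy traffic
# via the Lindley recursion, then fit an ARIMA(1,1,1) metamodel, where
# differencing (d = 1) plays the role of a stationarity-inducing transform.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
lam, mu, n = 0.95, 1.0, 2000           # arrival rate close to service rate

interarrival = rng.exponential(1.0 / lam, n)
service = rng.exponential(1.0 / mu, n)

wait = np.zeros(n)                      # Lindley: W[i+1] = max(0, W[i] + S[i] - A[i+1])
for i in range(n - 1):
    wait[i + 1] = max(0.0, wait[i] + service[i] - interarrival[i + 1])

model = ARIMA(wait, order=(1, 1, 1)).fit()
print(model.params)
print(model.forecast(steps=10))         # short-horizon prediction from the metamodel
```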
Truncation Point Estimation Using Multiple
Replications in Parallel
Falko Bause and Mirko Eickhoff
(Universität Dortmund)
Abstract:
In steady-state simulation the output data of the
transient phase often causes a bias in the estimation of the steady-state
results. A common remedy is to cut off this transient phase. Finding an
appropriate truncation point is a well-known problem and is still not
completely solved. In this paper we consider two algorithms for the
determination of the truncation point. Both are based on a technique that takes the definition of the steady-state phase more directly into account. The capabilities of the algorithms are demonstrated by
comparisons with two methods most often used in practice.
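For context, the sketch below applies the well-known MSER truncation heuristic to the mean path across replications; it is not one of the algorithms proposed in the paper, and the transient model is hypothetical.

```python
# Illustrative only: the MSER truncation heuristic applied to the mean path
# across replications; not the algorithms proposed in the paper.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical output: R replications of length T with an initial transient
# that decays toward a steady-state mean of 10.
R, T = 10, 500
t = np.arange(T)
data = 10.0 + 5.0 * np.exp(-t / 50.0) + rng.normal(0.0, 1.0, size=(R, T))

mean_path = data.mean(axis=0)           # average across replications at each time

def mser_truncation(y, max_frac=0.5):
    """Return the truncation point d minimizing the MSER statistic
    sum of squared deviations of y[d:] divided by (len(y) - d)**2."""
    n = len(y)
    best_d, best_stat = 0, np.inf
    for d in range(int(max_frac * n)):
        tail = y[d:]
        stat = ((tail - tail.mean()) ** 2).sum() / len(tail) ** 2
        if stat < best_stat:
            best_d, best_stat = d, stat
    return best_d

d = mser_truncation(mean_path)
print(d, data[:, d:].mean())            # truncated steady-state estimate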
A Wavelet-Based Spectral Method for Steady-State
Simulation Analysis
Emily K. Lada (Old Dominion University), James
R. Wilson (North Carolina State University) and Natalie M. Steiger (University
of Maine)
Abstract:
We develop an automated wavelet-based spectral method
for constructing an approximate confidence interval on the steady-state mean
of a simulation output process. This procedure, called WASSP, determines a
batch size and a warm-up period beyond which the computed batch means form an
approximately stationary Gaussian process. Based on the log-smoothed periodogram of the batch means, WASSP uses wavelets to estimate the log-spectrum of the batch means and ultimately the steady-state variance constant
(SSVC) of the original (unbatched) process. WASSP combines the SSVC estimator
with the grand average of the batch means in a sequential procedure for
constructing a confidence-interval estimator of the steady-state mean that
satisfies user-specified requirements on absolute or relative precision as
well as coverage probability. An extensive performance evaluation provides
evidence of WASSP's robustness in comparison with some other output analysis
methods.
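The sketch below shows only the final confidence-interval step, with the ordinary batch-means variance estimator standing in for WASSP's wavelet-based SSVC estimator; the AR(1) output process, warm-up period, and batch size are illustrative assumptions.

```python
# Assumed simplification of the final step only: batch means and a t-based
# confidence interval, with the ordinary batch-means variance estimator in
# place of WASSP's wavelet-based SSVC estimator.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical stationary but autocorrelated output: an AR(1) process.
n, phi = 20000, 0.8
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

warmup, batch_size = 1000, 500
y = x[warmup:]
k = len(y) // batch_size
batch_means = y[:k * batch_size].reshape(k, batch_size).mean(axis=1)

grand_mean = batch_means.mean()
var_gm = batch_means.var(ddof=1) / k            # variance of the grand mean
half_width = stats.t.ppf(0.975, k - 1) * np.sqrt(var_gm)
print(f"{grand_mean:.4f} +/- {half_width:.4f}")
```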
Monday 3:30 PM - 5:00 PM
Simulation of Large Networks
Chair: John Shortle (George Mason University)
Modeling and Simulation of Telecommunication
Networks for Control and Management
John S. Baras (University of Maryland, College Park)
Abstract:
In this paper we describe methodologies for modeling and simulating telecommunication networks that are intended to be useful as tools in the on-line and off-line decision making encountered in network control, management, and planning problems. We describe the development, validation, and use of self-similar and multi-fractal models; queueing control and performance evaluation; assessment of the incremental utility of various models; hierarchical models based on aggregation; analytic approximation models for various performance metrics; and trade-off and sensitivity analysis using a multi-objective optimization framework and automatic differentiation. We also describe four illustrative examples of
applying these methodologies to dynamic network control and management
problems. The examples involve primarily mobile ad hoc wireless and satellite
networks in changing environments.
Efficient Simulation of the National Airspace
System
John F. Shortle, Donald Gross, and Brian L. Mark (George
Mason University)
Abstract:
The National Airspace System (NAS) is a large and
complicated system. Detailed simulation models of the NAS are generally quite
slow, so it can be difficult to obtain statistically valid samples from such
models. This paper presents two methods for reducing the complexity of such networks in order to shorten simulation run times. One method is removal of low-utilization
queues - that is, replacing a queueing node with a delay node, so that
airplanes experience a service time at the node but no queueing time. The
other is removal of nodes by clustering - that is, collapsing groups of nodes into a single node. We employ the methods on simple networks and
show that the reductions yield very little loss in modeling accuracy. We
provide some estimates for the potential speedup in simulation time when using
the methods on large networks.
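The following sketch illustrates the first reduction on a hypothetical three-station tandem line: a station whose estimated utilization falls below a threshold is treated as a pure delay node, while the others retain the usual FIFO queueing recursion. The rates and threshold are assumptions for illustration.

```python
# Illustrative tandem-queue sketch: a lightly loaded station is replaced by a
# pure delay node (service time only, no waiting); other stations keep the
# single-server FIFO departure recursion.  Rates and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(11)

n = 50000
arrivals = np.cumsum(rng.exponential(1.0, n))       # external arrival times
service_rates = [1.2, 10.0, 1.5]                    # station 2 is lightly loaded
threshold = 0.2                                     # utilization below this -> delay node

times = arrivals.copy()
for mu in service_rates:
    service = rng.exponential(1.0 / mu, n)
    rho = service.mean() / np.diff(times).mean()    # rough utilization estimate
    if rho < threshold:
        # Delay node: each customer incurs a service time but never waits.
        # Customers may overtake, so re-sort for the next FIFO station.
        times = np.sort(times + service)
    else:
        # Queueing node: departure = max(arrival, previous departure) + service.
        depart = np.empty(n)
        depart[0] = times[0] + service[0]
        for i in range(1, n):
            depart[i] = max(times[i], depart[i - 1]) + service[i]
        times = depart

# Mean time in the reduced network (unaffected by the re-sorting above).
print((times - arrivals).mean())
```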
Propagation of Uncertainty in a Simulation-Based
Maritime Risk Assessment Model Utilizing Bayesian Simulation
Techniques
Jason R.W. Merrick and Varun Dinesh (Virginia
Commonwealth University) and Amita Singh, J. René van Dorp, and Thomas A.
Mazzuchi (George Washington University)
Abstract:
Recent studies in the assessment of risk in maritime
transportation systems have used simulation-based probabilistic techniques.
Amongst them are the San Francisco Bay (SFB) Ferry exposure assessment in
2002, the Washington State Ferry (WSF) Risk Assessment in 1998 and the Prince
William Sound (PWS) Risk Assessment in 1996. Representing uncertainty in such
simulation models is fundamental to quantifying system risk. This paper
illustrates the representation of uncertainty in simulation using Bayesian
techniques to model input and output uncertainty. These uncertainty
representations describe system randomness as well as lack of knowledge about
the system. The study of the impact of proposed ferry service expansions in
San Francisco Bay is used as a case study to demonstrate the Bayesian
simulation technique. Such characterization of uncertainty in simulation-based
analysis provides the user with a greater level of information, enabling improved decision making.
Tuesday 8:30 AM - 10:00 AM
Indifference Zone Selection
Procedures
Chair: E. Jack Chen (BASF)
Inferences from Indifference-Zone Selection
Procedures
E. Jack Chen (BASF Corporation) and W. David Kelton
(University of Cincinnati)
Abstract:
Two-stage indifference-zone selection procedures have
been widely studied and applied. It is known that most indifference-zone selection procedures also guarantee multiple-comparisons-with-the-best confidence intervals whose half-width equals the indifference amount. We provide a statistical analysis of multiple-comparisons-with-a-control confidence intervals, which bound the difference between each design and the unknown best, as well as of multiple-comparisons-with-the-best confidence intervals. The
efficiency of selection procedures can be improved by taking into
consideration the differences of sample means, using the variance reduction
technique of common random numbers, and using sequentialized selection
procedures. An experimental performance evaluation demonstrates the validity
of the confidence intervals and efficiency of sequentialized selection
procedures.
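For background, here is a sketch of a classical two-stage indifference-zone procedure in the spirit of Rinott; it does not reproduce the paper's multiple-comparison analysis, and the constant h below is a placeholder that would normally come from published tables.

```python
# Sketch of a classical two-stage indifference-zone (Rinott-style) procedure.
# The constant h is a placeholder assumption; systems and sample sizes are
# hypothetical.
import math
import numpy as np

rng = np.random.default_rng(5)

true_means = [1.0, 1.2, 1.5, 1.45]      # hypothetical systems; best is system 2
delta = 0.2                              # indifference-zone parameter
n0 = 20                                  # first-stage sample size
h = 2.9                                  # placeholder for Rinott's constant

def simulate(i, n):
    """Stand-in for running n replications of system i."""
    return rng.normal(true_means[i], 1.0, n)

samples = [simulate(i, n0) for i in range(len(true_means))]

overall_means = []
for i, first_stage in enumerate(samples):
    s2 = first_stage.var(ddof=1)
    # Second-stage sample size driven by the first-stage variance.
    n_total = max(n0, math.ceil((h * math.sqrt(s2) / delta) ** 2))
    extra = simulate(i, n_total - n0) if n_total > n0 else np.array([])
    overall_means.append(np.concatenate([first_stage, extra]).mean())

best = int(np.argmax(overall_means))
print(best, overall_means)
```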
Expected Opportunity Cost Guarantees and
Indifference Zone Selection Procedures
Stephen E. Chick (INSEAD)
Abstract:
Selection procedures help identify the best of a finite
set of simulated alternatives. The indifference-zone approach focuses on the
probability of correct selection, but the expected opportunity cost of a
potentially incorrect decision may make more sense in business contexts. This
paper provides the first selection procedure that guarantees an upper bound
for the expected opportunity cost, in a frequentist sense, of a potentially
incorrect selection. The paper therefore bridges a gap between the
indifference-zone approach (with frequentist guarantees) and the Bayesian
approach to selection procedures (which has considered the opportunity cost).
An expected opportunity cost guarantee is provided for all configurations of the means, and it does not rely upon an indifference-zone parameter to determine a so-called least-favorable configuration. Further, we provide expected
opportunity cost guarantees for two existing indifference zone procedures that
were designed to provide probability of correct selection guarantees.
An Indifference-Zone Selection Procedure with
Minimum Switching and Sequential Sampling
L. Jeff Hong and Barry L.
Nelson (Northwestern University)
Abstract:
Statistical ranking and selection (R&S) is a
collection of experiment design and analysis techniques for selecting the
"population" with the largest or smallest mean performance from among a finite
set of alternatives. R&S procedures have received considerable research
attention in the stochastic simulation community, and they have been
incorporated in commercial simulation software. One of the ways that R&S
procedures are evaluated and compared is via the expected number of samples
(often replications) that must be generated to reach a decision. In this paper
we argue that sampling cost alone does not adequately characterize the
efficiency of ranking-and-selection procedures, and we introduce a new
sequential procedure that provides the same statistical guarantees as existing
procedures while reducing the expected total cost of application.
Tuesday 10:30 AM - 12:00 PM
Special Topics on Simulation
Analysis
Chair: Enver Yucesan (INSEAD)
To Batch or Not to Batch
Christos
Alexopoulos and David Goldsman (Georgia Institute of Technology)
Abstract:
When designing steady-state computer simulation
experiments, one is often faced with the choice of batching observations in
one long run or replicating a number of smaller runs. Both methods are
potentially useful in simulation output analysis. We give results and examples
to lend insight as to when one method might be preferred over the other. In
the steady-state case, batching and replication perform about the same in
terms of estimating the mean and variance parameter, though replication tends
to do better than batching when it comes to the performance of confidence
intervals for the mean. On the other hand, batching can often do better than
replication when it comes to point and confidence-interval estimation of the
steady-state mean in the presence of an initial transient. This is not
particularly surprising, and is a common rule of thumb in the folklore.
Better-than-Optimal Simulation Run
Allocation?
Chun-Hung Chen and Donghai He (George Mason University)
and Enver Yücesan (INSEAD)
Abstract:
Simulation is a popular tool for decision making.
However, simulation efficiency remains a major concern, particularly when multiple system designs must be simulated in order to find the best design. Simulation run allocation has emerged as an important research topic for improving simulation efficiency. By allocating simulation runs more intelligently, one can dramatically reduce the total simulation time. In
this paper we develop a new simulation run allocation scheme. We compare the
new approach with several different approaches. One benchmark approach assumes
that the means and variances for all designs are known so that the
theoretically optimal allocation can be found. It is interesting to observe
that an approximation approach called OCBA does better than this theoretically
optimal allocation. Moreover, a randomized version of OCBA may outperform OCBA
in some cases.
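A sketch of the OCBA allocation rule referred to above, applied to hypothetical sample statistics; the formula follows the commonly cited asymptotic allocation and is not taken from this paper.

```python
# Sketch of the OCBA allocation rule used as a reference point: split a total
# budget of additional replications across designs based on current sample
# means and standard deviations.  Input values are hypothetical.
import numpy as np

def ocba_allocation(means, stds, total_budget):
    """Return an integer allocation of total_budget replications per design."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    best = int(np.argmax(means))                 # assume larger mean is better
    delta = means[best] - means                  # optimality gaps
    ratios = np.zeros_like(means)

    others = [i for i in range(len(means)) if i != best]
    ref = others[0]                              # reference non-best design
    for i in others:
        # N_i / N_ref = (std_i / delta_i)^2 / (std_ref / delta_ref)^2
        ratios[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
    # N_best = std_best * sqrt(sum over i != best of (N_i / std_i)^2)
    ratios[best] = stds[best] * np.sqrt(np.sum((ratios[others] / stds[others]) ** 2))

    alloc = total_budget * ratios / ratios.sum()
    return np.floor(alloc).astype(int)

print(ocba_allocation(means=[1.0, 1.3, 1.5, 1.49],
                      stds=[1.0, 1.2, 0.9, 1.1],
                      total_budget=1000))
```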
Properties of Discrete Event Systems from their
Mathematical Programming Representations
Wai Kin Chan and Lee W.
Schruben (University of California, Berkeley)
Abstract:
An important class of discrete-event systems, tandem queueing networks, is considered and formulated as mathematical programming problems in which the constraints represent the system dynamics. The dual of the
mathematical programming formulation is a network flow problem where the
longest path equals the makespan of n jobs. This dual network provides an
alternative proof of the reversibility property of tandem queueing networks
under communication blocking. The approach extends to other systems.
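The system dynamics that such constraints encode can be written directly for a tandem line with ample buffers and no blocking; the sketch below computes the makespan from hypothetical service times (the communication-blocking variant analyzed in the paper modifies this recursion).

```python
# Sketch of the dynamics encoded by the mathematical-programming constraints
# for a tandem line with ample buffers (no blocking):
#   d[i][j] = max(d[i-1][j], d[i][j-1]) + s[i][j],
# and d[n-1][m-1] is the makespan of the n jobs.  Service times are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_jobs, n_stations = 5, 3
service = rng.exponential(1.0, size=(n_jobs, n_stations))

d = np.zeros((n_jobs, n_stations))
for i in range(n_jobs):
    for j in range(n_stations):
        prev_job = d[i - 1, j] if i > 0 else 0.0      # station frees after previous job
        prev_station = d[i, j - 1] if j > 0 else 0.0  # job arrives from upstream
        d[i, j] = max(prev_job, prev_station) + service[i, j]

print(d[-1, -1])                                      # makespan of all jobs
```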
Tuesday 1:30 PM - 3:00 PM
Queueing Network Simulation Analysis
Chair: Donald Gross (George Mason University)
Efficient Analysis of Rare Events Associated with
Individual Buffers in a Tandem Jackson Network
Ramya Dhamodaran and
Bruce C. Shultes (University of Cincinnati)
Abstract:
Over the last decade, importance sampling has been a
popular technique for the efficient estimation of rare event probabilities.
This paper presents an approach for applying balanced likelihood ratio
importance sampling to the problem of quantifying the probability that the
content of the second buffer in a two-node tandem Jackson network reaches some
high level before it becomes empty. Heuristic importance sampling
distributions are derived that can be used to estimate this overflow
probability in cases where the first buffer capacity is either finite or infinite.
The proposed importance sampling distributions differ from previous balanced
likelihood ratio methods in that they are specified as functions of the
contents of the buffers. Empirical results indicate that the relative errors of these importance sampling estimators are bounded independently of the buffer size when the second server is the bottleneck and are bounded linearly in the buffer size otherwise.
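For orientation, the sketch below applies classical importance sampling with swapped arrival and service rates to a single M/M/1 buffer-overflow probability; it is not the balanced likelihood-ratio, state-dependent scheme developed in the paper, and the rates and overflow level are hypothetical.

```python
# Classical single-queue illustration only: importance sampling for the
# probability that an M/M/1 queue starting with one customer reaches level N
# before emptying, using the standard change of measure that swaps arrival
# and service rates.
import numpy as np

rng = np.random.default_rng(2)

lam, mu, N = 0.3, 1.0, 20
p = lam / (lam + mu)            # nominal probability the next event is an arrival
p_is = mu / (lam + mu)          # importance-sampling probability (rates swapped)

def one_path():
    level, weight = 1, 1.0
    while 0 < level < N:
        if rng.random() < p_is:             # arrival under the IS measure
            level += 1
            weight *= p / p_is
        else:                               # departure under the IS measure
            level -= 1
            weight *= (1 - p) / (1 - p_is)
    return weight if level == N else 0.0    # likelihood ratio times the indicator

estimates = np.array([one_path() for _ in range(10000)])
print(estimates.mean(), estimates.std(ddof=1) / np.sqrt(len(estimates)))
```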
Developing Efficient Simulation Methodology for
Complex Queueing Networks
Ying-Chao Hung (National Central
University) and George Michailidis and Derek R. Bingham (The University of
Michigan)
Abstract:
Simulation can provide insight into the behavior of a
complex queueing system by identifying the response surface of several
performance measures such as delays and backlogs. However, simulations of
large systems are expensive both in terms of CPU time and use of available
resources (e.g. processors). Thus, it is of paramount importance to carefully
select the inputs of simulation in order to adequately capture the underlying
response surface of interest and at the same time minimize the required number
of simulation runs. In this study, we present a methodological framework for
designing efficient simulations for complex networks. Our approach works sequentially and combines the methods of CART (Classification And Regression Trees) and the design of experiments. A generalized switch model is used to
illustrate the proposed methodology and some useful applications are
described.
Queueing-Network Stability: Simulation-Based
Checking
Jamie R. Wieland, Raghu Pasupathy, and Bruce W. Schmeiser
(Purdue University)
Abstract:
Queueing networks are either stable or unstable, with
stable networks having finite performance measures and unstable networks having a number of customers that grows without bound as time goes to infinity. Stochastic
simulation methods for estimating steady-state performance measures often
assume that the network is stable. Here, we discuss the problem of checking
whether a given network is stable when the stability-checking algorithm is
allowed only to view arrivals and departures from the network.
Tuesday 3:30 PM - 5:00 PM
Efficient Simulation Procedures
Chair: Bruce Schmeiser (Purdue)
Comparison with a Standard via Fully Sequential
Procedures
Seong-Hee Kim (Georgia Institute of Technology)
Abstract:
We develop fully sequential procedures for comparison
with a standard. The goal is to find systems whose expected performance
measures are larger or smaller than that of a single system referred to as the standard and, if any such systems exist, to find the one with the largest or smallest performance. Our procedures allow for unequal variances across systems, the use of common random numbers, and known or unknown expected performance of the
standard. Experimental results are provided to compare the efficiency of the
procedure with other existing procedures.
A Simulation Study on Sampling and Selecting under
Fixed Computing Budget
Loo Hay Lee and Ek Peng Chew (National
University of Singapore)
Abstract:
For many real-world problems, where the design space is huge and unstructured and time-consuming simulation is needed to estimate the performance measure, it is important to decide how many designs should be sampled and how long the simulation should be run for each design alternative, given that we have only a fixed amount of computing time. In this paper, we
present a simulation study on how the distribution of the performance measure
and the distribution of the estimation error/noise will affect the decision.
From the analysis, it is observed that when the noise is bounded and there is a high chance of obtaining the smallest noise, the decision will be to sample as many designs as possible; but if the noise is unbounded, then it is important to reduce the noise level by assigning more simulation time to each design alternative.
Simulation-Based Retrospective Optimization of
Stochastic Systems: A Family of Algorithms
Jihong Jin and
Bruce Schmeiser (Purdue University)
Abstract:
We consider optimizing a stochastic system, given only
a simulation model that is parameterized by continuous decision variables. The
model is assumed to produce unbiased point estimates of the system performance
measure(s), which must be expected values. The performance measures may appear
in the objective function and/or in the constraints. We develop a family of
retrospective-optimization (RO) algorithms based on a sequence of sample-path
approximations to the original problem with increasing sample sizes. Each
approximation problem is obtained by substituting point estimators for each
performance measure and using common random numbers over all values of the
decision variables. We assume that these approximation problems can be
deterministically solved within a specified error in the decision variables,
and that this error is decreasing to zero. The computational efficiency of RO
arises from being able to solve the next approximation problem efficiently
based on knowledge gained from the earlier, easier approximation problems.
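A one-dimensional sketch of the retrospective-optimization idea follows: each stage fixes the random-number seed (common random numbers), enlarges the sample size, and solves the resulting sample-path problem deterministically from a warm start. The objective and sample-size schedule are illustrative assumptions.

```python
# Sketch of the retrospective-optimization idea on a one-dimensional problem:
# a sequence of sample-average approximations with increasing sample sizes,
# common random numbers via a fixed seed, and deterministic solves warm-started
# at the previous solution.  The objective is illustrative.
import numpy as np
from scipy.optimize import minimize

def sample_average_objective(x, sample_size, seed=12345):
    """Sample-path approximation of E[(x - D)^2] for a random demand D."""
    rng = np.random.default_rng(seed)          # same seed => common random numbers
    demand = rng.normal(5.0, 2.0, sample_size)
    return np.mean((x - demand) ** 2)

x_current = 0.0
for sample_size in [50, 200, 800, 3200]:       # increasing sample sizes
    result = minimize(lambda x: sample_average_objective(x[0], sample_size),
                      x0=[x_current], method="Nelder-Mead")
    x_current = float(result.x[0])             # warm start for the next stage

print(x_current)                               # should approach E[D] = 5.0
```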
Wednesday 8:30 AM - 10:00 AM
Issues on Simulation and Optimization I
Chair: Barry Nelson (Northwestern University)
Robust Simulation-Based Design of Hierarchical
Systems
Charles D. McAllister (Louisiana State University)
Abstract:
Hierarchical design scenarios arise when the
performance of large-scale, complex systems can be affected through the
optimal design of several smaller functional units or subsystems. Monte Carlo
simulation provides a useful technique to evaluate probabilistic uncertainty
in customer-specified requirements, design variables, and environmental
conditions while concurrently seeking to resolve conflicts among competing
subsystems. This paper presents a framework for multidisciplinary
simulation-based design optimization, and the framework is applied to the
design of a Formula 1 racecar. The results indicate that the proposed
hierarchical approach successfully identifies designs that are robust to the
observed uncertainty.
Optimal Experimental Design for Systems
Involving both Quantitative and Qualitative Factors
Navara
Chantarat, Ning Zheng, Theodore T. Allen, and Deng Huang (The Ohio State
University)
Abstract:
Often in discrete-event simulation, factors being
considered are qualitative, such as machine type, production method, job
release policy, and factory layout type. It is also often of interest to
create a Response Surface (RS) metamodel for visualization of input-output
relationships. Several methods have been proposed in the literature for RS
metamodeling with qualitative factors but the resulting metamodels may be
expected to predict poorly because of sensitivity to misspecification or bias.
This paper proposes the use of the Expected Integrated Mean Squared Error
(EIMSE) criterion to construct alternative optimal experimental designs. This
approach explicitly takes bias into account. We use a discrete-event
simulation example from the literature, coded in ARENATM, to illustrate the
proposed method and to compare metamodeling accuracy of alternative approaches
computationally.
Controlled Sequential Bifurcation: A New
Factor-Screening Method for Discrete-Event Simulation
Hong Wan,
Bruce Ankenman, and Barry L. Nelson (Northwestern University)
Abstract:
Screening experiments are performed to eliminate
unimportant factors so that the remaining important factors can be more
thoroughly studied in later experiments. Sequential bifurcation (SB) is a
screening method that is well suited for simulation experiments; the challenge
is to prove the "correctness" of the results. This paper proposes Controlled
Sequential Bifurcation (CSB), a procedure that incorporates a two-stage
hypothesis-testing approach into SB to control error and power. A detailed
algorithm is given, its performance guarantees are proved, and an empirical evaluation is presented.
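For contrast with CSB, the sketch below implements plain sequential bifurcation on a noisy linear model with non-negative effects, using a fixed threshold rather than the controlled two-stage hypothesis tests; all effect sizes and the threshold are hypothetical.

```python
# Basic sequential-bifurcation sketch (without CSB's error and power control):
# a group's aggregate effect is estimated from runs with the first k factors
# at the high level, and any group whose effect exceeds a threshold is bisected.
import numpy as np

rng = np.random.default_rng(9)

true_effects = np.array([4.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 2.5])  # unknown to SB
K, threshold, reps = len(true_effects), 1.0, 10

def run_model(k):
    """Average output with factors 1..k at the high level (noisy linear model)."""
    return np.mean(true_effects[:k].sum() + rng.normal(0.0, 0.5, reps))

y = {0: run_model(0), K: run_model(K)}           # cache of model evaluations

def bifurcate(lo, hi, important):
    """Recursively screen the factor group (lo, hi]."""
    if hi not in y:
        y[hi] = run_model(hi)
    if lo not in y:
        y[lo] = run_model(lo)
    if y[hi] - y[lo] <= threshold:
        return                                   # whole group declared unimportant
    if hi - lo == 1:
        important.append(hi)                     # single factor left in the group
        return
    mid = (lo + hi) // 2
    bifurcate(lo, mid, important)
    bifurcate(mid, hi, important)

important = []
bifurcate(0, K, important)
print(sorted(important))                         # expected: factors 1, 4, and 8
```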
Wednesday 10:30 AM - 12:00 PM
Issues on Simulation and
Optimization II
Chair: Frederick Wieland (MITRE)
Some Issues in Multivariate Stochastic Root
Finding
Raghu Pasupathy and Bruce W. Schmeiser (Purdue University)
Abstract:
The stochastic root finding problem (SRFP) involves
finding points in a region where a function attains a prespecified target
value, using only a consistent estimator of the function. Due to the
properties that the SRFP contexts entail, the development of good solutions to
SRFPs has proven difficult, at least in the multi-dimensional setting. This
paper discusses certain key issues, insights and complexities for SRFPs. Some
of these are important in that they point to phenomena that contribute to the
difficulties that arise in the development of efficient algorithms for SRFPs.
Others are simply observations, sometimes obvious, but important for providing
useful insight into algorithm development.
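To make the problem statement concrete, the sketch below runs a classical Robbins-Monro stochastic-approximation recursion on a two-dimensional SRFP with a linear underlying function; the paper discusses difficulties faced by such algorithms rather than proposing this one.

```python
# Classical stochastic-approximation baseline (Robbins-Monro), included only
# to make the SRFP concrete.  Assumes the unknown function is increasing in
# each coordinate near the root; the function and target are hypothetical.
import numpy as np

rng = np.random.default_rng(4)

target = np.array([2.0, -1.0])

def noisy_g(x):
    """Consistent (noisy) estimate of an unknown function g; here g(x) = A x."""
    A = np.array([[2.0, 0.3], [0.1, 1.5]])
    return A @ x + rng.normal(0.0, 0.2, size=2)

x = np.zeros(2)
for k in range(1, 5001):
    step = 1.0 / k                               # diminishing step sizes
    x = x - step * (noisy_g(x) - target)         # move against the residual

print(x)                                         # approximate root of g(x) = target
```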
Targeting Aviation Delay through Simulation
Optimization
Frederick Wieland and Thomas Curtis Holden (The MITRE
Corporation)
Abstract:
Analyses of benefits due to changes in the National
Airspace System (NAS) tend to focus on the delay reduction (or similar
metric) given a fixed traffic schedule. In this paper, we explore the use of
simulation optimization to solve for the increased traffic volume that the
proposed change can support given a constant delay. The increased traffic
volume as a result of the change can therefore be considered another benefit
metric. As the NAS is a highly nonlinear stochastic system, computing the increased traffic volume necessarily requires stochastic optimization methods.
Robust Hybrid Designs for Real-Time Simulation
Trials
Russell C. H. Cheng and Owen D. Jones (University of
Southampton)
Abstract:
Real-time simulation trials involve people and are
particularly subject to a number of natural constraints imposed by standard
work patterns as well as to the vagaries of the availability of individuals
and unscheduled upsets. They also typically involve many factors. Well
thought-out simulation experimental design is therefore especially important
if the resulting overall trial is to be efficient and robust. We propose
hybrid experimental designs that combine the safety of matched runs with the
efficiency of fractional factorial designs. This article describes real experiences in this area and the approach and methodology that have evolved from them and have proved effective in practice.