WSC 2005 Final Abstracts
Analysis Methodology A Track
Monday 10:30 AM – 12:00 PM
Rare-Event Simulation
Chair: Assaf Zeevi (Columbia University)
Importance Sampling Simulation in the Presence
of Heavy Tails
Achal Bassamboo (Kellogg School of Management),
Sandeep Juneja (Tata Institute of Fundamental Research) and Assaf Zeevi
(Columbia Business School)
Abstract:
We consider importance sampling simulation for
estimating rare event probabilities in the presence of heavy-tailed
distributions that have polynomial-like tails. In particular, we prove the
following negative result: there does not exist an asymptotically optimal
state-independent change-of-measure for estimating the probability that a
random walk (respectively, queue length for a single server queue) exceeds a
"high" threshold before going below zero (respectively, becoming empty).
Furthermore, we derive explicit bounds on the best asymptotic variance
reduction achieved by importance sampling relative to naive simulation. We
illustrate through a simple numerical example that a "good" state-dependent
change-of-measure may be developed based on an approximation of the
zero-variance measure.
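As an editorial illustration of the change-of-measure mechanics discussed above, the sketch below (Python; parameters are hypothetical) estimates the one-dimensional tail probability P(X > b) of a Pareto variable both naively and by sampling from a heavier-tailed Pareto proposal, reweighting by the density ratio. It shows only the likelihood-ratio estimator; it does not reproduce the paper's random-walk setting, where the authors prove that no state-independent measure is asymptotically optimal.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto(alpha, size):
    # Pareto(alpha) on [1, inf): P(X > x) = x**(-alpha)
    return (1.0 - rng.random(size)) ** (-1.0 / alpha)

def naive_estimate(alpha, b, n):
    # plain Monte Carlo estimate of P(X > b)
    return (pareto(alpha, n) > b).mean()

def is_estimate(alpha, b, n, alpha_q):
    # importance sampling from a heavier-tailed Pareto(alpha_q) proposal;
    # weight = target density / proposal density
    x = pareto(alpha_q, n)
    w = (alpha / alpha_q) * x ** (alpha_q - alpha)
    return (w * (x > b)).mean()

alpha, b, n = 2.0, 1e3, 100_000
print(naive_estimate(alpha, b, n))          # true value is b**(-alpha) = 1e-6
print(is_estimate(alpha, b, n, alpha_q=0.5))
```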
Some Large Deviations Results for Latin Hypercube
Sampling
Shane S. Drew and Tito Homem-de-Mello (Northwestern
University)
Abstract:
Large deviations theory is a well-studied area that has
been shown to have numerous applications. The typical results, however, assume
that the underlying random variables are either i.i.d. or exhibit some form of
Markovian dependence. Our interest in this paper is to study the validity of
large deviations results in the context of estimators built with Latin
Hypercube sampling, a well-known sampling technique for variance reduction.
We show that a large deviation principle holds for Latin Hypercube sampling
for functions in one dimension and for separable multi-dimensional functions.
Moreover, the upper bound of the probability of a large deviation in these
cases is no higher under Latin Hypercube sampling than it is under Monte Carlo
sampling. We extend the latter property to functions that preserve negative
dependence (such as functions that are monotone in each argument). Numerical
experiments illustrate the theoretical results presented in the paper.
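For readers unfamiliar with the technique, the following minimal sketch (Python; the test integrand is our own choice, not the paper's) builds a Latin Hypercube sample by stratifying each coordinate into n equal cells and permuting across dimensions, then compares the spread of the resulting estimator with plain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def lhs(n, d):
    """Latin hypercube sample: one point per row, each coordinate stratified
    into n equal cells, with an independent random permutation per dimension."""
    pts = np.empty((n, d))
    for j in range(d):
        pts[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return pts

def estimator_std(sampler, f, n, d, reps=200):
    # standard deviation of the sample-mean estimator across replications
    return np.std([f(sampler(n, d)).mean() for _ in range(reps)])

mc = lambda n, d: rng.random((n, d))
f = lambda u: u.sum(axis=1) ** 2          # smooth test integrand on [0,1]^d
print("MC  std:", estimator_std(mc, f, 100, 3))
print("LHS std:", estimator_std(lhs, f, 100, 3))
```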
Limit Theorems for the Multilevel Splitting
Algorithm in the Simulation of Rare Events
Frédéric Cérou and
François Le Gland (IRISA / INRIA Rennes), Pierre Del Moral (Université de
Nice-Sophia Antipolis) and Pascal Lezaud (CENA / ENAC)
Abstract:
In this article, a genetic-type algorithm based on
interacting particle systems is presented, together with a genealogical model,
for estimating a class of rare events arising, for instance, in
telecommunication networks and nuclear engineering. The distribution of a
Markov process hitting a rare but critical set is represented in terms of a
Feynman-Kac model in path space. Approximation results obtained previously for
these models are applied here to estimate the probability of the rare events
as well as the probability distribution of the critical trajectories.
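A minimal fixed-level splitting sketch in the spirit of such algorithms (Python; the drift, levels, and particle count are illustrative, and the Feynman-Kac path-space formulation is not reproduced): particles that reach the next level before the floor survive and are resampled, and the rare-event probability is estimated as the product of stage survival fractions.

```python
import numpy as np

rng = np.random.default_rng(2)

def splitting(levels, n, drift=-0.5, floor=0.0):
    """Estimate P(random walk reaches levels[-1] before floor), starting at
    levels[0], by multilevel splitting with multinomial resampling."""
    prob = 1.0
    states = np.full(n, float(levels[0]))
    for nxt in levels[1:]:
        survivors = []
        for s in states:
            while floor < s < nxt:          # run until absorption at a boundary
                s += drift + rng.standard_normal()
            if s >= nxt:
                survivors.append(s)
        if not survivors:
            return 0.0
        prob *= len(survivors) / n          # conditional stage probability
        states = rng.choice(survivors, size=n)  # resample survivors to n particles
    return prob

print(splitting(levels=[1.0, 3.0, 5.0, 7.0], n=2000))
```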
Monday 1:30 PM – 3:00 PM
Simulation Optimization: Selection
Procedures I
Chair: Stephen Chick (INSEAD)
Procedures for Feasibility Detection in the
Presence of Multiple Constraints
Demet Batur and Seong-Hee Kim
(Georgia Institute of Technology)
Abstract:
In this paper, we address the problem of finding a set
of feasible or near-feasible systems among a finite number of simulated
systems in the presence of stochastic constraints. Andradóttir, Goldsman, and
Kim (2005) present a procedure that detects feasibility of systems in the
presence of one constraint with a pre-specified probability of correctness. We
extend their procedure to the case of multiple constraints by the use of the
Bonferroni inequality. Unfortunately, the resulting procedure tends to be very
conservative when the number of systems or constraints is large. As a remedy,
we present a screening procedure that uses an aggregated observation, which is
a linear combination of the collected observations across stochastic
constraints. Then, we present an accelerated procedure that combines the
extension of Andradóttir, Goldsman, and Kim (2005) with the procedure that
uses aggregated observations. Some experimental results that compare the
performance of the proposed procedures are presented.
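The Bonferroni step can be made concrete with a small sketch (Python, using scipy; a fixed-sample check rather than the authors' sequential procedure): each of m constraints is tested at level alpha/m, so the overall error probability is at most alpha.

```python
import numpy as np
from scipy import stats

def feasible(samples, thresholds, alpha=0.05):
    """Declare one system feasible w.r.t. m constraints E[Y_j] <= q_j.
    samples: (n, m) i.i.d. observations; each constraint is tested at
    level alpha/m (Bonferroni), so the overall error is at most alpha."""
    n, m = samples.shape
    ybar = samples.mean(axis=0)
    se = samples.std(axis=0, ddof=1) / np.sqrt(n)
    ucb = ybar + stats.t.ppf(1.0 - alpha / m, n - 1) * se  # upper confidence bounds
    return bool(np.all(ucb <= thresholds))

rng = np.random.default_rng(3)
obs = rng.normal([0.4, 1.8], 0.5, size=(200, 2))   # two constraint outputs
print(feasible(obs, thresholds=np.array([0.5, 2.0])))
```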
A Moving Mesh Approach for Simulation Budget
Allocation on Continuous Domains
Mark W. Brantley and Chun-Hung
Chen (George Mason University)
Abstract:
We introduce a moving mesh algorithm for simulation
optimization across a continuous domain. During each iteration, the mesh
movement is determined by allocating simulation runs to partitions of the
domain that are critical in the process of identifying good solutions. The
partition boundaries are then adjusted to equally distribute the allocation
runs between each partition. To test the moving mesh algorithm, we implemented
it using the OCBA method to allocate simulation runs to each partition. But
the simplicity of the procedure should provide flexibility for it to be used
with other simulation optimization techniques. Results are presented for
several numerical experiments and suggest that the technique has potential
given further development.
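For reference, a sketch of the classic OCBA allocation fractions that such a method would compute across partitions (Python; this is the standard allocator of Chen et al., not the moving-mesh algorithm itself):

```python
import numpy as np

def ocba_fractions(means, stds):
    """Classic OCBA allocation fractions for selecting the minimum mean:
    N_i/N_j = (s_i/d_i)^2 / (s_j/d_j)^2 for non-best i, j, and
    N_b = s_b * sqrt(sum_i (N_i/s_i)^2) for the current best b."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))
    others = [i for i in range(len(means)) if i != b]
    delta = means - means[b]
    r = np.zeros(len(means))
    ref = others[0]
    for i in others:
        r[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
    r[b] = stds[b] * np.sqrt(np.sum((r[others] / stds[others]) ** 2))
    return r / r.sum()

print(ocba_fractions([1.0, 1.2, 1.5], [0.8, 0.8, 0.8]))
```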
New Developments in Ranking and Selection: An
Empirical Comparison of the Three Main Approaches
Jürgen Branke and
Christian Schmidt (Universität Karlsruhe (TH)) and Stephen E. Chick (INSEAD)
Abstract:
Selection procedures are used in many applications to
select the best of a finite set of alternatives, as in discrete optimization
with simulation. There are a wide variety of procedures, which begs the
question of which selection procedure to select. This paper (a) summarizes the
main structural approaches to deriving selection procedures, (b) describes an
innovative empirical testbed, and (c) summarizes results from work in progress
that provides the most exhaustive assessment of selection procedures to date.
The most efficient and easiest-to-control procedures allocate samples using a
Bayesian model for uncertainty about the means and use a new stopping rule
based on expected opportunity cost.
Monday 3:30 PM – 5:00 PM
Simulation Optimization: Selection
Procedures II
Chair: Seong-Hee Kim (Georgia Institute of Technology)
Determination of the 'Best' That Meets a Limit
Standard
Roy R. Creasey Jr., Melanie B. Marks, and Bennie D. Waller
(Longwood University) and K. Preston White, Jr. (University of Virginia)
Abstract:
This paper describes ongoing research in which we
compare, via simulation experiments, a stochastic system to a standard. We are
particularly interested in a subset of standards we call limit standards. A
limit standard is a maximum or minimum benchmark derived from requirements,
another model, or the actual system. The problem is to determine if a system
meets the limit standard at customer-defined probabilities. Then, for those
systems that meet the limit standard, we identify which system is the “best,”
i.e., which results in the lowest probability of reaching the standard. Current comparison
methods are based on expected value and cannot solve this type of problem. We
outline a two-step approach, using methods from acceptance sampling and
order statistics, to solve this problem.
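One building block of such an approach can be sketched directly (Python, using scipy; a single exact Clopper-Pearson bound, not the paper's two-step procedure): an upper confidence bound on the probability of exceeding the limit, to be compared against the customer-defined probability.

```python
import numpy as np
from scipy import stats

def exceedance_upper_bound(x, limit, alpha=0.05):
    """Clopper-Pearson upper confidence bound on P(X > limit) from i.i.d.
    replications x; a system 'meets' the limit standard at level p_max
    if this bound falls below p_max."""
    n, k = len(x), int(np.sum(x > limit))
    return 1.0 if k == n else stats.beta.ppf(1 - alpha, k + 1, n - k)

rng = np.random.default_rng(4)
waits = rng.exponential(2.0, size=500)             # hypothetical output data
print(exceedance_upper_bound(waits, limit=10.0))   # compare to p_max, e.g. 0.02
```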
Using Parallel and Distributed Computing to Increase
the Capability of Selection Procedures
E. Jack Chen (BASF
Corporation)
Abstract:
We present a framework for deploying selection
procedures in a parallel and distributed environment to identify the
stochastic systems with optimal expected response when the number of
alternatives is large. The goal is to speed up the process of identifying a
good design with a specified probability. We present a sequential selection
procedure and discuss the validity of dividing the entire set of alternative
designs into several groups that can be processed in a parallel and
distributed fashion. The surviving designs from each group are then processed
subsequently. An experimental evaluation demonstrates the validity and
efficiency of the procedure.
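A skeleton of the group-wise division described above (Python; `simulate`, the group sizes, and the within-group rule are hypothetical placeholders for the paper's sequential selection procedure): each group is screened in parallel, and the group winners meet in a final playoff.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def simulate(design):
    # hypothetical simulation response for a design index (placeholder)
    rng = np.random.default_rng()
    return -abs(design - 7) + rng.standard_normal()

def best_of_group(group, n0=50):
    """Sample-best design in a group after n0 replications each (a plain
    placeholder, not a statistically guaranteed selection procedure)."""
    means = {int(d): np.mean([simulate(d) for _ in range(n0)]) for d in group}
    return max(means, key=means.get)

def parallel_select(designs, n_groups=4):
    groups = np.array_split(np.asarray(designs), n_groups)
    with ProcessPoolExecutor() as ex:
        winners = list(ex.map(best_of_group, groups))  # groups run in parallel
    return best_of_group(winners)                      # playoff among winners

if __name__ == "__main__":
    print(parallel_select(range(100)))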
Finding the Best in the Presence of a Stochastic
Constraint
Sigrún Andradóttir, David Goldsman, and Seong-Hee Kim
(Georgia Institute of Technology)
Abstract:
Our problem is that of finding the best system, i.e.,
the system with the largest or smallest primary performance measure, among a
finite number of simulated systems in the presence of a stochastic constraint
on a secondary performance measure. In order to solve this problem, we first
find a set that contains only feasible or near-feasible systems (Phase I) and
then choose the best among those systems in the set (Phase II). We present a
statistically valid procedure for Phase I and then propose another procedure
that performs Phases I and II sequentially to find the best feasible system.
Finally, we provide some experimental results for the second procedure.
Tuesday 8:30 AM – 10:00 AM
Plenary: Initial Transient
Chair: Shane Henderson (Cornell University)
Initial Transient Problem for Steady-state Output
Analysis
Peter W. Glynn (Stanford University)
Abstract:
This tutorial is concerned with providing an overview
of the key issues that arise in consideration of the initial transient problem
for steady-state simulations. In addition, we will discuss the related problem
of construction of low-bias estimators.
Tuesday 10:30 AM – 12:00 PM
Input and Output Analysis
Chair: John Fowler (Arizona State University)
A Test of Randomness Based on the Distance Between
Consecutive Random Number Pairs
Matthew J. Duggan, John H. Drew,
and Lawrence M. Leemis (The College of William & Mary)
Abstract:
One of the most popular methods of generating "random"
numbers for use in a computer program employs a Lehmer random number
generator. If constructed properly, these generators will yield streams of
numbers that will appear random but can also be easily replicated. However,
Marsaglia (1968) discovered a potential flaw in these random number
generators, namely, if consecutive n-tuples of the generated numbers are
plotted in n-dimensional space, the points may fall into a small number of
hyperplanes. In this paper, we compare the theoretical distribution of the
distances to the actual distances produced by a specific random number
generator in order to devise a statistical test of randomness for the validity
of the random number generator.
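A small sketch of the test idea (Python, using scipy): generate a stream from a Lehmer generator, form consecutive non-overlapping pairs as points in the unit square, and compare the distances between consecutive points against those from a reference generator. The paper derives the theoretical distance distribution; here a two-sample Kolmogorov-Smirnov test stands in for that comparison.

```python
import numpy as np
from scipy import stats

def lehmer(seed, n, a=16807, m=2**31 - 1):
    """Lehmer (multiplicative congruential) generator: x <- a*x mod m."""
    out, x = np.empty(n), seed
    for i in range(n):
        x = (a * x) % m
        out[i] = x / m
    return out

def pair_distances(u):
    """Distances between consecutive points (u1,u2), (u3,u4), ... in [0,1]^2."""
    pts = u[: len(u) // 2 * 2].reshape(-1, 2)
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)

d_gen = pair_distances(lehmer(12345, 20_000))
d_ref = pair_distances(np.random.default_rng(5).random(20_000))
print(stats.ks_2samp(d_gen, d_ref))   # small p-value would flag non-randomness
```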
Cycle-Time Quantile Estimation in Manufacturing
Systems Employing Dispatching Rules
Jennifer E. McNeill, John W.
Fowler, and Gerald T. Mackulak (Arizona State University) and Barry L. Nelson
(Northwestern University)
Abstract:
The cycle-time distribution of manufacturing systems
employing dispatching rules other than FIFO can be both highly skewed and
heavy-tailed. Previous cycle-time quantile estimation work has suggested that
the Cornish-Fisher expansion can be used in conjunction with discrete-event
simulation to provide cycle-time quantile estimates for a variety of systems
operating under FIFO dispatching without requiring excess data storage.
However, when the cycle-time distribution exhibits heavy skewness and
kurtosis, the accuracy of quantile estimates obtained using the Cornish-Fisher
expansion may degrade, sometimes severely. This paper demonstrates the
degradation and motivates the need for a modification to the Cornish-Fisher
expansion for estimating quantiles under non-FIFO dispatching rules. A
solution approach combining a data transformation, the maximum
(minimum)-transformation, with the Cornish-Fisher expansion is presented.
Results show that this provides significant improvements in accuracy over
using the Cornish-Fisher expansion alone while still retaining the advantage
of requiring minimal data storage.
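For concreteness, the standard (unmodified) Cornish-Fisher quantile approximation that this work builds on can be sketched as follows (Python, using scipy; the lognormal test data are our own stand-in for skewed cycle times):

```python
import numpy as np
from scipy import stats

def cornish_fisher_quantile(x, p):
    """Approximate the p-quantile of x from its first four moments via the
    standard Cornish-Fisher expansion (no max/min transformation)."""
    mu, sigma = x.mean(), x.std(ddof=1)
    s = stats.skew(x)
    k = stats.kurtosis(x)              # excess kurtosis
    z = stats.norm.ppf(p)
    w = (z
         + (z**2 - 1.0) * s / 6.0
         + (z**3 - 3.0 * z) * k / 24.0
         - (2.0 * z**3 - 5.0 * z) * s**2 / 36.0)
    return mu + sigma * w

rng = np.random.default_rng(6)
x = rng.lognormal(0.0, 0.5, size=20_000)   # skewed stand-in for cycle times
print(cornish_fisher_quantile(x, 0.95), np.quantile(x, 0.95))
```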
Linear Combinations of Overlapping Variance
Estimators for Simulations
Tûba Aktaran-Kalaycı and David Goldsman
(Georgia Institute of Technology)
Abstract:
We examine properties of linear combinations of
overlapping standardized time series area estimators for the variance
parameter of a stationary stochastic process. We find that the linear
combination estimators have lower bias and variance than their overlapping
constituents and nonoverlapping counterparts; in fact, the new estimators also
perform particularly well against the benchmark batch means estimator. We
illustrate our findings with analytical and Monte Carlo examples.
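The constituent estimators can be stated compactly (Python; a minimal sketch of the nonoverlapping and overlapping batch means estimators of the variance parameter, not the paper's linear combinations of area estimators):

```python
import numpy as np

def nbm(x, b):
    """Nonoverlapping batch means estimator of sigma^2 = lim n*Var(mean)."""
    k = len(x) // b
    bm = x[: k * b].reshape(k, b).mean(axis=1)
    return b * bm.var(ddof=1)

def obm(x, b):
    """Overlapping batch means estimator (Meketon-Schmeiser form)."""
    n = len(x)
    c = np.concatenate(([0.0], np.cumsum(x)))
    means = (c[b:] - c[:-b]) / b            # all n-b+1 overlapping batch means
    return n * b * np.sum((means - x.mean()) ** 2) / ((n - b + 1) * (n - b))

rng = np.random.default_rng(7)
y = np.convolve(rng.standard_normal(100_000), [0.5, 0.5])[:100_000]  # dependent series
print(nbm(y, 1000), obm(y, 1000))
```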
Tuesday 1:30 PM – 3:00 PM
Steady-State Output Analysis
Chair: Emily Lada (SAS Institute, Inc.)
Automated Analysis of Simulation Output
Data
Stewart Robinson (University of Warwick)
Abstract:
Appropriate analysis of simulation output is important
to the success of a simulation study. Many users, however, do not have the
skills required to perform such analyses. One way of overcoming this problem
is to provide automated tools for analyzing simulation output. An Excel-based
automated “Analyser” is described that performs an analysis of a single
scenario. The Analyser links to a commercial simulation software package,
SIMUL8, and provides recommendations on warm-up, number of replications and
run-length. Various standard procedures are used in the Analyser with some
adaptations to make them suitable for automation. This research demonstrates
the potential of the approach. A requirement for further development is more
thorough testing of the analysis procedures, particularly for their robustness
and generality in use. Further adaptation of the procedures for automation may
also be required.
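One of the standard procedures such an analyser automates is a sequential rule for the number of replications; a minimal sketch (Python, using scipy; the precision target and initial sample size are illustrative):

```python
import numpy as np
from scipy import stats

def replications_needed(run_once, rel_precision=0.05, n0=10, n_max=10_000):
    """Add replications until the 95% CI half-width falls within
    rel_precision of the point estimate; returns the sample size used."""
    ys = [run_once() for _ in range(n0)]
    while len(ys) < n_max:
        n = len(ys)
        h = stats.t.ppf(0.975, n - 1) * np.std(ys, ddof=1) / np.sqrt(n)
        if h <= rel_precision * abs(np.mean(ys)):
            return n
        ys.append(run_once())
    return n_max

rng = np.random.default_rng(12)
print(replications_needed(lambda: rng.normal(10.0, 2.0)))
```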
Exploring Exponentially Weighted Moving Average
Control Charts to Determine the Warm-up Period
Manuel D. Rossetti
and Zhe Li (University of Arkansas) and Peng Qu (Advanced Micro Devices)
Abstract:
In this paper, we examine the use of exponentially
weighted moving average (EWMA) control charts for the detection of
initialization bias in steady state simulation experiments. EWMA charts have
the interesting property of being more sensitive to shifts in the data as
compared to other control charting techniques. We exploit this sensitivity by
developing a criterion for searching for the deletion point when the EWMA is
applied to the reversed data sequence. This allows us to more easily detect
and count the number of times the smoothed sequence remains in control. Our
results indicate that the procedure can quickly find and recommend a deletion
point. In addition, the properties of the resulting estimators are good
provided the dataset being analyzed does not contain an overly large number of
biased data points. We use experimental test cases to illustrate the properties of
the procedure.
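A minimal sketch of the reversed-sequence idea (Python; the chart parameters and the half-split calibration are our own simplifications of the authors' run-counting criterion): the chart is calibrated on the presumed-steady tail and scanned backward until it signals, giving a candidate truncation point.

```python
import numpy as np

def ewma_truncation_point(y, lam=0.1, L=3.0):
    """Scan the time-reversed output with an EWMA chart calibrated on the
    first half of the reversed series; the first out-of-control signal,
    mapped back to original time, is a candidate deletion point."""
    r = np.asarray(y, float)[::-1]
    mu = r[: len(r) // 2].mean()
    sigma = r[: len(r) // 2].std(ddof=1)
    z = mu
    for i, x in enumerate(r, start=1):
        z = lam * x + (1 - lam) * z
        hw = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if abs(z - mu) > hw:
            return len(r) - i          # original-time index of the signal
    return 0                           # no signal: no deletion suggested

rng = np.random.default_rng(8)
t = np.arange(5000)
y = 5.0 * np.exp(-t / 300.0) + rng.standard_normal(5000)  # transient + noise
print(ewma_truncation_point(y))
```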
Performance Evaluation of ASAP3 for Steady-state
Output Analysis
Natalie M. Steiger (University of Maine), Emily K.
Lada (SAS Institute Inc.) and James R. Wilson (NC State University)
Abstract:
An experimental performance evaluation of ASAP3 is
presented, including several queueing systems with characteristics typically
encountered in practical applications of steady-state simulation analysis
procedures. Based on the method of nonoverlapping batch means, ASAP3 is a
sequential procedure designed to produce a confidence-interval estimator of a
steady-state mean response that satisfies user-specified precision and
coverage-probability requirements. ASAP3 is compared with its predecessor ASAP
and the batch means procedure of Law and Carson (LC) in the following test
problems: (a) queue waiting times in the M/M/1/LIFO, M/H_2/1, and M/M/1 queues
with 80% server utilization; and (b) response (sojourn) times in a central
server model of a computer system. Regarding conformance to the given
precision and coverage-probability requirements, ASAP3 compared favorably with
the ASAP and LC procedures. Regarding the average sample sizes needed to
satisfy progressively more stringent precision requirements, ASAP3's
efficiency was reasonable for the given test problems.
Tuesday 3:30 PM – 5:00 PM
Panel: Analysis Methodology
Reflection
Chair: Enver Yücesan (INSEAD)
Analysis Methodology: Are We
Done?
Enver Yücesan (INSEAD), Sigrún Andradóttir and David Goldsman
(Georgia Institute of Technology), Bruce W. Schmeiser (Purdue University) and
Lee W. Schruben (University of California, Berkeley)
Abstract:
Since 1967, the Winter Simulation Conference has been a
forum for the introduction of innovative approaches to effectively analyze
discrete-event simulation experiments. The goal of this panel is to bring
together key contributors to analysis methodology research in order to clarify
areas that they think are essentially complete, and identify areas that need
more work. In doing so, we hope to help provide direction to younger
researchers looking for the "right" problems to work on.
Wednesday 8:30 AM – 10:00 AM
Simulation Optimization I
Chair: Raghu Pasupathy (Virginia Tech)
Two Simulated Annealing Algorithms for Noisy
Objective Functions
Andrei A. Prudius and Sigrún Andradóttir
(Georgia Institute of Technology)
Abstract:
We present two new variants of the simulated annealing
algorithm (with a decreasing cooling schedule) that are designed for solving
discrete simulation optimization problems. We also provide conditions under
which our methods converge almost surely to the set of global optimal
solutions, discuss the implications of our results for both transient and
steady-state simulations, and provide some numerical results.
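For orientation, the baseline scheme such variants build on can be sketched as follows (Python; the objective, uniform proposal, replication count, and logarithmic cooling constant are illustrative, not the authors' algorithms): each candidate's value is estimated by averaging a few noisy replications before the usual acceptance test.

```python
import numpy as np

rng = np.random.default_rng(9)

def noisy_f(x):
    # hypothetical discrete objective observed through simulation noise
    return (x - 3) ** 2 + rng.standard_normal()

def sa_noisy(domain, iters=3000, reps=5, c=2.0):
    """Baseline simulated annealing for a noisy objective: average `reps`
    replications per evaluation; decreasing cooling T_k = c / log(k + 2)."""
    est = lambda v: np.mean([noisy_f(v) for _ in range(reps)])
    x = rng.choice(domain)
    fx = est(x)
    best, fbest = x, fx
    for k in range(iters):
        T = c / np.log(k + 2)
        y = rng.choice(domain)            # uniform proposal over the domain
        fy = est(y)
        if fy <= fx or rng.random() < np.exp((fx - fy) / T):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best

print(sa_noisy(np.arange(-20, 21)))       # true optimum is x = 3
```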
Discrete Optimization via Simulation Using Coordinate
Search
L. Jeff Hong (The Hong Kong University of Science and
Technology)
Abstract:
In this paper we propose a coordinate search algorithm
to solve the optimization-via-simulation problems with integer-ordered
decision variables. We show that the sequence of solutions generated by the
algorithm converges to the set of local optimal solutions with probability 1
and the estimated optimal values satisfy a central limit theorem. We compare
the coordinate search algorithm to the COMPASS algorithm proposed in Hong and
Nelson (2004) through a set of numerical experiments. We find that the
coordinate-search algorithm performs better.
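The basic coordinate-search move can be sketched briefly (Python; a plain improving-neighbor version with a hypothetical noisy objective, omitting the convergence machinery of the paper): probe +/-1 along each integer coordinate and move whenever the sample mean improves.

```python
import numpy as np

rng = np.random.default_rng(10)

def g(x):
    # hypothetical noisy objective on the integer lattice
    return float(np.sum((x - np.array([4, -2])) ** 2)) + rng.standard_normal()

def coordinate_search(f, x0, reps=20, max_iter=100):
    """Move to an improving +/-1 neighbor along each coordinate until no
    coordinate step improves the estimated objective."""
    est = lambda v: np.mean([f(v) for _ in range(reps)])
    x = np.array(x0, int)
    fx = est(x)
    for _ in range(max_iter):
        improved = False
        for j in range(len(x)):
            for step in (1, -1):
                y = x.copy(); y[j] += step
                fy = est(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            break
    return x

print(coordinate_search(g, [0, 0]))
```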
Stochastic Optimization Using Model Reference Adaptive
Search
Jiaqiao Hu, Michael C. Fu, and Steven I. Marcus (University
of Maryland)
Abstract:
We apply the recently introduced approach of model
reference adaptive search to the setting of stochastic optimization and report
on simulation results.
Wednesday 10:30 AM – 12:00 PM
Simulation Optimization II
Chair: David Goldsman (Georgia Institute of
Technology)
Subcontracting in a Make-to-Stock Production
System: IPA Gradients for an SFM
Sameh Al-Shihabi (University of
Jordan)
Abstract:
This work extends the single-product, single-facility,
single-warehouse Make-to-Stock (MTS) model of Zhao and Melamed (2004) by
allowing subcontracting. In addition to the inventory triggering level
employed in their model, a new subcontracting triggering level is used to
manage the subcontracted amount. A stochastic fluid model (SFM) is used to
model the MTS system with subcontracting. In the current model, the
subcontractor is assumed capable of satisfying any difference between the
demand and the in-house production to keep the inventory at the subcontracting
triggering level. The sensitivity of the inventory on-hand, backorders and
amounts subcontracted with respect to the two employed triggering levels are
found using infinitesimal perturbation analysis (IPA).
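IPA itself is easiest to see in the textbook G/G/1 waiting-time example rather than the paper's fluid model (Python; with service times theta * X_n, the sample-path derivative accumulates X_n while the queue stays busy and resets at idle periods):

```python
import numpy as np

rng = np.random.default_rng(11)

def ipa_mean_wait_gradient(theta, n=200_000):
    """d/dtheta of the mean waiting time in a G/G/1 queue with service
    times theta * X_n (classic IPA, not the paper's MTS fluid model)."""
    W = dW = 0.0
    total = 0.0
    for _ in range(n):
        X = rng.exponential(1.0)      # base service variate
        A = rng.exponential(1.25)     # interarrival time (stable for theta < 1.25)
        nxt = W + theta * X - A       # Lindley recursion
        if nxt > 0.0:
            W, dW = nxt, dW + X       # busy period: derivative accumulates
        else:
            W, dW = 0.0, 0.0          # idle period: derivative resets
        total += dW
    return total / n

print(ipa_mean_wait_gradient(theta=1.0))
```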
Performance of Variance Updating Ranking and Selection
Procedures
Gwendolyn J. Malone (DRS Technologies) and Seong-Hee
Kim, David Goldsman, and Demet Batur (Georgia Institute of Technology)
Abstract:
Kim and Nelson (2005) developed two indifference-zone
procedures for steady-state simulation where the goal is to find the system
with the largest or smallest expected steady-state performance measure. One of
the procedures, called KN++, updates a variance estimate as more observations
become available and is proven to be asymptotically valid when there is no
dependence across systems (for example, there is no use of common random
numbers). Their procedure exhibits significant improvement over other existing
procedures for use in steady-state simulation. In this paper, we first present
a modification of KN++ that is asymptotically valid with the use of common
random numbers. Then, we study how well KN++ works when data within a system
are independent and identically distributed, but data between systems may be
positively correlated. Specific applications include the finding-the-best
problem when (i) the data are normal, and (ii) the data are Bernoulli.
Multiobjective Simulation Optimization Using an
Enhanced Genetic Algorithm
Hamidreza Eskandari, Luis Rabelo, and
Mansooreh Mollaghasemi (University of Central Florida)
Abstract:
This paper presents an improved genetic algorithm
approach, based on a new ranking strategy, for conducting multiobjective
optimization of simulation modeling problems. This approach integrates a
simulation model with a stochastic nondomination-based multiobjective
optimization technique and genetic algorithms. New genetic operators are
introduced to enhance the algorithm performance of finding Pareto optimal
solutions and its efficiency in terms of computational effort. An elitism
operator is employed to ensure the propagation of the Pareto optimal set, and
a dynamic expansion operator to increase the population size. An importation
operator is adapted to explore some new regions of the search space. Moreover,
new concepts of stochastic and significant dominance are introduced to improve
the definition of dominance in stochastic environments.
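The deterministic dominance relation that the paper's stochastic version generalizes can be stated in a few lines (Python; the sample points are illustrative, and the paper replaces these exact comparisons with statistical tests on estimated objectives):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))   # [(1, 5), (2, 3), (4, 1)]
```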