WSC 2007 Final Abstracts
Analysis Methodology A Track
Monday 10:30 AM - 12:00 PM
Ranking and Selection Procedures
Chair: Seong-Hee Kim (Georgia Institute of
Technology)
Controlled Sequential Bifurcation for Software
Reliability Study
Jun Xu and Feng Yang (West Virginia University)
and Hong Wan (Purdue University)
Abstract:
Computer simulation is an appealing approach for the
reliability analysis of structure-based software systems as it can accommodate
important complexities present in realistic systems. When the system is
complicated, a screening experiment to quickly identify important factors
(components) can significantly improve the efficiency of analysis. The
challenge is to guarantee the correctness of the screening results with
stochastic simulation responses. Controlled Sequential Bifurcation (CSB) is a new
screening method for simulation experiments. With appropriate hypothesis
testing procedures, CSB can simultaneously control the Type I error and power.
Past research, however, has focused on responses with normal
distributions. This paper proposes a CSB procedure with binary responses for
software reliability analysis. A conditional sequential test for equality of
two proportions is implemented to guarantee the desired error control.
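To make the testing step concrete, the following minimal sketch shows a conditional (non-sequential) test for equality of two proportions using Fisher's exact test from SciPy; the paper's CSB procedure embeds a sequential variant of such a test, which this hypothetical compare_proportions helper does not attempt to reproduce.

    # Minimal sketch: a conditional test of H0: p_a = p_b for binary simulation
    # responses from two configurations.  Fisher's exact test conditions on the
    # total number of successes; the paper's sequential version is not shown.
    from scipy.stats import fisher_exact

    def compare_proportions(succ_a, trials_a, succ_b, trials_b, alpha=0.05):
        table = [[succ_a, trials_a - succ_a],
                 [succ_b, trials_b - succ_b]]
        _, p_value = fisher_exact(table, alternative="two-sided")
        return p_value < alpha  # True -> declare the two proportions different

    # Example: binary (working/failed) responses from two simulated configurations.
    print(compare_proportions(180, 200, 150, 200))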
New Greedy Myopic and Existing Asymptotic
Sequential Selection Procedures: Preliminary Empirical
Results
Stephen E. Chick (INSEAD) and Juergen Branke and Christian
Schmidt (Institute AIFB)
Abstract:
Statistical selection procedures can identify the best
of a finite set of alternatives, where "best" is defined in terms of the
unknown expected value of each alternative's simulation output. One effective
Bayesian approach allocates samples sequentially to maximize an approximation
to the expected value of information (EVI) from those samples. That existing
approach uses both asymptotic and probabilistic approximations. This paper
presents new EVI sampling allocations that avoid most of those approximations,
but that entail sequential myopic sampling from a single alternative per stage
of sampling. We compare the new and old approaches empirically. In some
scenarios (a small, fixed total number of samples, few systems to be
compared), the new greedy myopic procedures are better than the original
asymptotic variants. In other scenarios (with adaptive stopping rules, medium
or large number of systems, high required probability of correct selection),
the original asymptotic allocations perform better.
A Tournament Framework for the Ranking and
Selection Problem
Enver Yucesan (INSEAD)
Abstract:
A tournament can be broadly defined as a procedure that
ranks agents who exhibit their performance in a noisy environment. By
observing the agents' performance, the organizer computes their ranking and
rewards them according to the revealed ranking. The organizer’s challenge is
therefore to determine the optimal tournament format that identifies the best
agent in the most effective fashion. Tournaments thus provide a natural
framework for ranking and selection (R&S) via simulation, which represents
a set of approaches developed to complement the modeling flexibility of
simulation with the efficiency of statistical techniques for effective
decision making. In this paper, following the introduction of a general
framework to represent various tournament formats and to assess their
predictive power, we will report preliminary experimental results on the
effectiveness of tournaments in identifying the best simulated system with the
desired probability of correct selection in the presence of costs.
Monday 1:30 PM - 3:00 PM
Output Analysis
Chair: Hernan
Awad (University of Miami)
Low Bias Integrated Path
Estimators
James Michael Calvin (New Jersey Institute of
Technology)
Abstract:
We consider the problem of estimating the time-average
variance constant for a stationary process. A previous paper described an
approach based on multiple integrations of the simulation output path and the
efficiency improvement it can yield over the method of batch means (itself a
special case of the integrated-path method). In this paper we
describe versions of the method that have low bias for moderate simulation run
lengths. The method constructs an estimator based on applying a quadratic
function to the simulation output. The particular quadratic form is chosen to
minimize variance subject to constraints on the order of the bias. Estimators
that are first-order and second-order unbiased are described.
Replicated Batch Means for Steady-State Simulations
with Initial Transients
Christos Alexopoulos and Sigrun Andradottir
(Georgia Institute of Technology), Nilay Tanik Argon (University of North
Carolina at Chapel Hill) and David Goldsman (Georgia Institute of Technology)
Abstract:
We provide asymptotic expressions for the expected
value and variance of the replicated batch means variance estimator when the
stochastic process being simulated has an additive initial transient. These
expressions explicitly show how the initial transient and the autocorrelation
in the data affect the performance of the estimator. We apply our results to
study how many replications will minimize the asymptotic bias of the variance
estimator for a simple example.
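As a point of reference, a minimal sketch of a replicated batch means variance estimator is given below; the warm-up length, batch size, and AR(1) test process are illustrative choices, not the configurations analyzed in the paper.

    # Sketch of a replicated batch means (RBM) estimator of the time-average
    # variance constant: delete a warm-up from each replication, batch the rest,
    # and scale the sample variance of all batch means by the batch size.
    import numpy as np

    def replicated_batch_means(replications, warmup, batch_size):
        batch_means = []
        for path in replications:
            truncated = np.asarray(path, dtype=float)[warmup:]
            b = len(truncated) // batch_size           # complete batches only
            batch_means.extend(
                truncated[:b * batch_size].reshape(b, batch_size).mean(axis=1))
        batch_means = np.asarray(batch_means)
        return batch_means.mean(), batch_size * batch_means.var(ddof=1)

    # Example: five replications of an AR(1) process with a 500-observation warm-up.
    rng = np.random.default_rng(1)
    reps = []
    for _ in range(5):
        x, path = 0.0, []
        for _ in range(10_000):
            x = 0.8 * x + rng.normal()
            path.append(x)
        reps.append(path)
    print(replicated_batch_means(reps, warmup=500, batch_size=500))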
Finite-sample Performance Guarantees for
One-dimensional Stochastic Root Finding
Samuel M. T. Ehrlichman and
Shane G. Henderson (Cornell University)
Abstract:
We study the one-dimensional root finding problem for
increasing convex functions. We give gradient-free algorithms for both exact
and inexact (stochastic) function evaluations. For the stochastic case, we
supply a probabilistic convergence guarantee in the spirit of
selection-of-the-best methods. A worst-case bound on the work performed by the
algorithm shows an improvement over naïve stochastic bisection.
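For context, the naïve stochastic bisection baseline mentioned above can be sketched as follows; the sample sizes and the noisy test function are illustrative assumptions, not the paper's algorithm or its guarantee.

    # Naive stochastic bisection for an increasing function g observed with noise:
    # average a fixed number of noisy evaluations at the midpoint to estimate the
    # sign of g there, then bisect.  (The paper's algorithm improves on this.)
    import numpy as np

    def stochastic_bisection(noisy_g, lo, hi, n_iter=20, reps=200, seed=0):
        rng = np.random.default_rng(seed)
        for _ in range(n_iter):
            mid = 0.5 * (lo + hi)
            estimate = np.mean([noisy_g(mid, rng) for _ in range(reps)])
            if estimate < 0.0:
                lo = mid          # estimated g(mid) < 0: root lies to the right
            else:
                hi = mid          # estimated g(mid) >= 0: root lies to the left
        return 0.5 * (lo + hi)

    # Example: g(x) = x**3 - 1 (root at 1) observed with standard normal noise.
    print(stochastic_bisection(lambda x, rng: x**3 - 1 + rng.normal(), 0.0, 2.0))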
Monday 3:30 PM - 5:00 PM
Experimental Designs for Simulation
Chair: Hong Wan (Purdue University)
Metamodeling for Cycle Time-throughput-product Mix
Surfaces Using Progressive Model Fitting
Feng Yang and Jingang Liu
(West Virginia University) and Mustafa Tongarlak, Bruce E. Ankenman, and Barry
L. Nelson (Northwestern University)
Abstract:
A simulation-based methodology is proposed to map the
mean of steady-state cycle time as a function of throughput and product mix
for manufacturing systems. Nonlinear regression models motivated by queueing
analysis are assumed for the underlying response surface. To ensure efficiency
and control estimation error, simulation experiments are built up sequentially
using a multistage procedure to collect data for fitting the models.
The resulting response surface provides a cycle-time estimate for
any throughput and any product mix, and thus allows the decision maker to
instantly investigate options and trade-offs in production
planning.
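A minimal sketch of the kind of queueing-motivated nonlinear fit described above is shown below; the particular functional form, the 1/(1 - x) congestion term, and the synthetic data are assumptions for illustration only.

    # Fit a hypothetical heavy-traffic-style cycle-time model
    # CT(x) = c0 + c1 * x**p / (1 - x), with x = throughput as a fraction of capacity.
    import numpy as np
    from scipy.optimize import curve_fit

    def cycle_time_model(x, c0, c1, p):
        return c0 + c1 * x**p / (1.0 - x)

    # In practice the (x, CT) pairs come from the manufacturing simulation; here we
    # generate synthetic observations purely to exercise the fitting step.
    rng = np.random.default_rng(0)
    x_obs = np.linspace(0.3, 0.95, 12)
    y_obs = cycle_time_model(x_obs, 5.0, 2.0, 1.5) + rng.normal(0.0, 0.3, x_obs.size)

    params, _ = curve_fit(cycle_time_model, x_obs, y_obs, p0=[1.0, 1.0, 1.0])
    print("fitted (c0, c1, p):", params)
    print("predicted cycle time at 90% load:", cycle_time_model(0.9, *params))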
Subset Selection and Optimization for Selecting
Binomial Systems Applied to Supersaturated Design Generation
Ning
Zheng and Theodore Allen (The Ohio State University)
Abstract:
The problem of finding the binomial population with the
highest success probability is considered when the number of binomial
populations is large. A new rigorous indifference zone subset selection
procedure for binomial populations is proposed, with a proof of the
corresponding least favorable configuration. For cases involving large numbers
of binomial populations, a simulation optimization method combining the
proposed subset selection procedure with an elitist Genetic Algorithm (GA) is
proposed to find the highest-mean solution. Convergence of the proposed GA
framework is established under general assumptions. The problem of deriving
supersaturated screening designs is described and used to illustrate the
application of all methods. Computational comparisons are also presented for
the problem of generating supersaturated experimental designs.
Determining Efficient Simulation Run Lengths for
Real Time Decision Making
Russell C. H. Cheng (University of
Southampton)
Abstract:
Suppose that there are a number of alternative ways of
operating a system, and a performance measure is available for comparing them.
Simulation runs can be carried out to estimate this measure for different
alternatives, but there are too many for all to be examined because there is a
strict limit to the time available for simulations. If the accuracy with which
the performance measure can be estimated increases with run length, then a
balance has to be struck between making a few long runs, in which the
performance measure is estimated accurately, and making many short runs, in
which it is estimated poorly. We analyse how the best run length can
be selected to ensure that an alternative is found with a good performance
measure. This problem can arise in real time decision making, and we give a
real example arising in the provision of fire service emergency cover.
Tuesday 8:30 AM - 10:00 AM
Simulation Optimization
Chair:
Jeff Hong (Hong Kong University of Science and
Technology)
Stochastic Trust Region Gradient-free Method
(STRONG): A New Response-Surface-Based Algorithm in Simulation
Optimization
Kuo-Hao Chang (Purdue University), L. Jeff Hong (Hong
Kong University of Science and Technology) and Hong Wan (Purdue University)
Abstract:
Response Surface Methodology (RSM) is a metamodel-based
optimization method. Its strategy is to explore small subregions of the
parameter space in succession instead of attempting to explore the entire
parameter space directly. This method has been widely used in simulation
optimization. However, RSM has two significant shortcomings: first, it is
not automated, and human involvement is usually required in the search process;
second, it is a heuristic without a convergence guarantee. This paper proposes the
Stochastic Trust Region Gradient-Free Method (STRONG) for simulation
optimization with continuous decision variables to address these two problems.
STRONG combines the traditional RSM framework with the trust region method for
deterministic optimization to achieve provable convergence and eliminate the
need for human involvement. Combined with appropriate experimental
designs, specifically efficient screening experiments, STRONG has the
potential to solve high-dimensional problems efficiently.
Kriging Metamodeling in Constrained Simulation
Optimization: An Explorative Study
William E. Biles (University of
Louisville), Jack P. C. Kleijnen and Wim C. M. van Beers (Tilburg University)
and Inneke van Nieuwenhuyse (European University College)
Abstract:
This paper describes two experiments exploring the
potential of Kriging for constrained simulation optimization. Both experiments
study an (s, S) inventory system, where the objective is to find the optimal
values of s and S. The objective function and constraints used in these two
experiments differ, as does the approach used to determine the optimum
combination predicted by the Kriging model. The results of these experiments
indicate that the Kriging methodology offers opportunities for solving
constrained optimization problems in simulation; future research will focus on
further refining the optimization procedure.
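The flavor of the Kriging step can be sketched as follows; the simulate_cost stand-in, design grid, and kernel choice are hypothetical, and the sketch ignores the constraints handled in the paper.

    # Fit a Gaussian process (Kriging) metamodel to simulated (s, S) inventory
    # costs and search the fitted surface for a promising combination.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def simulate_cost(s, S, rng):
        # Stand-in for the (s, S) inventory simulation: a noisy quadratic bowl.
        return (s - 20.0) ** 2 + 0.5 * (S - 60.0) ** 2 + rng.normal(0.0, 5.0)

    rng = np.random.default_rng(0)
    design = np.array([(s, S) for s in range(5, 41, 5) for S in range(40, 101, 10)])
    costs = np.array([simulate_cost(s, S, rng) for s, S in design])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(design, costs)

    grid = np.array([(s, S) for s in range(5, 41) for S in range(40, 101)])
    print("Kriging-predicted best (s, S):", grid[np.argmin(gp.predict(grid))])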
Allocation of Simulation Runs for Simulation
Optimization
Alireza Kabirian and Sigurdur Olafsson (Iowa State
University)
Abstract:
Simulation optimization (SO) is the process of finding
the optimum design of a system whose performance measure(s) are estimated via
simulation. We propose some ideas to improve overall efficiency of the
available SO methods and develop a new approach that primarily deals with
continuous two-dimensional problems with a bounded feasible region. Our
search-based method, called Adaptive Partitioning Search (APS), uses a neural
network as a metamodel and combines various exploitation strategies to locate the
optimum. Our numerical results show that, in terms of the number of evaluations
(simulation runs) needed, the APS algorithm converges much faster to the
optimum design than two well-established methods used as benchmarks.
Tuesday 10:30 AM - 12:00 PM
Advances in Rare-Event Simulation
Methodology
Chair: Jose Blanchet (Harvard
University)
Importance Sampling of Compounding
Processes
Jose H. Blanchet (Harvard University) and Bert Zwart
(Georgia Institute of Technology)
Abstract:
Compounding processes, also known as perpetuities, play
an important role in many applications; in particular, in time series analysis
and mathematical finance. Apart from some special cases, the distribution of a
perpetuity is hard to compute, and large deviations estimates sometimes
involve complicated constants which depend on the complete distribution.
Motivated by this, we propose provably efficient importance sampling
algorithms which apply to qualitatively different cases, leading to light and
heavy tails. Both algorithms have the non-standard feature of being
state-dependent. In addition, in order to verify the efficiency, we apply
recently developed techniques based on Lyapunov inequalities.
Path-sampling for State-dependent Importance
Sampling
Jose H. Blanchet and Jingchen Liu (Harvard University)
Abstract:
State-dependent importance sampling (SDIS) has proved
to be particularly useful in simulation (especially in rare-event analysis of
stochastic systems). One approach for designing SDIS is to mimic the
zero-variance change of measure by using a likelihood ratio that is
proportional to an asymptotic approximation that may be available for the
problem at hand. Using such an approximation poses the problem of computing the
corresponding normalizing constants at each step. In this paper, we propose
the use of path-sampling, which allows such normalizing constants to be estimated
in terms of one-dimensional integrals. We apply path-sampling to estimate the
tail of the delay in a G/G/1 queue with regularly varying input. We argue that
such tail estimation can be performed with good relative precision in
quadratic complexity (in terms of the tail parameter), assuming that
path-sampling is combined with an appropriate change-of-measure proposed by
Blanchet and Glynn (2007a).
Efficient Suboptimal Rare-event
Simulation
Xiaowei Zhang (Stanford University), Jose H. Blanchet
(Harvard University) and Peter W. Glynn (Stanford University)
Abstract:
Much of the rare-event simulation literature is
concerned with the development of asymptotically optimal algorithms. Because
of the difficulties associated with applying these ideas to complex models,
this paper focuses on sub-optimal procedures that can be shown to be much more
efficient than conventional crude Monte Carlo. We provide two such examples,
one based on "repeated acceptance/rejection" as a means of computing tail
probabilities for hitting-time random variables and the other based on
filtered conditional Monte Carlo.
Tuesday 1:30 PM - 3:00 PM
Rare-Event Simulation
Chair:
Pierre L'Ecuyer (University of Montreal)
Rare-event Simulation for a Multidimensional
Random Walk with T-distributed Increments
Jose H. Blanchet and
Jingchen Liu (Harvard University)
Abstract:
We consider the problem of efficient estimation via
simulation of first passage time probabilities for a multidimensional random
walk with t-distributed increments. In addition to being a natural
generalization of the problem of computing ruin probabilities in insurance,
in which the focus is a one-dimensional random walk, this problem captures
important features of large deviations for multidimensional heavy-tailed
processes (such as the role played by the mean of the random walk in
connection to the spatial location of the target set). We develop a
state-dependent importance sampling estimator for this class of
multidimensional problems. Then, using techniques based on Lyapunov-type
inequalities, we argue that our estimator is strongly efficient.
Estimating the Probability of a Rare Event Over a
Finite Time Horizon
Pieter-Tjerk de Boer (University of Twente),
Pierre L'Ecuyer (Universite de Montreal) and Gerardo Rubino and Bruno Tuffin
(IRISA/INRIA Rennes)
Abstract:
We study an approximation for the zero-variance change
of measure to estimate the probability of a rare event occurring when a
continuous-time Markov chain reaches a given set of states before some fixed
time limit. The jump rates of the chain are expressed as functions of a rarity
parameter in such a way that the probability of the rare event goes to zero when
the rarity parameter goes to zero. After giving a general expression for the
zero-variance change of measure in this situation, we develop an approximation
of it via a power series and show that this approximation provides a bounded
relative error when the rarity parameter goes to zero. We illustrate the
performance of our approximation on numerical examples of highly reliable
Markovian systems, and compare it to a previously proposed heuristic. We also
exhibit the exact zero-variance change of measure and compare it with the
approximations.
Ant-based Approach for Determining the Change of
Measure in Importance Sampling
Poul Einar Heegaard (Norwegian
University of Science and Technology) and Werner Sandmann (University of
Bamberg)
Abstract:
Importance Sampling is a potentially powerful variance
reduction technique to speed up simulations where the objective depends on the
occurrence of rare events. However, it is crucial to find a change of the
underlying probability measure yielding estimators with significantly reduced
variance compared to direct estimators. In this paper, we present a new
dynamic and adaptive method for this purpose. The method is inspired by
ant-based systems that are in widespread use for solving optimization
problems. No intimate knowledge of the model under consideration is necessary.
Instead, the method adapts to it. Different commonly used modeling paradigms
such as queueing and reliability models, amongst many others, are supported by
describing the new method in terms of a transition class formalism. Simulation
results demonstrate the accuracy of the obtained estimates, and details of the
adapted change of measure are investigated to gain insights into the inner
workings of the method.
Tuesday 3:30 PM - 5:00 PM
Combining Simulation and
Optimization
Chair: Tito Homem-de-Mello (Northwestern University)
Sequential Sampling for Solving Stochastic
Programs
Guzin Bayraksan (The University of Arizona) and David
Morton (The University of Texas at Austin)
Abstract:
We develop a sequential sampling procedure for solving
a class of stochastic programs. A sequence of feasible solutions, with at
least one optimal limit point, is given as input to our procedure. Our
procedure estimates the optimality gap of a candidate solution from this
sequence, and if that point estimate is sufficiently small then we stop.
Otherwise, we repeat with the next candidate solution from the sequence with a
larger sample size. We provide conditions under which this procedure: (i)
terminates with probability one and (ii) terminates with a solution which has
a small optimality gap with a prespecified probability.
Non-linear Control Variates for Regenerative
Steady-State Simulation
Sujin Kim (Purdue University) and Shane G.
Henderson (Cornell University)
Abstract:
We assume the existence of a parameterized family of
control variates that could be used in a regenerative steady-state simulation.
We show how such controls can be generated in the Markov-process setting,
discuss the optimization problem of searching for a good choice of
parameterization, and develop a strong law and central limit theorem for the
resulting estimator.
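For readers unfamiliar with control variates, a minimal generic sketch of a linear control variate estimator follows; it is not the authors' regenerative, parameterized construction, and the example integrand and control are illustrative.

    # Reduce the variance of an estimate of E[Y] using a control C with known mean.
    import numpy as np

    def control_variate_estimate(y, c, c_mean):
        y, c = np.asarray(y, dtype=float), np.asarray(c, dtype=float)
        beta = np.cov(y, c)[0, 1] / c.var(ddof=1)    # estimated optimal coefficient
        adjusted = y - beta * (c - c_mean)
        return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(len(y))

    # Example: estimate E[exp(Z)], Z ~ N(0, 1), using C = Z (known mean 0) as control.
    rng = np.random.default_rng(0)
    z = rng.normal(size=100_000)
    est, se = control_variate_estimate(np.exp(z), z, 0.0)
    print(est, se, "crude s.e.:", np.exp(z).std(ddof=1) / np.sqrt(z.size))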
Implications of Heavy Tails on Simulation-based
Ordinal Optimization
Mark Broadie, Minsup Han, and Assaf Zeevi
(Columbia University)
Abstract:
We consider the problem of selecting the best system
using simulation-based ordinal optimization. This problem has been studied
mostly in the context of light-tailed distributions, where both Gaussian-based
heuristics and asymptotically optimal procedures have been proposed. The
latter rely on detailed knowledge of the underlying distributions and give
rise to an exponential decay of the probability of selecting the incorrect
system. However, their implementation tends to be computationally intensive.
In contrast, in the presence of heavy tails the probability of selecting the
incorrect system only decays polynomially, but this is achieved using simple
allocation schemes that rely on little information about the underlying
distributions. These observations are illustrated via several numerical
experiments and are seen to be consistent with asymptotic theory.
Wednesday 8:30 AM - 10:00 AM
Advances in Simulation Output
Analysis
Chair: Emily Lada (SAS Institute)
Confidence Interval Estimation Using
Linear Combinations of Overlapping Variance Estimators
Tuba
Aktaran-Kalayci (University at Buffalo (SUNY)), David Goldsman (Georgia
Institute of Technology) and James R. Wilson (North Carolina State University)
Abstract:
We develop new confidence-interval estimators for the
mean and variance parameter of a steady-state simulation output process. These
confidence intervals are based on optimal linear combinations of overlapping
estimators for the variance parameter. We present analytical and
simulation-based results exemplifying the potential of this technique to
improve confidence-interval accuracy.
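As background, a single overlapping batch means (OBM) estimator of the variance parameter can be sketched as below; the paper combines several overlapping estimators linearly, which this sketch does not do, and the AR(1) example is only a convenience check.

    # Overlapping batch means estimator of sigma^2 = lim n * Var(sample mean).
    import numpy as np

    def overlapping_batch_means(x, batch_size):
        x = np.asarray(x, dtype=float)
        n, m = len(x), batch_size
        csum = np.concatenate(([0.0], np.cumsum(x)))
        obm = (csum[m:] - csum[:-m]) / m                 # all n - m + 1 batch means
        scale = n * m / ((n - m + 1.0) * (n - m))        # common OBM scaling
        return scale * np.sum((obm - x.mean()) ** 2)

    # Example: AR(1) with phi = 0.8, whose variance parameter is 1/(1 - phi)^2 = 25.
    rng = np.random.default_rng(0)
    x, path = 0.0, []
    for _ in range(200_000):
        x = 0.8 * x + rng.normal()
        path.append(x)
    print(overlapping_batch_means(path, batch_size=1_000))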
Folded Standardized Time Series Area Variance
Estimators for Simulation
Claudia Antonini (Universidad Simon
Bolivar), Christos Alexopoulos and David Goldsman (Georgia Institute of
Technology) and James R. Wilson (North Carolina State University)
Abstract:
We estimate the variance parameter of a stationary
simulation-generated process using "folded" versions of standardized time
series area estimators. We formulate improved variance estimators based on the
combination of multiple folding levels as well as the use of batching. The
improved estimators preserve the asymptotic bias properties of their
predecessors but have substantially lower variance. A Monte Carlo example
demonstrates the efficacy of the new methodology.
SBatch: A Spaced Batch Means Procedure for
Simulation Analysis
Emily K. Lada (SAS Institute Inc.) and James R.
Wilson (North Carolina State University)
Abstract:
We discuss SBatch, a simplified procedure for
steady-state simulation analysis that is based on spaced batch means,
incorporating many advantages of its predecessors ASAP3 and WASSP while
avoiding many of their disadvantages. SBatch is a sequential procedure
designed to produce a confidence-interval estimator for the steady-state mean
response that satisfies user-specified precision and coverage-probability
requirements. First SBatch determines a batch size and an interbatch spacer
size such that beyond the initial spacer, the spaced batch means approximately
form a stationary first-order autoregressive process whose lag-one correlation
does not significantly exceed 0.8. Next SBatch delivers a correlation-adjusted
confidence interval based on the sample variance and lag-one correlation of
the spaced batch means as well as the grand mean of all the individual
observations beyond the initial spacer. In an experimental evaluation, SBatch
compared favorably with ASAP3 and WASSP.
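A hypothetical sketch of the final correlation adjustment is given below: it assumes the spaced batch means behave like a stationary AR(1) process and inflates the usual batch-means interval accordingly. It is not the published SBatch algorithm, which also determines the batch and spacer sizes and centers the interval on the grand mean of the individual observations.

    # Correlation-adjusted confidence interval from (assumed AR(1)) batch means.
    import numpy as np
    from scipy import stats

    def correlation_adjusted_ci(batch_means, alpha=0.05):
        y = np.asarray(batch_means, dtype=float)
        b = len(y)
        rho = np.corrcoef(y[:-1], y[1:])[0, 1]          # lag-one sample correlation
        inflation = (1.0 + rho) / (1.0 - rho)           # AR(1) variance inflation
        half = stats.t.ppf(1 - alpha / 2, b - 1) * np.sqrt(inflation * y.var(ddof=1) / b)
        return y.mean() - half, y.mean() + half

    # Example with mildly autocorrelated synthetic batch means.
    rng = np.random.default_rng(2)
    bm = 10.0 + np.convolve(rng.normal(size=64), [0.6, 0.4], mode="same")
    print(correlation_adjusted_ci(bm))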
Wednesday 10:30 AM - 12:00 PM
Input Modeling
Chair:
Michael Kuhl (Rochester Institute of Technology)
A Method for Fast Generation of Bivariate
Poisson Random Vectors
Kaeyoung Shin and Raghu Pasupathy (Virginia
Tech)
Abstract:
It is well known that trivariate reduction, a method
to generate two dependent random variables from three independent random
variables, can be used to generate Poisson random variables with specified
marginal distributions and correlation structure. The method, however, works
only for positive correlations. Moreover, the proportion of feasible positive
correlations that can be generated through trivariate reduction deteriorates
rapidly as the discrepancy between the means of the target marginal
distributions increases. We present a specialized algorithm for generating
Poisson random vectors, through appropriate modifications to trivariate
reduction. The proposed algorithm covers the entire range of feasible
correlations in two dimensions, and preliminary tests have demonstrated very
fast preprocessing and generation times.
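For reference, classical trivariate reduction (the baseline the abstract refers to) can be sketched in a few lines; the parameter values are illustrative.

    # Bivariate Poisson via trivariate reduction: X1 = Y1 + Y3, X2 = Y2 + Y3 with
    # independent Poissons, so Cov(X1, X2) = lambda3 >= 0 (positive correlation
    # only, which is exactly the limitation the paper addresses).
    import numpy as np

    def bivariate_poisson(mean1, mean2, covariance, size, rng):
        # Requires 0 <= covariance <= min(mean1, mean2).
        y1 = rng.poisson(mean1 - covariance, size)
        y2 = rng.poisson(mean2 - covariance, size)
        y3 = rng.poisson(covariance, size)
        return y1 + y3, y2 + y3

    rng = np.random.default_rng(0)
    x1, x2 = bivariate_poisson(4.0, 9.0, 1.5, 100_000, rng)
    print(np.corrcoef(x1, x2)[0, 1], "target:", 1.5 / np.sqrt(4.0 * 9.0))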
Classification Analysis for Simulation of Machine
Breakdowns
Lanting Lu, Christine S. M. Currie, and Russell C. H.
Cheng (University of Southampton) and John Ladbrook (Ford)
Abstract:
Machine failure is often an important factor in the
throughput of manufacturing systems. To simplify the inputs to the simulation
model for complex machining and assembly lines, we have derived the Arrows
classification method to group similar machines, so that one model can be used
to describe the breakdown times for all of the machines in a group; breakdown
times of machines are represented by finite mixture model
distributions. The two-sample Cramer-von Mises statistic is used to measure
the similarity of two sets of data. We evaluate the classification procedure
by comparing the throughput of a simulation model when run with mixture models
fitted to individual machine breakdown times; mixture models fitted to group
breakdown times; and raw data. Details of the methods and results of the
grouping process will be presented and demonstrated using an
example.
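The similarity measure itself is readily available in SciPy; the sketch below uses the two-sample Cramer-von Mises test to decide whether two machines' breakdown-time samples are similar, with a simple threshold rule standing in for the Arrows classification method.

    # Pairwise similarity of breakdown-time samples via the two-sample
    # Cramer-von Mises test (requires SciPy >= 1.7).
    import numpy as np
    from scipy.stats import cramervonmises_2samp

    def similar(breakdowns_a, breakdowns_b, alpha=0.05):
        # Treat two machines as similar if the test does not reject equality.
        return cramervonmises_2samp(breakdowns_a, breakdowns_b).pvalue > alpha

    rng = np.random.default_rng(0)
    machine1 = rng.exponential(10.0, size=300)   # hypothetical breakdown durations
    machine2 = rng.exponential(10.5, size=300)
    machine3 = rng.exponential(25.0, size=300)
    print(similar(machine1, machine2), similar(machine1, machine3))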
Analysis and Generation of Random Vectors with
Copulas
Christoph Strelen (University of Bonn) and Feras Nassaj
(Fraunhofer Institute)
Abstract:
Copulas are used in finance and insurance for modeling
stochastic dependence. They capture the entire dependence structure, not
only the correlations. Here they are estimated from measured samples of random
vectors. The copula and the marginal distributions of the vector elements
define a multivariate distribution of the sample which can be used to generate
random vectors with this distribution. This can be applied as well to time
series. A programmed algorithm is proposed. It is fast and allows for random
vectors with high dimension, for example 100.
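To illustrate the generation side of the copula idea, a minimal sketch with an assumed Gaussian copula and assumed marginals follows; the paper's contribution is estimating the copula from data, which this sketch does not attempt.

    # Generate dependent random vectors from a Gaussian copula plus given marginals.
    import numpy as np
    from scipy import stats

    def gaussian_copula_sample(corr, marginals, size, rng):
        dim = len(marginals)
        z = rng.multivariate_normal(np.zeros(dim), corr, size=size)
        u = stats.norm.cdf(z)                    # uniforms carrying the dependence
        return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

    rng = np.random.default_rng(0)
    corr = np.array([[1.0, 0.7], [0.7, 1.0]])
    marginals = [stats.expon(scale=2.0), stats.lognorm(s=0.5)]
    x = gaussian_copula_sample(corr, marginals, size=10_000, rng=rng)
    print(np.corrcoef(x.T))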