WSC 2008 Final Abstracts
Analysis Methodology Track
Monday 10:30 AM - 12:00 PM
Comparison Using Simulation
Chair: Shane Henderson (Cornell University)
Comparing Two Systems: Beyond Common Random
Numbers
Samuel M. T. Ehrlichman and Shane G. Henderson (Cornell
University)
Abstract:
Suppose one wishes to compare two closely related
systems via stochastic simulation. Common random numbers (CRN) involves using
the same streams of uniform random variates as inputs for both systems to
sharpen the comparison. One can view CRN as a particular choice of copula that
gives the joint distribution of the inputs of both systems. We discuss the
possibility of using more general copulae, including simple examples that show
how this can outperform CRN.
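As a rough illustration of the effect that CRN exploits, the following sketch compares two hypothetical, closely related systems driven by a shared uniform stream versus independent streams; the system functions are illustrative stand-ins, not taken from the paper, and CRN here corresponds to the comonotone copula, whereas the more general copulae discussed in the abstract are not shown.

# A minimal sketch of common random numbers (CRN) for comparing two hypothetical systems.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.random(n)                      # one stream of uniforms shared by both systems

def system_a(u):                       # hypothetical performance of system A
    return -np.log(1.0 - u)            # e.g., an exponential output

def system_b(u):                       # hypothetical, closely related system B
    return -1.1 * np.log(1.0 - u)      # same input, slightly different parameter

# CRN: drive both systems with the same uniforms, then difference the outputs.
diff_crn = system_a(u) - system_b(u)

# Independent sampling for contrast: separate streams for each system.
diff_ind = system_a(rng.random(n)) - system_b(rng.random(n))

print(diff_crn.var(ddof=1), diff_ind.var(ddof=1))   # CRN variance is much smaller here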
Run-Length Variability of Two-Stage Multiple
Comparisons with the Best for Steady-state Simulations and Its Implications
for Choosing First-Stage Run Lengths
Marvin K Nakayama (New Jersey
Institute of Technology)
Abstract:
We analyze the asymptotic behavior of two-stage
procedures for multiple comparisons with the best (MCB) for comparing the
steady-state means of alternative systems using simulation. The two procedures
we consider differ in how they estimate the variance parameters of the
alternatives in the first stage. One procedure uses a consistent estimator,
and the other employs an estimator based on one of Schruben's standardized
time series (STS) methods. While both methods lead to mean total run lengths
that are of the same asymptotic order of magnitude, the limiting variability
of the run lengths is strictly smaller for the method based on a consistent
variance estimator. We also provide some analysis showing how to choose the
first-stage run length.
Comparison of Bayesian Priors for Highly Reliable
Limit Models
Roy R Creasey (Longwood University), Preston White
(University of Virginia) and Linda B Wright and Cheryl F Davis (Longwood
University)
Abstract:
Limit standards are probability interval requirements for proportions. The simulation literature has focused on finding the confidence interval of the population proportion, which is inappropriate for limit standards. Further, some Frequentist approaches cannot be used for highly reliable models, that is, models that produce no or few non-conforming trials. Bayesian methods provide approaches that can be used for all limit standard models. We consider a methodology developed for Bayesian reliability analysis, in which historical data are used to define the a priori distribution of the proportion p, and the customer-desired a posteriori maximum probability is used to determine the sample size for a replication.
Monday 1:30 PM - 3:00 PM
Efficient Ranking and Selection
Procedures I
Chair: John Shortle (George Mason University)
A Preliminary Study of Optimal Splitting for
Rare-Event Simulation
John F Shortle and Chun-Hung Chen (George
Mason University)
Abstract:
Efficiency is a major concern when using simulation to estimate rare-event probabilities, since a huge number of simulation replications may be needed to obtain a reasonable estimate of such a probability. Furthermore, when multiple designs must be compared and each design requires simulation of a rare event, the total number of samples across all designs can be prohibitively high. This paper presents a new approach to enhance the efficiency of rare-event simulation. Our approach is
developed by integrating the notions of level splitting and optimal computing
budget allocation. The goal is to determine the optimal numbers of simulation
runs across designs and across a number of splitting levels so that the
variance of the rare-event estimator is minimized.
A New Perspective on Feasibility
Determination
Roberto Szechtman (Naval Postgraduate School) and
Enver Yucesan (INSEAD)
Abstract:
We consider the problem of feasibility determination in
a stochastic setting. In particular, we wish to determine whether a system
belongs to a given set G based on a performance measure estimated through
Monte Carlo simulation. Our contribution is two-fold: (i) we characterize
fractional allocations that are asymptotically optimal; and (ii) we provide an
easily implementable algorithm, rooted in stochastic approximation theory,
that results in sampling allocations that provably achieve in the limit the
same performance as the optimal allocations. The finite-time behavior of the
algorithm is also illustrated on two small examples.
Restricted Subset Selection
E Jack Chen
(BASF Corporation)
Abstract:
This paper develops procedures for selecting a set of normal populations with unknown means and unknown variances so that the final subset of selected populations satisfies the following requirement: with probability at least P*, the selected subset will contain a population, or "only and all" of those populations, whose mean lies within a distance d* of the smallest mean. The size of the selected subset is random; however, at most m populations will be chosen. A restricted subset attempts to exclude populations that deviate by more than d* from the smallest mean. Here P*, d*, and m are user-specified parameters. The procedure can be used
when the unknown variances across populations are unequal. An experimental
performance evaluation demonstrates the validity and efficiency of these
restricted subset selection procedures.
Monday 3:30 PM - 5:00 PM
Efficient Ranking and Selection
Procedures II
Chair: Chun-Hung Chen (George Mason University)
An Efficient Ranking and Selection Procedure for
a Linear Transient Mean Performance Measure
Douglas J. Morrice (The
University of Texas at Austin) and Mark W. Brantley and Chun-Hung Chen (George
Mason University)
Abstract:
We develop a Ranking and Selection procedure for
selecting the best configuration based on a transient mean performance
measure. The procedure extends the OCBA approach to systems whose means are a
function of some other variable such as time. In particular, we characterize
this as a prediction problem and imbed a regression model in the OCBA
procedure. In this paper, we analyze the linear case and discuss a number of
extensions. Additionally, we provide some motivating examples for this
approach.
Update on Economic Approach to Simulation Selection
Problems
Stephen E. Chick (INSEAD) and Noah Gans (Wharton)
Abstract:
This paper summarizes new analytical and empirical
results for the economic approach to simulation selection problems that we
introduced two years ago. The approach seeks to help managers to maximize the
expected net present value (NPV) of system design decisions that are informed
by simulation. It considers the time value of money, the cost of simulation
sampling, and the time and cost of developing simulation tools. This economic
approach to decision making with simulation is therefore an alternative to the
statistical guarantees or probabilistic convergence results of other
commonly-used approaches to simulation optimization. Empirical results are
promising. This paper also retracts a claim that was made regarding the
existence of Gittins' indices for these problems - their existence remains an
open question.
The Knowledge-Gradient Stopping Rule for Ranking
and Selection
Peter Frazier and Warren Buckler Powell (Princeton
University)
Abstract:
We consider the ranking and selection of normal means
in a fully sequential Bayesian context. By considering the sampling and
stopping problems jointly rather than separately, we derive a new composite
stopping/sampling rule. The sampling component of the derived composite rule
is the same as the previously introduced LL1 sampling rule, but the stopping
rule is new. The new stopping rule significantly improves the performance of LL1 relative to its performance under the best other generally known adaptive stopping rule, EOC Bonf, outperforming it in every case tested.
Tuesday 8:30 AM - 10:00 AM
Efficient Simulation Techniques
Chair: Pirooz Vakili (Boston University)
Monotonicity and Stratification
Gang Zhao
and Pirooz Vakili (Boston University)
Abstract:
To use the technique of stratification, the user must first partition (stratify) the sample space; the next task is to determine how to allocate samples to the strata. How best to perform the second task is well understood, and there are effective, generic recipes for sample allocation. Performing the first task, on the other hand, is generally left to the user, who has few guidelines at his or her disposal.
We review explicit and implicit stratification approaches considered in the
literature and discuss their relevance to simulation studies. We then discuss
the different ways in which monotonicity plays a role in optimal
stratification.
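For readers unfamiliar with the sample-allocation task mentioned above, the following sketch shows stratified sampling of a monotone integrand on (0,1) with equal-probability strata and equal allocation; the integrand and stratum count are illustrative choices, not taken from the paper.

# A minimal sketch of stratified sampling on (0,1) with equal-probability strata.
import numpy as np

rng = np.random.default_rng(1)
g = lambda u: np.exp(u)          # monotone integrand: stratifying u is effective

k, n_per = 20, 50                # k equal-probability strata, n_per samples each
edges = np.linspace(0.0, 1.0, k + 1)
stratum_means = []
for lo, hi in zip(edges[:-1], edges[1:]):
    u = lo + (hi - lo) * rng.random(n_per)      # sample uniformly within the stratum
    stratum_means.append(g(u).mean())
est_strat = np.mean(stratum_means)              # equal-probability strata: simple average

est_crude = g(rng.random(k * n_per)).mean()     # crude Monte Carlo with the same budget
print(est_strat, est_crude, np.e - 1)           # both estimate the integral of exp(u) = e - 1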
Control Variate Technique: A Constructive
Approach
Tarik Borogovac and Pirooz Vakili (Boston University)
Abstract:
The technique of control variates requires that the
user identify a set of variates that are correlated with the estimation
variable and whose means are known to the user. We relax the known mean
requirement and instead assume the means are to be estimated. We argue that
this strategy can be beneficial in parametric studies, analyze the properties
of controlled estimators, and propose a class of generic and effective
controls in a parametric estimation setting. We discuss the effectiveness of
the estimators via analysis and simulation experiments.
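For contrast with the relaxation proposed in the abstract, here is a minimal sketch of the classical control-variate estimator, in which the control's mean is known exactly; the estimation variable and control are illustrative choices, and the paper's estimated-mean controls are not shown.

# A minimal sketch of the classical control-variate estimator with a known control mean.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
u = rng.random(n)
y = np.exp(u)            # estimation variable (mean e - 1, treated as unknown)
c = u                    # control variate with known mean 0.5

beta = np.cov(y, c, ddof=1)[0, 1] / c.var(ddof=1)   # estimated optimal coefficient
y_cv = y - beta * (c - 0.5)                         # controlled observations

print(y.mean(), y_cv.mean(), y_cv.var(ddof=1) / y.var(ddof=1))  # large variance reduction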
Efficient Simulation for Tail Probabilities of
Gaussian Random Field
Robert J. Adler (Technion-Israel Institute of
Technology) and Jose H. Blanchet and Jingchen Liu (Columbia University)
Abstract:
We are interested in computing tail probabilities for
the maxima of Gaussian random fields. In this paper, we discuss two special
cases: random fields defined over a finite number of distinct points and fields
with finite Karhunen-Loeve expansions. For the first case we propose an
importance sampling estimator which yields asymptotically zero relative error.
Moreover, it yields a procedure for sampling the field conditional on it
having an excursion above a high level with a complexity that is uniformly
bounded as the level increases. In the second case we propose an estimator
which is asymptotically optimal. These results serve as a first step in the analysis of rare-event simulation for Gaussian random fields.
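The following sketch conveys the flavor of importance sampling for the maximum of a Gaussian vector observed at finitely many points, using the standard mixture estimator that conditions one randomly chosen coordinate to exceed the level; for simplicity the coordinates are taken independent, which is an assumption of this sketch rather than of the paper.

# A minimal sketch of importance sampling for P(max_i X_i > b) at finitely many points.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
d, b, n = 10, 4.0, 10_000
tail = norm.sf(b)                 # P(X_i > b), identical across the d points here
S = d * tail                      # sum of the single-point tail probabilities

estimates = np.empty(n)
for rep in range(n):
    i = rng.integers(d)                            # pick a point uniformly (p_i all equal)
    x = rng.standard_normal(d)
    x[i] = norm.isf(tail * rng.random())           # resample X_i from its tail above b
    estimates[rep] = S / np.count_nonzero(x > b)   # likelihood-ratio weighted indicator

exact = 1.0 - (1.0 - tail) ** d                    # exact answer for independent points
print(estimates.mean(), exact, estimates.std(ddof=1) / np.sqrt(n))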
Tuesday 10:30 AM - 12:00 PM
Input Modeling
Chair: Jack
Chen (BASF Corporation)
Functional Data Analysis for Non-Homogeneous
Poisson Processes
Fermín Mallor and Martín Gastón (Public
University of Navarre) and Teresa León (University of Valencia)
Abstract:
In this paper we intend to illustrate how Functional
Data Analysis (FDA) can be very useful for simulation input modelling. In
particular, we are interested in the estimation of the cumulative mean
function of a non-homogeneous Poisson Process (NHPP). Both parametric and
nonparametric methods have been developed to estimate it from observed
independent streams of arrival times. As far as we know, these data have not
been analyzed as functional data. The basic idea underlying FDA is to treat a functional observation as a single datum rather than as a large set of data on its own. Considerable effort is being made to adapt standard statistical methods to functional data, for instance Principal Components Analysis, ANOVA, classification techniques, bootstrap confidence bands, and outlier detection. We have studied a set of real data using these techniques, obtaining very good results.
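As background for the estimation problem described above, the following sketch forms the simplest nonparametric estimate of the cumulative mean function of an NHPP by averaging the counting processes of several simulated realizations; the rate function and the thinning simulation are illustrative choices and are not part of the FDA methodology of the paper.

# A minimal sketch: estimate the cumulative mean function of an NHPP from several streams.
import numpy as np

rng = np.random.default_rng(4)
T, n_reps = 10.0, 50
rate = lambda t: 1.0 + np.sin(t)                  # illustrative nonhomogeneous rate
lam_max = 2.0                                     # upper bound on the rate, for thinning

def simulate_nhpp(T):
    """Thinning: simulate arrival times of an NHPP with the rate above on [0, T]."""
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(arrivals)
        if rng.random() < rate(t) / lam_max:
            arrivals.append(t)

grid = np.linspace(0.0, T, 201)
counts = np.array([np.searchsorted(simulate_nhpp(T), grid) for _ in range(n_reps)])
mean_value_hat = counts.mean(axis=0)              # estimate of Lambda(t) on the grid

exact = grid + 1.0 - np.cos(grid)                 # integral of the illustrative rate
print(np.max(np.abs(mean_value_hat - exact)))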
Reliable Simulation with Input Uncertainties Using an
Interval-Based Approach
Ola G. Batarseh and Yan Wang (University of
Central Florida)
Abstract:
Uncertainty associated with input parameters and models in simulation has gained attention in recent years. The sources of uncertainty include lack of data and lack of knowledge about physical systems. In this paper, we present a new reliable simulation mechanism to help
improve simulation robustness when significant uncertainties exist. The new
mechanism incorporates variabilities and uncertainties based on imprecise
probabilities, where the statistical distribution parameters in the simulation
are intervals instead of precise real numbers. The mechanism generates random
interval variates to model the inputs. Interval arithmetic is applied to
simulate a set of scenarios simultaneously in each simulation run. To ensure
that the interval results bound those from the traditional real-valued
simulation, a generic approach is also proposed to specify the number of
replications in order to achieve the desired robustness. This new reliable
simulation mechanism can be applied to address input uncertainties to support
robust decision making.
Smooth Flexible Models of Nonhomogeneous Poisson
Processes Using One or More Process Realizations
Michael E Kuhl and
Shalaka C Deo (Rochester Institute of Technology) and James R Wilson (North
Carolina State University)
Abstract:
We develop and evaluate a semiparametric method to
estimate the mean-value function of a nonhomogeneous Poisson process (NHPP)
using one or more process realizations observed over a fixed time interval.
To approximate the mean-value function, the method exploits a specially
formulated polynomial that is constrained in least-squares estimation to be
nondecreasing so the corresponding rate function is nonnegative and smooth
(continuously differentiable). An experimental performance evaluation for two
typical test problems demonstrates the method’s ability to yield an accurate
fit to an NHPP based on a single process realization. A third test problem
shows how the method can estimate an NHPP based on multiple realizations of
the process.
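In the same spirit, here is a rough sketch of fitting a polynomial mean-value function by least squares subject to a nondecreasing constraint enforced on a grid; the degree, grid, and synthetic data are illustrative assumptions, and the paper's specially formulated polynomial and estimation details are not reproduced.

# A sketch of a least-squares polynomial fit with a nonnegative-rate constraint on a grid.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 101)                       # time rescaled to [0, 1]
lam_hat = t + (1.0 - np.cos(10 * t)) / 10 + 0.02 * rng.standard_normal(t.size)  # noisy data

degree = 6
V = np.vander(t, degree + 1, increasing=True)        # basis 1, t, ..., t^degree for Lambda(t)
D = np.vander(t, degree, increasing=True) * np.arange(1, degree + 1)  # basis for Lambda'(t)

objective = lambda c: np.mean((V @ c - lam_hat) ** 2)
nonneg_rate = {"type": "ineq", "fun": lambda c: D @ c[1:]}   # rate >= 0 at every grid point
res = minimize(objective, np.zeros(degree + 1), constraints=[nonneg_rate], method="SLSQP")

rate_hat = D @ res.x[1:]
print(res.success, rate_hat.min())                   # fitted rate should be (numerically) nonnegative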
Tuesday 1:30 PM - 3:00 PM
Metamodels
Chair: Russell Cheng
(University of Southampton)
Stochastic Kriging for Simulation
Metamodeling
Barry L Nelson, Jeremy Staum, and Bruce Ankenman
(Northwestern University)
Abstract:
We extend the basic theory of kriging, as applied to
the design and analysis of deterministic computer experiments, to the
stochastic simulation setting. Our goal is to provide flexible,
interpolation-based metamodels of simulation output performance measures as
functions of the controllable design or decision variables. To accomplish this
we characterize both the intrinsic uncertainty inherent in a stochastic
simulation and the extrinsic uncertainty about the unknown response surface.
We use tractable examples to demonstrate why it is critical to characterize
both types of uncertainty, derive general results for experiment design and
analysis, and present a numerical example that illustrates the stochastic
kriging method.
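The following is a very small sketch of the central idea: a kriging (Gaussian-process) predictor whose covariance matrix is inflated on the diagonal by the intrinsic simulation-noise variance at each design point. The kernel, test function, and zero prior mean are assumptions of this sketch, not the paper's formulation.

# A minimal sketch of a stochastic-kriging-style predictor with a noise-inflated diagonal.
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sin(3 * x)                       # unknown response surface (illustrative)
x_design = np.linspace(0.0, 2.0, 8)
reps = 50
y_bar = np.array([f(x) + 0.5 * rng.standard_normal(reps).mean() for x in x_design])
noise_var = 0.5 ** 2 / reps                       # intrinsic variance of each sample mean

kernel = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / 0.2)  # extrinsic covariance
K = kernel(x_design, x_design) + noise_var * np.eye(len(x_design))
x_new = np.linspace(0.0, 2.0, 101)
k_star = kernel(x_new, x_design)
y_pred = k_star @ np.linalg.solve(K, y_bar)       # prediction under a zero prior mean
print(np.max(np.abs(y_pred - f(x_new))))          # prediction error on a fine grid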
Selecting the Best Linear Simulation
Metamodel
Russell Cheng (University of Southampton)
Abstract:
We consider the output of a simulation model of a
system about which little is initially known. This output is often dependent
on a large number of factors. It is helpful, in examining the behaviour of the
system, to find a statistical metamodel containing only those factors most
important in influencing this output. The problem is therefore one of
selecting a parsimonious metamodel that includes only a subset of the factors,
but which nevertheless adequately describes the behaviour of the output. The
total number of possible submodels from which we are choosing grows
exponentially with the number of factors, so a full examination of all
possible submodels rapidly becomes intractable. We show how resampling can
provide a simple solution to the problem, by allowing potentially good
submodels to be rapidly identified. This resampling approach also allows a
systematic statistical comparison of good submodels to be made.
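As a generic illustration of how resampling can screen factors (not the specific procedure of this paper), the following sketch refits a linear metamodel to bootstrap resamples of the simulation runs and records how often each factor's coefficient exceeds an arbitrary threshold; the data, threshold, and inclusion rule are all illustrative assumptions.

# A generic sketch of bootstrap resampling used to screen factors in a linear metamodel.
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.r_[2.0, -1.5, 0.8, np.zeros(p - 3)]          # only 3 factors matter
y = X @ beta_true + rng.standard_normal(n)

B = 500
selected = np.zeros(p)
for _ in range(B):
    idx = rng.integers(n, size=n)                           # bootstrap resample of the runs
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    selected += np.abs(coef) > 0.5                          # crude inclusion rule per resample

print(selected / B)    # factors chosen in most resamples are candidates for the submodel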
Data Enhancement, Smoothing, Reconstruction and
Optimization by Kriging Interpolation
Hasan Gunes and Hakki Ergun
Cekli (Istanbul Technical University) and Ulrich Rist (Universitaet Stuttgart)
Abstract:
The performance of Kriging for enhancement, smoothing,
reconstruction and optimization of a test data set is investigated.
Specifically, the ordinary two-dimensional Kriging and 2D line-Kriging
interpolation are investigated and compared with the well-known digital
filters for data smoothing. We used an analytical 2D synthetic test data set with several minima and maxima, so we could perform detailed analyses in a well-controlled manner to assess the effectiveness of each procedure. We have demonstrated that the Kriging method can be used effectively to enhance and smooth a noisy data set and reconstruct large missing regions (black zones) of lost data. It has also been shown that, with the appropriate
selection of the correlation function (variogram model) and its correlation
parameter, one can control the ‘degree’ of smoothness in a robust way.
Finally, we illustrate that Kriging can be a viable ingredient in constructing
effective global optimization algorithms in conjunction with simulated
annealing.
Tuesday 3:30 PM - 5:00 PM
Output Analysis
Chair: Wheyming
Song (National Tsing Hua University)
Skart: A Skewness- and Autoregression-Adjusted
Batch-Means Procedure for Simulation Analysis
Ali Tafazzoli (NC
State University), James R. Wilson (North Carolina State University), Emily K.
Lada (SAS Institute Inc) and Natalie M. Steiger (Maine Business School)
Abstract:
We discuss Skart, an automated batch-means procedure
for constructing a skewness- and autoregression-adjusted confidence interval
for the steady-state mean of a simulation output process. Skart is a
sequential procedure designed to deliver a confidence interval that satisfies
user-specified requirements concerning not only coverage probability but also
the absolute or relative precision provided by the half-length. Skart exploits
separate adjustments to the half-length of the classical batch-means
confidence interval so as to account for the effects on the distribution of
the underlying Student's t-statistic that arise from nonnormality and
autocorrelation of the batch means. Skart also delivers a point estimator for
the steady-state mean that is approximately free of initialization bias. In an
experimental performance evaluation involving a wide range of test processes,
Skart compared favorably with other simulation analysis methods - namely, its
predecessors ASAP3, WASSP, and SBatch as well as ABATCH, LBATCH, the
Heidelberger-Welch procedure, and the Law-Carson procedure.
A Large Deviations View of Asymptotic Efficiency
for Simulation Estimators
Sandeep Juneja (Tata Institute of
Fundamental Research) and Peter Glynn (Stanford University)
Abstract:
Consider a simulation estimator alpha(c) based on
expending c units of computer time, to estimate a quantity alpha. One measure
of efficiency is to attempt to minimize P(|alpha(c) - alpha| > epsilon) for
large c. This helps identify estimators with less likelihood of witnessing
large deviations. In this article we establish an exact asymptotic for this
probability when the underlying samples are independent and a weaker large
deviations result under more general dependencies amongst the underlying
samples.
Displaying Statistical Point Estimators: The
Leading-Digit Procedure
Wheyming T. Song (National Tsing Hua
University) and Bruce Schmeiser (Purdue University)
Abstract:
We propose a procedure for reporting a statistical
point estimator and its precision for statistical experiments such as
simulation experiments. Based on three criteria - loss of statistical
information, number of characters required, and likelihood of user
misinterpretation - we advocate our procedure for use when reporting many
point estimators in tabular form. The procedure discards meaningless digits of
the point estimator, and all but the left-most non-zero digit of the standard
error. These two resulting values are separated by the ";" sign.
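Because the rule is compact, a small sketch may help; the function below implements one reading of the procedure (round the point estimator at the position of the standard error's leading digit, keep only that digit of the standard error, and join the two with a semicolon). The exact formatting conventions of the paper may differ.

# A small sketch of one reading of the leading-digit reporting rule described above.
import math

def leading_digit_report(estimate: float, std_err: float) -> str:
    exponent = math.floor(math.log10(std_err))        # position of the leading digit
    lead = int(round(std_err / 10 ** exponent))       # left-most non-zero digit of the s.e.
    if lead == 10:                                    # rounding pushed it up a place
        lead, exponent = 1, exponent + 1
    decimals = max(-exponent, 0)
    return f"{round(estimate, -exponent):.{decimals}f};{lead}"

print(leading_digit_report(123.4567, 0.0234))   # -> 123.46;2
print(leading_digit_report(0.08127, 0.0042))    # -> 0.081;4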
Wednesday 8:30 AM - 10:00 AM
Output Analysis and SPC
Chair: Seong-Hee Kim (Georgia Institute of
Technology)
The MORE Plot: Displaying Measures of Risk &
Error From Simulation Output
Barry L Nelson (Northwestern
University)
Abstract:
The focus on mean (long-run average) performance as the
primary output measure produced by simulation experiments diminishes the
usefulness of simulation for characterizing risk. Confidence intervals on
means are often misinterpreted as measures of future risk, when in fact they
are measures of error. We introduce the Measure of Risk & Error (MORE)
plot as a way to display and make intuitive the concepts of risk and error and
thus support sound experiment design and correct decision making.
A Distribution-Free Tabular Cusum Chart for
Correlated Data with Automated Variance Estimation
Joongsup Jay
Lee, Christos Alexopoulos, David Goldsman, Seong-Hee Kim, and Kwok-Leung Tsui
(Georgia Institute of Technology) and James R. Wilson (North Carolina State
University)
Abstract:
We formulate and evaluate distribution-free statistical
process control (SPC) charts for monitoring an autocorrelated process when a
training data set is used to estimate the marginal mean and variance of the
process as well as its variance parameter (i.e., the sum of covariances at all
lags). We adapt variance-estimation techniques from the simulation literature
for automated use in DFTC-VE, a distribution-free tabular CUSUM chart for
rapidly detecting shifts in the mean of an autocorrelated process. Extensive
experimentation shows that our variance-estimation techniques do not seriously
degrade the performance of DFTC-VE compared with its performance using exact
knowledge of the variance parameter; moreover, the performance of DFTC-VE
compares favorably with that of other competing distribution-free SPC charts.
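For readers unfamiliar with tabular CUSUM charts, here is a bare-bones one-sided chart applied to a synthetic AR(1) process with a late mean shift; the reference value and control limit are arbitrary illustrative choices, and the distribution-free calibration and automated variance estimation that define DFTC-VE are not shown.

# A minimal sketch of a one-sided tabular CUSUM for detecting an upward mean shift.
import numpy as np

rng = np.random.default_rng(8)
phi = 0.7
x = np.empty(600)                       # an AR(1) process with a late mean shift
x[0] = rng.standard_normal()
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.standard_normal() * np.sqrt(1 - phi ** 2)
x[400:] += 1.0                          # shift of one marginal standard deviation

mu0, k, h = 0.0, 0.5, 10.0              # in-control mean, reference value, control limit
s, alarm = 0.0, None
for t, xt in enumerate(x):
    s = max(0.0, s + (xt - mu0 - k))    # tabular CUSUM recursion
    if s > h:
        alarm = t
        break
print("alarm at observation", alarm)    # ideally soon after the shift at t = 400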
Implementable MSE-Optimal Dynamic Partial-Overlapping
Batch Means Estimators for Steady-State Simulations
Wheyming Tina
Song and Mingchang Chih (National Tsing Hua University)
Abstract:
Estimating the variance of the sample mean from a stochastic process is essential in assessing the quality of using the sample mean to estimate the population mean, which is the fundamental question in simulation experiments. Most existing studies for estimating the variance of the sample mean from simulation output assume that the simulation run length is known in advance. This paper proposes an implementable batch-size selection procedure for estimating the variance of the sample mean without requiring that the sample size or simulation run length be known a priori.
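As background, the following sketch computes the classical nonoverlapping batch-means estimate of Var(sample mean) for an AR(1) process with a fixed, pre-chosen run length and batch size; the dynamic partial-overlapping scheme proposed in the paper, which does not fix the run length in advance, is not shown.

# A minimal sketch of the nonoverlapping batch-means estimator of Var(sample mean).
import numpy as np

rng = np.random.default_rng(9)
phi, n, m = 0.8, 2 ** 14, 2 ** 7          # AR(1) parameter, run length, batch size
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

batch_means = x[: (n // m) * m].reshape(-1, m).mean(axis=1)
var_of_mean_hat = batch_means.var(ddof=1) / len(batch_means)

# For this AR(1), Var(sample mean) is approximately sigma^2 / ((1 - phi)^2 * n) with sigma = 1.
print(var_of_mean_hat, 1.0 / ((1.0 - phi) ** 2 * n))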
Wednesday 10:30 AM - 12:00 PM
QMC Methods in Finance
Chair: Pierre L'Ecuyer (DIRO, Université de
Montréal)
Simulation of a Lévy Process by PCA Sampling to
Reduce the Effective Dimension
Pierre L'Ecuyer (DIRO, Université de
Montréal) and Jean-Sébastien Parent-Chartier and Maxime Dion (Université de
Montréal)
Abstract:
For a Lévy process monitored at s observation times, we
want to estimate the expected value of some function of the observations by randomized quasi-Monte Carlo (RQMC). For the case of a Brownian motion, PCA sampling has been proposed to
reduce the effective dimension of the problem by using an eigen-decomposition
of the covariance matrix of the vector of observations. We show how this
method applies to other Lévy processes, and we examine its effectiveness in
improving RQMC efficiency empirically. The idea is to simulate a Brownian
motion at s observation points using PCA, transform its increments into
independent uniforms over (0,1), then transform these uniforms again by
applying the inverse distribution function of the increments of the Lévy
process.
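A minimal sketch of the PCA construction for the Brownian case described above follows, with the final Lévy-specific inversion step indicated only as a comment since it depends on the process; the observation times and dimensions are illustrative.

# A minimal sketch of PCA sampling of a Brownian motion at s observation times.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
s = 16
t = np.arange(1, s + 1) / s                          # equally spaced observation times
cov = np.minimum.outer(t, t)                         # Cov(B_{t_i}, B_{t_j}) = min(t_i, t_j)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                    # largest eigenvalues first
A = eigvecs[:, order] * np.sqrt(eigvals[order])      # PCA factor: B = A z with z ~ N(0, I)

z = rng.standard_normal(s)                           # in RQMC these come from a QMC point
B = A @ z                                            # Brownian path at the s times
increments = np.diff(np.concatenate(([0.0], B)))
u = norm.cdf(increments * np.sqrt(s))                # standardize increments -> uniforms
# For a general Levy process: apply the inverse CDF of its increments to u here.
print(B[-1], u[:4])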
Fast Simulation of Equity-Linked Life Insurance
Contracts with a Surrender Option
Carole Bernard and Christiane
Lemieux (University of Waterloo)
Abstract:
In this paper, we consider equity-linked life insurance
contracts that give the holder the option to surrender the policy before maturity. Such contracts can be valued using simulation methods
proposed for the pricing of American options, but the mortality risk must also
be taken into account when pricing such contracts. Here, we use the
least-squares Monte Carlo approach of Longstaff and Schwartz coupled with
quasi-Monte Carlo sampling and a control variate in order to construct
efficient estimators for the value of such contracts. We also show how to
incorporate the mortality risk into these pricing algorithms without
explicitly simulating it.
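As a reference point for the valuation machinery mentioned above, here is a bare-bones least-squares Monte Carlo (Longstaff-Schwartz) sketch for a Bermudan put on a single asset under geometric Brownian motion; the surrender feature, mortality risk, quasi-Monte Carlo points, and control variate of the paper are not included, and all parameters are illustrative.

# A minimal sketch of least-squares Monte Carlo (Longstaff-Schwartz) for a Bermudan put.
import numpy as np

rng = np.random.default_rng(11)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 20_000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths observed at the exercise dates.
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))

cashflow = np.maximum(K - S[:, -1], 0.0)                 # payoff if held to maturity
for t in range(steps - 2, -1, -1):                       # step backwards through exercise dates
    cashflow *= disc                                     # discount one period
    itm = K - S[:, t] > 0.0                              # regress only on in-the-money paths
    if itm.sum() > 0:
        basis = np.column_stack([np.ones(itm.sum()), S[itm, t], S[itm, t] ** 2])
        coef, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
        continuation = basis @ coef
        exercise = K - S[itm, t]
        better = exercise > continuation
        cashflow[np.flatnonzero(itm)[better]] = exercise[better]
price = disc * cashflow.mean()                           # discount from the first exercise date
print(price)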
On the Approximation Error in High Dimensional Model
Representation
Xiaoqun Wang (Department of Mathematical Sciences,
Tsinghua University)
Abstract:
Mathematical models are often described by multivariate
functions, which are usually approximated by a sum of lower dimensional
functions. A major problem is the approximation error introduced and the
factors that affect it. This paper investigates the error of approximating a
multivariate function by a sum of lower dimensional functions in high
dimensional model representations. Two kinds of approximations are studied,
namely, the approximation based on the ANOVA (analysis of variance)
decomposition and the approximation based on the anchored decomposition. We
prove new theorems for the expected error of approximation based on anchored
decomposition when the anchor is chosen randomly and establish the
relationship of the expected errors with the global sensitivity indices of
Sobol'. The expected error indicates how good or bad the approximation based on anchored decomposition can be, and when the approximation is good or bad. Methods for choosing good anchors are presented.
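For reference, the two decompositions compared in the abstract take the following standard forms (the anchored version is also known as the cut-HDMR or anchored decomposition); these are textbook identities rather than results of the paper. In LaTeX notation, a function on the unit cube is written as
\[
  f(x) = \sum_{u \subseteq \{1,\dots,d\}} f_u(x_u),
\]
where for the ANOVA decomposition
\[
  f_u(x_u) = \int_{[0,1]^{d-|u|}} f(x)\, dx_{-u} \;-\; \sum_{v \subsetneq u} f_v(x_v),
\]
and for the anchored decomposition with anchor $c \in [0,1]^d$
\[
  f_u(x_u) = \sum_{v \subseteq u} (-1)^{|u|-|v|}\, f(x_v; c_{-v}),
\]
with $f(x_v; c_{-v})$ denoting $f$ evaluated with the coordinates in $v$ taken from $x$ and the remaining coordinates fixed at the anchor $c$.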