WSC 2001 Final Abstracts
Analysis Methodology Track
Monday 10:30 AM - 12:00 PM
Input Modeling and Its Impact
Chair: Bahar Deler and Barry Nelson (Northwestern University)
Modeling and Generating Multivariate Time Series with Arbitrary Marginals and Autocorrelation Structures
Bahar Deler and Barry L. Nelson (Northwestern University)
Abstract:
Providing accurate and automated input modeling support
is one of the challenging problems in the application of computer simulation.
In this paper, we present a general-purpose input-modeling tool for
representing, fitting, and generating random variates from multivariate input
processes to drive computer simulations. We explain the theory underlying the
suggested data fitting and data generation techniques, and demonstrate that
our framework fits models accurately to both univariate and multivariate input
processes.
Generating Daily Changes in Market Variables Using a Multivariate Mixture of Normal Distributions
Jin Wang (Valdosta State University)
Abstract:
The mixture of normal distributions provides a useful
extension of the normal distribution for modeling daily changes in market
variables with fatter-than-normal tails and skewness. An efficient analytical
Monte Carlo method is proposed for generating daily changes using a multivariate mixture of normal distributions with an arbitrary covariance matrix.
The main purpose of this method is to transform (linearly) a multivariate
normal with an input covariance matrix into the desired multivariate mixture
of normal distributions. This input covariance matrix can be derived
analytically. Any linear combination of mixtures of normal distributions can
be shown to be a mixture of normal distributions.
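For illustration, the generation step can be sketched as follows (a hedged sketch, not the paper's covariance-matching transform): select a mixture component, then sample from that component's multivariate normal. The weights, means, and covariances are invented.

    # Illustrative sketch: sampling a two-component multivariate normal mixture.
    import numpy as np

    rng = np.random.default_rng(0)

    def mixture_mvn_sample(n, weights, means, covs):
        """Draw n vectors from a multivariate normal mixture."""
        k = rng.choice(len(weights), size=n, p=weights)  # component labels
        out = np.empty((n, len(means[0])))
        for j in range(len(weights)):
            idx = (k == j)
            out[idx] = rng.multivariate_normal(means[j], covs[j], size=idx.sum())
        return out

    # A fat-tailed, skewed bivariate model of daily changes (invented numbers).
    w = [0.9, 0.1]
    mu = [np.zeros(2), np.array([0.5, -0.5])]
    sigma = [np.eye(2), 4.0 * np.eye(2)]
    x = mixture_mvn_sample(100_000, w, mu, sigma)
    print("sample covariance:\n", np.cov(x.T))
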
Accounting for Input Model and Parameter Uncertainty in Simulation
Faker Zouaoui (Sabre, Inc.) and James R. Wilson (NC State University)
Abstract:
Taking into account input-model, input-parameter, and
stochastic uncertainties inherent in many simulations, our Bayesian approach
to input modeling yields valid point and confidence-interval estimators for a
selected posterior mean response. Exploiting prior information to specify the
prior plausibility of each candidate input model and to construct prior
distributions on the model's parameters, we combine this information with the
likelihood function of sample data to compute posterior model probabilities
and parameter distributions. Our Bayesian Simulation Replication Algorithm
involves: (a) estimating parameter uncertainty by sampling from the posterior
parameter distributions on selected runs; (b) estimating stochastic
uncertainty by multiple independent replications of those runs; and (c)
estimating model uncertainty by weighting the results of (a) and (b) using the
corresponding posterior model probabilities. We allocate runs in (a) and (b)
to minimize final estimator variance subject to a computing-budget constraint.
An experimental performance evaluation demonstrates the advantages of this
approach.
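The three-level structure of the algorithm can be sketched in code. This is a toy rendition under invented assumptions (two stand-in candidate models, conjugate-style posterior draws, fixed posterior model weights, and a dummy simulation response), not the authors' implementation.

    # Toy sketch of the (a)/(b)/(c) structure described above.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.exponential(scale=2.0, size=30)      # observed input data

    def simulate(mean_service):                     # stand-in simulation response
        return rng.exponential(scale=mean_service, size=50).mean()

    models = {"exponential": 0.7, "erlang2": 0.3}   # posterior model weights (fixed here)

    def draw_posterior_mean(model):                 # (a) one posterior parameter draw
        if model == "exponential":                  # Gamma posterior on the rate
            rate = rng.gamma(shape=1 + len(data), scale=1 / (1 + data.sum()))
            return 1 / rate
        rate = rng.gamma(shape=1 + 2 * len(data), scale=1 / (1 + data.sum()))
        return 2 / rate                             # Erlang-2 mean = 2 / rate

    estimate = 0.0
    for model, weight in models.items():            # (c) weight by model probability
        vals = []
        for _ in range(20):                         # (a) parameter-uncertainty loop
            theta = draw_posterior_mean(model)
            reps = [simulate(theta) for _ in range(5)]  # (b) replications
            vals.append(np.mean(reps))
        estimate += weight * np.mean(vals)
    print("posterior mean response estimate:", estimate)
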
Monday 1:30 PM - 3:00 PM
Simulation Optimization
Chair: Julius Atlason (University of Michigan)
Towards a Framework for Black-Box Simulation Optimization
Sigurdur Ólafsson and Jumi Kim (Iowa State University)
Abstract:
Optimization using simulation has increased in
popularity in recent years as more and more simulation packages now offer
optimization features. At the same time, academic research in this area has
grown, but more work is needed to bring results from the academic community to bear on practical problems. This paper describes an effort in this direction. We
present a framework that can be used to effectively solve large
combinatorial-type simulation optimization problems in an automated or
semi-automated manner.
Global Random Optimization by Simultaneous Perturbation Stochastic Approximation
John L. Maryak and Daniel C. Chin (Johns Hopkins University, Applied Physics Laboratory)
Abstract:
A desire with iterative optimization techniques is that the algorithm reach the global optimum rather than become stranded at a local optimum. Here, we examine the global convergence properties of a "gradient-free" stochastic approximation algorithm called SPSA, which has performed well in complex optimization problems. We establish two theorems on
the global convergence of SPSA. The first provides conditions under which SPSA
will converge in probability to a global optimum using the well-known method
of injected noise. In the second theorem, we show that, under different
conditions, "basic" SPSA without injected noise can achieve convergence in
probability to a global optimum. This latter result can have important
benefits in the setup (tuning) and performance of the algorithm. The
discussion is supported by numerical studies showing favorable comparisons of
SPSA to simulated annealing and genetic algorithms.
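For reference, the basic SPSA iteration, with an optional injected-noise term of the kind the first theorem considers, can be sketched as follows; the gain sequences, noise schedule, and test function are illustrative choices, not the paper's.

    # Minimal SPSA sketch; all tuning constants are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)

    def loss(theta):                      # noisy multimodal test function
        return np.sum(theta**2) + 2 * np.sin(5 * theta).sum() + rng.normal(0, 0.1)

    def spsa(theta, iters=2000, inject_noise=False):
        for k in range(1, iters + 1):
            a_k = 0.1 / k**0.602          # standard gain-sequence exponents
            c_k = 0.1 / k**0.101
            delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1
            g_hat = (loss(theta + c_k * delta)
                     - loss(theta - c_k * delta)) / (2 * c_k * delta)
            theta = theta - a_k * g_hat
            if inject_noise:              # decaying noise to escape local minima
                theta = theta + rng.normal(0, 0.5 / k**0.5, size=theta.shape)
        return theta

    print(spsa(np.array([2.0, -2.0]), inject_noise=True))
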
Constrained Optimization Over Discrete Sets Via SPSA with Application to Non-Separable Resource Allocation
James E. Whitney, II and Latasha I. Solomon (Morgan State University) and Stacy D. Hill (Johns Hopkins University, Applied Physics Laboratory)
Abstract:
This paper presents a version of the Simultaneous
Perturbation Stochastic Approximation (SPSA) algorithm for optimizing
non-separable functions over discrete sets under given constraints. The
primary motivation for discrete SPSA is to solve a class of resource
allocation problems wherein the goal is to distribute a finite number of
discrete resources to finitely many users in such a way as to optimize a
specified objective function. The basic algorithm and its application to the optimal resource allocation problem are discussed, and simulation results are presented that illustrate its performance.
Monday 3:30 PM - 5:00 PM
Simulation in Financial Engineering
Chair: Michael Fu (University of Maryland)
Stopping Simulated Paths Early
Paul Glasserman (Columbia Business School) and Jeremy Staum (Cornell University)
Abstract:
We provide results about stopping simulation paths
early as a variance reduction technique, adding to our earlier work on this
topic. The problem of pricing a financial instrument with cashflows at
multiple times, such as a mortgage-backed security, motivates this approach,
which is more broadly applicable to problems in which early steps are more
informative than later steps of a path. We prove a limit theorem that
demonstrates that this relative informativeness of simulation steps, not the
number of steps, determines the effectiveness of the method. Next we consider
an extension of the idea of stopping simulation paths early, showing how early
stopping can be random and depend on the state a path has reached, yet still
produce an unbiased estimator. We illustrate the potential effectiveness of
such estimators, and describe directions for future research into their
design.
Efficient Simulation for Discrete Path-Dependent Option Pricing
James M. Calvin (New Jersey Institute of Technology)
Abstract:
In this paper we present an algorithm for simulating
functions of the minimum and terminal value for a random walk with Gaussian
increments. These expectations arise in connection with estimating the value
of path-dependent options when prices are monitored at a discrete set of
times. The expected running time of the algorithm is bounded above by a
constant as the number of steps increases.
A New Approach to Pricing American-Style Derivatives
Scott B. Laprise, Michael C. Fu, and Steven I. Marcus (University of Maryland) and Andrew E. B. Lim (Columbia University)
Abstract:
This paper presents a new approach to pricing
American-style derivatives. By approximating the value function with a
piecewise linear interpolation function, the option holder's continuation
value can be expressed as a summation of European call option values. Thus the
pricing of an American option written on a single underlying asset can be
converted to the pricing of a series of European call options. We provide two
examples of American-style options where this approximation technique yields
both upper and lower bounds on the true option price.
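The identity behind this conversion is easy to verify numerically: a continuous piecewise linear function equals an affine term plus a weighted sum of call-style payoffs (x - k)+, so its expectation decomposes into European-call-like values. The knots and values below are arbitrary.

    # Check: piecewise linear interpolation == affine term + call payoffs.
    import numpy as np

    knots = np.array([0.0, 1.0, 2.0, 3.0])
    vals = np.array([1.0, 0.5, 1.5, 1.0])            # f at the knots
    slopes = np.diff(vals) / np.diff(knots)

    def f_interp(x):                                 # direct interpolation
        return np.interp(x, knots, vals)

    def f_calls(x):                                  # affine + call-style payoffs
        y = vals[0] + slopes[0] * (x - knots[0])
        for i in range(1, len(slopes)):
            y += (slopes[i] - slopes[i - 1]) * np.maximum(x - knots[i], 0.0)
        return y

    x = np.linspace(0.0, 3.0, 7)
    print(np.allclose(f_interp(x), f_calls(x)))      # True
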
Tuesday 8:30 AM - 10:00 AM
Standardized Time Series Methods
Chair: Andrew Seila (University of Georgia)
Variance Estimation Using Replicated Batch Means
Sigrún Andradóttir and Nilay Tanik Argon (Georgia Institute of Technology)
Abstract:
We present a new method for obtaining confidence
intervals in steady-state simulation. In our replicated batch means method, we
do a small number of independent replications to estimate the steady-state
mean of the underlying stochastic process. In order to obtain a variance
estimator, we further group the observations from these replications into
non-overlapping batches. We show that for large sample sizes, the new variance
estimator is less biased than the batch means variance estimator, the
variances of the two variance estimators are approximately equal, and the new
steady-state mean estimator has a smaller variance than the batch means
estimator when there is positive serial correlation between the observations.
For small sample sizes, we compare our replicated batch means method with the
(standard) batch means and multiple replications methods empirically, and show
that the best overall coverage of confidence intervals is obtained by the
replicated batch means method with a small number of replications.
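The estimator's mechanics can be sketched as follows; the AR(1) output process and all run sizes are illustrative stand-ins for a steady-state simulation.

    # Replicated batch means sketch: R replications, b batches of size m each.
    import numpy as np

    rng = np.random.default_rng(3)

    def ar1_run(n, phi=0.8):                 # positively correlated output
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        return x

    R, b, m = 5, 20, 100
    batch_means = []
    for _ in range(R):
        y = ar1_run(b * m)
        batch_means.extend(y.reshape(b, m).mean(axis=1))

    batch_means = np.array(batch_means)
    grand_mean = batch_means.mean()
    var_hat = batch_means.var(ddof=1) / len(batch_means)  # variance of grand mean
    print(grand_mean, var_hat)
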
On the MSE Robustness of Batching Estimators
Yingchieh Yeh and Bruce W. Schmeiser (Purdue University)
Abstract:
Variance is a classical measure of a point estimator's
sampling error. In steady-state simulation experiments, many estimators of
this variance---or its square root, the standard error---depend upon batching
the output data. In practice, the optimal batch size is unknown because it
depends upon unknown statistical properties of the simulation output data.
When optimal batch size is estimated, the batch size used is random.
Therefore, robustness to estimated batch size is a desirable property for a
standard-error estimation method. We argue that a previous measure---the second derivative of MSE with respect to estimated batch size---is conceptually flawed. We propose a new measure, the second derivative of the MSE with respect to the estimated center of gravity of the non-negative autocorrelations of the output process. A property of the new robustness measure is that both MSE and robustness yield identical rankings.
Improving Standardized Time Series Methods by Permuting Path Segments
James M. Calvin and Marvin K. Nakayama (New Jersey Institute of Technology)
Abstract:
We describe an extension procedure for constructing new
standardized time series procedures from existing ones. The approach is based
on averaging over sample paths obtained by permuting path segments. Analytical
and empirical results indicate that permuting improves standardized time
series methods. We also propose a new standardized time series method based on maxima.
Tuesday 10:30 AM - 12:00 PM
Input Uncertainty
Chair: Stephen Chick (University of Michigan)
Accounting for Parameter Uncertainty in Simulation Input Modeling
Faker Zouaoui (Sabre, Inc.) and James R. Wilson (NC State University)
Abstract:
We formulate and evaluate a Bayesian approach to
probabilistic input modeling. Taking into account the parameter and stochastic
uncertainties inherent in most simulations, this approach yields valid
predictive inferences about the output quantities of interest. We use prior
information to construct prior distributions on the input-model parameters.
Combining this prior information with the likelihood function of sample data
observed on the input processes, we compute the posterior parameter
distributions using Bayes' rule. This leads to a Bayesian Simulation
Replication Algorithm in which: (a) we estimate the parameter uncertainty by
sampling from the posterior distribution of the input model's parameters on
selected simulation runs; and (b) we estimate the stochastic uncertainty by
multiple independent replications of those selected runs. We also formulate
some performance evaluation criteria that are reasonable within both the
Bayesian and frequentist paradigms. An experimental performance evaluation
demonstrates the advantages of the Bayesian approach versus conventional
frequentist techniques.
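Steps (a) and (b) amount to a two-level sampling loop, sketched below with an invented conjugate exponential/gamma input model and a dummy simulation response.

    # Two-level sketch: posterior parameter draws (a), replications (b).
    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.exponential(scale=1 / 1.5, size=40)   # field data, true rate 1.5

    a0, b0 = 1.0, 1.0                                # Gamma prior on the rate
    a_post, b_post = a0 + len(data), b0 + data.sum() # conjugate Gamma posterior

    def simulate(rate):                              # stand-in response
        return rng.exponential(scale=1 / rate, size=100).mean()

    means = []
    for _ in range(50):                              # (a) parameter uncertainty
        rate = rng.gamma(a_post, 1 / b_post)
        reps = [simulate(rate) for _ in range(10)]   # (b) stochastic uncertainty
        means.append(np.mean(reps))
    print("posterior predictive mean response:", np.mean(means))
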
Reducing Input Parameter Uncertainty for Simulations
Szu Hui Ng and Stephen E. Chick (Department of Industrial and Operations Engineering, University of Michigan)
Abstract:
Parameters of statistical distributions that are input
to simulations are typically not known with certainty. For existing systems,
or variations on existing systems, they are often estimated from field data.
Even if the mean of simulation output were estimable exactly as a function of
input parameters, there may still be uncertainty about the output mean because
inputs are not known precisely. This paper considers the problem of deciding
how to allocate resources for additional data collection so that input
uncertainty is reduced in a way that effectively reduces uncertainty about the
output mean. The optimal solution to the problem in full generality appears to
be quite challenging. Here, we simplify the problem with asymptotic
approximations in order to provide closed-form sampling plans for additional data
collection activities. The ideas are illustrated with a simulation of a
critical care facility.
Resampling Methods for Input Modeling
Russell R. Barton (The Pennsylvania State University) and Lee W. Schruben (University of California, Berkeley)
Abstract:
Stochastic simulation models are used to predict the
behavior of real systems whose components have random variation. The
simulation model generates artificial random quantities based on the nature of
the random variation in the real system. Very often, the probability
distributions occurring in the real system are unknown, and must be estimated
using finite samples. This paper shows three methods for incorporating the
error due to input distributions that are based on finite samples, when
calculating confidence intervals for output parameters.
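The general idea, though not necessarily any of the paper's three specific methods, can be sketched with a simple bootstrap: resample the finite input sample, re-drive the simulation, and let the spread across bootstrap runs reflect the input-distribution error. The single-server queue response below is invented.

    # Bootstrap sketch of input-distribution error in an output interval.
    import numpy as np

    rng = np.random.default_rng(5)
    service_data = rng.exponential(scale=0.8, size=50)   # finite input sample

    def simulate(services):                  # toy response: mean wait (Lindley)
        w, total = 0.0, 0.0
        for s in services:
            a = rng.exponential(scale=1.0)   # interarrival time
            w = max(0.0, w + s - a)
            total += w
        return total / len(services)

    boot_out = []
    for _ in range(200):
        resample = rng.choice(service_data, size=len(service_data), replace=True)
        boot_out.append(simulate(resample))

    lo, hi = np.percentile(boot_out, [2.5, 97.5])
    print(f"interval reflecting input and stochastic error: [{lo:.3f}, {hi:.3f}]")
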
Tuesday 1:30 PM - 3:00 PM
Simulation in Optimization and Optimization in Simulation
Chair: Shane Henderson (Cornell University)
A Markov Chain Perspective on Adaptive Monte Carlo Algorithms
Paritosh Y. Desai and Peter W. Glynn (Stanford University)
Abstract:
This paper discusses some connections between adaptive
Monte Carlo algorithms and general state space Markov chains. Adaptive
algorithms are iterative methods in which previously generated samples are
used to construct a more efficient sampling distribution at the current
iteration. In this paper, we describe two such adaptive algorithms, one
arising in a finite-horizon computation of expected reward and the other
arising in the context of solving eigenvalue problems. We then discuss the
connection between these adaptive algorithms and general state space Markov
chain theory, and offer insights into some of the technical difficulties
that arise in trying to apply the known theory for general state space chains
to such adaptive algorithms.
Chessboard Distributions
Soumyadip Ghosh and Shane G. Henderson (Cornell University)
Abstract:
We review chessboard distributions for modeling
partially specified finite-dimensional random vectors. Chessboard
distributions can match a given set of marginals, a given covariance
structure, and various other constraints on the distribution of a random
vector. It is necessary to solve a potentially large linear program to set up
a chessboard distribution, but random vectors can then be rapidly generated.
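A toy construction in this spirit: solve a small linear program for cell probabilities on an n x n grid so that both marginals are uniform and a cell-center correlation hits a target. The grid size and target are illustrative, and this simplified copula-style setup is a sketch, not the paper's full formulation.

    # Tiny chessboard-style LP with uniform marginals and a correlation target.
    import numpy as np
    from scipy.optimize import linprog

    n, target_corr = 8, 0.5
    centers = (np.arange(n) + 0.5) / n               # cell centers in (0, 1)
    mu = centers.mean()
    var = ((centers - mu) ** 2).mean()

    A_eq, b_eq = [], []
    for i in range(n):                               # row marginals = 1/n
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(1 / n)
    for j in range(n):                               # column marginals = 1/n
        col = np.zeros(n * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(1 / n)
    xy = np.outer(centers - mu, centers - mu).ravel()
    A_eq.append(xy); b_eq.append(target_corr * var)  # covariance constraint

    res = linprog(c=np.zeros(n * n), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0, None)] * (n * n), method="highs")
    print("feasible:", res.success)
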
Constrained Monte Carlo and the Method of Control Variates
Roberto Szechtman and Peter W. Glynn (Stanford University)
Abstract:
A constrained Monte Carlo problem arises when one
computes an expectation in the presence of a priori computable constraints on
the expectations of quantities that are correlated with the estimand. This
paper discusses different application settings in which such constrained
Monte Carlo computations arise, and establishes a close connection with the
method of control variates when the constraints are of equality form.
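For the equality-constraint case, the connection is to the standard control-variate estimator, sketched below on an invented example: estimate E[exp(Z)] using the control C = Z, whose mean is known to be zero.

    # Standard control-variate sketch with known E[C] = 0.
    import numpy as np

    rng = np.random.default_rng(6)
    z = rng.normal(size=100_000)
    y = np.exp(z)                             # estimand: E[exp(Z)] = exp(0.5)
    c = z                                     # control with known mean 0

    beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)   # variance-minimizing coefficient
    cv = y - beta * (c - 0.0)

    print("naive:          ", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(y.size))
    print("control variate:", cv.mean(), "+/-", cv.std(ddof=1) / np.sqrt(cv.size))
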
Tuesday 3:30 PM - 5:00 PM
Comparing Systems via Stochastic Simulation
Chair: Sigurdur Ólafsson (Iowa State University)
Selection-of-the-Best Procedures for Optimization Via Simulation
Juta Pichitlamken and Barry L. Nelson (Northwestern University)
Abstract:
We propose fully sequential indifference-zone selection
procedures that are specifically for use within an optimization-via-simulation
algorithm when simulation is costly and partial or complete information on
solutions previously visited is maintained. Sequential Selection with Memory guarantees to select the best or near-best alternative with a
user-specified probability when some solutions have already been sampled and
their previous samples are retained. For the case when only summary
information is retained, we derive a modified procedure. We illustrate how our
procedure can be applied to optimization-via-simulation problems and compare
its performance with other methods by numerical examples.
Using Common Random Numbers for Indifference-Zone Selection
E. Jack Chen (BASF Corporation)
Abstract:
This paper discusses the validity of using common random numbers (CRNs) with two-stage selection procedures to improve the probability of correct selection, and discusses the intrinsic subset pre-selection of the Enhanced Two-Stage Selection (ETSS) procedure. We propose using CRNs with Rinott's two-stage selection and ETSS procedures when the underlying processes satisfy certain conditions. An experimental performance evaluation demonstrates the improvement in the probability of correct selection from using CRNs with two-stage selection procedures, and the intrinsic subset pre-selection of the ETSS procedure.
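The mechanism CRNs exploit is easy to demonstrate: driving both alternatives with the same random numbers makes their difference far less variable. The two "systems" below are invented stand-ins, not the selection procedures studied in the paper.

    # CRN demonstration: variance of the difference with and without CRNs.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000

    def system(theta, u):                     # response driven by uniforms u
        return -theta * np.log(1.0 - u)       # exponential(mean theta) via inversion

    u1, u2 = rng.random(n), rng.random(n)
    diff_indep = system(1.0, u1) - system(1.1, u2)   # independent streams
    diff_crn = system(1.0, u1) - system(1.1, u1)     # common random numbers

    print("variance, independent:", diff_indep.var(ddof=1))
    print("variance, CRN:        ", diff_crn.var(ddof=1))
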
A Genetic Algorithm and an Indifference-Zone Ranking and Selection Framework for Simulation Optimization
Henrik E. Hedlund and Mansooreh Mollaghasemi (University of Central Florida)
Abstract:
A methodology for optimization of simulation models is
presented. The methodology is based on a genetic algorithm in conjunction with
an indifference-zone ranking and selection procedure under common random
numbers. An application of this optimization algorithm to a stochastic
mathematical model is provided in this paper.
Wednesday 8:30 AM - 10:00 AM
Stochastic Optimization Using Simulation
Chair: Sigrún Andradóttir (Georgia Institute of Technology)
Graphical Representation of IPA Estimation
Michael Freimer (School of ORIE, Cornell University) and Lee Schruben (Department of IEOR, University of California at Berkeley)
Abstract:
Infinitesimal Perturbation Analysis (IPA) estimators of
the response gradient for a discrete event stochastic simulation are typically
developed within the framework of Generalized semi-Markov processes (GSMPs).
Unfortunately, while mathematically rigorous, GSMPs are not particularly
useful for modeling real systems. In this paper we describe a procedure that
allows IPA gradient estimation to be easily and automatically implemented in
the more general and intuitive modeling context of Event Graphs. The intent is
to make IPA gradient estimation more easily understood and more widely
accessible. The pictorial nature of Event Graphs also provides insights into
the basic IPA calculations and alternative descriptions of conditions under
which the IPA estimator is known to be unbiased.
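For orientation, a textbook IPA example (not tied to the paper's Event Graph machinery): the derivative of the mean waiting time in a single-server queue with respect to a service scale parameter theta, obtained by differentiating the Lindley recursion along the sample path. All parameters are illustrative.

    # IPA along the Lindley recursion: dw accumulates dS/dtheta while busy.
    import numpy as np

    rng = np.random.default_rng(8)
    n, theta = 100_000, 0.8
    x = rng.exponential(size=n)               # base variates; service S = theta * x
    a = rng.exponential(scale=1.0, size=n)    # interarrival times

    w, dw, w_sum, dw_sum = 0.0, 0.0, 0.0, 0.0
    for i in range(n):
        if w + theta * x[i] - a[i] > 0.0:
            dw += x[i]                        # dS/dtheta propagates through the max
            w += theta * x[i] - a[i]
        else:
            w, dw = 0.0, 0.0                  # idle period resets the derivative
        w_sum += w
        dw_sum += dw

    print("mean wait:", w_sum / n, " IPA d(mean wait)/d(theta):", dw_sum / n)
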
Monte Carlo Simulation Approach to Stochastic Programming
Alexander Shapiro (Georgia Institute of Technology)
Abstract:
Various stochastic programming problems can be
formulated as problems of optimization of an expected value function. Quite
often the corresponding expectation function cannot be computed exactly and
should be approximated, say by Monte Carlo sampling methods. In fact, in many
practical applications, Monte Carlo simulation is the only reasonable way of
estimating the expectation function. In this talk we discuss convergence
properties of the sample average approximation (SAA) approach to stochastic
programming. We argue that the SAA method is easily implementable and can be
surprisingly efficient for some classes of stochastic programming problems.
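A minimal SAA sketch on an invented newsvendor instance: replace the expectation by a sample average over drawn scenarios and optimize the resulting deterministic function. For this instance the SAA optimum is a sample quantile, which provides a built-in check.

    # SAA sketch: maximize a sample-average profit over a grid.
    import numpy as np

    rng = np.random.default_rng(9)
    c, p = 1.0, 3.0                                       # unit cost and price
    demand = rng.gamma(shape=4.0, scale=25.0, size=5000)  # sampled scenarios

    def saa_profit(x):                                    # sample-average objective
        return np.mean(p * np.minimum(x, demand) - c * x)

    grid = np.linspace(0.0, 300.0, 601)
    best = grid[np.argmax([saa_profit(x) for x in grid])]
    # Newsvendor check: the SAA optimum is the (1 - c/p) sample quantile.
    print("SAA solution:", best, " sample quantile:", np.quantile(demand, 1 - c / p))
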
Stochastic Modeling of Airlift Operations
Julien Granger, Ananth Krishnamurthy, and Stephen M. Robinson (University of Wisconsin-Madison)
Abstract:
Large-scale military deployments require transporting
equipment and personnel over long distances in a short time. Planning an
efficient airlift system is complicated and several models exist in the
literature. In particular, a study conducted on a deterministic optimization
model developed by the Naval Postgraduate School and the RAND Corporation has
shown that incorporating stochastic events leads to a degradation of
performance. In this paper we investigate the applicability of network
approximation methods to take into account randomness in an airlift network.
Specifically, we show that approximation methods can model key performance
features with sufficient accuracy to permit their use for network improvement,
while requiring only a small fraction of the computational work that would
have been needed had simulation been used for all of the performance
evaluations. Also, we predict that combining simulation and approximation may
work substantially better than either one of these alone.
Wednesday 10:30 AM - 12:00 PM
Steady State Simulation Analysis
Chair: Paul Hyden (Clemson University)
Importance Sampling Using the Semi-Regenerative Method
James M. Calvin (New Jersey Institute of Technology), Peter W. Glynn (Stanford University) and Marvin K. Nakayama (New Jersey Institute of Technology)
Abstract:
We discuss using the semi-regenerative method,
importance sampling, and stratification to estimate the expected cumulative
reward until hitting a fixed set of states for a discrete-time Markov chain on
a countable state space. We develop a general theory for this problem and
present several central limit theorems for our estimators. We also present
some empirical results from applying these techniques to simulate a
reliability model.
Quantile and Histogram Estimation
E. Jack Chen (BASF Corporation) and W. David Kelton (The Pennsylvania State University)
Abstract:
This paper discusses implementation of a sequential
procedure to construct proportional half-width confidence intervals for a
simulation estimator of the steady-state quantiles and histograms of a
stochastic process. Our quasi-independent (QI) procedure increases the
simulation run length progressively until a certain number of essentially
independent and identically distributed samples are obtained. We compute
sample quantiles at certain grid points and use Lagrange interpolation to
estimate the p quantile. It is known that the order-statistics quantile estimator is asymptotically unbiased when the output sequences satisfy certain conditions. Even though the proposed sequential procedure is a heuristic, it has a strong basis. Our empirical results show that the
procedure gives quantile estimates and histograms that satisfy a pre-specified
precision requirement. An experimental performance evaluation demonstrates the
validity of using the QI procedure to estimate the quantiles and histograms.
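The order-statistic-and-interpolation step can be sketched as follows; the grid spacing and three-point Lagrange rule are illustrative choices, and the i.i.d. exponential sample stands in for the procedure's quasi-independent output.

    # Quantile estimation: order statistics at grid points + Lagrange interpolation.
    import numpy as np

    rng = np.random.default_rng(10)
    sample = np.sort(rng.exponential(size=10_000))
    n, p = sample.size, 0.9

    grid = np.array([0.88, 0.90, 0.92])                    # grid points around p
    q = sample[np.minimum((grid * n).astype(int), n - 1)]  # sample quantiles

    def lagrange(pstar, xs, ys):                           # 3-point interpolation
        total = 0.0
        for i in range(len(xs)):
            term = ys[i]
            for j in range(len(xs)):
                if j != i:
                    term *= (pstar - xs[j]) / (xs[i] - xs[j])
            total += term
        return total

    print("estimate:", lagrange(p, grid, q), " exact:", -np.log(1 - p))
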
On-Line Error Bounds for Steady-State Approximations: A Potential Solution to the Initialization Bias Problem
Enver Yücesan, Luk N. Van Wassenhove, and Klenthis Papanikas (INSEAD) and Nico M. van Dijk (University of Amsterdam)
Abstract:
By studying performance measures via reward structures,
on-line error bounds are obtained by successive approximation. These bounds
indicate when to terminate computation with guaranteed accuracy; hence, they
provide insight into steady-state convergence. The method therefore presents a
viable alternative to steady-state computer simulation where the output series
is typically contaminated with initialization bias whose impact on the output
cannot be easily quantified. The method is illustrated on a number of
capacitated queueing networks. The results indicate that the method offers a
practical tool for numerically approximating performance measures of queueing
networks. Results on steady-state convergence further quantify the error
involved in analyzing an inherently transient system using a steady-state
model.
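The flavor of such bounds can be seen in a small invented example: for a finite aperiodic Markov reward chain, successive approximation brackets the long-run average reward between the minimum and maximum one-step value increments, so the computation can stop once a guaranteed accuracy is reached.

    # Successive approximation with on-line bounds on the average reward.
    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])          # invented transition matrix
    r = np.array([1.0, 2.0, 4.0])            # one-step rewards

    v = np.zeros(3)
    for k in range(1000):
        v_new = r + P @ v
        lo, hi = (v_new - v).min(), (v_new - v).max()  # bounds on the gain
        if hi - lo < 1e-8:
            break
        v = v_new
    print(f"average reward in [{lo:.8f}, {hi:.8f}] after {k + 1} iterations")
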
Wednesday 10:30 AM - 12:00 PM
Statistical Tools for Simulation Design and Analysis
Chair: Michael Freimer (Cornell University)
Simulating Ruin Probabilities in Insurance Risk Processes with Subexponential Claims
Nam Kyoo Boots (Vrije Universiteit) and Perwez Shahabuddin (Columbia University)
Abstract:
We describe a fast simulation framework for simulating
small ruin probabilities in insurance risk processes with subexponential
claims. Naive simulation is inefficient for estimating small probabilities
since the event of interest is rare, and special simulation techniques like
importance sampling need to be used. An importance sampling change of measure
known as subexponential twisting has been found useful for some rare event
simulations in the subexponential context. We describe a set of conditions on
the process that are sufficient to ensure that the infinite horizon
probability can be estimated in a (work-normalized) large set asymptotically
optimal manner, using this change of measure. These conditions are satisfied
for some large classes of insurance risk processes -- e.g., processes with
Markov-modulated claim-arrivals and claim-sizes -- where the heavy tails are
of the 'Weibull type'. We also give similar, but much weaker, conditions for the estimation of the finite horizon ruin probability. Finally, we present
experiments supporting our results.
Using Quantile Estimates in Simulating Internet Queues with Pareto Service Times
Martin J. Fischer and Denise M. Bevilacqua Masi (Mitretek Systems), Donald Gross and John Shortle (George Mason University) and Percy H. Brill (University of Windsor)
Abstract:
It is readily apparent how important the Internet is to
modern life. The exponential growth in its use requires good tools for
analyzing congestion. Much has been written recently asserting that classical
queueing models assuming Poisson arrivals or exponential service cannot be
used for the accurate study of congestion in major portions of the Internet.
Internet traffic data indicate that heavy-tailed distributions (e.g., Pareto)
serve as better models in many situations for packet service lengths. But
these distributions may not possess closed-form analytic Laplace transforms;
hence, much of standard queueing theory cannot be used. Simulating such queues
becomes essential; however, previous research pointed out difficulties in
obtaining the usual moment performance measures such as mean wait in queue. In
this paper, we investigate using quantile estimates of waiting times (e.g.,
median instead of mean), which appear to be considerably more efficient when
service times are Pareto.
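Such a simulation can be sketched directly: Pareto service times generated by inversion drive a Lindley recursion, and the median wait is read off the resulting sample. The parameters are invented, with the Pareto shape chosen so the queue is stable.

    # Single-server queue with Pareto service; estimate the median wait.
    import numpy as np

    rng = np.random.default_rng(11)
    n, alpha, xm = 100_000, 2.5, 0.5                 # Pareto shape and scale
    u = rng.random(n)
    service = xm * (1.0 - u) ** (-1.0 / alpha)       # Pareto via inversion
    arrival = rng.exponential(scale=1.0, size=n)     # Poisson arrivals

    w = np.empty(n)
    w[0] = 0.0
    for i in range(1, n):                            # Lindley recursion
        w[i] = max(0.0, w[i - 1] + service[i - 1] - arrival[i])

    print("median wait:", np.median(w), " mean wait:", w.mean())
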
Sensitivity Analysis of Censored Output through Polynomial, Logistic, and Tobit Regression Meta-Models: Theory and Case Study
Jack P. C. Kleijnen (Tilburg University) and Antonie Vonk Noordegraaf and Mirjam Nielen (Wageningen University)
Abstract:
This paper focuses on simulation output that may be
censored; that is, the output has a limited range (examples are simulations
that have as output the time to occurrence of a specific event, such as a 'rare' event, within a fixed time horizon). For sensitivity analysis of such
simulations we discuss three alternatives: (i) traditional polynomial
regression models, (ii) logistic or logit regression, and (iii) tobit
analysis. The case study concerns the control of a specific animal disease
(namely, IBR) in The Netherlands. The simulation experiment has 31
environmental factors or inputs, combined into 64 scenarios, each replicated
twice. Traditional polynomial regression gives some estimated main effects
with wrong signs. Logit regression correctly predicts whether simulation
output is censored or not, for 92% of the scenarios. Tobit analysis does not
give effects with wrong signs; it correctly predicts censoring, for 89% of the
scenarios.
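Alternative (ii) can be sketched with synthetic data standing in for the case study's scenarios; the five factors, coefficients, and censoring mechanism below are invented.

    # Logit meta-model: predict whether a scenario's output is censored.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(12)
    X = rng.normal(size=(64, 5))                     # scenario factor settings
    beta = np.array([1.5, -1.0, 0.5, 0.0, 0.0])      # invented true effects
    prob = 1.0 / (1.0 + np.exp(-(X @ beta)))         # chance output is censored
    censored = rng.random(64) < prob                 # observed indicator

    model = LogisticRegression().fit(X, censored)
    accuracy = (model.predict(X) == censored).mean()
    print("fraction of scenarios classified correctly:", accuracy)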