WSC 2006 Abstracts
Analysis Methodology A Track
Monday 10:30 AM – 12:00 PM
Steady-State Simulations
Chair: James Wilson (North Carolina State University)
On an Initial Transient Deletion Rule with Rigorous
Theoretical Support
Hernan Awad (University of Miami) and Peter W.
Glynn (Stanford University)
Abstract:
We study an initial transient deletion rule proposed by
Glynn and Iglehart. We argue that it has desirable properties from both a
theoretical and a practical standpoint; we discuss its bias-reducing properties
and its use both in the single-replication setting and in the
multiple-replications / parallel-processing context.
A New Approach for Parallel Steady-state
Simulations
Ming-hua Hsieh (National Chengchi University, Taiwan)
Abstract:
We propose a new procedure for building confidence
interval estimators of steady-state parameters in discrete event simulations.
The procedure uses parallel processors to generate independent replications
and constructs the confidence interval estimator by solving a generalized
least squares problem. The most appealing theoretical feature of the proposed
procedure is that the precision of the resulting estimator can be improved
simply by increasing the number of processors (or independent replications)
while the simulated time length on each processor is held fixed at an
appropriate level.
Experiments conducted on M/M/1 queue waiting time processes in heavy traffic
confirm this theoretical property.
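As a point of reference only, the sketch below builds a plain
multiple-replications confidence interval for the steady-state mean waiting
time of an M/M/1 queue via Lindley's recursion; the paper's
generalized-least-squares construction is not reproduced, and all parameter
values are illustrative.

```python
# Hedged sketch (not the paper's GLS estimator): a plain t-based confidence
# interval built from independent replications of an M/M/1 waiting-time process.
import numpy as np
from scipy import stats

def mm1_waits(lam, mu, n_cust, rng):
    """Customer waiting times via Lindley's recursion."""
    w, waits = 0.0, np.empty(n_cust)
    for i in range(n_cust):
        a = rng.exponential(1.0 / lam)      # interarrival time
        s = rng.exponential(1.0 / mu)       # service time
        w = max(0.0, w + s - a)             # next customer's wait
        waits[i] = w
    return waits

def replication_ci(lam, mu, n_reps, n_cust, warmup, alpha=0.05, seed=1):
    """Each 'processor' contributes one independent replication mean."""
    rng = np.random.default_rng(seed)
    means = np.array([mm1_waits(lam, mu, n_cust, rng)[warmup:].mean()
                      for _ in range(n_reps)])
    half = stats.t.ppf(1 - alpha / 2, n_reps - 1) * means.std(ddof=1) / np.sqrt(n_reps)
    return means.mean(), half

print(replication_ci(lam=0.9, mu=1.0, n_reps=16, n_cust=20_000, warmup=2_000))
```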
Performance Evaluation of Spectral Procedures for
Simulation Analysis
Emily K. Lada (SAS Institute Inc.) and James R.
Wilson (North Carolina State University)
Abstract:
We summarize an experimental performance evaluation of
WASSP and the Heidelberger-Welch (HW) algorithm, two sequential spectral
procedures for steady-state simulation analysis. Both procedures approximate
the log-smoothed-periodogram of the batch means after suitable data-truncation
to eliminate the effects of initialization bias, finally delivering a
confidence-interval estimator for the mean response that satisfies
user-specified half-length and coverage-probability requirements. HW uses a
Cramér-von Mises test for initialization bias based on the method of
standardized time series and then fits a quadratic polynomial to the
batch-means log-spectrum. In contrast, WASSP uses the von Neumann randomness
test and the Shapiro-Wilk normality test to obtain an approximately stationary
Gaussian batch-means process whose log-spectrum is approximated via wavelets.
Moreover, unlike HW, WASSP estimates the final sample size required to satisfy
the user's confidence-interval requirements. In terms of conformance to both
confidence-interval requirements, WASSP outperformed HW on the given test
problems.
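The sketch below computes the two raw ingredients both procedures start from,
nonoverlapping batch means and a crudely smoothed log-periodogram; neither
HW's polynomial fit nor WASSP's wavelet fit is reproduced, and the AR(1) test
series, batch size, and smoothing window are illustrative choices.

```python
# Hedged sketch: nonoverlapping batch means plus a crudely smoothed
# log-periodogram, the ingredients HW and WASSP fit their models to.
import numpy as np
from scipy.signal import lfilter

def batch_means(x, m):
    """Nonoverlapping batch means with batch size m."""
    k = len(x) // m
    return x[:k * m].reshape(k, m).mean(axis=1)

def smoothed_log_periodogram(y, window=4):
    """Average the raw periodogram over adjacent frequencies, then take logs."""
    y = y - y.mean()
    per = np.abs(np.fft.rfft(y))**2 / len(y)     # raw periodogram
    per = per[1:]                                # drop the zero frequency
    k = len(per) // window
    return np.log(per[:k * window].reshape(k, window).mean(axis=1))

rng = np.random.default_rng(0)
x = lfilter([1.0], [1.0, -0.8], rng.normal(size=200_000))   # autocorrelated test series
print(smoothed_log_periodogram(batch_means(x, 256))[:5])
```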
Monday 1:30 PM – 3:00 PM
Optimization, Importance Sampling, and
Ranking and Selection
Chair: Raghu Pasupathy (Virginia Tech)
On Choosing Parameters in
Retrospective-Approximation Algorithms for
Simulation-Optimization
Raghu Pasupathy (Virginia Tech)
Abstract:
The Simulation-Optimization (SO) problem is a
constrained optimization problem where the objective function is observed with
error, usually through an oracle such as a simulation. Retrospective
Approximation (RA) is a general technique that can be used to solve SO
problems. In RA, the solution to the SO problem is approached using solutions
to a sequence of approximate problems, each of which is generated using a
specified sample size and solved to a specified error tolerance. In this
paper, our focus is parameter choice in RA algorithms, where the term
parameter is broadly interpreted. Specifically, we present (i) conditions that
guarantee convergence of estimated solutions to the true solution; (ii)
convergence properties of the sample-size and error-tolerance sequences that
ensure that the sequence of estimated solutions converges to the true solution
in an optimal fashion; and (iii) a numerical procedure that efficiently solves
the generated approximate problems for one-dimensional SO.
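A minimal sketch of the RA loop follows, with illustrative (not recommended)
sample-size and tolerance sequences: each stage fixes a sample, solves the
resulting sample-path problem to the current tolerance, and warm-starts the
next stage from the solution just found.

```python
# Hedged sketch of a retrospective-approximation loop for a one-dimensional
# problem: minimize E[(x - Z)^2] with Z ~ N(1, 1), so the true solution is 1.
# The geometric sample-size growth and tolerance shrinkage are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def retrospective_approximation(stages=6, m0=50, growth=4, eps0=0.1, shrink=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, m, eps = 0.0, m0, eps0
    for _ in range(stages):
        z = rng.normal(loc=1.0, scale=1.0, size=m)       # fixed sample for this stage
        sample_obj = lambda t, z=z: np.mean((t - z)**2)  # sample-path problem
        res = minimize_scalar(sample_obj, bracket=(x - 1.0, x + 1.0),
                              method="brent", options={"xtol": eps})
        x, m, eps = res.x, m * growth, eps * shrink      # warm start, grow m, shrink eps
    return x

print(retrospective_approximation())   # should be close to 1.0
```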
Optimal Resource Allocation in Two Stage
Sampling of Input Distributions
Achal Bassamboo (Northwestern
University) and Sandeep Juneja (Tata Institute of Fundamental Research)
Abstract:
Consider a performance measure that is evaluated via
Monte Carlo simulation where input distributions to the underlying model may
involve two-stage sampling. The settings of interest include the case where,
in the first stage, physical samples from the distribution are collected.
In the second stage, Monte Carlo sampling is done from the observed empirical
distribution. We also consider the sampling-importance resampling (SIR)
algorithm. Here it is difficult to sample directly from the desired input
distribution, and these samples are generated in two stages. In the first
stage, a large number of samples are generated from a distribution convenient
from the sampling viewpoint. In the second, a resampling is done from the
samples generated in the first stage so that asymptotically the new samples
have the desired distribution. We discuss how to allocate computational and
other effort optimally between the two stages to minimize the resulting mean
square error of the estimator.
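A minimal sketch of the second setting, sampling-importance resampling,
appears below with an illustrative standard-normal target and a wider normal
proposal; how to split the budget between n1 and n2 is exactly the allocation
question the paper studies and is not answered here.

```python
# Hedged sketch of sampling-importance resampling (SIR): stage one draws from a
# convenient proposal, stage two resamples in proportion to the importance
# weights so the resample approximates the target distribution.
import numpy as np

def sir(target_logpdf, proposal_sampler, proposal_logpdf, n1, n2, rng):
    x = proposal_sampler(n1, rng)                    # stage 1: proposal draws
    logw = target_logpdf(x) - proposal_logpdf(x)     # log importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return rng.choice(x, size=n2, replace=True, p=w) # stage 2: weighted resample

rng = np.random.default_rng(0)
draws = sir(target_logpdf=lambda x: -0.5 * x**2,                    # N(0,1), unnormalized
            proposal_sampler=lambda n, rng: rng.normal(0.0, 3.0, size=n),
            proposal_logpdf=lambda x: -0.5 * (x / 3.0)**2,          # N(0,9), unnormalized
            n1=100_000, n2=10_000, rng=rng)
print(draws.mean(), draws.std())   # roughly 0 and 1
```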
Ranking and Selection with Multiple
“Targets”
Douglas Morrice (The University of Texas at Austin) and
John Butler (The Ohio State University)
Abstract:
Managers of large industrial projects often measure
performance by multiple attributes. In previous work, we developed a
multiattribute ranking and selection procedure to allow tradeoffs between
conflicting objectives. More recent developments in ranking and selection
incorporate the notion of “constraints”, or “targets”, that must be satisfied.
In this paper we demonstrate how some forms of single attribute utility
functions can be used to create a target or constraint. We re-analyze our
original problem under the assumption that there are unacceptable levels for
some attributes.
Monday 3:30 PM – 5:00 PM
Simulation and Applications
Chair: Soumyadip Ghosh (IBM T. J. Watson Research
Center)
Monitoring Variability of Autocorrelated Processes
Using Standardized Time Series Variance Estimators
Seong-Hee Kim
(Georgia Institute of Technology)
Abstract:
We consider the problem of monitoring variability of
autocorrelated processes. This paper combines variance estimation techniques
from the simulation literature with a control chart from the statistical
process control (SPC) literature. The proposed SPC method does not
require any assumptions on the distribution of the underlying process and uses
a variance estimate from each batch as a basic observation. The control limits
of the chart are determined analytically. The proposed chart is tested using
stationary processes with both normal and non-normal marginals.
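The sketch below only illustrates the monitored statistic, the sample variance
of each batch of an autocorrelated process; the paper derives its control
limits analytically, whereas here they are simply empirical quantiles of an
in-control run, and the AR(1) process and shift size are made up.

```python
# Hedged sketch: batch variances of an autocorrelated process as the monitored
# statistic, with illustrative empirical control limits (not the paper's).
import numpy as np
from scipy.signal import lfilter

def batch_variances(x, m):
    k = len(x) // m
    return x[:k * m].reshape(k, m).var(ddof=1, axis=1)

rng = np.random.default_rng(0)
phi, m = 0.7, 200
in_control = lfilter([1.0], [1.0, -phi], rng.normal(size=400_000))   # AR(1) process
v = batch_variances(in_control, m)
lcl, ucl = np.quantile(v, [0.001, 0.999])          # illustrative control limits
shifted = lfilter([1.0], [1.0, -phi], rng.normal(scale=2.0, size=m)) # variance shift
print(lcl, ucl, shifted.var(ddof=1))               # shifted batch lands above the UCL
```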
A Comparison of Sample-Path-Based
Simulation-Optimization and Stochastic Decomposition for Multi-Location
Transshipment Problems
Lei Zhao (Tsinghua University) and Suvrajeet
Sen (University of Arizona)
Abstract:
Because of its applicability, as well as its
generality, research in the area of simulation-optimization continues to
attract significant attention. These methods, most of which rely on
statistically motivated search techniques, are at their best when very little
is known about the structure of the function (e.g., function evaluations are
treated as "black-box" function-calls). In some applications such as the one
discussed in this paper, objective function values may be obtained through
linear/network flow optimization models. In such cases the objective function
may be convex, and very large instances can then be solved using stochastic
programming techniques. This paper presents a computational
case for using such techniques, whenever applicable.
Computing Worst-Case Tail Probabilities in Credit
Risk
Soumyadip Ghosh (IBM T. J. Watson Research Center) and Sandeep
Juneja (Tata Institute of Fundamental Research)
Abstract:
Simulation is widely used to measure credit risk in
portfolios of loans, bonds, and other instruments subject to possible default.
This analysis requires performing the difficult modeling task of capturing the
dependence between obligors adequately. Current methods assume a form for the
joint distribution of the obligors and match its parameters to given
dependence specifications, usually correlations. The value-at-risk measure (a
function of the tail quantiles of this distribution) is then evaluated. This procedure
is naturally limited by the form assumed, and might not approximate well the
"worst-case" possible over all joint distributions that match the given
specification. We propose a procedure that approximates the joint distribution
with chessboard distributions, and provides a sequence of improving
estimates that asymptotically approach this "worst-case" value-at-risk. We use
it to experimentally compare the quality of the estimates provided by the
earlier procedures.
Tuesday 8:30 AM – 10:00 AM
Optimization I
Chair: Wai Kin
Chan (Rensselaer Polytechnic Institute)
A Testbed of Simulation-Optimization
Problems
Raghu Pasupathy (Virginia Tech) and Shane G Henderson
(Cornell University)
Abstract:
We propose a testbed of simulation-optimization
problems. The purpose of the testbed is to encourage development and
constructive comparison of simulation-optimization techniques and algorithms.
We are particularly interested in increasing attention to the finite-time
performance of algorithms, rather than the asymptotic results that one often
finds in the literature.
Simulation-based Response Surface Computation in the
Presence of Monotonicity
Eunji Lim and Peter W. Glynn (Stanford
University)
Abstract:
In many stochastic models, it is known that the
response surface corresponding to a particular performance measure is monotone
in the underlying parameter. For example, the steady-state mean waiting time
for customers in a single server queue is known to be monotone in the service
rate. In other contexts, the simulator may believe, on the basis of intuition,
that the response surface is monotone. This paper describes an appropriate
methodology for incorporating such monotonicity constraints into one's
response surface estimator.
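As an illustration only (this is not the paper's estimator), the sketch below
enforces monotonicity on pointwise simulated means with the
pool-adjacent-violators algorithm; the queueing-style response and noise level
are invented for the example.

```python
# Hedged illustration: isotonic (nondecreasing) fit to noisy pointwise means
# via pool-adjacent-violators, one simple way to impose monotonicity.
import numpy as np

def pava_increasing(y, w=None):
    """Weighted nondecreasing fit to y via pool-adjacent-violators."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    merged = []                                   # blocks of [mean, weight, count]
    for v, wt in zip(y, w):
        merged.append([v, wt, 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2, c2 = merged.pop()             # pool adjacent violators
            v1, w1, c1 = merged.pop()
            tot = w1 + w2
            merged.append([(w1 * v1 + w2 * v2) / tot, tot, c1 + c2])
    return np.concatenate([[v] * c for v, _, c in merged])

# Noisy estimates of a monotone response (mean waiting time vs. traffic intensity).
rng = np.random.default_rng(0)
rho = np.linspace(0.1, 0.9, 9)
noisy = rho / (1 - rho) + rng.normal(scale=0.3, size=rho.size)
print(pava_increasing(noisy))                     # nondecreasing smoothed estimates
```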
Response Gradient Estimation Using Mathematical
Programming Models of Discrete-Event System Sample Paths
Wai Kin
(Victor) Chan (Rensselaer Polytechnic Institute) and Lee W. Schruben
(University of California, Berkeley)
Abstract:
This paper illustrates the use of mathematical
programming in computing gradient estimators. The consistency of these
estimators is established under the usual assumptions for IPA gradient
estimator consistency. A finite difference tolerance limit is introduced. For
complex discrete-event systems, more concise linear programming
representations are developed. These new representations provide a direct way
of calculating gradient estimates.
Tuesday 10:30 AM – 12:00 PM
Selection Procedures I
Chair: Tuba Aktaran-Kalayci (University at Buffalo, SUNY)
Simulation Selection Problems: Overview of an
Economic Analysis
Stephen E. Chick (INSEAD) and Noah Gans (OPIM
Department - Wharton School)
Abstract:
This paper summarizes a new approach that we recently
proposed for ranking and selection problems, one that maximizes the expected
NPV of decisions made when using stochastic or discrete-event simulation. The
expected NPV models not only the economic benefit from implementing a selected
system, but also the marginal costs of simulation runs and discounting due to
simulation analysis time. Our formulation assumes that facilities exist to
simulate a fixed number of alternative systems, and we pose the problem as a
"stoppable" Bayesian bandit problem. Under relatively general conditions, a
Gittins index can be used to indicate which system to simulate or implement.
We give an asymptotic approximation for the index that is appropriate when
simulation outputs are normally distributed with known but potentially
different variances for the different systems.
Selection and Multiple-Comparison Procedures for
Regenerative Systems
Marvin K Nakayama (New Jersey Institute of
Technology)
Abstract:
We propose two-stage methods for selection and multiple
comparisons with the best (MCB) of steady-state performance measures of
regenerative systems. We assume the systems being compared are simulated
independently, and the methods presented are asymptotically valid as the
confidence-interval width parameter shrinks and the first-stage run length
grows at a rate that is at most the inverse of the square of the
confidence-interval width parameter. When the first-stage run length is
asymptotically negligible compared to the total run length, our procedures are
asymptotically efficient. We provide an asymptotic comparison of our
regenerative MCB procedures with those based on standardized time series (STS)
methods in terms of mean and variance of total run length. We conclude that
regenerative MCB methods are strictly better than STS MCB methods for any
fixed number of batches, but the two become equivalent as the number of
batches grows large.
Integration of Statistical Selection with Search
Mechanism for Solving Multi-Objective Simulation-Optimization
Problems
Loo Hay Lee, Ek Peng Chew, and Suyan Teng (National
University of Singapore)
Abstract:
In this paper, we consider a multi-objective simulation
optimization problem with three features: a huge solution space, high
uncertainty in the performance measures, and multiple objectives that require
identifying a set of non-dominated solutions. Our main purpose is to study how to
integrate statistical selection with search mechanism to address the above
difficulties, and to present a general solution framework for solving such
problems. Here, due to the multi-objective nature, statistical selection is
done by the multi-objective computing budget allocation (MOCBA) procedure. For
illustration, MOCBA is integrated with two meta-heuristics, a multi-objective
evolutionary algorithm (MOEA) and nested partitions (NP), to identify the
non-dominated solutions for two inventory-management case study problems.
Results show that the integrated solution framework improves both search
efficiency and simulation efficiency. Moreover, it is capable of identifying a
set of non-dominated solutions with high confidence.
Tuesday 1:30 PM – 3:00 PM
Optimization II
Chair: Sujin Kim
(Purdue University)
On-Line Instrumentation for Simulation-Based
Optimization
Anna Persson, Henrik Grimm, and Amos Ng (Centre for
Intelligent Automation)
Abstract:
Traditionally, a simulation-based optimization (SO)
system is designed as a black-box in which the internal details of the
optimization process are hidden from the user and only the final optimization
solutions are presented. As the complexity of the SO systems and the
optimization problems to be solved increases, instrumentation – a technique
for monitoring and controlling the SO processes – is becoming more important.
This paper proposes a white-box approach by advocating the use of
instrumentation components in SO systems, based on a component-based
architecture. This paper argues that a number of advantages, including
efficiency enhancement, insight gained from the optimization trajectories, and
greater controllability of the SO processes, can be realized through an on-line
instrumentation approach. This argument is supported by the illustration of an
instrumentation component developed for an SO system designed for solving
real-world multi-objective operation scheduling problems.
Adaptation of the UOBYQA Algorithm for Noisy
Functions
Geng Deng and Michael C. Ferris (University of
Wisconsin-Madison)
Abstract:
In many real-world optimization problems, the objective
function may come from a simulation evaluation so that it is (a) subject to
various levels of noise, (b) not differentiable, and (c) computationally hard
to evaluate. In this paper, we modify Powell's UOBYQA algorithm to handle
those real-world simulation problems. Our modifications apply Bayesian
techniques to guide appropriate sampling strategies to estimate the objective
function. We aim to make the underlying UOBYQA algorithm proceed efficiently
while simultaneously controlling the amount of computational effort.
SPSA Algorithms with Measurement
Reuse
Mohammed Shahid Abdulla (Indian Institute of Science) and
Shalabh Bhatnagar (Indian Institute of Science)
Abstract:
Four algorithms, all variants of Simultaneous
Perturbation Stochastic Approximation (SPSA), are proposed. The original
one-measurement SPSA uses an estimate of the gradient of the objective function
L that contains an additional bias term not seen in two-measurement SPSA.
As a result, the asymptotic covariance matrix of the iterate convergence
process has a bias term. We propose a one-measurement algorithm that
eliminates this bias, and has asymptotic convergence properties making for
easier comparison with the two-measurement SPSA. The algorithm, under certain
conditions, outperforms both forms of SPSA with the only overhead being the
storage of a single measurement. We also propose a similar algorithm that uses
perturbations obtained from normalized Hadamard matrices. The convergence w.p.
1 of both algorithms is established. We extend measurement reuse to design two
second-order SPSA algorithms and sketch the convergence analysis. Finally, we
present simulation results on an illustrative minimization problem.
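For orientation, the sketch below implements standard two-measurement SPSA
with Bernoulli perturbations; the paper's one-measurement and measurement-reuse
variants are not reproduced, and the gain constants and the noisy quadratic
test problem are illustrative.

```python
# Hedged sketch of standard two-measurement SPSA: the gradient is estimated
# from two perturbed loss evaluations per iteration.
import numpy as np

def spsa(loss, x0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha                              # gain sequence
        ck = c / k**gamma                              # perturbation size
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Bernoulli +/-1 perturbation
        ghat = (loss(x + ck * delta) - loss(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * ghat
    return x

# Noisy quadratic with minimum at (1, -2); the noise stands in for simulation error.
rng = np.random.default_rng(1)
noisy_loss = lambda x: np.sum((x - np.array([1.0, -2.0]))**2) + rng.normal(scale=0.01)
print(spsa(noisy_loss, x0=[0.0, 0.0]))   # approaches (1, -2)
```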
Tuesday 3:30 PM – 5:00 PM
Estimation and Input Modeling
Chair: Hong Wan (Purdue University)
Experimental Evaluation of Integrated Path
Estimators
James Michael Calvin (New Jersey Institute of
Technology)
Abstract:
We describe the results of numerical experiments
evaluating the efficiency of variance estimators based on integrated sample
paths. The idea behind the estimators is to compute a vector of integrated
paths and combine them to form an estimator of the time-average variance
constant that is used, for example, in the construction of confidence
intervals. When used in conjunction with batching, the approach generalizes
the method of non-overlapping batch means. Compared with non-overlapping batch
means, the estimators take longer to compute and have smaller variance but
larger bias. We show that for long enough simulation run lengths, the
efficiency (the reciprocal of running time multiplied by mean-squared error)
of integrated path estimators can be much greater than that of non-overlapping
batch means; the numerical experiments show an efficiency improvement of up to
a factor of ten.
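The sketch below shows only the baseline the paper compares against, the
non-overlapping-batch-means estimator of the time-average variance constant;
the integrated-path estimators themselves are not reproduced, and the AR(1)
test process and batch count are illustrative.

```python
# Hedged sketch: nonoverlapping-batch-means (NBM) estimator of the
# time-average variance constant, sigma^2 = lim n * Var(sample mean).
import numpy as np
from scipy.signal import lfilter

def nbm_tavc(x, n_batches):
    m = len(x) // n_batches                    # batch size
    means = x[:n_batches * m].reshape(n_batches, m).mean(axis=1)
    return m * means.var(ddof=1)               # m * sample variance of batch means

# AR(1) test series with phi = 0.8; its true TAVC is 1 / (1 - phi)^2 = 25.
rng = np.random.default_rng(0)
x = lfilter([1.0], [1.0, -0.8], rng.normal(size=500_000))
print(nbm_tavc(x, n_batches=32))               # roughly 25, up to estimator noise
```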
Empirical Evaluation of Data-based Density
Estimation
E Jack Chen (BASF Corporation) and W David Kelton
(University of Cincinnati)
Abstract:
This paper discusses implementation of a sequential
procedure to estimate the steady-state density of a stochastic process. The
procedure computes sample densities at certain points and uses Lagrange
interpolation to estimate the density f(x). Even though the proposed
sequential procedure is a heuristic, it has a strong basis. Our empirical
results show that the procedure gives density estimates that satisfy a
pre-specified precision requirement. An experimental performance evaluation
demonstrates the validity of using the procedure to estimate densities.
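A hedged sketch of the basic ingredients follows: sample densities at a few
grid points, then Lagrange interpolation between them. The exponential test
process, grid, and bandwidth are illustrative, and the paper's sequential
precision control is not reproduced.

```python
# Hedged sketch: pointwise sample densities followed by Lagrange interpolation.
import numpy as np
from scipy.interpolate import lagrange

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)      # stand-in for steady-state output
grid = np.linspace(0.25, 3.0, 6)
h = 0.1
dens = np.array([np.mean(np.abs(x - g) < h) / (2 * h) for g in grid])  # sample densities
f_hat = lagrange(grid, dens)                      # interpolating polynomial
print(f_hat(1.0), np.exp(-1.0))                   # estimate vs. true Exp(1) density at x = 1
```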
Generating Multivariate Mixture of Normal
Distributions Using a Modified Cholesky Decomposition
Jin Wang and
Chunlei Liu (Valdosta State University)
Abstract:
A mixture of normals is a more general and flexible
distribution for modeling daily changes in market variables with fat tails
and skewness. An efficient analytical Monte Carlo method was proposed by Wang
and Taaffe for generating daily changes using a multivariate mixture of normal
distributions with an arbitrary covariance matrix. However, the usual Cholesky
decomposition fails if the covariance matrix is not positive definite. In
practice, the covariance matrix is unknown and has to be estimated, and the
estimated covariance matrix may not be positive definite. We propose a modified
Cholesky decomposition for semi-definite matrices and also suggest an optimal
semi-definite approximation for indefinite matrices.
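The sketch below is not the authors' modified decomposition; it illustrates
one common workaround with the same intent, clipping negative eigenvalues to
obtain a positive semidefinite approximation and then factoring with a small
jitter so standard Cholesky succeeds. The indefinite "estimated covariance" is
invented for the example.

```python
# Hedged sketch: eigenvalue clipping to a PSD approximation, then a jittered
# Cholesky factorization usable for generating correlated normals.
import numpy as np

def psd_approx_and_factor(cov, jitter=1e-10):
    cov = (cov + cov.T) / 2                       # symmetrize
    vals, vecs = np.linalg.eigh(cov)
    psd = vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T   # drop negative eigenvalues
    L = np.linalg.cholesky(psd + jitter * np.eye(len(cov)))
    return psd, L

# An indefinite "estimated covariance": plain Cholesky would fail on it.
c = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.9],
              [0.7, 0.9, 0.3]])
psd, L = psd_approx_and_factor(c)
z = np.random.default_rng(0).normal(size=(3, 10_000))
print(np.cov(L @ z))                              # close to the PSD approximation
```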
Wednesday 8:30 AM – 10:00 AM
Selection Procedures II
Chair: Roy Creasey, Jr. (Longwood University)
Combined Ranking and Selection with Control
Variates
Shing Chih Tsai and Barry L Nelson (Northwestern
University)
Abstract:
Nelson and Staum derived R&S procedures that employ
control-variate estimators (CVs) instead of sample means to obtain greater
statistical efficiency. However, control-variate estimators require more
computational effort than sample means, and effective controls must be
identified. We present a new CV screening procedure to avoid much of the
computational cost. We also present a combined two-stage CV procedure that
retains the ability to eliminate inferior systems in the first stage and the
statistical efficiency of control variates for selection in the second stage.
An empirical evaluation is provided.
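For context, the sketch below shows only the basic control-variate estimator
such procedures build on, replacing the sample mean of Y with the adjusted
mean Y_bar - beta*(X_bar - mu_X) for a control X with known mean; the
screening and selection logic of the paper is not reproduced, and the
queue-flavored example data are illustrative.

```python
# Hedged sketch: a single-control control-variate estimator of E[Y].
import numpy as np

def cv_estimate(y, x, mu_x):
    beta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)   # estimated optimal coefficient
    return y.mean() - beta * (x.mean() - mu_x)

rng = np.random.default_rng(0)
u = rng.exponential(1.0, size=1_000)          # control with known mean 1.0
y = u + rng.normal(scale=0.5, size=u.size)    # correlated simulation output, E[Y] = 1
print(y.mean(), cv_estimate(y, u, mu_x=1.0))  # the CV estimate has lower variance
```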
Comparison of Limit Standards Using a Sequential
Probability Ratio Test
Roy R Creasey Jr (Longwood University) and
K. Preston White (University of Virginia)
Abstract:
We continue our research into the comparison, via
simulation experiments, of a stochastic system to a limit standard. A limit
standard is defined as a maximum and/or minimum standard. We have found that
evaluation methods using proportions provide a statistically valid comparison
of this family of standards. In this paper, we present the use of Wald's
sequential probability ratio test (SPRT) as a comparison method. We define a
fully sequential analysis procedure and present initial test results.
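The sketch below implements the textbook form of Wald's SPRT for a proportion;
the framing of each observation as "does this replication violate the limit
standard?" and the hypothesized rates are illustrative, and the paper's full
comparison procedure is not reproduced.

```python
# Hedged sketch of Wald's SPRT for Bernoulli observations: accumulate the
# log-likelihood ratio of H1: p = p1 vs. H0: p = p0 until a boundary is crossed.
import numpy as np

def sprt_bernoulli(obs, p0, p1, alpha=0.05, beta=0.05):
    upper = np.log((1 - beta) / alpha)      # cross above: accept H1
    lower = np.log(beta / (1 - alpha))      # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(obs, start=1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(obs)

rng = np.random.default_rng(0)
# Each observation: does the replication violate the limit standard? (true rate 0.25)
print(sprt_bernoulli(rng.random(10_000) < 0.25, p0=0.10, p1=0.20))
```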
Performance Evaluations of Comparison-with-a-standard
Procedures
E Jack Chen (BASF Corporation)
Abstract:
We investigate the performance of a heuristic
sequential procedure to compare a finite number of designs with respect to a
single standard. The goal is to identify the best design, and if the chosen
best design is not the standard, to determine whether the chosen best design
is better than the standard. We give preferential status to the standard
because replacing it involves costs and time. We
accomplish this goal by extending indifference-zone selection procedures. An
experimental performance evaluation demonstrates the validity and efficiency
of our sequential procedures.
Wednesday 10:30 AM – 12:00 PM
Experiment Design
Chair: Russell Barton (The Pennsylvania State University)
A Polytope Method for Estimating Factor Main
Effects
Bruce Ankenman (Northwestern University) and Russell Cheng
and Susan Lewis (University of Southampton)
Abstract:
Consider the problem of identifying important factors
influencing a response in a simulation experiment where the number of factors
is large. When the direction of the effect of factors is known, the method of
sequential bifurcation is effective for quickly removing noninfluential
factors. Though effective, the method is not fully efficient, in that not all
of the available information is used. We present a method based on a
polytope construction that makes use of all available information and which is
therefore more efficient. In this paper we focus on the deterministic case to
highlight its theoretical foundation. The method can, however, be extended to
the stochastic case. Numerical examples comparing the new method with
sequential bifurcation demonstrate its improved performance.
A Two-Phase Maxi-Min Algorithm for Forward-Inverse
Experiment Design
Russell R. Barton (The Pennsylvania State
University)
Abstract:
In customer-driven design of systems or products, one
has performance targets in mind and would like to identify system design
parameters that yield the target performance vector. Since most simulation
models predict performance given design parameter values, this identification
must be done iteratively through an optimization search procedure. In some
cases it would be preferable to find design parameter values directly via an
explicit inverse model. Regression and other forms of approximation
'metamodels' provide estimates of simulation model outputs as a function of
design parameters. It is possible to design fitting experiments (DOEs) that
allow simultaneous fitting of both forward and inverse metamodels. This paper
discusses the potential of this strategy and presents a simple two-phase DOE
strategy using a maxi-min measure of DOE quality.
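As a generic illustration of the criterion only, the sketch below scores
candidate designs by a maximin-distance measure, preferring the design whose
closest pair of points is farthest apart; the paper's two-phase
forward-inverse construction is not modeled, and the design size and candidate
pool are arbitrary.

```python
# Hedged sketch: pick the candidate design with the best maximin interpoint distance.
import numpy as np
from itertools import combinations

def maximin_score(design):
    """Smallest pairwise distance in the design; larger is better."""
    return min(np.linalg.norm(a - b) for a, b in combinations(design, 2))

rng = np.random.default_rng(0)
candidates = [rng.uniform(size=(8, 2)) for _ in range(200)]  # random 8-point designs in [0,1]^2
best = max(candidates, key=maximin_score)
print(maximin_score(best))
```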
A Hybrid Method for Simulation Factor
Screening
Hua Shen and Hong Wan (Purdue University)
Abstract:
Factor screening is performed to eliminate unimportant
factors so that the remaining important factors can be more thoroughly studied
in later experiments. Controlled Sequential Bifurcation (CSB) and Controlled
Sequential Factorial Design (CSFD) are two new screening methods for
discrete-event simulations. Both methods use hypothesis testing procedures to
control the Type I error and the power of the screening results. The scenarios
in which each method is most efficient are complementary. This paper proposes
a two-stage hybrid approach that combines CSB and CSFD. The new method usually
has the same error control as CSB and CSFD; its efficiency, on the other hand,
is usually much better than that of either component method.