WSC 2004 Final Abstracts
Sunday 1:00 PM - 2:30 PM
Simulation Optimization I
Chair: Russell Cheng (University of Southampton)
Optimization by Simulation Metamodelling Methods
Russell C.H. Cheng and Christine S.M. Currie (University of Southampton)
Abstract:
We
consider how simulation metamodels can be used to optimize the performance
of a system that depends on a number of factors. We focus on the situation
where the number of simulation runs that can be made is limited, and where
a large number of factors must be included in the metamodel. Bayesian methods
are particularly useful in this situation and can handle problems for which
classical stochastic optimization can fail. We describe the basic Bayesian
methodology, and then an extension to this that fits a quadratic response
surface which, for function minimization, is guaranteed to be positive definite.
An example is presented to illustrate the methods proposed in this paper.
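For intuition, here is a minimal non-Bayesian sketch of a quadratic metamodel whose Hessian is forced to be positive definite (by eigenvalue clipping), so the fitted surface has a unique minimizer; the function and variable names are illustrative assumptions, and this is not the Bayesian construction developed in the paper.

```python
# Illustrative sketch only (not the authors' Bayesian method): fit a quadratic
# metamodel y ~ b0 + b'x + x'Qx by least squares on simulated (x, y) pairs,
# then clip the eigenvalues of Q so the fitted surface is positive definite.
# Assumes enough design points for the least-squares fit to be well determined.
import numpy as np

def fit_pd_quadratic(X, y, eps=1e-6):
    n, d = X.shape
    # design matrix: intercept, linear terms, and the d*(d+1)/2 quadratic terms
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    idx = [(i, j) for i in range(d) for j in range(i, d)]
    cols += [X[:, i] * X[:, j] for i, j in idx]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    b0, b = beta[0], beta[1:1 + d]
    Q = np.zeros((d, d))
    for k, (i, j) in enumerate(idx):
        Q[i, j] = beta[1 + d + k] / (1.0 if i == j else 2.0)
        Q[j, i] = Q[i, j]
    # enforce positive definiteness by clipping eigenvalues at eps
    w, V = np.linalg.eigh(Q)
    Q = V @ np.diag(np.maximum(w, eps)) @ V.T
    x_star = np.linalg.solve(2.0 * Q, -b)   # unique minimizer of the fitted surface
    return b0, b, Q, x_star
```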
Automated Response Surface Methodology for Stochastic Optimization Models with Unknown Variance
Robin P. Nicolai, Rommert Dekker, Nanda Piersma, and Gerrit J. van Oortmarssen (Erasmus University Rotterdam)
Abstract:
Response Surface Methodology (RSM) is an optimization tool that was introduced in the early 1950s by Box and Wilson (1951). In this paper we are interested in finding the best settings for an automated RSM procedure when there is very little information about the objective function. We present a framework for RSM procedures that is founded on recognizing local optima in the presence of noise. We emphasize both stopping rules and restart procedures. The results show that considerable improvement is possible over the settings proposed in the existing literature.
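As an illustration of the kind of step such a procedure automates, the following sketch performs one first-order RSM iteration (a two-level factorial design, a linear fit, and a steepest-descent move) on a noisy objective to be minimized; the simulate(x) function, step sizes, and replication counts are hypothetical, and the paper's stopping and restart rules are not reproduced.

```python
# Minimal sketch of a single first-order RSM step; all names and constants are
# illustrative assumptions.
import numpy as np
from itertools import product

def rsm_first_order_step(simulate, center, radius=0.5, step=0.25, reps=5):
    center = np.asarray(center, dtype=float)
    d = len(center)
    X, y = [], []
    for corners in product([-1.0, 1.0], repeat=d):        # 2^d factorial design
        x = center + radius * np.asarray(corners)
        X.append(np.r_[1.0, corners])
        y.append(np.mean([simulate(x) for _ in range(reps)]))
    beta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    grad = beta[1:]                                        # local linear trend
    direction = -grad / (np.linalg.norm(grad) + 1e-12)     # steepest descent
    return center + step * radius * direction

# usage: new_center = rsm_first_order_step(my_simulation, x0)
```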
Stochastic Approximation with Simulated Annealing as an Approach to Global Discrete-Event Simulation Optimization
Matthew H. Jones and K. Preston White Jr. (University of Virginia)
Abstract:
This paper explores a new approach to global stochastic simulation optimization that combines stochastic approximation (SA) with simulated annealing (SAN).
SA directs a search of the response surface efficiently, using a conservative
number of simulation replications to approximate the local gradient of a
probabilistic loss function. SAN adds a random component to the SA search,
needed to escape local optima and forestall premature termination. Using
a limited set of simple test problems, we compare the performance of SA/SAN
with the commercial package OptQuest. Results demonstrate that SA/SAN can
outperform OptQuest when properly tuned. Further results demonstrate that
a multi-start approach not only greatly improves the coverage and robustness
of SA/SAN, but also provides insights potentially useful in directing iterative
improvement of the gain coefficients before each new start. This preliminary
study is sufficiently encouraging to invite further research on the SA/SAN
approach.
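A hedged sketch of the general idea (not the authors' tuned algorithm): a finite-difference stochastic-approximation step on a noisy objective, plus an annealed random perturbation to help the search escape local optima. The simulate(x) function and all gain sequences are assumptions.

```python
# Sketch of combining stochastic approximation with an annealed random
# perturbation; gain sequences and the temperature schedule are illustrative.
import numpy as np

def sa_san(simulate, x0, iters=200, a=0.5, c=0.2, temp0=1.0, reps=3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        a_k, c_k = a / k, c / k ** 0.25          # standard SA gain sequences
        temp = temp0 / np.log(k + 1.0)           # annealing "temperature"
        grad = np.zeros_like(x)
        for i in range(len(x)):                  # two-sided finite differences
            e = np.zeros_like(x)
            e[i] = c_k
            up = np.mean([simulate(x + e) for _ in range(reps)])
            dn = np.mean([simulate(x - e) for _ in range(reps)])
            grad[i] = (up - dn) / (2.0 * c_k)
        # gradient step plus annealed Gaussian noise (the "SAN" component)
        x = x - a_k * grad + np.sqrt(temp) * a_k * rng.standard_normal(len(x))
    return x
```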
Sunday 3:00 PM - 4:30 PM
Simulation Optimization II
Chair: Bruce Schmeiser (Purdue University)
A Meta-Heuristic Based on Simulated Annealing for Solving Multiple-Objective Problems in Simulation Optimization
Eduardo Alberto Avello, Felipe F. Baesler, and Reinaldo J. Moraga (Universidad del BioBio)
Abstract:
This paper presents a new meta-heuristic algorithm based on the simulated annealing search method, and its application to solving multi-objective simulation optimization problems. Since simulated annealing has been extensively applied as a modern heuristic to solve single-objective simulation optimization problems, a modification to this method has been developed in order to solve multi-objective problems. The efficiency of this new algorithm was tested on a real case problem modeled with discrete simulation.
Simulation-Based Optimization Using Simulated Annealing With Confidence Interval
Talal M. Alkhamis and Mohamed A. Ahmed (Kuwait University)
Abstract:
This paper develops a variant of the Simulated Annealing (SA) algorithm for solving discrete stochastic optimization problems where the objective function is stochastic and can be evaluated only through Monte Carlo simulation. In the proposed variant of SA, the Metropolis criterion depends on whether the objective function values indicate a statistically significant difference at each iteration. The differences between objective function values are considered statistically significant based on confidence intervals associated with these values. Unlike the original SA, our method uses a constant temperature. We show that the configuration that has been visited most often in the first m iterations converges almost surely to a global optimizer.
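A simplified sketch of annealing with a confidence-interval-based acceptance test at a constant temperature; the neighbors(x, rng) proposal, the replication count, and the exact acceptance rule are illustrative assumptions rather than the paper's procedure, and designs are assumed hashable (e.g., tuples) so visits can be counted.

```python
# Simplified sketch: accept a proposal outright if its CI shows a significant
# improvement, accept significantly worse moves with Metropolis probability at
# constant temperature, and report the most-visited configuration.
import numpy as np

def sa_ci(simulate, neighbors, x0, iters=500, reps=10, z=1.96, temp=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, visits = x0, {}
    for _ in range(iters):
        y = neighbors(x, rng)                        # propose a neighboring design
        fx = np.array([simulate(x) for _ in range(reps)])
        fy = np.array([simulate(y) for _ in range(reps)])
        diff = fy.mean() - fx.mean()
        half = z * np.sqrt(fx.var(ddof=1) / reps + fy.var(ddof=1) / reps)
        if diff + half < 0:                          # significantly better: accept
            x = y
        elif diff - half > 0:                        # significantly worse:
            if rng.random() < np.exp(-diff / temp):  # accept with Metropolis prob.
                x = y
        # otherwise the difference is not significant; keep the current design
        visits[x] = visits.get(x, 0) + 1
    return max(visits, key=visits.get)               # most-visited configuration
```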
Retrospective Approximation Algorithms for the Multidimensional Stochastic Root-Finding Problem
Raghu Pasupathy and Bruce W. Schmeiser (Purdue University)
Abstract:
The
stochastic root-finding problem (SRFP) is that of solving a system of q equations
in q unknowns using only an oracle that provides estimates of the function
values. This paper presents a family of algorithms to solve the multidimensional
(q >= 1) SRFP, generalizing Chen and Schmeiser's one-dimensional retrospective
approximation (RA) family of algorithms. The fundamental idea used in the
algorithms is to generate and solve, with increasing accuracy, a sequence
of approximations to the SRFP. We focus on a specific member of the family,
called the Bounding RA algorithm, which finds a sequence of polytopes that
progressively decrease in size while approaching the solution. The algorithm
converges almost surely and exhibits good practical performance with no user
tuning of parameters, but no convergence proofs or numerical results are
included here.
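A minimal one-dimensional retrospective-approximation sketch under stated assumptions: at stage k a sample of size m_k is fixed and the resulting deterministic root-finding problem is solved to a tolerance that shrinks as m_k grows. The Bounding RA polytopes, warm starting, and solution averaging of the paper are omitted, and all names are hypothetical.

```python
# One-dimensional RA sketch: solve a sequence of sample-average root-finding
# problems with increasing sample sizes and decreasing tolerances. Assumes the
# sample-average function changes sign on [lo, hi].
import numpy as np
from scipy.optimize import brentq

def retrospective_approximation(G, lo, hi, stages=6, m0=50, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.5 * (lo + hi)
    for k in range(stages):
        m = m0 * 2 ** k                                   # increasing sample size
        xi = rng.random(m)                                # fixed sample for stage k
        g_bar = lambda t: np.mean([G(t, u) for u in xi])  # sample-average function
        eps = 1.0 / np.sqrt(m)                            # shrinking tolerance
        x = brentq(g_bar, lo, hi, xtol=eps)               # solve stage-k problem
    return x

# example: root of E[G(x, U)] = x - 2*U over U ~ Uniform(0,1) (true root 1.0)
# print(retrospective_approximation(lambda x, u: x - 2 * u, 0.0, 3.0))
```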
Monday 10:30 AM - 12:00 PM
Simulation Optimization III
Chair: Sigrun Andradottir (Georgia Tech)
Global Likelihood Optimization Via the Cross-Entropy Method, with an Application to Mixture Models
Zdravko Botev and Dirk P Kroese (The University of Queensland)
Abstract:
Global likelihood maximization is an important aspect of many
statistical analyses. Often the
likelihood function is highly multi-extremal. This presents a significant
challenge to standard search procedures, which often settle too quickly into
an inferior local maximum. We present a new approach based on the
cross-entropy (CE) method, and illustrate its use for the analysis
of mixture models.
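A hedged sketch of the cross-entropy idea for maximizing a multi-extremal function: sample parameter vectors from a Gaussian, keep the elite fraction, and refit the Gaussian to the elites. The loglik callable and the toy objective in the comment are assumptions; the paper's mixture-model machinery is not shown.

```python
# Generic continuous cross-entropy maximization sketch (illustrative names).
import numpy as np

def ce_maximize(loglik, dim, iters=50, pop=200, elite_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        theta = mu + sigma * rng.standard_normal((pop, dim))   # candidate parameters
        scores = np.array([loglik(t) for t in theta])
        elite = theta[np.argsort(scores)[-n_elite:]]           # best candidates
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# toy multi-extremal objective (global maximum at the origin), not a mixture likelihood:
# print(ce_maximize(lambda t: np.sum(np.cos(3 * t) - 0.1 * t ** 2), dim=2))
```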
Efficient Simulation-Based Discrete Optimization
Seth D. Guikema, Rachel A. Davidson, and Zehra Çağnan (Cornell University)
Abstract:
In
many practical applications of simulation it is desirable to optimize the
levels of integer or binary variables that are inputs for the simulation
model. In these cases, the objective function must often be estimated through
an expensive simulation process, and the optimization problem is NP-hard,
leading to a computationally difficult problem. We investigate efficient
solution methods for this problem, and we propose an approach that reduces
the number of runs of the simulation by using ridge regression to approximate
some of the simulation calls. This approach is shown to significantly decrease the computational cost, at the price of slightly worse solution values.
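One way to picture the surrogate idea (hypothetical names, not the authors' procedure): fit a ridge-regression model to the simulation runs made so far and use its predictions to decide which candidate settings still deserve an expensive simulation call.

```python
# Ridge surrogate sketch: closed-form ridge fit, then screen candidates so only
# the most promising ones are simulated. All names are illustrative assumptions.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    X = np.column_stack([np.ones(len(X)), X])       # add an intercept column
    P = np.eye(X.shape[1])
    P[0, 0] = 0.0                                   # do not penalize the intercept
    return np.linalg.solve(X.T @ X + lam * P, X.T @ y)

def ridge_predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def screen_candidates(simulate, X_run, y_run, candidates, keep=5):
    cand = np.asarray(candidates, dtype=float)
    beta = ridge_fit(np.asarray(X_run, float), np.asarray(y_run, float))
    preds = ridge_predict(beta, cand)
    best = np.argsort(preds)[:keep]                  # lowest predicted objective
    return [(tuple(cand[i]), simulate(cand[i])) for i in best]
```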
Simulation Optimization Using Balanced Explorative and Exploitative Search
Andrei A. Prudius and Sigrun Andradottir (Georgia Institute of Technology)
Abstract:
We
present a new random search method for solving simulation optimization problems.
Our approach emphasizes the need for maintaining the right balance between
exploration and exploitation during various stages of the search. Exploration
represents global search for promising solutions within the entire feasible
region, while exploitation involves local search of promising subregions.
Preliminary numerical results are provided that show the performance of the
method applied to solve deterministic and stochastic optimization problems.
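A minimal sketch of balancing exploration and exploitation in a random search over a box (hypothetical names, not the authors' procedure): explore by sampling uniformly over the whole feasible region, exploit by sampling near the incumbent, and average replications to cope with noise.

```python
# Explore/exploit random search sketch for minimization over box constraints.
import numpy as np

def balanced_random_search(simulate, lower, upper, iters=300, p_explore=0.3,
                           local_scale=0.05, reps=5, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x = rng.uniform(lower, upper)
    best_y = np.mean([simulate(best_x) for _ in range(reps)])
    for _ in range(iters):
        if rng.random() < p_explore:                 # global exploration
            x = rng.uniform(lower, upper)
        else:                                        # local exploitation near the incumbent
            x = best_x + local_scale * (upper - lower) * rng.standard_normal(len(lower))
            x = np.clip(x, lower, upper)
        y = np.mean([simulate(x) for _ in range(reps)])
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```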
Monday 1:30 PM - 3:00 PM
Simulation Methodology and Application
Chair: John Shortle (George Mason University)
Solving Stochastic Mathematical Programs with Complementarity Constraints Using Simulation
S. Ilker Birbil (Erasmus University Rotterdam) and Gul Gurkan and Ovidiu Listes (Tilburg University)
Abstract:
Recently, simulation-based
methods have been successfully used for solving challenging stochastic optimization
problems and equilibrium models. Here we report some recent progress in broadening the applicability of the so-called sample-path method
to include the solution of certain stochastic mathematical programs with
equilibrium constraints. We first describe the method and the class of stochastic
mathematical programs with complementarity constraints that we are interested
in solving and then outline a set of sufficient conditions for its almost-sure
convergence. We also illustrate an application of the method to solving a
toll pricing problem in transportation networks. These developments also
make it possible to solve certain stochastic bilevel optimization problems
and Stackelberg games, involving expectations or steady-state functions,
using simulation.
Make-to-Stock Systems with Backorders: IPA Gradients
Yao Zhao and Benjamin Melamed (Rutgers University)
Abstract:
We consider single-stage, single-product Make-to-Stock systems
with random demand and random service (production) rate,
where demand shortages at the inventory facility are backordered.
The Make-to-Stock system is modeled as a stochastic fluid model (SFM),
in which the traditional discrete arrival, service and departure stochastic processes
are replaced by corresponding stochastic fluid-flow rate processes.
IPA (Infinitesimal Perturbation Analysis) gradients of various performance metrics
are derived with respect to parameters of interest (here, base-stock level and production rate),
and are shown to be unbiased and easy to compute.
The (random) IPA gradients are obtained via sample path analysis under very mild
assumptions, and are inherently nonparametric in the sense that no specific probability law
need be postulated. The formulas derived can be used in simulation, as well
as in real-life systems.
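For readers new to IPA, here is a hedged illustration on a much simpler system than the paper's stochastic fluid model: differentiating the Lindley waiting-time recursion of a single-server FIFO queue with service times S_n = theta * X_n along the sample path. All names are assumptions; this is not the make-to-stock derivation.

```python
# IPA sketch on a single-server FIFO queue (not the paper's SFM model):
# the sample-path derivative of the waiting time propagates through busy
# periods and resets during idle periods.
import numpy as np

def ipa_waiting_time_gradient(theta, n=100_000, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w, dw = 0.0, 0.0                      # waiting time and its derivative w.r.t. theta
    grads = np.empty(n)
    for i in range(n):
        x = rng.exponential(1.0)          # base service requirement X_n
        a = rng.exponential(1.0 / lam)    # interarrival time A_n
        delta = w + theta * x - a
        if delta > 0.0:                   # customer waits: derivative propagates
            w, dw = delta, dw + x         # dS_n/dtheta = X_n
        else:                             # idle period: derivative resets to zero
            w, dw = 0.0, 0.0
        grads[i] = dw
    return grads.mean()                    # IPA estimate of d E[W] / d theta

# sanity check for a stable setting (lam * theta < 1):
# print(ipa_waiting_time_gradient(theta=0.5))
```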
Generating Scheduling Constraints for Discrete Event Dynamic Systems
Wai Kin Chan and Lee W. Schruben (University of California, Berkeley)
Abstract:
In
most scheduling literature, constraints are seemingly generated in an ad-hoc
manner using intuitive arguments. This could result in overlooking some
constraints or including unnecessary constraints. Schruben (2000) has shown
how the dynamics of some discrete event systems can be modeled as the solutions
of optimization programs. In this paper, we use this idea to generate mathematical
programming models systematically for scheduling resources in discrete event
dynamic systems. Two examples are presented: a multiple server queue and
a semiconductor manufacturing cluster tool. An interesting result was that
the mathematical structure of the scheduling program generated from a simulation
of a cluster tool found in the literature leads to a different, more concise
and illuminating cluster tool simulation model that would have been difficult
to discover otherwise. The corresponding optimal scheduling problem is surprising
in that it does not include explicit representation of the resource that
is actually being scheduled!
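The underlying idea that event dynamics solve an optimization program can be seen in a small example: for a single-server FIFO queue, the departure times are the minimal solution of simple linear constraints, so a linear program reproduces the usual recursion. The cluster-tool formulation in the paper is more elaborate; the code below covers only this textbook case.

```python
# Departure times of a single-server FIFO queue as the solution of a linear
# program: minimize sum(D) subject to D_i >= A_i + S_i and D_i >= D_{i-1} + S_i.
import numpy as np
from scipy.optimize import linprog

def departures_via_lp(arrivals, services):
    n = len(arrivals)
    c = np.ones(n)                            # minimize the sum of departure times
    A_ub, b_ub = [], []
    for i in range(n):
        row = np.zeros(n)
        row[i] = -1.0                         # -D_i <= -(A_i + S_i)
        A_ub.append(row)
        b_ub.append(-(arrivals[i] + services[i]))
        if i > 0:
            row = np.zeros(n)
            row[i], row[i - 1] = -1.0, 1.0    # D_{i-1} - D_i <= -S_i
            A_ub.append(row)
            b_ub.append(-services[i])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * n)
    return res.x

def departures_via_recursion(arrivals, services):
    d, out = 0.0, []
    for a, s in zip(arrivals, services):
        d = max(a, d) + s                     # standard event-driven recursion
        out.append(d)
    return np.array(out)

# both functions agree (up to solver tolerance), e.g. arrivals [0,1,2], services [2,2,2]
```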
Monday 3:30 PM - 5:00 PM
Efficient Simulation Budget Allocation
Chair: Chun-Hung Chen (George Mason University)
A Large Deviations Perspective on Ordinal Optimization
Peter Glynn (Stanford University) and Sandeep Juneja (Tata Institute of Fundamental Research)
Abstract:
We consider the problem of optimal allocation of a computing budget to maximize the probability of correct selection in the ordinal optimization setting. This problem has been studied in the literature in an approximate mathematical framework under the assumption that the underlying random variables have a Gaussian
distribution. We use large deviations theory to develop a
mathematically rigorous framework to determine optimal
allocation of computing resources even when the underlying
variables have general, non-Gaussian distributions. Further, in a
simple setting we show that when there exists an indifference
zone, quick stopping rules may be developed that exploit
exponential decay rates of the probability of false selection.
In practice, the distributions of underlying variables are
estimated from generated samples leading to performance
degradation due to estimation errors. On a positive note, we show
that corresponding estimates of optimal allocations converge
to their true values as the number of generated samples
increases to infinity.
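For reference, a sketch of the classical OCBA allocation under the Gaussian framework that the abstract cites as prior work (not the large-deviations allocation developed here); inputs are sample means and standard deviations, output is the fraction of the budget per design, and the helper name is hypothetical.

```python
# Classical OCBA allocation sketch: N_i proportional to (sigma_i / delta_i)^2 for
# non-best designs, and N_b = sigma_b * sqrt(sum_i (N_i / sigma_i)^2) for the best.
# Assumes distinct sample means (nonzero gaps) and that smaller is better.
import numpy as np

def ocba_fractions(means, stds):
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                       # current best design
    frac = np.zeros_like(means)
    for i in range(len(means)):
        if i != b:
            frac[i] = (stds[i] / (means[i] - means[b])) ** 2
    others = np.arange(len(means)) != b
    frac[b] = stds[b] * np.sqrt(np.sum((frac[others] / stds[others]) ** 2))
    return frac / frac.sum()                        # budget fraction per design

# example: ocba_fractions(means=[1.0, 1.2, 1.5, 2.0], stds=[1.0, 1.0, 1.0, 1.0])
```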
Optimal Computing Budget Allocation for Multi-Objective Simulation Models
Loo Hay Lee, Ek Peng Chew, and Suyan Teng (National University of Singapore)
and David Goldsman (Georgia Institute of Technology)
Abstract:
Simulation
plays a vital role in identifying the best system design from among a set
of competing designs. To improve simulation efficiency, ranking and selection
techniques are often used to determine the number of simulation replications
required so that a pre-specified level of correct selection is guaranteed
at a modest computational expense. As most real-life systems are
multi-objective in nature, in this paper, we consider a multi-objective ranking
and selection problem, where the system designs are evaluated in terms of
more than one performance measure. We incorporate the concept of Pareto optimality
into the ranking and selection scheme, and try to find all of the non-dominated
designs rather than a single best one. A simple sequential solution method
is proposed to allocate the simulation replications. Computational results
show that the proposed algorithm is efficient in terms of the total number
of replications needed to find the Pareto set.
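A small helper illustrating the Pareto-optimality notion the procedure targets (hypothetical names): given estimated performance measures for each design, return the observed non-dominated designs. The paper's replication-allocation rule is not reproduced.

```python
# Observed Pareto set: rows are designs, columns are objectives (all minimized).
import numpy as np

def observed_pareto_set(perf):
    perf = np.asarray(perf, dtype=float)
    nondominated = []
    for i in range(perf.shape[0]):
        others = np.delete(perf, i, axis=0)
        # design i is dominated if some other design is <= on all objectives
        # and strictly < on at least one
        dominated = np.any(np.all(others <= perf[i], axis=1) &
                           np.any(others < perf[i], axis=1))
        if not dominated:
            nondominated.append(i)
    return nondominated

# example: observed_pareto_set([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0]]) -> [0, 1]
```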
Optimal Computing Budget Allocation Under Correlated Sampling
Michael C. Fu (University of Maryland), Jian-Qiang Hu (Boston University),
Chun-Hung Chen (George Mason University) and Xiaoping Xiong (University
of Maryland)
Abstract:
We consider the optimal computing budget allocation (OCBA) problem
where the simulated designs are correlated.
The exact optimal allocation is presented for two designs,
and an approximation proposed for more than two designs.
Numerical experiments for several examples compare the approximation
with some other allocation procedures.
Results on convergence rates and the non-Gaussian setting are also discussed.
Tuesday 8:30 AM - 10:00 AM
Adaptive Sampling
Chair: Shane Henderson (Cornell University)
Function-Approximation-Based Importance Sampling for Pricing American Options
Nomesh Bolia and Sandeep Juneja (Tata Institute of Fundamental Research) and Paul Glasserman (Columbia Business School)
Abstract:
Monte Carlo
simulation techniques that use function approximations have been successfully
applied to approximately price multi-dimensional American
options. However, for many pricing problems the time required to get
accurate estimates can still be prohibitive, and this motivates the
development of variance reduction techniques.
In this paper, we describe a zero-variance
importance sampling measure for American options. We
then discuss how function approximation may be used to
approximately learn this measure; we test this idea in
simple examples. We also note that the zero-variance
measure is fundamentally connected to a duality result
for American options. While our methodology is
geared towards developing an estimate of an accurate
lower bound for the option price, we observe that importance
sampling also reduces variance in estimating the upper bound
that follows from the duality.
An Adaptive Approach to Fast Simulation of Traffic Groomed Optical Networks
Chih-Chieh Hsu and Michael Devetsikiotis (North Carolina State University)
Abstract:
In this paper, we consider the fast simulation of traffic groomed
optical networks, in which multiple sub-rate traffic streams may
be carried on the same wavelength. For real-sized systems, call
blocking probabilities may be rare events and require a large
amount of CPU power to simulate. We present two importance
sampling methods for fast simulation. For a light-load case, we
prove that static IS using the Standard Clock (S-ISSC) method does
indeed have bounded relative error (BRE) even in the multi-class case. For the non-light-load case, we suggest, alternatively, adaptive ISSC (A-ISSC), which calculates the relative likelihood of reaching each target at every step of the simulation.
Using A-ISSC, biasing methods which are proven to be optimal or
have bounded error can be extended to multi-dimensional cases
while still retaining a very favorable performance.
Adaptive Control Variates
Sujin Kim and Shane G. Henderson (Cornell University)
Abstract:
Adaptive
Monte Carlo methods are specialized Monte Carlo simulation techniques where
the methods are adaptively tuned as the simulation progresses. The primary
focus of such techniques has been in adaptively tuning importance sampling
distributions to reduce the variance of an estimator. We instead focus on
adaptive control variate schemes, developing asymptotic theory for the performance
of two adaptive control variate estimators. The first estimator is based
on a stochastic approximation scheme for identifying the optimal choice of
control variate. It is easily implemented, but its performance is sensitive
to certain tuning parameters, the selection of which is nontrivial. The second
estimator uses a sample average approximation approach. It has the advantage
that it does not require any tuning parameters, but it can be computationally
expensive and requires the availability of nonlinear optimization software.
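A minimal sketch of the first estimator's idea, under illustrative assumptions: a Robbins-Monro recursion that tunes the control-variate coefficient while the simulation runs, here for the toy problem of estimating E[exp(U)] with control C = U and known E[C] = 1/2. The naive averaging below ignores the dependence issues whose asymptotics the paper analyzes.

```python
# Adaptive control-variate sketch: a stochastic-approximation step drives beta
# toward Cov(Y, C) / Var(C) while the controlled observations are averaged.
import numpy as np

def adaptive_control_variate(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    beta, mu_c = 0.0, 0.5
    controlled = np.empty(n)
    for k in range(n):
        u = rng.random()
        y, c = np.exp(u), u - mu_c                      # response and centered control
        controlled[k] = y - beta * c                    # control-variate estimator
        beta += (1.0 / (k + 1)) * (y - beta * c) * c    # SA step toward optimal beta
    return controlled.mean(), beta                      # estimate of E[exp(U)] = e - 1

# print(adaptive_control_variate())
```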
Tuesday 10:30 AM - 12:00 PM
Ranking and Selection
Chair: Barry Nelson (Northwestern University)
An Odds-Ratio Indifference-Zone Selection Procedure for Bernoulli Populations
Jamie R. Wieland and Barry L. Nelson (Northwestern University)
Abstract:
We present
a new sequential, eliminating procedure for selecting the best system in
a single-factor Bernoulli-response experiment with an odds-ratio indifference
zone, where best refers to the system with the largest probability of success
on any given trial. Analytical results show that the proposed procedure
is more efficient than existing procedures in that it requires fewer observations
to identify the best system when only two systems are being considered.
Empirical results show that the procedure is more efficient for comparing
up to five systems when the specified odds-ratio indifference zone is greater
than two.
Better Selection of the Best
Huaiyu Harry Ma and Thomas R. Willemain (Rensselaer Polytechnic Institute)
Abstract:
We
present a new method of selecting the best of several competing system designs
on the basis of expected steady-state performance. The method uses a new
form of time-series bootstrap in a sequential hypothesis testing framework.
We illustrate the efficiency of the new method by comparing its results against
the current state of the art in a series of simulation experiments. The new
method achieves equal or greater probability of correct selection of the
best system with substantial savings in the number of simulation observations
required for the decision.
Combined Pattern Search and Ranking and Selection for Simulation Optimization
Todd A. Sriver and James W. Chrissis (Air Force Institute of Technology)
Abstract:
A
new algorithm class is presented for optimization of stochastic simulation
models. The algorithms, which combine generalized pattern search (GPS) with
ranking and selection (R&S), require "black-box" simulation evaluations
and are applicable to problems with mixed variables (continuous, discrete
numeric, and categorical). Implementation of the Mixed-variable Generalized
Pattern Search with Ranking and Selection (MGPS-RS) algorithm with three
different R&S procedures is demonstrated and tested on a small set of
standard test functions. Results of this preliminary performance evaluation
are summarized and compared with existing search methods.
Tuesday 1:30 PM - 3:00 PM
Output Analysis I
Chair: Emily Lada (North Carolina State University)
The Accuracy of a New Confidence Interval Method
Johann Christoph Strelen (Universität Bonn)
Abstract:
Confidence intervals for the median of estimators, or for other quantiles, have been proposed as a substitute for the usual confidence intervals in terminating and steady-state simulation. This is adequate since, for many estimators, the median and the expectation are close together or coincide, particularly if the sample size is large. Grouping data into batches is useful for median confidence intervals. The novel confidence intervals are easy to obtain; the variance of the estimator is not used. They are well suited for correlated simulation output data, apply to functions of estimators, and in simulation they seem to be particularly accurate, namely they follow the confidence level better than other confidence intervals. This paper assesses their accuracy, that is, the difference between the nominal confidence level and the actual coverage. The accuracy is evaluated with analytical models and simulation. For the estimation of quantiles by order statistics, the new confidence intervals are exact.
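A sketch of the batching-plus-order-statistics idea (an illustration, not the paper's exact procedure): split the output into batches, compute batch means, and use binomial order-statistic bounds for the median of the batch-mean distribution. Names and the batch count are assumptions.

```python
# Distribution-free median confidence interval from batch means.
import numpy as np
from scipy.stats import binom

def median_ci_from_batches(output, n_batches=20, alpha=0.05):
    output = np.asarray(output, dtype=float)
    batches = np.array_split(output, n_batches)
    means = np.sort([b.mean() for b in batches])
    # order-statistic index chosen from Binomial(n, 1/2) so coverage is >= 1 - alpha
    d = int(binom.ppf(alpha / 2, n_batches, 0.5))
    d = max(d, 1)          # needs enough batches for the nominal level to be attainable
    return means[d - 1], means[n_batches - d]

# example with autocorrelated output (an AR(1) path):
# x = np.zeros(10_000)
# for i in range(1, len(x)): x[i] = 0.8 * x[i - 1] + np.random.normal()
# print(median_ci_from_batches(x))
```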
Evaluation of Methods Used to Detect Warm-Up Period in Steady State Simulation
Prasad S. Mahajan and Ricki G. Ingalls (Oklahoma State University)
Abstract:
This paper reviews the performance of various methods used to detect the warm-up length in steady-state discrete-event simulation. An evaluation procedure is used to compare the methods. The methods are applied to the output generated by a simple job shop model. The performance of the methods is tested at different levels of utilization. Various measures of goodness are used to assess the effectiveness of the methods.
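As one concrete example of a truncation rule of the kind such a study evaluates, here is the MSER statistic, included as a hedged illustration (it may or may not be among the methods compared in the paper): pick the truncation point that minimizes the marginal standard error of the remaining observations.

```python
# MSER truncation-point sketch: MSER(d) = var(x[d:]) / (n - d), minimized over d.
import numpy as np

def mser_truncation_point(output, max_fraction=0.5):
    x = np.asarray(output, dtype=float)
    n = len(x)
    best_d, best_stat = 0, np.inf
    for d in range(int(max_fraction * n)):        # candidate truncation points
        tail = x[d:]
        stat = tail.var(ddof=0) / len(tail)       # MSER(d) statistic
        if stat < best_stat:
            best_d, best_stat = d, stat
    return best_d                                  # delete observations 0 .. best_d - 1
```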
Steady-State Simulation Analysis Using ASAP3
Natalie M. Steiger (University of Maine), Emily K. Lada (Statistical and
Applied Mathematical Sciences Institute), James R. Wilson and Jeffrey A.
Joines (North Carolina State University) and Christos Alexopoulos and David
Goldsman (Georgia Institute of Technology)
Abstract:
We discuss ASAP3, a refinement of the batch means algorithms ASAP and
ASAP2. ASAP3 is a sequential procedure designed to produce a
confidence-interval estimator for the expected response of a
steady-state simulation that satisfies user-specified precision and
coverage-probability requirements. ASAP3 operates as follows: the
batch size is increased until the batch means pass the Shapiro-Wilk
test for multivariate normality; and then ASAP3 fits a first-order
autoregressive (AR(1)) time series model to the batch means. If
necessary, the batch size is further increased until the
autoregressive parameter in the AR(1) model does not significantly
exceed 0.8. Next ASAP3 computes the terms of an inverse
Cornish-Fisher expansion for the classical batch means t-ratio based
on the AR(1) parameter estimates; and finally ASAP3 delivers a
correlation-adjusted confidence interval based on this expansion.
ASAP3 compared favorably with other batch means procedures (namely,
ABATCH, ASAP2, ASAP, and LBATCH) in an extensive experimental
performance evaluation.
Tuesday 3:30 PM - 5:00 PM
Output Analysis II
Chair: E. Chen (BASF Corporation)
Simulation Output Analysis Based on Excursions
James M. Calvin (New Jersey Institute of Technology)
Abstract:
We consider the steady state output analysis problem for a process
that satisfies a functional central limit theorem. We construct
asymptotically valid confidence intervals for the steady-state mean
using normalized excursions of the simulated process.
An alternative approach splits the output at the last zero of
the normalized path.
The methods use a bounded memory of size determined by the user.
Experimental Performance Evaluation of Histogram Approximation for Simulation Output Analysis
E. Jack Chen (BASF Corporation) and W. David Kelton (University of Cincinnati)
Abstract:
We
summarize the results of an experimental performance evaluation of using
an empirical histogram to approximate the steady-state distribution of the
underlying stochastic process. We use a runs test to determine the required
sample size for simulation output analysis and construct a histogram by computing
sample quantiles at certain grid points. The algorithm dynamically increases
the sample size so that histogram estimates are asymptotically unbiased.
Characteristics of the steady-state distribution, such as the mean and variance,
can then be estimated through the empirical histogram. The preliminary experimental
results indicate that the natural estimators obtained based on the empirical
distribution are fairly accurate.
Performance Evaluation of a Wavelet-based Spectral Method for Steady-state Simulation Analysis
Emily K. Lada (Statistical and Applied Mathematical Sciences Institute),
James R. Wilson (North Carolina State University), Natalie M. Steiger (University
of Maine) and Jeffrey A. Joines (North Carolina State University)
Abstract:
We
summarize the results of an experimental performance evaluation of WASSP,
an automated wavelet-based spectral method for constructing an approximate
confidence interval on the steady-state mean of a simulation output process
so that the delivered confidence interval satisfies user-specified requirements
on absolute or relative precision as well as coverage probability. We applied
WASSP to test problems designed specifically to explore its efficiency and
robustness in comparison with ASAP3 and the Heidelberger-Welch algorithm,
two sequential procedures based respectively on the methods of nonoverlapping
batch means and spectral analysis. Concerning efficiency, WASSP compared
favorably with its competitors, often requiring smaller sample sizes to deliver
confidence intervals with the same nominal levels of precision and coverage
probability. Concerning robustness against the statistical anomalies commonly
encountered in simulation studies, WASSP outperformed its competitors, delivering
confidence intervals whose actual half-lengths and coverage probabilities
were frequently closer to the corresponding user-specified nominal levels.
Wednesday 8:30 AM - 10:00 AM
Simulation Analysis I
Chair: John Fowler (Arizona State University)
A Data Warehouse Environment for Storing and Analyzing Simulation Output Data
Christos Vasilakis, Elia El-Darzi, and Panagiotis Chountas (University of Westminster)
Abstract:
Discrete-event simulation modelling has been used extensively in modelling complex systems. Although it offers great conceptual-modelling flexibility, it is both computationally expensive and data intensive. There are several examples of simulation models that generate millions of observations to achieve satisfactory point and confidence-interval estimates for the model variables. In these cases, it is exceptionally cumbersome to conduct the required output and sensitivity analysis in a spreadsheet or statistical package. In this paper, we highlight the advantages of employing data warehousing techniques for storing and analyzing simulation output data. The proposed data warehouse environment is capable of providing the means for automating the necessary algorithms and procedures for estimating different parameters of the simulation. These include detecting the initial transient in steady-state simulations and computing point and confidence-interval estimates. Previously developed models for evaluating patient flow through hospital departments are used to demonstrate the problem and the proposed solutions.
An Efficient Screening Methodology for a Priori Assessed Non-Influential Factors
Thomas M. Cioppa (US Army Training and Doctrine Command Analysis Center)
Abstract:
The choice of which factors to vary in a simulation to effect a change in the measure of interest is difficult. Many factors are judged a priori not to affect the measure of interest, and there is often no subsequent analysis of whether this judgment is valid. The proposed methodology outlines a sequence of limited runs to assess the accuracy of the a priori belief of non-influential factors. These runs can either identify whether an influential factor has been omitted or confirm the a priori beliefs. Executing these sequences of runs before focusing on the suspected influential factors can contribute to subsequent analysis by confirming that suspected non-influential factors do not significantly impact the measure of interest. An example of the methodology using an agent-based model is presented.
Variance-Based Sampling for Cycle Time - Throughput Confidence Intervals
Rachel T. Johnson (Arizona State University) and Sonia E. Leach, John W.
Fowler, and Gerald T. Mackulak (Arizona State University)
Abstract:
In the analysis of a manufacturing system, the analyst is often interested in the change in mean cycle time as a function of different throughput (start rate) levels. Since the variance of the mean cycle time generally increases as the start rate increases, an equal allocation of simulation effort at each simulated throughput level will result in confidence intervals of different widths. This paper discusses an approach for generating nearly equal-sized mean cycle time confidence intervals at selected throughput levels when the computing budget is fixed (limited). The simulation effort allocation procedure described in this paper determines the proportion of the fixed budget to allocate at each throughput level based on the asymptotic variance.
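A hedged sketch of the allocation idea (not necessarily the paper's exact rule): since the half-width at level i behaves like t * sigma_i / sqrt(n_i), making the half-widths roughly equal means allocating the fixed budget in proportion to the (asymptotic) variances.

```python
# Budget allocation sketch: n_i proportional to sigma_i^2 equalizes half-widths.
import numpy as np

def allocate_budget(variances, total_budget):
    v = np.asarray(variances, dtype=float)
    fractions = v / v.sum()                         # n_i proportional to sigma_i^2
    return np.maximum(1, np.round(fractions * total_budget)).astype(int)

# example: pilot variance estimates at three start rates, 10,000 replications total
# print(allocate_budget([4.0, 9.0, 25.0], 10_000))
```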
Wednesday 10:30 AM - 12:00 PM
Simulation Analysis II
Chair: Donald Gross (George Mason University)
Permuted Weighted Area Estimators
James M. Calvin and Marvin K. Nakayama (New Jersey Institute of Technology)
Abstract:
Calvin and
Nakayama previously introduced permuting as a way of improving existing standardized
time series methods. The basic idea is to split a simulated sample path into
non-overlapping segments, permute the segments to construct a new sample
path, and apply a standardized time series scaling function to the new path.
Averaging over all permuted paths yields the permuted estimator. This paper
discusses applying permutations to the weighted area estimator of Goldsman
and Schruben. Empirical results seem to indicate that compared to not permuting,
permuting can reduce the length and variability of the resulting confidence
interval half widths but with additional computational overhead and some
degradation in coverage; however, the decrease in coverage is not as bad
as with batching.
Input Modeling Using Quantile Statistical Methods
Abhishek Gupta and Emanuel Parzen (Texas A&M University)
Abstract:
This
paper applies quantile data analysis to input modeling in simulation. We
introduce the use of QIQ plots to identify suitable distributions fitting
the data and comparison distribution P-P plots to test the fit. Two examples
illustrate the utility of these quantile statistical methods for input modeling.
Popular distribution-fitting software often gives confusing results, usually a set of distributions differing only marginally in the test statistic value. The methods discussed in this paper can be used for further analysis
of the software results.
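A generic recipe for a comparison-distribution P-P plot of the kind used to check a fit (illustrative, not the paper's QIQ methodology): plot the fitted CDF evaluated at the sorted data against uniform plotting positions; points near the 45-degree line indicate a good fit. The scipy-based helper and the gamma example are assumptions.

```python
# P-P plot coordinates for an input-model fit check.
import numpy as np
from scipy import stats

def pp_points(data, dist, *params):
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    empirical = (np.arange(1, n + 1) - 0.5) / n        # uniform plotting positions
    fitted = dist.cdf(x, *params)                      # fitted model CDF at the data
    return empirical, fitted

# example: fit a gamma distribution to service-time data and get P-P coordinates
# data = stats.gamma.rvs(2.0, scale=1.5, size=500)
# params = stats.gamma.fit(data, floc=0)
# emp, fit = pp_points(data, stats.gamma, *params)
```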
Overlapping Variance Estimators for Simulations
Christos Alexopoulos and David Goldsman (Georgia Institute of Technology),
Nilay Tanik Argon (University of Wisconsin-Madison) and Gamze Tokol (Decision
Analytics, LLC)
Abstract:
We
examine properties of overlapped versions of the standardized time series
area and Cramér-von Mises estimators for the variance parameter of a stationary
stochastic process, e.g., a steady-state simulation output process. We find
that the overlapping estimators have the same bias properties as, but lower
variance than, their nonoverlapping counterparts; the new estimators also
perform well against the benchmark batch means estimator. We illustrate our
findings with analytical and Monte Carlo examples.