An Enhanced Two-Stage Selection Procedure
E.
Jack Chen and W. David Kelton (University of Cincinnati)
Abstract:
This paper discusses implementation of a two-stage
procedure to determine the simulation run length for selecting the best of k
designs. We propose an Enhanced Two-Stage Selection (ETSS) procedure. The
number of additional replications at the second stage for each design is
determined by both the variances of the sample means and the differences of
the sample means of alternative designs. We show that the ETSS procedure gives
valid selections with significantly reduced simulation replications compared
to Rinott's procedure. An experimental performance evaluation demonstrates the
validity of the ETSS procedure.
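For context, Rinott's procedure (the baseline against which ETSS is compared) sets each design's second-stage sample size from its first-stage variance alone. A minimal sketch, assuming `h` is Rinott's constant taken from published tables:

```python
import math

def rinott_total_samples(first_stage, h, delta):
    """Total sample size for one design under Rinott's two-stage procedure:
    N = max(n0, ceil((h * S / delta)^2)), where S^2 is the first-stage
    sample variance and delta is the indifference-zone parameter."""
    n0 = len(first_stage)
    mean = sum(first_stage) / n0
    s2 = sum((x - mean) ** 2 for x in first_stage) / (n0 - 1)  # sample variance
    return max(n0, math.ceil(h * h * s2 / (delta * delta)))
```

ETSS, by contrast, also uses the differences of the sample means across designs, which is what allows it to prune replications that Rinott's variance-only rule would still spend.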
Optimal Selection Probability in the Two-Stage
Nested Partitions Method for Simulation-Based Optimization
Sigurdur
Ólafsson and Nithin Gopinath (Iowa State University)
Abstract:
We investigate a new algorithm for simulation-based
optimization where the number of alternatives is finite but very large. This
algorithm draws on recent work in adaptive random search and from
ranking-and-selection. We show how the ranking-and-selection approach can
significantly improve performance of the random search and demonstrate the
importance of the probability of correct selection.
Improved Decision Processes through Simultaneous
Simulation and Time Dilation
Paul Hyden (Cornell University) and
Lee Schruben (University of California at Berkeley)
Abstract:
Simulation models are often not used to their full
potential in the decision-making process. The default simulation strategy of
simple serial replication of fixed-length runs means that we often waste time
generating information about uninteresting models and we only provide a
decision at the very end of our study. New simulation techniques such as
simultaneous simulation and time dilation have been developed to produce
improved decisions at any time with limited or even reduced demands on
analysts. Furthermore, we have the tools to determine whether a study should
be terminated early or extended based on the demands of the
decision-responsible managers and the time-crunched analysts. By collecting
information from multiple models at the same time and using this information
to continuously update the allocation of finite computational resources, we
are able to more effectively leverage every minute of calendar time toward
making the best choice. Strategies and tactics are discussed and highlighted
through the implementation and analysis of a job shop model. Target success
probabilities are achieved faster while meeting study-length flexibility
goals at low cost in analyst time.
Finding Important Independent Variables through
Screening Designs: A Comparison of Methods
Linda Trocine and Linda
C. Malone (University of Central Florida)
Abstract:
Once a simulation model is developed, designed
experiments may be employed to efficiently optimize the system. Designed
experiments are used on "real" production systems as well. The first step is
to screen for important independent variables. Several screening methods are
compared and contrasted in terms of efficiency, effectiveness, and robustness.
These screening methods range from the classical factorial designs and
two-stage group screening to newer designs, including sequential
bifurcation and iterated fractional factorial designs (IFFD). Conditions for
the use of the methods are provided along with references on how to use them.
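Sequential bifurcation, one of the methods compared, recursively splits factor groups whose aggregated effect is large. A minimal deterministic sketch, assuming all main effects are nonnegative (the standard SB assumption) and writing y(j) for the response with factors 1..j at their high level and the rest low; this is an illustration, not any author's implementation:

```python
def sequential_bifurcation(y, k, threshold):
    """Return indices of factors whose individual effect exceeds threshold.
    y(j) = response with factors 1..j high, factors j+1..k low.
    The aggregated effect of the group lo+1..hi is y(hi) - y(lo)."""
    important = []
    stack = [(0, k)]                      # each entry is a factor group lo+1..hi
    while stack:
        lo, hi = stack.pop()
        if y(hi) - y(lo) <= threshold:
            continue                      # whole group's effect is negligible
        if hi - lo == 1:
            important.append(hi)          # singleton group: factor is important
        else:
            mid = (lo + hi) // 2          # bisect the group and recurse
            stack.append((lo, mid))
            stack.append((mid, hi))
    return sorted(important)
```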
A Comparison of Five Steady-State Truncation Heuristics
for Simulation
K. Preston White, Jr. (University of Virginia),
Michael J. Cobb (University of Virginia) and Stephen C. Spratt (St. Onge
Company)
Abstract:
We compare the performance of five well-known
truncation heuristics for mitigating the effects of initialization bias in the
output analysis of steady-state simulations. Two of these rules are variants
of the MSER heuristic studied by White (1997); the remaining rules are
adaptations of bias-detection tests based on the seminal work of Schruben
(1982). Each heuristic was tested in each of 168 different experiments. Each
experiment comprised multiple tests on different realizations of the sample
path of a second-order autoregressive process with known (deterministic) bias
function. Different experiments employed alternative process parameters,
generating a range of damped and underdamped stochastic responses. These were
combined with alternative damped, underdamped, and mean shift bias functions.
The performance of each rule was evaluated based on the ability of the rule to
remove bias from the mean estimator for the steady-state process. Results
confirmed that four of the five rules were effective and reliable,
consistently yielding truncated sequences with reduced bias. In general, the
MSER heuristics outperformed the three rules based on bias detection, with
Spratt’s (1998) MSER-5 the most effective and robust choice for a
general-purpose method.
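The MSER heuristic selects the truncation point d minimizing the statistic sum over i > d of (x_i - mean_d)^2 / (n - d)^2, where mean_d is the mean of the retained observations; MSER-5 first replaces the raw output with non-overlapping batch means of size five. A sketch under those standard definitions (not the authors' code), restricting the search to the first half of the run as is customary:

```python
def mser_truncation(x):
    """Truncation point d minimizing sum_{i>d}(x_i - mean_d)^2 / (n-d)^2,
    searched over d < n/2."""
    best_d, best_val = 0, float("inf")
    for d in range(len(x) // 2):
        tail = x[d:]
        m = sum(tail) / len(tail)
        val = sum((v - m) ** 2 for v in tail) / len(tail) ** 2
        if val < best_val:
            best_d, best_val = d, val
    return best_d

def mser5_truncation(x):
    """MSER-5: apply MSER to non-overlapping batch means of size 5,
    then convert the chosen batch index back to raw observations."""
    batches = [sum(x[i:i + 5]) / 5 for i in range(0, len(x) - len(x) % 5, 5)]
    return 5 * mser_truncation(batches)
```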
A Perspective of Batching Methods in a Simulation
Environment of Multiple Replications in Parallel
Edjair Mota
(University of Amazonas), Adam Wolisz (Technical University of Berlin) and
Krzysztof Pawlikowski (University of Canterbury)
Abstract:
Discrete event simulation is frequently time-consuming, either because
modern dynamic systems, such as telecommunication networks, are becoming
increasingly complex, or because a great number of observations is required
to yield reasonably accurate results. An interesting approach to
reduce the time duration of simulation is that of concurrently running
multiple replications in parallel (MRIP) on a number of processors connected
via networking and averaging the results adequately. We present the results of
our research on the suitability of batch-means-based procedures in such
distributed stochastic simulation.
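A minimal sketch of the classical batch-means construction these procedures build on; under MRIP, each processor would form such an estimate from its own replication, and the results would then be averaged across replications:

```python
import math

def batch_means_ci(x, n_batches, t_crit):
    """Confidence interval for the steady-state mean from one output
    sequence via non-overlapping batch means. t_crit is the Student-t
    critical value with n_batches - 1 degrees of freedom."""
    b = len(x) // n_batches                        # observations per batch
    means = [sum(x[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = t_crit * math.sqrt(s2 / n_batches)      # half-width of the CI
    return grand - half, grand + half
```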
Nonparametric Adaptive Importance Sampling for Rare Event
Simulation
Yun Bae Kim and Deok Seon Roh (Sung Kyun Kwan
University) and Myeong Yong Lee (Korea Telecom R&D Group)
Abstract:
Simulating rare events in telecommunication networks
such as estimating the cell loss probability in Asynchronous Transfer Mode
(ATM) networks requires a major simulation effort due to the slight chance of
buffer overflow. Importance Sampling (IS) is applied to accelerate the
occurrence of rare events. IS relies on a biasing scheme chosen so that the
resulting estimator remains unbiased. Adaptive Importance Sampling (AIS)
adapts the estimated IS sampling distribution to the system of interest
during the course of the simulation. In this study, we propose a Nonparametric
Adaptive Importance Sampling (NAIS) technique, a non-parametrically modified
version of AIS, and estimate the probability of rare event occurrence in an
M/M/1 queueing model. Compared with classical Monte Carlo simulation and AIS,
the computational efficiency and variance reductions gained via NAIS are
reasonable. A possible extension of NAIS with regard to random number
generation is also discussed.
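The identity underlying IS is E_f[1{X>t}] = E_g[1{X>t} f(X)/g(X)] for any biasing density g. A minimal sketch estimating an exponential tail probability with a fixed (non-adaptive) exponential tilt, rather than the adaptive nonparametric scheme proposed in the paper:

```python
import math
import random

def is_tail_prob(t, lam, n, seed=0):
    """Importance-sampling estimate of P(X > t) for X ~ Exp(1),
    drawing from the biased density g(x) = lam * exp(-lam * x), lam < 1,
    which pushes mass into the tail. Each hit is weighted by f(x)/g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)                       # sample from g
        if x > t:                                      # rare event occurred
            total += math.exp(-x) / (lam * math.exp(-lam * x))  # weight f/g
    return total / n
```

The exact answer is P(X > t) = exp(-t), which a crude Monte Carlo estimator of the same size would almost never hit for large t.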
Analyzing Transformation-Based Simulation
Metamodels
Maria de los A. Irizarry (University of Puerto Rico),
Michael E. Kuhl (Louisiana State University) and Emily K. Lada, Sriram
Subramanian, and James R. Wilson (North Carolina State University)
Abstract:
We present a technique for analyzing a simulation
metamodel that has been constructed using a variance-stabilizing
transformation. To compute a valid confidence interval for the expected value
of the original simulation response at a selected factor-level combination
(design point), we first compute the corresponding confidence interval for the
transformed response at that factor-level combination and then untransform the
endpoints of the resulting confidence interval. Taking the midpoint of the
untransformed confidence interval as our point estimator of the expected
simulation response at the selected factor-level combination and approximating
the variance of this point estimator via the delta method, we formulate an
approximate two-sample Student t-test for validating our metamodel-based
estimator versus the results of making independent runs of the simulation at
the selected factor-level combination. We illustrate this technique in a case
study involving the design of a manufacturing cell, and we compare our results
with those of a more conventional approach to analyzing transformation-based
simulation metamodels. A Monte Carlo performance evaluation shows that
significantly better confidence-interval coverage is maintained with the
proposed procedure over a wide range of values for the residual variance of
the transformed metamodel.
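The untransform-the-endpoints step can be illustrated with a logarithmic variance-stabilizing transformation (chosen here only for illustration; the paper's delta-method variance approximation and t-test are beyond this sketch):

```python
import math

def untransformed_ci(z_hat, z_half_width):
    """Given a confidence interval [z_hat - h, z_hat + h] for the
    log-transformed response, untransform the endpoints and take the
    midpoint of the resulting interval as the point estimator of the
    original-scale expected response."""
    lo = math.exp(z_hat - z_half_width)
    hi = math.exp(z_hat + z_half_width)
    return lo, hi, (lo + hi) / 2.0
```

Note that the untransformed interval is asymmetric about exp(z_hat), which is why the midpoint rather than the naive back-transform serves as the point estimator.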
On the Use of Control Variates in the
Simulation of Medium Access Control Protocols
Andrés
Suárez-González, Cándido López-García, José C. López-Ardao, and Manuel
Fernández-Veiga (Universidade de Vigo)
Abstract:
Simulation is an essential tool for performance
evaluation of communication networks. We are interested in the waiting time W
of packets. The Control Variates method exploits knowledge about
another stochastic process strongly correlated with W to reduce the
uncertainty in the estimation of its mean. We analyze the usefulness of the
cycle time as a control stochastic process for Medium Access Control (MAC)
protocols with polling service discipline, showing its potential and
drawbacks. We propose a control variate that overcomes the disadvantages of
cycle time and show its behavior in a case study. This new control variate
will also be useful in the case of other MAC protocols.
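The underlying estimator replaces the sample mean of the response with Ybar - b*(Xbar - E[X]), where b* = Cov(Y, X)/Var(X) and E[X] is the known mean of the control. A generic sketch, not specific to the cycle-time control proposed here:

```python
def control_variate_estimate(y, x, x_mean):
    """Control-variate estimator of E[Y] using control X with known mean
    x_mean. The optimal coefficient b* = Cov(Y, X)/Var(X) is estimated
    from the paired sample."""
    n = len(y)
    ybar = sum(y) / n
    xbar = sum(x) / n
    cov = sum((yi - ybar) * (xi - xbar) for yi, xi in zip(y, x)) / (n - 1)
    var = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    b = cov / var
    return ybar - b * (xbar - x_mean)   # correct Ybar by the control's error
```

The variance reduction grows with the squared correlation between Y and X, which is why a control strongly correlated with the waiting time W is worth engineering.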
Multi-Response Simulation Optimization using
Stochastic Genetic Search within a Goal Programming
Framework
Felipe F. Baesler and José A. Sepúlveda (University of
Central Florida)
Abstract:
This study presents a new approach to solve
multi-response simulation optimization problems. This approach integrates a
simulation model with a genetic algorithm heuristic and a goal programming
model. The genetic algorithm technique offers a very flexible and reliable
tool able to search for a solution within a global context. This method was
modified to perform the search considering both the mean and the variance of
the responses. In this way, the search is performed stochastically rather
than deterministically, unlike most approaches reported in the literature.
The goal programming model, integrated with the genetic algorithm and the
stochastic search, presents a new approach able to lead a search towards a
multi-objective solution.
A Practical Approach to Sample-Path Simulation
Optimization
Michael C. Ferris, Todd S. Munson, and Krung
Sinapiromsaran (University of Wisconsin)
Abstract:
We propose solving continuous parametric simulation
optimizations using a deterministic nonlinear optimization algorithm and
sample-path simulations. The optimization problem is written in a modeling
language with a simulation module accessed with an external function call.
Since we allow no changes to the simulation code at all, we propose using a
quadratic approximation of the simulation function to obtain derivatives.
Results on three different queueing models are presented that show our method
to be effective on a variety of practical problems.
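In one dimension, the derivative at the center of the quadratic interpolating the response at three equally spaced points reduces to a central difference; a sketch of that idea (the paper's multi-dimensional treatment inside a modeling language is more elaborate):

```python
def quadratic_derivative(f, x0, h):
    """Derivative at x0 of the quadratic through (x0-h, x0, x0+h).
    f is treated as a black box, e.g. a sample-path simulation response
    evaluated with the same fixed random number stream each time, so the
    function seen by the optimizer is deterministic."""
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)
```

Because the formula is exact for quadratics, it recovers the derivative of a locally quadratic response without any change to the simulation code itself.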
Simulation Optimization Using Tabu
Search
Berna Dengiz and Cigdem Alabas (Gazi University)
Abstract:
When the performance and operation of complex systems in manufacturing or
other environments are investigated, analytical models of these systems
become very complicated. Because of the complex stochastic characteristics
of such systems, simulation is used as a tool to analyze them. The thrust of
such simulation analysis usually is to determine the optimum combination of
factors that affect the considered system performance. The purpose of this
study is to use a tabu search algorithm in conjunction with a simulation
model of a JIT system to find the optimum number of kanbans.
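A generic tabu-search skeleton for an integer decision variable such as a kanban count, with the simulation standing in as a black-box cost function; this is an illustrative sketch, not the authors' algorithm:

```python
from collections import deque

def tabu_search(cost, start, lower, upper, tabu_len=5, iters=50):
    """Minimize cost(n) over integers in [lower, upper] by moving +/-1
    per step, forbidding recently visited values (the tabu list) so the
    search can escape local minima. Returns the best value found."""
    current = best = start
    best_cost = cost(start)
    tabu = deque([start], maxlen=tabu_len)   # short-term memory
    for _ in range(iters):
        neighbors = [n for n in (current - 1, current + 1)
                     if lower <= n <= upper and n not in tabu]
        if not neighbors:
            break                            # all moves tabu or infeasible
        current = min(neighbors, key=cost)   # take best non-tabu move
        tabu.append(current)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best
```

In the setting of the paper, `cost` would run the JIT simulation model for a given number of kanbans and return the performance measure being minimized.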