The Database Monte Carlo Approach to Obtaining
Effective Control Variates
Tarik Borogovac (Boston University)
Abstract:
The effectiveness of the method of control variates
depends on the often challenging task of identifying controls that are highly
correlated with the estimation variable. We introduce a generic approach for
obtaining effective controls. The core idea of the approach is to extract
information at nominal parameter values and use this information to gain
estimation efficiency at neighboring parameters. We generate a database of
appropriate random elements and construct effective (single or multiple)
control variates using database evaluations at nominal parameters. This
approach is closely related to the method of Common Random Numbers, and the
problem of selecting an appropriate set of controls can be formulated as the
selection of a desirable basis in a properly defined Hilbert space. Our
experimental results have shown dramatic gains in computational efficiency on
problems drawn from computational finance, physics, and biology.
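
To make the variance-reduction mechanism concrete, the following minimal Python
sketch applies a single, generic control variate to a toy expectation
(estimating E[exp(Z)] for standard normal Z, with Z itself as the control); it
does not reproduce the database construction described in the abstract.

    # Illustrative single-control-variate sketch (toy problem, not the
    # database approach of the paper).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.standard_normal(n)

    y = np.exp(z)          # estimation variable, E[Y] = exp(1/2)
    c = z                  # control variate with known mean 0

    b = np.cov(y, c)[0, 1] / np.var(c, ddof=1)   # estimated optimal coefficient
    y_cv = y - b * (c - 0.0)                     # control-variate-adjusted samples

    print("crude estimate:", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(n))
    print("CV estimate   :", y_cv.mean(), "+/-", y_cv.std(ddof=1) / np.sqrt(n))
    print("exact value   :", np.exp(0.5))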
Simulation of Contact Centers
Eric Buist
and Pierre L'Ecuyer (Université de Montréal)
Abstract:
This research project deals with the design and
implementation of an efficient tool for simulating contact centers of various
sizes and complexity. It also deals with realistic modeling of many aspects of
contact centers not considered by most simulation tools. We will examine how
to model aspects such as agents' absenteeism and non-adherence, recourse,
etc., and perform sensitivity analyses to measure their impact on performance.
We will also study variance reduction techniques, examine the issues they
raise, implement them efficiently in the context of contact center simulation,
and experiment with them to determine their impact on efficiency. We will also
study and experiment with a combination of some variance reduction techniques.
In particular, our study of stratification combined with control variates has
revealed unexpected interaction effects and we examine ways of handling them.
We are also experimenting with splitting methods to reduce the variance.
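
As a simple illustration of one of the variance reduction techniques mentioned
above, the sketch below applies common random numbers to compare mean waits in
two single-server queue configurations; the queue model and its parameters are
toy assumptions, not the contact-center model of this project.

    # Common-random-numbers sketch: the same uniforms drive both configurations,
    # so the estimated difference has far lower variance than independent runs.
    import numpy as np

    def mean_wait(arrival_u, service_u, lam, mu):
        # Exponential interarrival and service times by inversion of shared uniforms,
        # then a Lindley-type waiting-time recursion.
        inter = -np.log(1.0 - arrival_u) / lam
        serv = -np.log(1.0 - service_u) / mu
        w, total = 0.0, 0.0
        for a, s in zip(inter, serv):
            w = max(0.0, w + s - a)
            total += w
        return total / len(inter)

    rng = np.random.default_rng(1)
    u_arr = rng.random(20_000)
    u_srv = rng.random(20_000)

    w_slow = mean_wait(u_arr, u_srv, lam=0.9, mu=1.0)
    w_fast = mean_wait(u_arr, u_srv, lam=0.9, mu=1.2)
    print("estimated difference in mean wait:", w_slow - w_fast)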
Using Flexible Points in a Developing
Simulation
Joseph C. Carnahan and Paul F. Reynolds (University of
Virginia)
Abstract:
Coercion is a semi-automated simulation adaptation
technology that uses subject-matter expert insight about model abstraction
alternatives, called flexible points, to change the behavior of a simulation.
Coercion has been successfully applied to legacy simulations, but never before
to a simulation under development. In this paper, we describe coercion of a
developing simulation and compare it with our experience coercing legacy
simulations. Using a simulation of selective dissolution in alloys as a case
study, we observe that applying coercion early in the development process can
be very beneficial, aiding subject matter experts in formalizing assumptions
and discovering unexpected interactions. We also discuss the development of
new coercion tools and a new language (Flex ML) for working with flexible
points.
Stochastic Trust Region Gradient-Free Method
(STRONG) - A New RSM-based Algorithm for Simulation
Optimization
Kuo-Hao Chang (Purdue University)
Abstract:
Response Surface Methodology (RSM) is a metamodel-based
optimization method. Its strategy is to explore small subregions of the
parameter space in succession instead of attempting to explore the entire
parameter space directly. This method has been widely used in simulation
optimization. However, RSM has two significant shortcomings: first, it is not
automated, and human involvement is usually required during the search process;
second, it is heuristic and offers no convergence guarantee. This paper proposes
the Stochastic Trust Region Gradient-Free Method (STRONG) for simulation
optimization with continuous decision variables to address these two problems.
STRONG combines the traditional RSM framework with the trust region method of
deterministic optimization to achieve desirable convergence properties and
eliminate the need for human involvement. Combined with appropriate experimental
designs, and in particular efficient screening experiments, STRONG has the
potential to solve high-dimensional problems efficiently.
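
For readers unfamiliar with trust-region-style RSM iterations, the following
hedged Python sketch shows one possible local step (fit a first-order metamodel
on a small design, move within the trust region, accept or reject, resize the
region) on an assumed noisy quadratic test function; it is not the STRONG
algorithm itself and omits its staging and convergence machinery.

    # Sketch of a trust-region RSM-style iteration on a noisy objective.
    import numpy as np

    rng = np.random.default_rng(2)

    def sim(x, reps=20):
        # Noisy simulation oracle for the assumed test function f(x) = ||x||^2.
        return float(np.mean(np.sum(x**2) + rng.normal(0, 1, reps)))

    x = np.array([2.0, -1.5])
    radius = 1.0
    for it in range(10):
        # Fit a first-order metamodel on a small factorial design around x.
        design = x + radius * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1], [0, 0]])
        y = np.array([sim(p) for p in design])
        X = np.column_stack([np.ones(len(design)), design - x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        grad = beta[1:]

        # Candidate: steepest-descent step truncated to the trust region.
        candidate = x - radius * grad / (np.linalg.norm(grad) + 1e-12)

        # Accept only if the simulated response improves; adjust the radius.
        if sim(candidate) < sim(x):
            x, radius = candidate, radius * 1.5
        else:
            radius *= 0.5
    print("final iterate:", x)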
Information Technology for Servicization of Cutting
Tool Supply Chain
Chen Yang Cheng and Vittal Prabhu (Penn State
University)
Abstract:
In the past several years, a new type of business model has emerged in the
cutting tool supply chain, moving from simply selling cutting tools to also
providing services. This servicization includes tool procurement
management, quality control, inventory control, repair, and sharpening. The
emergence of information technology, such as radio frequency identification
(RFID) and web services, offers the possibility of enhancing the executability
and efficiency of these services. However, combining multiple information
technology functions potentially increases the complexity of service
processes, and decreases the usability and system performance of these
processes. In order to solve this problem, this paper analyzes the service
process from three dimensions that evaluate usability, complexity and
performance measurement. Each dimension addresses service processes from a
different viewpoint, which helps to analyze the business processes and further
improve them.
A Simulation and Optimization-based Approach for
Railway Scheduling
Pavankumar Murali, Maged Dessouky, and Fernando
Ordonez (University of Southern California)
Abstract:
We address the problem of routing and scheduling of
freight trains on a complex railway network. The primary objectives are to
minimize the delay in transporting shipments through a railway network, and to
reject a shipment that could potentially overload the network. Simulation
modeling is used to develop regression models that accurately estimate the
travel time delay on a sub-network as a function of the trackage
configuration, train type, and capacity and traffic on that sub-network. These
delay estimation functions are fed into an integer programming model that
suitably routes trains through a railway network based on the statistical
expectation of running times in order to balance the railroad traffic and
avoid deadlocks.
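
As a rough illustration of the delay-estimation step, the sketch below fits a
linear regression metamodel of delay against traffic and train count using
synthetic stand-in data; the predictors, coefficients, and data are hypothetical
and not taken from this study.

    # Fitting a delay-regression metamodel to simulated sub-network data
    # (all variables and numbers below are made up for illustration).
    import numpy as np

    rng = np.random.default_rng(3)
    n_runs = 200
    traffic = rng.uniform(0, 1, n_runs)       # sub-network utilization
    trains = rng.integers(1, 20, n_runs)      # number of trains dispatched
    # Stand-in for delays recorded from simulation runs.
    delay = 5 + 30 * traffic**2 + 0.8 * trains + rng.normal(0, 2, n_runs)

    X = np.column_stack([np.ones(n_runs), traffic, traffic**2, trains])
    coef, *_ = np.linalg.lstsq(X, delay, rcond=None)
    print("fitted delay model coefficients:", coef)
    # The fitted function could then supply delay terms to a routing IP model.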
Parallel Cross-Entropy Optimization
Gareth
Evans (University of Queensland)
Abstract:
The Cross-Entropy (CE) method is a modern and effective
optimization method well suited to parallel implementations. There is a vast
array of problems today, some of which are highly complex and can take weeks
or even longer to solve using current optimization techniques. This paper
presents a general method for designing parallel CE algorithms for Multiple
Instruction Multiple Data (MIMD) distributed memory machines using the Message
Passing Interface (MPI) library routines. We provide examples of its
performance for two well-known test cases: the (discrete) Max-Cut problem and
the (continuous) Rosenbrock problem. Speedup factors and a comparison to
sequential CE methods are reported.
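
The sketch below shows a minimal sequential cross-entropy loop for the
(continuous) Rosenbrock test case mentioned above; the MPI-based parallelization
that is the paper's contribution is not shown, and the sample sizes are
illustrative only.

    # Minimal cross-entropy optimization of the 2-D Rosenbrock function.
    import numpy as np

    def rosenbrock(x):
        return 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

    rng = np.random.default_rng(4)
    mu, sigma = np.zeros(2), np.full(2, 2.0)
    n, elite = 200, 20
    for it in range(60):
        samples = rng.normal(mu, sigma, size=(n, 2))
        scores = np.array([rosenbrock(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]      # elite set
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    print("CE estimate of the minimizer:", mu)   # exact optimum is (1, 1)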
Simulating Gang Violence In An Asymmetric
Environment
Sam Huddleston and Jon Fox (UVA)
Abstract:
The United States is entering the fourth year of Operation Iraqi Freedom (OIF)
and continues to commit resources at an alarming rate. For most of us reading or
watching the major media sources, one question remains: “will any plan work in
Iraq?” Based on recent literature examining the similarities between insurgency
and gang violence, this study attempts to structure an agent-based simulation
for modeling gang crime within a US city. External adjustments to
quality-of-life options and law enforcement staffing offer the potential to
improve understanding of the asymmetric environment.
Allocation of Simulation Runs for Simulation
Optimization
Alireza Kabirian and Sigurdur Olafsson (Department of
Industrial and Manufacturing Systems Engineering, Iowa State University)
Abstract:
Simulation optimization (SO) is the process of finding
the optimum design of a system whose performance measure(s) are estimated via
simulation. We propose some ideas to improve overall efficiency of the
available SO methods and develop a new approach that primarily deals with
continuous two dimensional problems with bounded feasible region. Our search
based method, called Adaptive Partitioning Search (APS), uses a neural network
as a metamodel and combines various exploitation strategies to locate the
optimum. Our numerical results show that in terms of the number of evaluations
(simulation runs) needed, the APS algorithm converges much faster to the
optimum design than two well-established methods used as benchmarks.
Enhanced Modeling Using Entities In An
Integrated Process-Driven and Event-Driven Environment
Vishnu S.
Kesaraju (Wright State University)
Abstract:
In process-driven simulation models, the system can be
represented by blocks or system networks through which entities flow to mimic
real life system objects. In event-driven models, the system can be
represented by event graphs, which focus on the abstraction of the event
rather than on observable physical entities. A new simulation framework that
integrates process- and event-driven approaches offers a powerful combination
of tools to the modeler. The integrated Entity/Event (IE2) framework has two
main components: an E2 (Entity/Event) Integrator and an IE2 model. One of the
main goals for the design of the framework is to preserve the elegantly simple
logic to process events, even when processing of entities is taking place
simultaneously. An important feature of standard event graphs is
parameterization of event vertices. The framework based on an integrated
entity/event approach has been further enhanced to allow parameterization by
explicitly representing entities at the event-driven level.
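
For context, the following minimal sketch shows a generic event-scheduling loop
in which each scheduled event carries an entity identifier as a parameter; it
illustrates the kind of parameterized event processing discussed above and is
not the IE2 framework itself.

    # Generic event-scheduling loop with parameterized events (single server).
    import heapq, itertools, random

    random.seed(5)
    fel = []                     # future event list: (time, seq, event, entity)
    seq = itertools.count()
    clock, busy, queue, served = 0.0, False, [], 0

    def schedule(t, event, entity):
        heapq.heappush(fel, (t, next(seq), event, entity))

    schedule(random.expovariate(1.0), "arrival", 0)
    while fel and served < 100:
        clock, _, event, entity = heapq.heappop(fel)
        if event == "arrival":
            schedule(clock + random.expovariate(1.0), "arrival", entity + 1)
            if busy:
                queue.append(entity)
            else:
                busy = True
                schedule(clock + random.expovariate(1.25), "departure", entity)
        else:                    # departure event
            served += 1
            if queue:
                schedule(clock + random.expovariate(1.25), "departure", queue.pop(0))
            else:
                busy = False
    print("served", served, "entities by time", round(clock, 2))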
Importance Sampling Estimation for the Probability
of Overflow in Serve the Longest Queue System
Kevin Leder (Brown
University)
Abstract:
An importance sampling algorithm for estimating the
probability that at least one queue overflows in the serve-the-longest-queue
system is presented. We prove that this algorithm is asymptotically efficient;
showing this requires an explicit formula for the large deviation rate of the
rare event of interest, so we also explicitly identify the large deviations
rate for the probability of buffer overflow. The results presented in this
paper hold for an arbitrary number of
queues being served by the server. The only restriction placed on the arrival
and service rates is that the system be stable.
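
As a generic illustration of importance sampling for rare events (not the
queue-specific change of measure analyzed in the paper), the sketch below
estimates a small normal tail probability by exponential tilting and reports
the likelihood-ratio-weighted estimate and its relative error.

    # Importance sampling by exponential tilting for a normal tail probability.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    a, n = 4.0, 100_000
    z = rng.normal(a, 1.0, n)                    # sample under the tilted law N(a, 1)
    w = norm.pdf(z) / norm.pdf(z, loc=a)         # likelihood ratios
    samples = (z > a) * w
    est = samples.mean()
    print("IS estimate of P(Z > 4):", est)
    print("exact value            :", norm.sf(a))
    print("relative error         :", samples.std(ddof=1) / np.sqrt(n) / est)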
Simulation-based Decision Making for Maintenance
Policy Selection for Complicated Systems
Zhaojun Li (Wichita State
University)
Abstract:
This paper investigates the performance of degrading systems under structural
constraints using discrete-event simulation. The maintenance policies under
consideration for such systems are minimum repair, failure replacement, and
preventive maintenance. System performance is measured in terms of two
criteria, long-term availability and average cost rate, and the optimal
maintenance policy is selected based on these criteria using a compromise
programming method. Unlike many methods that formulate the maintenance problem
as a Markov process by assuming exponentially distributed times to failure and
repair, the simulation model can handle most time distributions, such as the
Weibull and lognormal, which are more realistic in practice. Due to the
intractability of the Markovian formulation and the ease of obtaining
performance indices by simulation, the
simulation method is an effective tool to facilitate decision making. The
simulation model is validated by comparing simulation results with analytical
results from a Markovian formulation.
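
A much-simplified sketch of this kind of simulation-based policy comparison is
given below: Weibull lifetimes, lognormal repairs, and an age-replacement rule
that covers both failure replacement (replace only at failure) and preventive
maintenance (replace at a chosen age). All distribution parameters and costs
are made-up assumptions, and the structural constraints of the paper are
omitted.

    # Simulation-based comparison of availability and cost rate across policies.
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(policy_age, horizon=1e5, cf=10.0, cp=2.0):
        t, up, cost = 0.0, 0.0, 0.0
        while t < horizon:
            life = rng.weibull(2.0) * 100.0      # Weibull(shape=2, scale=100) lifetime
            if life < policy_age:                # corrective (failure) replacement
                run, cost = life, cost + cf
            else:                                # preventive replacement at age T
                run, cost = policy_age, cost + cp
            repair = rng.lognormal(mean=1.0, sigma=0.5)
            up, t = up + run, t + run + repair
        return up / t, cost / t                  # availability, average cost rate

    for T in (50.0, 100.0, np.inf):              # np.inf = pure failure replacement
        avail, rate = simulate(T)
        print(f"replace at age {T}: availability={avail:.4f}, cost rate={rate:.4f}")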
Classification Analysis for Simulation of Machine
Breakdowns
Lanting Lu, Christine S. M. Currie, and Russell C. H.
Cheng (University of Southampton) and John Ladbrook (Ford Motor Company)
Abstract:
Machine failure is often an important factor in
throughput of manufacturing systems. To simplify the inputs to the simulation
model for complex machining and assembly lines, we have derived the Arrows
classification method to group similar machines, so that one model can be used
to describe the breakdown times of all machines in a group, with breakdown
times represented by finite mixture distributions. The two-sample Cramér-von
Mises statistic is used to measure
the similarity of two sets of data. We evaluate the classification procedure
by comparing the throughput of a simulation model when run with mixture models
fitted to individual machine breakdown times; mixture models fitted to group
breakdown times; and raw data. Details of the methods and results of the
grouping process will be presented and demonstrated using an example.
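
The sketch below illustrates the two-sample Cramér-von Mises comparison on
synthetic breakdown-time samples using scipy; the Arrows classification rules
and the fitted mixture models themselves are not reproduced.

    # Two-sample Cramér-von Mises similarity of breakdown-time samples.
    import numpy as np
    from scipy.stats import cramervonmises_2samp

    rng = np.random.default_rng(8)
    machine_a = rng.weibull(1.5, 500) * 40    # stand-in breakdown-time samples
    machine_b = rng.weibull(1.5, 500) * 42
    machine_c = rng.weibull(0.8, 500) * 40

    for name, other in (("A vs B", machine_b), ("A vs C", machine_c)):
        res = cramervonmises_2samp(machine_a, other)
        print(name, "statistic =", round(res.statistic, 3),
              "p-value =", round(res.pvalue, 3))
    # Machines whose pairwise statistics are small (large p-values) are
    # candidates to share one fitted breakdown-time model.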
Semi-Automatic Simulation Component
Reuse
Yariv N. Marmor and Avigdor Gal (Technion - Israel Institute
of Technology)
Abstract:
Simulation reuse is a special case of code reuse, where
a developer writes a component once and can then reuse it. However, two main
characteristics differentiate it from other types of code reuse: (1)
Simulation code, in many cases, is built by non-expert developers. (2)
Simulation may be used in many completely different application areas, if only
the similarity of its components can be recognized. In this work, we aim at
improving simulation reuse by providing a technique for recognizing
similarities among pieces of simulation code. We offer a methodology for
semi-automatic support of the simulation component reuse process. Our
methodology is based on table-based modeling of simulation components,
hierarchical clustering of existing components, and a careful walk-through of
the hierarchy by a designer to identify relevant
components. To illustrate our approach, we make use of three real-world case
studies involving resource scheduling.
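
As an illustration of the clustering step only, the sketch below applies
agglomerative (hierarchical) clustering to a few hypothetical table-encoded
component feature vectors; the feature columns and thresholds are assumptions,
not the authors' schema.

    # Hierarchical clustering of table-encoded simulation-component features.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Each row encodes one component (e.g., counts of resources, queues,
    # routing rules) taken from its table-based description.
    components = np.array([
        [2, 1, 0, 3],
        [2, 1, 1, 3],
        [0, 4, 2, 1],
        [0, 5, 2, 0],
        [3, 0, 0, 4],
    ], dtype=float)

    Z = linkage(components, method="average", metric="euclidean")
    labels = fcluster(Z, t=2.5, criterion="distance")
    print("cluster label per component:", labels)
    # A designer would then walk down the resulting hierarchy to find the
    # cluster most likely to contain a reusable component.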
A Longitudinal Study of the Impact of
Information Technology on Organizational Knowledge Management Processes Using
Agent-based Modeling
Srikanth Mudigonda (University of
Missouri-St.Louis)
Abstract:
Prior research on knowledge management (KM) and its
relationship with information technologies (ITs) and organizational
performance has typically been conducted using cross-sectional studies under a
limited set of environmental and organizational conditions. Consequently, the
longitudinal impact of ITs on the relationships among KM processes, individual
and organizational knowledge, and performance has not been investigated.
Synthesizing the literature drawn from cognitive science, artificial
intelligence, computational modeling of organizations, and organizational
behavior, the proposed dissertation will investigate these relationships over
an extended period. It will use agent-based modeling, with distinct
representations of: knowledge, KM processes (e.g., socialization, exchange),
organizational tasks, and how task performance benefits from KM. By studying
these aspects before and after the introduction of different ITs, the effect
of ITs on KM and firm performance will be examined. Field interviews at
organizations will be used to validate the model.
Hierarchical Planning and Multi Level Scheduling
for Simulation-based Probabilistic Risk Assessment
Hamed S. Nejad
(Center for Risk and Reliability, University of Maryland, College Park) and
Ali Mosleh (Center for Risk and Reliability)
Abstract:
Simulation of dynamic complex systems, specifically
those comprised of large numbers of components with stochastic behaviors, for
the purpose of probabilistic risk assessment, faces challenges in every aspect
of the problem. Scenario generation confronts many impediments, one being the
problem of handling the large number of scenarios without compromising
completeness. Probability estimation and consequence determination processes
must also be performed under real world constraints on time and resources. In
the approach outlined in this paper, hierarchical planning is utilized to
generate a relatively small but complete group of high level risk scenarios to
represent the unsafe behaviors of the system. Multi-level scheduling makes the
probability estimation and consequence determination processes more efficient
and affordable. The scenario generation and scheduling processes both benefit
from an updating process that takes place after a number of simulation runs by
fine-tuning the scheduler's level adjustment parameters and refining the
planner's high level system model.
Phased Approach to Simulation of Security
Algorithms for Ambient Intelligent (AmI) Environments
Muaz Niazi
(Foundation University, FUIMCS) and Abdul Rauf Baig (National University-FAST)
Abstract:
The finalization of an AmI simulation requires actual installation of sensors
and actuators in the real world. On one hand, having to model a system with
eventual extensive human user interaction makes modeling difficult. On the
other hand, building an effective simulation before actual sensor deployment is
an important requirement for the success of the envisioned system. In this
work, we focus on the creation of an effective simulation for security
algorithms in ambient intelligent environments. We propose a phased approach to
simulation design in AmI environments. In the first phase, an effective
agent-based simulation model is developed for the algorithms. Next, an
interface is developed between the simulation tools and the hardware, which
includes the sensors and actuators. Finally, the simulation is executed in the
real-world environment. As a case study, we present a simulation of an
authentication-free algorithm for open-access resources.
Code Analysis and CS–XML
Kara A. Olson
and C. Michael Overstreet (Old Dominion University) and E. Joseph Derrick
(Radford University)
Abstract:
The automated analysis of model specifications is an
area that historically receives little attention in the simulation research
community but which can offer significant benefits. A common objective in
simulation is enhanced understanding of a system; model specification analysis
can provide insights not otherwise available as well as time and cost savings
in model development. The Condition Specification (CS) (Overstreet and Nance
1985) represents a model specification form that is amenable to analysis. This
paper discusses the motivations for and the creation of CS-XML; a translator
from CSes into XML-based Condition Specifications; and a translator from CS-XML
into fully executable C/C++ code. It presents initial results from analysis
efforts using CodeSurfer (Anderson et al. 2003), a software static analysis
tool, and discusses future work. In conclusion, it is argued that CS-XML can
provide an essential foundation for Web Services that support the analysis of
discrete-event simulation models.
Agent Based Simulation Model to Predict Performance
of Teams Working Under Unstructured Job Environments
Jose A Rojas
(Florida International University/ Universidad del Turabo) and Ronald
Giachetti (Florida International University)
Abstract:
The focus of this research is to develop a
computational tool to study and design teams working under complex job
environments. The Team Coordination model is an agent-based simulation model
developed to study coordination and performance of teams. The job structure is
modeled as a conditional network, in which some tasks are the result of
probabilistic outcomes of predecessor tasks and task durations are random
variables. The simulation model will be used as a tool to determine which team
design configuration will perform best on a particular job.
Appraisal of Airport Alternatives in Greenland by
the use of Risk Analysis and Monte Carlo Simulation
Kim Bang
Salling and Steen Leleur (Technical University of Denmark)
Abstract:
The research to be presented consists of an appraisal
study of three airport alternatives in Greenland by the use of an adapted
version of the Danish CBA-DK model. The assessment model is based on both a
deterministic calculation by the use of conventional cost-benefit analysis and
a stochastic calculation, where risk analysis is carried out using Monte Carlo
simulation. The feasibility risk adopted in the model is based on assigning
probability distributions to the uncertain model parameters. Two probability
distributions are presented, the Erlang and the normal, assigned respectively
to the construction cost and the travel time savings. The resulting model
outputs aim to provide input for informed decision-making based on the desired
level of feasibility risk. This level
is presented as the probability of obtaining at least a benefit-cost ratio of
a specified value. Finally, some conclusions and a perspective are presented.
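
A minimal Monte Carlo sketch of this kind of feasibility-risk calculation is
shown below, with an Erlang-distributed construction cost and normally
distributed travel time savings; all numerical values are invented for
illustration and are not from the CBA-DK Greenland study.

    # Monte Carlo feasibility-risk sketch: probability of reaching a target BCR.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 100_000

    # Erlang(k, theta) construction cost, i.e., a gamma with integer shape k=4.
    construction_cost = rng.gamma(shape=4, scale=250.0, size=n)
    travel_time_savings = rng.normal(loc=1200.0, scale=300.0, size=n)

    bcr = travel_time_savings / construction_cost
    for target in (1.0, 1.2):
        print(f"P(BCR >= {target}) =", np.mean(bcr >= target))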
Agent-based Simulation Framework for Supply
Chain Planning in the Lumber Industry
Luis Antonio Santa Eulalia
and Sophie D'Amours (Universite Laval) and Jean-Marc Frayret (Ecole
Polytechnique de Montreal and FOR@C Research Consortium)
Abstract:
Agent-based simulation is considered a promising
approach for supply chain (SC) planning. Although there have been many
relevant advances on how to specify, design, and implement agent-based
simulation systems for SC planning, the related literature does not thoroughly
address the analysis phase. In this early phase, simulation stakeholders
discuss and decide which kind of simulation experiments have to be performed
and their requirements. Consequently, it considerably influences the whole
development process and the resulting simulation environment. Thus, this work
proposes an agent-based simulation framework for modeling SC planning systems
in the analysis phase. In addition, it proposes a formal method for converting
the analysis model into specification and design models. Another contribution
of this work is the instantiation of the proposed framework into a particular
model of the lumber industry. This model is then validated by means of an
agent-based simulation platform being developed for this industry sector in
Canada.
A New Method for Reverse Engineering the Visual
System
Diglio A. Simoni (RTI International)
Abstract:
Arguably our understanding of the world is based to a
very large extent on our visual perceptual abilities. We describe a new type
of robust perception-based image analysis system derived from psychophysical
observations of eye scan patterns of expert observers. Psychophysical metrics
that describe the visual system's real-time selection of image regions during
active visual search processes are recast as fuzzy predicates to form the
foundation of a rule set that simulates the perceptual and cognitive
strategies used by the expert observers. This results in a simulated search
mechanism composed of a bottom-up neural network-based sensory processing
model coupled with a top-down fuzzy expert system model of search decision
processes that helps redesign and perfect the supervised and unsupervised
machine analysis of work-related or research imagery. The use of
supercomputing resources for this research is highlighted.
IBatch: An Autoregressive Batch-Means Procedure
for Steady-State Simulation Output Analysis
Ali Tafazzoli (North
Carolina State University)
Abstract:
We develop IBatch, a new procedure for steady-state
simulation output analysis which can be considered as an extension of the
classical method of nonoverlapping batch means. IBatch addresses the
correlation, nonnormality, and start-up problems by exploiting the properties
of a first-order autoregressive time series model of the suitably truncated
batch means with a sufficiently large batch size. This approach yields an
approximately unbiased estimator of the variance of the batch means as well as
an asymptotically valid confidence interval for the steady-state mean of the
underlying output process. An experimental performance evaluation demonstrates
the potential of IBatch to provide a completely automated, robust, and
transparent method for steady-state simulation output analysis. Major Advisor:
Dr. James R. Wilson
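
The sketch below illustrates the general idea of batch means with a first-order
autoregressive adjustment on a synthetic AR(1) output process; IBatch's
truncation rules, batch-size selection, and bias corrections are not
reproduced.

    # Batch means with an AR(1)-based variance adjustment and confidence interval.
    import numpy as np

    rng = np.random.default_rng(10)

    # Stand-in steady-state output: an AR(1) process with mean 5.
    n, phi = 200_000, 0.8
    x = np.empty(n)
    x[0] = 5.0
    for i in range(1, n):
        x[i] = 5.0 + phi * (x[i - 1] - 5.0) + rng.normal()

    k = 40                                     # number of batches
    m = n // k                                 # batch size
    batch_means = x[: k * m].reshape(k, m).mean(axis=1)

    grand_mean = batch_means.mean()
    # Lag-1 autocorrelation of the batch means (AR(1) coefficient estimate).
    d = batch_means - grand_mean
    rho1 = np.sum(d[:-1] * d[1:]) / np.sum(d**2)
    var_bm = batch_means.var(ddof=1)
    # AR(1)-adjusted variance of the grand mean and a 95% confidence interval.
    var_grand = var_bm / k * (1 + rho1) / (1 - rho1)
    half = 1.96 * np.sqrt(var_grand)
    print(f"point estimate {grand_mean:.3f} +/- {half:.3f}")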
Discrete Stochastic Optimization using Simplex
Interpolation
Honggang Wang and Bruce W. Schmeiser (Purdue
University)
Abstract:
Optimizing a stochastic system with a set of discrete
design variables is an important and difficult problem. Much recent research
has developed efficient methods for stochastic problems where the objective
functions can only be estimated by simulation oracles. Due to the expense of
simulation and typically large search spaces, most of the present approaches are
either non-convergent or converge slowly. We propose a method using continuous
search with simplex data interpolation to solve a wide class of discrete
stochastic optimization problems. Adopting simplex interpolation for the
discrete stochastic problem, we create a continuous piecewise-linear
stochastic optimization problem. A retrospective framework provides a sequence
of deterministic approximating problems that can be solved using continuous
optimization techniques such as bundle methods or Shor's r-algorithm that
guarantee desirable convergence properties. Numerical experiments show that
our method finds the optimal solution orders of magnitude faster than random
search algorithms, including the recently developed COMPASS.
Simulating Soundscape Evaluations in Urban Open
Spaces
Lei Yu and Jian Kang (Sheffield University)
Abstract:
Urban open spaces play a significant role in the current urban renaissance.
These spaces are important for healthy living and attract strong public
interest. As part of the physical environment, acoustic comfort is an essential
component to be considered by city designers and acousticians. Subjective
evaluations of acoustic comfort are crucial in the acoustic design process.
However, it is difficult to predict these subjective evaluations, as there are
a large number of variables in the physical and social environments that could
affect them. In this study, therefore,
artificial neural network (ANN) techniques have been introduced to predict
subjective evaluations of acoustic comfort and sound level annoyance. In this
presentation, the modeling process is illustrated and the results are
discussed. It is shown that the ANN approach is an efficient way to predict
subjective evaluations at the design stage.