Efficient Generation of Cycle Time-Throughput Curves
through Simulation and Metamodeling
Feng Yang, Bruce E. Ankenman,
and Barry L. Nelson (Northwestern University)
Abstract:
A cycle time-throughput (CT-TH) curve plays an
important role in strategic planning for manufacturing systems. In this
research, we seek to quantify the relationship of the moments and percentiles
of cycle time to throughput rate via simulation experiments. The estimation of
CT-TH moment curves is based on a nonlinear regression metamodel supported by
queueing theory, and the adequacy of the model has been demonstrated through our
numerical experiments. Utilizing the estimated moment curves of cycle time, we
propose to estimate the CT-TH percentile curves indirectly, assuming
that the underlying distribution of cycle time is a generalized
gamma, a highly flexible distribution. More specifically, we fit metamodels
for the first three CT-TH moment curves throughout the throughput range of
interest, determine the parameters of the generalized gamma by matching
moments, and then obtain percentiles by inverting the distribution. To ensure
efficiency and control estimation error, simulation experiments are
built up sequentially to optimize a design criterion.
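As an illustration of the moment-matching and inversion step described above, the sketch below fits a generalized gamma to three hypothetical cycle-time moments and reads off a percentile. It assumes the scipy.stats.gengamma parameterization (shapes a and c plus a scale s, whose k-th raw moment is s^k Gamma(a + k/c)/Gamma(a)); the moment values are invented, not taken from the paper.

import numpy as np
from scipy.optimize import least_squares
from scipy.special import gammaln
from scipy.stats import gengamma

def raw_moments(a, c, s, ks=(1, 2, 3)):
    # k-th raw moment of gengamma(a, c, scale=s): s**k * Gamma(a + k/c) / Gamma(a)
    return np.array([s**k * np.exp(gammaln(a + k / c) - gammaln(a)) for k in ks])

def fit_gengamma_by_moments(m1, m2, m3):
    target = np.log([m1, m2, m3])
    def residuals(theta):
        a, c, s = np.exp(theta)              # keep all parameters positive
        return np.log(raw_moments(a, c, s)) - target
    sol = least_squares(residuals, x0=np.log([2.0, 1.0, m1]))
    return np.exp(sol.x)                      # (a, c, scale)

# Hypothetical moment estimates at one throughput level (not from the paper)
m1, m2, m3 = 10.0, 130.0, 2100.0
a, c, s = fit_gengamma_by_moments(m1, m2, m3)
p95 = gengamma.ppf(0.95, a, c, scale=s)       # invert the fitted distribution
print(f"a = {a:.3f}, c = {c:.3f}, scale = {s:.3f}, 95th percentile = {p95:.2f}")

In the actual procedure the moments would come from the fitted CT-TH moment curves at each throughput level of interest, and the inversion would be repeated across that range to trace out a percentile curve.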
A Real-Time Knowledge-Based Decision Support
System for Health Care Quality Improvement Using Discrete Event Simulation
Techniques
Alexander Komashie, Ali Mousavi, and Mustafa Ozbayrak
(Brunel University)
Abstract:
The quality of health care is receiving increasing
emphasis in almost all developed countries. Traditional health
care Quality Assurance (QA) models have always been implemented retrospectively,
depending heavily on surveys. The objective of this research is to develop a
novel and more reliable approach to the monitoring and improvement of health
care quality. This approach is based on a real-time simulation system that
monitors a Health Care Quality Index (HCQI). This system will map the HCQI,
which is a function of process factors (e.g., waiting time), against
patients’ expectations and hence give health care managers the ‘local’
information needed to continuously adjust performance without waiting for an
annual survey. Because the system runs in real time, managers will be able to run
fast-forward simulations to help predict future demand and make decisions
accordingly. This will be a key to Continuous Quality Improvement (CQI) in
health care.
Scenario Planning for Simulation-Based
Probabilistic Risk Assessment
Hamed Nejad (University of Maryland)
Abstract:
Simulation is used for the probabilistic risk
assessment of complex systems that include hardware, software, and human
elements. Since assessing the risk of such systems requires that a large
number of scenarios be considered, a Planner component has been added to the
simulation environment. This component solicits high-level information, such as the
system’s structure and functional behavior, and uses it to automatically
generate and prioritize scenarios that will be used in risk assessment.
Because of the hierarchical configuration of the Planner’s knowledge-base,
scenarios can easily be modified to assess system risks when parts of the
system are modified for risk management. As such, the analyst is able to
compare the results of risk assessment (end-state probabilities as well as
worst-case scenarios) in different settings. The planning process is dynamic,
and simulation feedback is used to update the list of scenarios and/or their
level of priority as needed.
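A much-simplified sketch of the scenario-generation idea follows: given a hypothetical flat list of components and failure probabilities, it enumerates failure combinations and ranks them by probability. The actual Planner works from a hierarchical knowledge base of system structure and functional behavior, so this only conveys the generate-and-prioritize pattern; all component names and probabilities are invented.

from itertools import product

# Hypothetical components and per-demand failure probabilities (illustrative only)
components = {"pump": 1e-2, "valve": 5e-3, "controller": 1e-3}

def enumerate_scenarios(components):
    names = list(components)
    scenarios = []
    for states in product((False, True), repeat=len(names)):   # True = failed
        prob = 1.0
        for name, failed in zip(names, states):
            p = components[name]
            prob *= p if failed else (1.0 - p)
        scenarios.append((tuple(n for n, f in zip(names, states) if f), prob))
    return scenarios

# Prioritize the non-nominal scenarios by probability, most likely first
ranked = sorted((s for s in enumerate_scenarios(components) if s[0]),
                key=lambda s: s[1], reverse=True)
for failed, prob in ranked:
    print(f"failed: {', '.join(failed):<25s} probability: {prob:.2e}")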
Optimal Vehicle Scheduling & Layout for
Automated Material Handling Systems (AMHS)
Sangwon Chae (University
of California, Irvine)
Abstract:
This paper describes the scheduling of an Automated Material
Handling System (AMHS) to improve productivity and reduce delivery time.
We simulate various scenarios and analyze them to find
the optimal solution, considering both vehicle scheduling rules and various
intra-bay layouts for the AMHS. The simulations show how overall AMHS
performance can be improved by reducing delivery times and raising productivity.
A Novel Approach to Studying Cell Communications
Signalling
Jasmina Panovska (Heriot-Watt University)
Abstract:
In this paper we study the communication signals within
a population of cells by extending the theory developed by Kummer and Ocone
for the cell cycle. We show how the basic framework can be extrapolated from
the cell level to the next organizational one, namely the population of cells.
We define the basic quantities, such as the equivalents of metabolic
temperature and metabolic energy, and we show how these are used to
characterise the process of interest. We develop the basis for a
non-equilibrium statistical theory for the system of cells and establish
the relationship between the variation in the local variables (e.g., temperature
and energy) and the variation in the cell concentration. The results from the
theory are compared to those using the more classical approach of
reaction-diffusion equations.
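For context, the classical reaction-diffusion approach mentioned above is typically written in the generic form below, where the c_i are concentrations, the D_i diffusion coefficients, and the f_i the reaction kinetics; this is the standard template rather than the specific system used in the paper.

\[
\frac{\partial c_i}{\partial t} = D_i \nabla^2 c_i + f_i(c_1, \dots, c_n),
\qquad i = 1, \dots, n .
\]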
Estimation and Model Selection of the
Interest Rates
Pouyan Mashayekh Ahangarani (USC)
Abstract:
A variety of continuous-time models of the short-term
riskless rate are estimated by maximum likelihood applied to
discretized versions of the models. The model that best fits the data is then
identified; a number of well-known models perform poorly in the comparison.
The indirect inference method is applied to the best model in order to obtain
consistent estimates. Finally, the estimated stochastic interest-rate model
will be used in an empirical application: pricing call options of the Nokia
Company.
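The estimation step described above can be illustrated with a quasi-maximum-likelihood fit of an Euler-discretized model from the CKLS family, dr = (alpha + beta*r) dt + sigma*r^gamma dW, which nests several well-known short-rate models. The code below is only a sketch of that step: the data are simulated rather than real rates, and all parameter values are invented.

import numpy as np
from scipy.optimize import minimize

dt = 1.0 / 252.0                                  # daily observations

def neg_loglik(theta, r):
    # Gaussian quasi-likelihood of the Euler-discretized CKLS transition
    alpha, beta, log_sigma, gamma = theta
    sigma = np.exp(log_sigma)
    r0, r1 = r[:-1], r[1:]
    mean = r0 + (alpha + beta * r0) * dt
    var = (sigma * np.maximum(r0, 1e-8) ** gamma) ** 2 * dt
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (r1 - mean) ** 2 / var)

# Simulate a synthetic rate path as stand-in data (true parameters are invented)
rng = np.random.default_rng(0)
alpha0, beta0, sigma0, gamma0 = 0.01, -0.2, 0.05, 0.5
r = np.empty(2000)
r[0] = 0.05
for t in range(1, len(r)):
    drift = (alpha0 + beta0 * r[t - 1]) * dt
    diffusion = sigma0 * r[t - 1] ** gamma0 * np.sqrt(dt) * rng.standard_normal()
    r[t] = max(r[t - 1] + drift + diffusion, 1e-6)

fit = minimize(neg_loglik, x0=[0.0, -0.1, np.log(0.1), 0.5],
               args=(r,), method="Nelder-Mead")
a_hat, b_hat, ls_hat, g_hat = fit.x
print(f"alpha={a_hat:.4f}  beta={b_hat:.4f}  sigma={np.exp(ls_hat):.4f}  gamma={g_hat:.3f}")

In the study, a comparison of such fitted models is then followed by indirect inference on the preferred model to obtain consistent estimates despite the discretization.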
Quasi-Monte Carlo Simulation in a LIBOR Market
Model
Xuefeng Jiang (Northwestern University) and John R. Birge
(University Of Chicago)
Abstract:
Quasi-Monte Carlo (QMC) methods have been extensively
applied to pricing complex derivatives. We apply quasi-Monte Carlo simulation
to LIBOR market models based on the Brace-Gatarek-Musiela/Jamshidian
framework. We price exotic interest-rate derivatives and compare the results
using pseudorandom sequences and different low-discrepancy sequences.
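The comparison can be illustrated, on a much simpler payoff than the exotics in the paper, by pricing a Black-Scholes European call with pseudorandom draws versus a scrambled Sobol sequence (via scipy.stats.qmc); the parameter values below are arbitrary.

import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
exact = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def mc_price(z):
    # one-step simulation of the terminal stock price under Black-Scholes
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

m = 14                                                    # 2**14 samples
z_pseudo = np.random.default_rng(1).standard_normal(2**m)
u_sobol = qmc.Sobol(d=1, scramble=True, seed=1).random_base2(m).ravel()
z_sobol = norm.ppf(u_sobol)                               # map Sobol points to normals

print(f"exact price   {exact:.4f}")
print(f"pseudorandom  {mc_price(z_pseudo):.4f}")
print(f"Sobol (QMC)   {mc_price(z_sobol):.4f}")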
Convergence of Strikes in Variance and Volatility
Swaps
Ashish Jain (Columbia Business School)
Abstract:
In this work we study how the strikes of
variance and volatility swaps converge with the sampling frequency and determine the
convergence rate of discrete-time strikes to continuous-time strikes. We
study three models of the underlying’s evolution (Black-Scholes, the
Heston stochastic volatility model, and affine jump-diffusion processes) to
calculate the strike of each swap. First, we determine the analytical value of
the variance swap strike in all three models, and then, using a convexity
adjustment formula, we calculate the approximate fair strike of the volatility
swap. To study the convergence rate of the strikes, we compute them for
different sampling frequencies using Monte Carlo simulation. We find that the
convergence rate is linear, which is also supported by theoretical
results we have proved.
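As a minimal sketch of the Monte Carlo convergence study, restricted to the Black-Scholes case only, the code below estimates the fair strikes of discretely sampled variance and volatility swaps for several sampling frequencies and compares them with the continuous-time values sigma^2 and sigma; the parameters are illustrative.

import numpy as np

S0, r, sigma, T = 100.0, 0.05, 0.3, 1.0
rng = np.random.default_rng(0)

def discrete_strikes(n_obs, n_paths=200_000):
    # Under Black-Scholes the log returns are iid normal, so paths are cheap to sample
    dt = T / n_obs
    log_ret = rng.normal((r - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt),
                         size=(n_paths, n_obs))
    rv = (log_ret**2).sum(axis=1) / T        # annualized realized variance
    return rv.mean(), np.sqrt(rv).mean()     # variance-swap and volatility-swap strikes

print(f"continuous strikes: variance {sigma**2:.5f}, volatility {sigma:.5f}")
for n in (12, 52, 252):
    k_var, k_vol = discrete_strikes(n)
    print(f"n = {n:3d} observations: variance {k_var:.5f}, volatility {k_vol:.5f}")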
What Value is Microsoft’s .Net to Discrete Event
Simulations?
Adelaide Carvalho and Michael Pidd (Lancaster
University)
Abstract:
Developers of simulation software have responded to the
increasing demand for customised solutions by adding new features and tools to
their simulation packages. This has led to monolithic applications with
functionalities constantly extended by the addition of templates, add-ons, etc., in
a generalising-customising-generalising development cycle. Though successful
so far, this approach may be reaching its limit. An alternative is
to compose simulation packages from prefabricated components that users may
select, modify and assemble to acquire functionality to suit each model. This
approach requires component-based paradigms and integration mechanisms. We
investigate the value of the .Net integration philosophy for the development of
discrete event simulation models. The DotNetSim prototype is used to
investigate how software components developed within different packages can be
linked into a single simulation application deployed as web services.
DotNetSim consists of a graphical modelling environment and a base simulation
engine.
Examining the Actual Benefits of Distributed
Simulation for Supply Chain Management
Alexey Artamonov (Lancaster
University)
Abstract:
Supply chains are not new, but their importance has
grown in recent years due to globalisation, tough competition and the
increasingly networked nature of business. Simulation has long been applied in
production-inventory systems and more recently in supply chain management.
Given that supply chains are distributed systems it seems sensible to consider
the application of distributed computation to their simulation, as is evident
from the number of recent research papers dedicated to distributed supply
chain simulation (DSCS). However, attention is seldom paid to the actual
advantages of this novel technology and the possible obstacles that must be
overcome before real-world applications become commonplace. My research
considers the potential drivers which might make this technology attractive
and uses a DSCS testbed, implemented in AutoMod, to analyse whether these
drivers can be achieved.
Adaptive Control Variates for American Option
Pricing
Sam Ehrlichman and Shane G. Henderson (Cornell University)
Abstract:
Recently, Kim and Henderson (2004) have proposed a
scheme for variance reduction in Monte Carlo simulations that uses stochastic
optimization to compute an effective control variate. We apply this technique
to the problem of American option pricing. While our work is similar in spirit
to work by Bolia and Juneja (2005), the nonlinear procedure allows us to
consider a more flexible class of approximating functions for our control
variate; in particular, we are freed from having to make a particular choice
of basis function parameterization up front.
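For readers unfamiliar with the baseline technique, the sketch below shows a plain, non-adaptive control variate in a Black-Scholes Monte Carlo: the discounted terminal stock price, whose risk-neutral expectation is S0, serves as the control. The adaptive scheme discussed above instead searches a flexible class of functions for the control; parameter values here are arbitrary.

import numpy as np

S0, K, r, sigma, T, n = 100.0, 110.0, 0.05, 0.2, 1.0, 100_000
rng = np.random.default_rng(42)

z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)     # discounted call payoff
control = np.exp(-r * T) * ST                         # control variate with known mean S0

C = np.cov(payoff, control)
beta = C[0, 1] / C[1, 1]                              # estimated optimal coefficient
adjusted = payoff - beta * (control - S0)

for name, x in (("plain MC", payoff), ("with control variate", adjusted)):
    print(f"{name:22s} estimate {x.mean():.4f}  std err {x.std(ddof=1) / np.sqrt(n):.4f}")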
Using Computer Simulations in Disaster
Preparedness Exercises: A Study of Factors that Influence Learning and
Performance in a First Responder Agency
Daniel Joseph O'Reilly
(Wayne State University)
Abstract:
This is a dissertation study investigating the
effectiveness of computer simulation as an instructional tool in disaster and
mass emergency (including homeland security) preparedness with regard to
learning and field response performance. The effectiveness will be determined
through a summative evaluation of a local public health agency's performance in
a full-scale mock drill after the agency receives preparedness training through
computer-simulated instructional program(s). Simulation variables of fidelity,
richness, prior experience of participants with emergencies, and prior
experience with computer-simulation training will be included in the
assessment. To date, this study has evaluated applicable learning theories
and the degree to which computer simulation meets the instructional criteria
proposed by these theories. A literature review of computer
simulations in this instructional context found the evidence to be subjectively
supportive but empirically weak. The objective of this study is to generate
empirical evidence related to computer simulation in this current and
particularly crucial arena of mass emergency preparedness and to contribute to
the body of data on research in computer simulation instruction overall.
Multi-Hypothesis Intention Simulation Based
Agent Architecture
Per M. Gustavsson (University of Skövde)
Abstract:
The current status of ongoing work to establish a Multi-Hypothesis
Intention Simulation Based Agent Architecture is presented.
The intended architecture is described along with the Fractal Information
Fusion Model. The work on establishing a framework for mapping environmental
entities, such as fire and wind, into the Command and Control Information Exchange
Data Model and the Coalition Battle Management Language will be demonstrated.
The current design and implementation of the mechanisms are discussed.
An Approach to Near Real-Time Dynamic Distributed
System Control under Uncertainty
Kevin Adams (Virginia Tech)
Abstract:
Sensitivity analysis for integer optimization models is
a very time-consuming process not conducive to applications requiring near
real-time performance. The premise of this research is that the sensitivity of
the optimization function in mixed-integer nonlinear systems creates a
pattern. We demonstrate that a supervised neural network can be used to
identify, classify, and learn these patterns. The learned patterns are
represented in the weights of the neural network configuration.
Using the historical optimal information captured in these weights, the neural
network can map the input constraints of our system to the optimal solution
through functional approximation with a high degree of accuracy. This
functional approximation can be computed in a small, fixed amount
of time. To maintain good performance in a dynamic environment, the
system uses feedback from an off-line component to identify and classify the
constraint pattern changes. Simulation and a case study demonstrate that the
proposed approach performs extremely well.
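A toy version of the learn-the-mapping idea, under strong simplifications, is sketched below: a scikit-learn MLPRegressor is trained offline on solutions of a small linear program for many sampled constraint right-hand sides, and then produces near-optimal answers for new constraint patterns in essentially constant time. The real setting is mixed-integer and nonlinear, and all problem data here are invented.

import numpy as np
from scipy.optimize import linprog
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
c = np.array([-3.0, -5.0])                          # linprog minimizes c @ x, so this maximizes 3x + 5y
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])  # fixed constraint coefficients

def solve(b):
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
    return res.x

# Offline phase: solve the LP for many sampled constraint right-hand sides
B = rng.uniform([2.0, 8.0, 10.0], [6.0, 16.0, 30.0], size=(500, 3))
X_opt = np.array([solve(b) for b in B])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(B, X_opt)

# Online phase: read an approximate optimum off the trained network
b_new = np.array([4.0, 12.0, 18.0])
print("network approximation:", net.predict(b_new.reshape(1, -1))[0])
print("exact LP optimum     :", solve(b_new))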
Interchanging Discrete Event Simulation
Process-Interaction Models using the Web Ontology Language -
OWL
Lee W. Lacy (University of Central Florida / DRC)
Abstract:
Discrete event simulation development requires
significant investments in time and resources. Descriptions of discrete event
simulation models are associated with world views, including the
process-interaction orientation. Historically, these models have been encoded
using high-level programming languages or special purpose (typically
vendor-specific) simulation languages. These approaches complicate simulation
model reuse and interchange. The current document-centric World Wide Web is
evolving into a Semantic Web that communicates information using ontologies.
The Web Ontology Language (OWL) was used to encode an ontology for
representing discrete event process-interaction models (DEPIM). The DEPIM
ontology was developed using ontology engineering processes. The purpose of
DEPIM is to provide a vendor-neutral open representation to support model
interchange. Model interchange provides an opportunity to improve simulation
quality, reduce development costs, and reduce development times.
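To give a flavour of what an OWL encoding looks like, here is a tiny, hypothetical sketch built with the rdflib Python library; the class and property names are invented for illustration and are not the actual DEPIM vocabulary.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

DEPIM = Namespace("http://example.org/depim#")     # hypothetical namespace
g = Graph()
g.bind("depim", DEPIM)

# Invented classes for a process-interaction world view
for cls in ("Entity", "Process", "Activity", "Queue", "Resource"):
    g.add((DEPIM[cls], RDF.type, OWL.Class))
g.add((DEPIM.Activity, RDFS.subClassOf, DEPIM.Process))

# Invented object property: an entity flows through a process
g.add((DEPIM.flowsThrough, RDF.type, OWL.ObjectProperty))
g.add((DEPIM.flowsThrough, RDFS.domain, DEPIM.Entity))
g.add((DEPIM.flowsThrough, RDFS.range, DEPIM.Process))

print(g.serialize(format="turtle"))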
Analysis of Production Authorization Card
Schemes using Simulation and Neural Network Metamodels
Corinne
MacDonald and Eldon A. Gunn (Dalhousie University)
Abstract:
We have developed a framework to model and analyze the
performance of complex manufacturing systems operating under a variety of
production control strategies. This framework involves a production
authorization card scheme, which enables emulation of many popular strategies
such as kanban or base stock systems. A discrete-event simulation model of the
manufacturing system produces estimates of the multiple system performance
measures, such as average work-in-process inventory and customer service
rates, for combinations of control parameters. Finally, neural network
metamodels are trained to approximate the expected value of these system
performance measures, using a subset of parameter combinations and the
corresponding performance estimates generated by the simulation model. We will
show that this framework provides a flexible means of analyzing
the impact of parameter settings on the performance of the system, and is a
viable alternative to simulation optimization.
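A heavily simplified sketch of the simulation side of such a framework is given below: a single-stage line controlled by a kanban-style card loop (cards attached to finished items and released on consumption), built with the SimPy library. The rates and card count are invented, and the reported fill rate and mean customer delay stand in for the performance measures that would feed the neural network metamodels.

import random
import simpy

DEMAND_RATE, PROD_RATE, CARDS, HORIZON = 0.9, 1.0, 5, 100_000
waits = []

def replenish(env, machine, finished_goods):
    # a released card authorizes production of one replacement item
    with machine.request() as req:
        yield req
        yield env.timeout(random.expovariate(PROD_RATE))
    yield finished_goods.put(1)

def demand(env, machine, finished_goods):
    while True:
        yield env.timeout(random.expovariate(DEMAND_RATE))
        arrived = env.now
        yield finished_goods.get(1)              # wait here if stocked out
        waits.append(env.now - arrived)
        env.process(replenish(env, machine, finished_goods))

random.seed(1)
env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)
finished_goods = simpy.Container(env, init=CARDS)
env.process(demand(env, machine, finished_goods))
env.run(until=HORIZON)

fill_rate = sum(w == 0 for w in waits) / len(waits)
print(f"fill rate {fill_rate:.3f}, mean customer delay {sum(waits) / len(waits):.3f}")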
The Use of Hyper-Hidden Markov Models in Wireless
Channel Simulation
Antonia Marie Boadi (University of Southern
California)
Abstract:
The transmission of multimedia over satellite, cellular
or mobile wireless networks introduces impairments which degrade signal
quality. This thesis proposes a channel simulation model and design philosophy
that represents a shift from the use of traditional network-centric design
requirements to a more comprehensive approach that encompasses perceptual
quality issues. PerFEC, the Perceptually-Sensitive Forward Error Control
agent, employs a flexible, adaptive coding scheme that is both
quality-of-service (QoS) and quality-of-perception (QoP) configurable. PerFEC
is a decision agent that uses decoder output to select an error control scheme
that is appropriate for the prevailing channel conditions. The channel
simulation model is based on a new construct, the Hyper-Hidden Markov Model
(HHMM), whose internal states have a dual interpretation: they represent the
user's perceptual quality as well as an estimate of the current
bit-error-rate. This layering of Hidden Markov Models provides insight into
the nonlinear relationship between quality-of-service metrics and perceptual
quality.
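As a deliberately simplified illustration of hidden-Markov channel simulation, the sketch below uses a two-state Gilbert-Elliott-style model with a "good" and a "bad" state, each with its own bit-error rate; the HHMM proposed above layers such models so that hidden states also carry a perceptual-quality interpretation. All parameter values are invented.

import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.99, 0.01],       # state-transition matrix: good -> (good, bad)
              [0.10, 0.90]])      #                          bad  -> (good, bad)
ber = np.array([1e-5, 1e-2])      # bit-error rate in each hidden state

def simulate_channel(n_bits):
    state, errors = 0, np.empty(n_bits, dtype=bool)
    for i in range(n_bits):
        errors[i] = rng.random() < ber[state]
        state = rng.choice(2, p=P[state])
    return errors

errors = simulate_channel(100_000)
pi_bad = P[0, 1] / (P[0, 1] + P[1, 0])         # stationary probability of the bad state
expected_ber = (1 - pi_bad) * ber[0] + pi_bad * ber[1]
print(f"simulated BER {errors.mean():.2e}, stationary-model BER {expected_ber:.2e}")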
Simulation Modeling of the Level of Use of
E-Health System and Optimization of its Effect on Patient Quality of
Life
Abhik Bhattacharya and David H. Gustafson (University of
Wisconsin-Madison)
Abstract:
Comprehensive Health Enhancement Support System (CHESS)
is a disease-specific computer-based system designed to meet information and
support needs. Users of the discussion group, the most used service provided
through CHESS, gain enhanced quality of life (QoL). A benefit-based model for
sustainable use of the discussion group is developed, validated, and analyzed. The
model deals with the system of opposing forces that link discussion group size
and communication activity to the chances of participation by a member. The
model was calibrated based on a subset of empirical data collected from two
randomized clinical trials of breast cancer patients, while the remaining data
were used for validation. Simulation experiments were conducted to determine the
model’s predictions regarding the impact of CHESS on QoL. The results imply
that the use of communication in electronic support groups will enhance the
QoL of cancer patients by balancing the opposing forces of group size and
communication activity.
Selecting the Best System and Determining a Set of
Feasible Systems
Demet Batur (Georgia Institute of Technology,
Industrial and Systems Engineering)
Abstract:
We present two fully sequential indifference-zone
procedures to select the best system from a number of competing simulated
systems, where best is defined by maximum or minimum expected performance.
These two procedures have parabolic continuation regions rather than the
triangular continuation regions employed in a number of papers. Our procedures
find the best or near-best system with at least a pre-specified probability of
correctness when basic observations are independent and identically normally
distributed. They allow for unequal and unknown variances across systems and
the use of common random numbers, and show moderate improvement compared to
other existing fully sequential procedures with triangular continuation
regions. We also present procedures for finding a set of feasible or
near-feasible systems with some statistical guarantee among a finite number of
simulated systems in the presence of stochastic constraints on secondary
performance measures. The proposed procedures can handle a large number of
systems and stochastic constraints.
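For contrast with the parabolic-region procedures above, here is a compact sketch of a fully sequential indifference-zone selection procedure with a triangular continuation region, in the spirit of Kim and Nelson's KN procedure. The constants follow the usual KN construction, but the code should be treated as illustrative rather than a validated implementation, and the test systems are simple normal populations with invented means.

import numpy as np

def select_best(sample, k, n0=20, alpha=0.05, delta=0.5, max_stages=10_000):
    """sample(i) returns one observation from system i; larger mean is better."""
    eta = 0.5 * ((2 * alpha / (k - 1)) ** (-2.0 / (n0 - 1)) - 1.0)
    h2 = 2.0 * eta * (n0 - 1)                       # triangular-region constant
    X = np.array([[sample(i) for _ in range(n0)] for i in range(k)])
    S2 = np.array([[np.var(X[i] - X[l], ddof=1) for l in range(k)]
                   for i in range(k)])              # first-stage variances of differences
    alive, r = set(range(k)), n0
    while len(alive) > 1 and r < max_stages:
        sums = X.sum(axis=1)
        survivors = set(alive)
        for i in alive:
            for l in alive:
                if i == l:
                    continue
                w = max(0.0, (delta / (2.0 * r)) * (h2 * S2[i, l] / delta**2 - r))
                if sums[i] - sums[l] < -w:          # system i is eliminated by system l
                    survivors.discard(i)
                    break
        alive = survivors
        # take one additional observation from each surviving system
        X = np.column_stack([X, [sample(i) if i in alive else 0.0 for i in range(k)]])
        r += 1
    return max(alive, key=lambda i: X[i].mean())

rng = np.random.default_rng(3)
true_means = [0.0, 0.2, 0.4, 1.0]                   # system 3 is best by more than delta
best = select_best(lambda i: rng.normal(true_means[i], 1.0), k=len(true_means))
print("selected system:", best)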